Benjamin Schwerdtner is a software artist and developer with a multidisciplinary background.
Emacs, Clojure/Lisp, Interactive Programming, Dreams
Information processing? How about The soul that lives inside the computer.
Figure 1: Elements of Braitenberg Vehicles
(more of that on YouTube)
Pareidolia
Post here: Character Pareidolia, code.
Current Conclusions
- Biological software has morphology, it adapts, and it is robust.
- There is a large difference between our current computing and biological computing.
- There is still plenty of room in between our programming paradigms, our clumsy, primordial philosophy of programming and something like brain software.
- Brain software, a magical user interface, is real software; after we know how to build it.
#Neuro-Rights: The Most Relevant Social Issue We Are Not Talking About
- Brain-computer interfaces (BCI) and related tech will become consumer-level. Sooner or later, this tech will be as ubiquitous as mobile phones are now.
- We will be able to decode aspects of mental content from the brain soon.
- mentality: A software module that provides the decision-basis for a person. The theory of personhood, executive control (free will), and creativity is missing.
- mental content: The dynamic datastructures of a mentality: thoughts, feelings, hunches, wishes, mental maps, acts of attention, motor data, working memory, etc.
- Much of the rest of this website is about neuronal codes and the question of what such datastructures are made out of.
A person's right to their private mentality is non-negotiable
- The international community of the free world must regulate consumer-level brain computer interfaces (BCI).
- Neuronal and physiological data readouts must be treated as data of the most personal kind; careless use must be treated as a gross infringement of a person's most intimate rights.
- The creativity and free will of persons is the greatest good.
- Technology must never impede the creativity of persons.
Imagine some bullshit regex word list fucks you over because your mental content contains some word. We are approaching the setup of a full-blown mind-controlled dystopia.
Like David Deutsch, I say that creativity is a not-yet-figured-out software mechanism, allowing persons to create art and understanding.
It is even scarier that we don't yet know what persons are, yet are already starting to interface with existing information processing systems.
Information processing is a powerful gift, and a danger. It can create malignant bureaucracies, which stabilize themselves (memetic forces). We know from the past decade of social media what the wrong kind of feedback loops can do to culture.
AI Regulation and Information bubbles
Relatedly, AI regulation is something few talk about. Yet it is one of the most pressing issues.
The ones who do talk about it here in Germany usually whine about too much regulation, a copy of the Musk-style US position.
Commercial bullshit, SEO, enshittification, bad software, and misinformation are all issues aggravated by generative AI.
- This is a danger to the functioning of our democracies.
This is even scarier given the recent developments of Musk's information-bubble crusade.
- We know that AI chatbots can modify people's beliefs: Bias in AI Autocomplete Suggestions Leads to Attitude Shift on Societal Issues
Current Conclusions, just writing down stuff
Ideas are animals.
Hyperdimensional computing and the neuronal ensembles
- My current conclusion is that hyperdimensional computing is the computer science that helps make sense of the neuronal ensembles of neuroscience. (latest stuff)
- What the representations are (the neuronal codes) is the core question of neuroscience.
- The work on neuronal ensembles (cell assemblies) is the neuroscience that has understood this (György Buzsáki, also Rafael Yuste and collaborators).
- John von Neumann already predicted a neurosymbolic, hyperdimensional computing framework:
It should also be noted that the message system used in the nervous system, […] is of an essentially statistical character. In other words, what matters are not the precise positions of definite markers and digits, but the statistical characteristics of their occurrence, i.e. frequencies of periodic or nearly periodic pulse-trains, etc. Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetic and mathematics: instead of the precise systems of markers where the position - and presence or absence - of every marker counts decisively in determining the meaning of the message, we have here a system of notations in which the meaning is conveyed by the statistical properties of the message. […] This leads to a lower level of arithmetical precision but to a higher level of logical reliability: a deterioration in arithmetics has been traded for an improvement in logic.
John von Neumann, The Computer and the Brain, 1957
The most relevant point in The Computer and the Brain is not that the brain is parallel, nor that it uses very little energy, but that the nature of the encoding is what von Neumann called 'statistical'; later 'holographic' in Tony Plate's holographic reduced representations. Like with the cell assemblies: e pluribus unum (out of many, one).
The information is smeared out over many active elements, and this makes it robust; a single one can be missing without problem.
Our computers can do arithmetic by comparing active elements which encode numbers pairwise. If a bit is missing, this is a different number.
Not so in the brain, where a single bit does not carry that much information.
That this makes arithmetic hard is afaik the curse of dimensionality in machine learning.
But in hyperdimensional computing, it is called the blessing of dimensionality.
It takes the idea of many active elements and goes with it, creating a computational framework around hyperspace with very interesting aspects.
A random hypervector (a seed hypervector) is a gensym; you will always recognize two related hypervectors.
This comes from the properties of hyperspace, where all points are roughly equally far apart. That this makes symbolic programming possible while using neuronal elements is what von Neumann also understood.
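A minimal sketch of these two properties in Clojure (plain vectors, no particular HDC library assumed; the dimension 10,000 is just a conventional choice): independently drawn seed hypervectors are nearly orthogonal, so a fresh seed behaves like a gensym, and a vector stays recognizable even when half of its elements are lost.

```clojure
;; Toy sketch: bipolar hypervectors (+1/-1) of dimension 10,000.
(defn seed-hv [n]
  (vec (repeatedly n #(if (< (rand) 0.5) -1 1))))

(defn similarity [a b]
  ;; normalized dot product: 1.0 = identical, ~0.0 = unrelated
  (/ (reduce + (map * a b)) (double (count a))))

(defn drop-half [v]
  ;; simulate losing elements: zero out each element with probability 0.5
  (mapv #(if (< (rand) 0.5) 0 %) v))

(let [a (seed-hv 10000)
      b (seed-hv 10000)]
  [(similarity a b)             ;; ~0.0 -> two independent seeds never collide (gensym)
   (similarity a (drop-half a)) ;; ~0.5 -> still clearly 'a', robust to missing elements
   (similarity a a)])           ;; 1.0
```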
Sparse Distributed Memory (P. Kanerva, 1988) is, conceptually, a biologically plausible random access memory. It is an earlier work by Kanerva, a short book describing hyperspace (called high-dimensional space in the book).
This is the continuation of The Computer and the Brain.
From Douglas Hofstadter's foreword:
This book, because of its pristine mathematical beauty and its great clarity, excites me deeply when I read it, and I believe it will have the same effect on many others… Pentti Kanerva's memory model was a revelation for me. It was the very first piece of research I had ever run across that made me feel I could glimpse the distant goal of understanding how the brain works as a whole
(more: current stuff, seriously check Pentti Kanerva's work. If you are like me you will love it).
This is the missing engineering approach that Minsky wanted neuroscience to be about.
Spirits
- The philosophy of programming (software) is the philosophy of top-down causation and the spirits of old.
- Like the dragons that turned out to be dinosaurs. The stories of the spirits and souls have merit, they point to what software is.
- Maybe magic, too, explores 'the stuff of thought', the material out of which you make imagination.
- I say the brain somehow explores the space of possible software during development; it is executing a software synthesis task.
- Like Stephen Wolfram 'putting the telescope to the computational universe'.
- A person is a software (a spirit) that inhabits and controls a nervous system.
- One could also say ideas, mentalities, pieces of software, memes, personhoods already existed in the computational universe and were received by the brain, and that the brain is a kind of 'spirit receiver'.
- I find the idea of the dreamcatcher as a kind of early attempt at AGI technology fun.
Design and Knowledge Are Central Questions for a brain software theory
- design: the knowledge or organisation principle, according to which modules of a system are arranged. It is hard to vary, so that a small change in the design deteriorates the function of the system.
- adaptation: A design produced by natural selection. You see the notion of hard to vary in the homologies (convergent evolution) in nature. Even slight deviations from some apparently locally optimal design result in less adaptive fitness.
- knowledge: information with causal power, information that causes itself to be stable, for example by replicating. Knowledge is one of the core concepts for David Deutsch.
- Genes Eye View and abstract replicator theory: Genes are knowledge that causes itself to be replicated (Dawkins 1976). Dawkins predicted computer viruses and introduced the meme in this book, because it was a philosophical discussion of abstract replicator theory, which, well, abstracts over replicators. In The Fabric of Reality, David Deutsch counts it as one of the four deepest theories of reality, the others being the theory of computing, epistemology, and quantum theory. They each supplement each other, and aspects like information span across them. The next deeper theory of reality will find unifications between them.
Design is when you consider the tradeoffs.
Rich Hickey
- Software incorporates design.
- Design is a form of knowledge, one of the central concepts for David Deutsch. That is, information with causal power to be stable.
- Neuro-epistemology is the question: whence cometh this design? A brain software theory is a theory of knowledge, saying how the brain creates its own design.
- There is no brain software theory without a brain epistemology. In this way, it is closely aligned to the problem of life.
- This must say what the information is. In biology, the current answer is the information encoded in the genes (Dawkins 1976).
- I subscribe to neuronal Darwinism, which is a fascinatingly fringe position. And of course, I'm into the neuronal ensembles, so I take the ensembles as the unit of replication; across time steps, not in copy numbers.
- It might turn out that it is relatively easy to make a system that can search all possible software. Supporting this intuition is the fact that you can write like five lines of Lisp that do exactly that; an exhaustive search over all possible software is trivial (a sketch follows at the end of this list).
- The question is: what are the sources of critique?
- And the critic is a central concept for Minsky.
- When I map critique to brain mechanisms, I think about inhibition, the processes that make it hard for activity to live.
- Any system is as robust as the kinds of assaults it survives, or from which it can heal.
- Levin reduces the problem of morphology to regeneration.
- David Deutsch emphasizes the sources of critique, or environment, for memes: Rational memes (like scientific theories) live in a regime of critique. The good ones survive when they incorporate hard-to-fake knowledge of the world.
- For example, it is hard to fake knowledge that bacteria cause cholera, and that boiling your water therefore has causal power over the illness. I.e. the idea of the bacteria really does say something about how the world works. Compare to this the idea that Zeus causes cholera. It is easy to vary this 'knowledge', to say that Odin causes cholera. In this way, easy to fake knowledge is like the fluent bullshit of modern LLMs.
- Cults are an example of irrational memes, which don't rely on a regime of critique, but on clumsy psychological kludges, for instance by incorporating the top-down idea of punishing dissidents.
- Irrational memes make themselves stable by suppressing critique. Therefore, they are absolutely inferior to rational memes, because no search process can use them to get closer to the truth.
- The brain must use some regime of critique that is entangled with the world in its ad-hoc epistemology.
- I.e. the equivalent of a rational memetic landscape. (Here I was musing about mechanisms, 'Socratic Wires', just some idea).
- Brain knowledge: (two kinds, but in what follows I will suppose that the same mechanisms support both).
- Kind Nr. 1: software design: These are pieces of technology, hard-to-fake truths about how to use the computer that the person is running on. For instance, how to use muscles, mentally navigate, pay attention, bring a thought to mind, use imagination, etc.
This is not remarkable per se, given a computational framework where the equivalent of random scripts (pieces of technology) appears from the randomness at the bottom (like intrinsic neuronal activity).
You wait for a good one to arise, and remember that one as the piece of information which has a satisfying effect (see the generate-and-test sketch after this list).
I wonder if this is the reason it simply takes time to think of something: you keep the working memory active and wait for a piece of activation which fits the constraints you are setting up. But this is just a rough idea.
- Kind Nr. 2: Traditional 'world knowledge', or common sense, or world model, etc. Ditto: hard-to-fake truths about how the world and people work.
- Whence cometh the brain knowledge? It is a question similar to how science works. Many AI theorists and neuroscientists think something along the lines of: children play with the world (and people) and build a world model by making the equivalent of scientific experiments.
What ad-hoc epistemology the brain uses is a question at the center of neuroscience.
I subscribe to constructivism; Piaget called it genetic epistemology.
David Deutsch calls this 'the idea must be first'. Buzsáki's version is that internally maintained brain dynamics make random symbols which are matched with experience and action. This is inside-out: the ideas don't come from the outside. They come from the inside and are then selected.
- My point is that searching the space of possible software is not a big deal. In a (genetic) hyperdimensional computing framework, I can always ask for a random hypervector seed and see what it does.
- It is the processes, the environments, that select which of the available ideas work (the critics of Minsky), that are important.
- This working must have to do with real, hard to fake knowledge of the world (and the workings of the computer), therefore it must be equivalent to a regime of rational memetics, that is creating the useful knowledge in the mind.
- Baby minds are a kind of scientist (uncontroversial);
- Real science works via Popperian epistemology, analogous to constructivist theories of brain knowledge.
- The strongest implementation-level counterpart to this that I know of is Buzsáki's inside-out view (see the book of the same name).
- Knowledge means that some information must be stable (for instance by replicating).
- What are the pieces of information that make up brain software design? What survives? Cui bono? What are the assaults (what is there to survive)?
- fallibilism: The position that we never know actual truth, but get closer to the truth. In The Beginning of Infinity, David Deutsch makes the case that our ignorance is infinite.
- Because without the challenges to ideas, there would only ever be one idea in each brain, the first one that survives.
- Internally generated, preallocated, hyperdimensional neuronal ensemble trajectories (of Buzsáki) are a great candidate for the genetic material of neuronal darwinism.
Monkeys have a superb working memory.
- It might turn out that a worse working memory, making it harder for ideas to survive, is one of the tricks of the human brain compared to monkeys.
- For instance, by having simply more inhibition in, say, the dorsolateral prefrontal cortex, making it a bigger challenge to be an idea which sustains activity support from it. (Just an idea; I have clumsily written it down here under the Challenger Node Hypothesis of working memory).
- The kind of twist, where bad working memory and creativity might have to do with each other is the kind of psychological musing that is fun to me.
- In either case, selection, critique, inhibition somehow make it so we have fewer ideas; And this is important in a regime where having ideas is not the problem, but narrowing them down is. Whatever makes brain software what it is, is the power of having smaller, faster, or fewer ideas. This seems to resonate with the intuition that elegance in programming is related to size of the program; But I'm not sure.
The baby brain has 'synaptic overproduction'; it is not the sheer number of synapses that is required to have ideas, because adults (after pruning) have ideas. Apparently, after a certain threshold of synapses is reached, ideas are supported.
This is also showcased by the cases of children who are missing most of the neocortex. You don't need a lot of neocortex to become a person. And in any case, you can get away with damage to the neocortex (if it happens early).
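As promised above, a toy sketch of both points in Clojure (the alphabet, the expression grammar, and the critic predicate are all arbitrary placeholders of mine, not anything from the literature): exhaustively enumerating candidate programs, or generating random ones, is trivial to write; all the interesting work is in the critic that selects which candidates survive.

```clojure
;; 1. Exhaustive search over 'all possible software' (up to a depth),
;;    over a tiny made-up alphabet -- trivial to write down.
(def alphabet '[0 1 x inc if])

(defn exprs [depth]
  (if (zero? depth)
    (seq alphabet)
    (distinct
     (concat (exprs (dec depth))
             (for [op  (exprs (dec depth))
                   arg (exprs (dec depth))]
               (list op arg))))))

(take 10 (exprs 2))
;; => (0 1 x inc if (0 0) (0 1) (0 x) (0 inc) (0 if))

;; 2. Generate-and-test: wait for a candidate that survives the critic.
;;    Here a candidate 'idea' is a random bipolar hypervector and the
;;    critic is an arbitrary constraint -- both are stand-ins for the
;;    brain's real variation and inhibition mechanisms.
(defn random-idea []
  (vec (repeatedly 10000 #(if (< (rand) 0.5) -1 1))))

(defn satisfies-constraints? [idea]
  ;; stand-in critic: does the idea correlate positively with a target?
  (let [target (vec (repeat 10000 1))]
    (pos? (reduce + (map * idea target)))))

(def good-idea
  (first (filter satisfies-constraints? (repeatedly random-idea))))
```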
—
Neuronal animism: 'The seed has been planted', 'an idea is growing', we are 'nurturing thought'.
The idea that the primitive datastructures of the brain are agentic and some form of living information patterns with causal effects in an information processing system.
They are described by a kind of biology.
This perspective makes the activation space of the brain a playing field for the activation animals (ensembles); They have a structure and form and there is an organismal perspective to take on them.
That is, there is a subcognitive plane of analysis, which is a kind of physiology and ecology of the neuronal ensembles.
Like organisms, they are adapted to survive, where the substrate they live on is also connected to sensors, motors, and other information processing device modules.
So that they are executed and interpreted, on the information processing system that they are part of, at the same time.
It is not their activity that needs to survive, but the information that led to their activation, in the engram.
Synaptic overproduction is the obvious candidate for large amounts of variation. Perhaps subsequent pruning makes them less alive, but their substrates (nets) usable information processing systems.
As if there were ghosts of living entities inhabiting the nets.
And such nets provide the interface 'I can pretend to be an agent, of the possible kinds of agents I have learned to be, for a while'.
What their biology is depends on what the substrates (nets?) and the wiring are. Give them the chance to inhibit their alternatives and they should do so; make them output motor data and they will be forced to recreate the general kind of situation that activated them in the first place; reset them all after a short amount of time and you select only the ones that are stable fast.
The neuronal replicator ('memetic') landscape is a harsh place for survival. Moshe Abeles (1982) pointed out that the electrochemical properties of the brain don't actually make it easy for activity to survive.
The notion of sheet inhibition (global inhibition), making Braitenberg-Palm thought pumps (Braitenberg 1978) is what gave me the idea that this is a system for modifying a memetic landscape.
Shitting on everybody evenly is already a way to weed out and select.
- I don't know but maybe preallocated ensembles (of hippocampus, see Buzsáki) can be a true genotype of neuronal darwinism.
- Then they are hyperdimensional seeds (P. Kanerva), and in four years' time a 4-year-old would have tried enough of them to find some good ones. Abstract, elegant software pieces that together make a language for describing the world: 'common sense'.
- Here is something really funny about neuronal seeds: this also means that some of the variation we see in humans is simply random. That one person can do xyz especially well - maybe they just had some neuronal seeds at the right time that fit well. Different tastes? Might just be utterly random, depending on what neuronal seed was available when you encountered the thing.
- We say the first impression with a person matters a lot, and perhaps that is because we gensym a symbol for the person, and the structure of the symbol also happens to influence our disposition toward the person.
- It is said that in Capgras and related syndromes the 'emotional grounding' is missing. Well, maybe. But maybe the hippocampus / temporal lobe damage in these patients made some person symbols go missing and be freshly allocated.
Influences
- I follow David Deutsch; his epistemology and world view are transformative. It is a superset of Dawkins' abstract replicator theory.
- Dennett- and Dawkins-style Darwinism was the core of my world view up until then.
- Minsky + Papert (The Society of Mind) and the Lisp philosophy of programming.
- Pentti Kanerva's hyperdimensional computing is the computer science following John von Neumann's The Computer and the Brain.
- Valentino Braitenberg was talking about the ensembles (cell assemblies) like pieces of music. His cybernetics put representation and information at the center. Therefore, he resonates with ideas from David Deutsch's constructor theory.
- György Buzsáki, Rafael Yuste and the neuroscience on ensembles and neuronal codes.
In progress / Want to do
- Software contract work to finance myself.
- Pi typer for memorizing digits of pi: online, blog post, code.
- First version (done for the moment) of Braitenberg Vehicles running in the browser.
- MeTTa Type Talk playlist, MeTTa emacs mode (simple)
- biologically principled modes of computation.
- Get into relational programming, because it might be useful for neurosymbolic AI down the line.
- Relational programming playlist (The Reasoned Schemer currently). (I'm putting MeTTa on hold currently, because Clojure is mature and enables me to be creative.)
- Get into Ian McGilchrist's ideas. He seems to be laying down a framework for neurosymbolic AI.
- 'post transformer architectures' (accidentally)
- Non-deterministic computation and programming in superposition.
- Introductory computer science stuff. I have so far acquired street skills for programming by building and collaborating.
- Get into audio processing and music synthesis for various reasons. Beginnings: YouTube
- On hold: a hyperdimensional version of Copycat. Current: exploring "Programming with Analogies", to gain some software skills necessary for solving the ARC.
- Go through and cook some of the algorithms from Compositional Evolution (2006) (Richard Watson).
- Take into account and understand G. Buzsáki's neuronal syntax.
- Explore Mushroom bodies as a kind of sparse distributed memory (pretty sure this must be analogous since they are cerebellum-like). Papers with mushroom body models: https://www.nature.com/articles/s41467-021-22592-4, https://www.pnas.org/doi/10.1073/pnas.2102158118, https://www.cell.com/current-biology/fulltext/S0960-9822(16)31288-X
- On mushroom bodies and early cognition, Gabriella Wolff is cool.
Thought Feed
Speculative stuff and I don't feel the need to add references, I just want to write down what I currently think.
This has the effect that I feel the need to update and correct my mistakes here. Updates to ideas then immediately have a place to land. And in general, having 'places' for ideas is what this website wants to be.
This is a catalog of misconceptions.
Synaptic failure rate
This is one of my favorite facts about neuroscience:
the rate of failure of synapses can be very high: a typical synapse in your cerebral cortex could fail more than 50 percent of the time. Fascinating! So why does the brain work as an essentially random machine, flipping a coin as to whether or not to transmit? … stochastic transmission could help them (neuronal circuits) explore a potentially vast computational space efficiently. Some random juggling could allow neural circuits to avoid settling into activity patterns that may arise first in time, let's say in response to a stimulus, but not be an ideal match. So with stochastic transmission, the circuits could get out of this rut and find a better fit. Human metallurgists made a similar discovery long ago: they jiggled the temperature of the oven as the steel settled so it became harder and more flexible as the atoms found a different configuration, a process called tempering. Stochastic transmission could temper neuronal circuits.
(Rafael Yuste, Lectures in Neuroscience).
I'm not sure, but one way I'm thinking about 'failure rate' is that it is a general assault on activity, which it must survive.
Local vs Global modes of computation
It comes up in this discussion as one of the most relevant points for me:
Prof. Dave Ackley is emphasizing local modes of computation: units only have a small signal light cone, akin to the fog of war in a video game.
The Movable Feast Machine is similar to a cellular automaton grid, where each cell makes local decisions; there is no information nexus where things come together.
His idea is that only a local-mode system can grow indefinitely, be robust and secure. After all, each element has to deal with the absence of the system from the beginning. And the complete system has to build itself in the first place, ergo healing (robustness) is built in.
Since the discussion is mostly about brain computation, Prof. György Buzsáki points out that the brain network is a mix of local and global connectivity.
The question of local vs. global mode of computation is actually one of the most profound questions regarding the fate of the universe.
In Life 3.0 (2017), Max Tegmark calculates that a 'global mind' spanning the universe or something would have 7 thoughts left until the end of the universe. I forget the details, but the reasoning is that in a global mode of computation, the signals would have to span this amount of distance, traveling at the speed of light.
Maybe there are other solutions like making ourselves smaller 'there is plenty of room at the bottom'.
But deciding on the local-global continuum of computation might decide between being a single mind with few, large thoughts or many minds with many smaller thoughts.
Sabine Hossenfelder recently asked us to keep an open mind on what is still on the table regarding physics.
I'm not a physicist, but I think she said that for all we know, faster-than-light memes might be possible.
For both memes spreading and cognitizing across a galaxy, signal transduction is required. The problem is boiled down to signal transduction (speed).
Showcasing again how central the notion of information is.
Whatever the brain is doing is not purely local computation, I think (siding with Buzsáki). This has profound implications for what kind of information processing it can do.
It might be multiparadigm, mining the bottom for whatever it (the substrate) can computationally provide. But then, as fast as axons go, creating a global information state (indeed a kind of nexus).
This is underlined by the fact that the global axons grow in thickness (transduction speed) with brain size. And the timings are all preserved, such that a mouse brain is humming at the same rhythm as a human brain; both have global hums at the same pace (see Buzsáki, Rhythms of the Brain).
Quoting V. Braitenberg, On the Texture of Brains, for its lucid biological reasoning:
If one had to design an enlarged version of a small monkey, one evidently could not simply copy the whole organism on a larger scale, just as an ocean liner cannot be just a larger edition of a fisherman's skiff. One of the reasons for this is that oars cannot be enlarged proportionally, since their size must maintain a certain relation to particles of constant size, namely to the people by whom they will be handled.
This principle of preserving some 'invariants' of a design is at play for brain rhythms. This is one of many hints (see Prof. Buzsáki) that a fundamental aspect of brain information processing is hooking into the brain rhythms.
what a theory of phenomenological consciousness needs to do
- Many say that asking for phenomenological consciousness is silly, similar to asking what life is. I thought this too, then David Deutsch convinced me otherwise.
- One needs to explain suffering and the relationship between suffering, mentality and ethics.
- A theory of consciousness is not a theory of consciousness if it doesn't explain things like:
- Imagine running the same person program on two computers: is that more suffering, the same, or less?
- Ethics, mentality, personhood, creativity are probably related, they appear in humans at the same time.
- David Deutsch's explanatory knowledge, also called understanding is related to common sense and therefore the biggest questions of AI.
- Scientific knowledge is sort of the stuff you don't see, the stuff of imagination. Hofstadter says they are analogies. How to make analogies is the core question of cognitive modeling.
- An AGI would be a person, and there is a theory of creativity needed to build it.
- A person narrows down the search space of possible things to think in a way we have not figured out how to program. Hofstadter calls that situation analysis, and situations are analogies (abstract scaffolds + fillers).
- In my opinion this is an aspect of the frame problem; And common sense, the frame problem, analogies and personhood are all related.
- Science is not about predicting the world, just as cars are not about burning gasoline; prediction is useful in science only for rejecting some candidate explanations.
- I don't buy the idea of the brain as a mere foretelling machine, exactly for this reason. It might have started out as a foretelling and surprise-minimizer machine, but something happened in humans that makes it grow a person. From this point on, its history doesn't matter, the way that the chemical history up to DNA doesn't matter for explaining biology.
- The same missing theory explains brain software, fun, elegance, creativity, imagination and suffering. Thereby saying how ethics and biological information processing are related.
- The concept of personhood is related to control and top-down causation. Frankly, it is at the heart of neuroscience, even though at the moment they give it different names: executive control, planning, or navigating cognitive maps.
Fun and play are core concepts tied together with personhood and creativity. I think that they are all open-ended by nature, and that there is no description of what play is. The same way that you cannot narrow down in advance what the structure of all possible scientific knowledge is. It is tied together with the infinite openness of the world, and infinitely not figured out.
We know almost nothing, we know almost nothing about how to have fun, and an infinite depth of fun lies ahead of us.
- I think that computing what a person does in their mind is similarly uncomputable to computing the next scientific explanation. This ties it in with novelty and computability.
- Minsky thought suffering is what prevents you from achieving your goals. (Society of Mind Lectures).
- My current idea is maybe suffering is when you are prevented from having fun in the Deutsch sense. I.e. you are prevented from being the creative person you could be. No wonder doing things like income tax is painful.
- When my leg or arm goes numb (falls asleep) because I lay on the nerve or whatever, is not this lack of signaling, this lack of control somehow an excruciating source of suffering?
Subcognitive Modeling
An idea is only as good as its alternatives.
(I think that is attributed to Bohr or something).
Is there any theory of cognition that is not roughly described by:
Some subcognitive elements, agents, resources, modules contribute to a vaguely polyphonic, orchestra-like whole. Resources are both in hierarchies and metahierarchies, contributing to a messy computational system where each deals with a limited amount of data. One resource might play a solo for a while; at other times, resources might provide an emergent 'mentality fabric' together. Resources are themselves less capable than cognition; this is required by gradualism (Dennett).
- Minsky called them agents and resources, and had some engineering design ideas of their wiring.
- Hofstadter has analogies, which are situations that are allowed to be arbitrarily small, the 'I', like a vortex or weather pattern.
- Dennett has the abstract notion of multiple drafts. He mentioned the neuron as agent.
- In Global workspace and related models the modules are subnetworks.
- Leibniz said something like 'there are infinitesimally small pieces of perception that in sum we call personality'.
- Joscha Bach likes to call it a loom, where the patterns can also be interpreted (e.g. by the muscles); the symbols are woven and arranged (they dance, I want to say), until they represent a coherent whole.
I somehow had the idea that actually theology (and maybe Ian McGilchrist) is a source of alternatives. Theology is actually a kind of cognitive science I think. They did a kind of science with questions like what is the spirit of god?, can the mind be divided? (More at Peter Pesic: Polyphonic Minds).
eye movement, visual field position, mental navigation
(An example of what playing the brain instrument might be).
I think that for instance eye movement is a low dimensional lever over the complete system.
Such that wherever the eyes move (or wherever attention loci move), this communicates much to the rest of the system. Akin to the mouse cursor of a computing interface.
It says, 'this thing is highlighted now'. And if this thing creates activity patterns and those activity patterns in turn happen to cause me to move the eyes to the thing, they are a stable variation.
Suppose the hippocampus provides a continuous stream of little pieces of music (G. Buzsáki, The Brain from the Inside Out), which are akin to hyperdimensional seed address vectors (P. Kanerva).
Then a moment in time, a place, and a mental place are all allowed to be made from the same stuff: addresses into a hyperdimensional associative memory.
In my introspection, I sometimes feel that words have places. I certainly feel like using imagination and mental navigation are linked. (It actually is Demis Hassabis' PhD thesis: the hippocampus is for navigating cognitive spaces, as well as world territory.)
Using the body to move to some place and mentally navigating seem to be supported by the same mechanism, introspectively.
Whatever neuronal ensembles are in synchrony with the incoming sensor signals have downstream effects together with them; they are bound as an object.
This is already useful before learning anything: activate ensembles encoding the current eye position together with the ensembles of sensor and association areas (they are random initially). Already this is a prototype for the kind of object being looked at.
Via plasticity (online plasticity or fast weights is allowed to be required), one can move the eyes somewhere else. Still, the ensembles encoding the first object stick together. (We know now, what humms together, wires together).
And now they implement an objectness with eye position data (encoding a position in the visual field) and sensor data together.
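A toy sketch of this binding story in hyperdimensional terms (Clojure; element-wise multiplication as binding is a standard HDC/VSA operation, while the 'eye position' and 'sensor' vectors here are made-up placeholders, not a claim about actual neuronal codes): the bound pair is its own symbol, dissimilar to both parts, yet given one part you can unbind and recover the other.

```clojure
;; Bind an 'eye position' hypervector with a 'sensor data' hypervector.
;; With bipolar vectors, element-wise multiplication is its own inverse,
;; so multiplying the bound pair by one factor recovers the other factor.
(defn seed-hv [n] (vec (repeatedly n #(if (< (rand) 0.5) -1 1))))
(defn bind [a b] (mapv * a b))
(defn similarity [a b] (/ (reduce + (map * a b)) (double (count a))))

(let [eye-pos (seed-hv 10000)
      sensor  (seed-hv 10000)
      object  (bind eye-pos sensor)]          ;; the 'bound as object' record
  [(similarity object sensor)                 ;; ~0.0: the object is a new symbol of its own
   (similarity (bind object eye-pos) sensor)]) ;; 1.0: unbinding with eye-pos recovers the sensor part
```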
Computer programs are the only toy that starts having a spirit of its own.
Interactive programming
There are two words for 'knowing' in German:
wissen - Knowing of a fact, like this tree is such and such tall.
kennen - Knowing something by having experienced it. In the sense that one knows a person.
To know (kennen) the Lisp REPL is so far the most profound computer - user, program - programmer experience that I have.
Lisp is a cleaning up of computer science (Minsky), and has been called the Maxwell's equations of computer science.
David Deutsch emphasizes absolute notions of what is possible (Constructor Theory). For instance, Turing Completeness is a theory of this form.
In Lisp, the programming task is possible while the program is running. It is an absolute difference, one of a kind.
A Python interpreter running on Linux with a code editor in principle has the same universality, but it is not the central concept around which Python is designed.
The reason is that Unix is interactive; this is historically the legacy of J.C.R. Licklider and the early hackers.
Interactivity, dynamism, self-referentiality (metaprogrammability) are closely related concepts and showcased by Lisp.
The interactivity and the macros must come from the same underlying property of Lisp. Because otherwise there would be two absolutely powerful aspects of a programming language invented with a single stroke, which seems unlikely.
Intuitively, this seems right. Consider that if a language has Lisp macros, you can simply run the interpreter/compiler as a REPL. Conversely, if you want to run a (Lisp) REPL, everything that the language is able to do must be available from that program.
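A small Clojure illustration of both halves of this, as I understand them (the example itself is made up; `greet` and `unless` are throwaway names): redefining a function at the REPL changes a system that is already running, and a macro is just a program that receives code and returns code.

```clojure
;; At the REPL, while a loop is already running in another thread:
(defn greet [] (println "hello"))

(def running
  (future (dotimes [_ 3] (greet) (Thread/sleep 1000))))

;; Re-evaluate this form mid-run and the loop starts printing the new
;; behavior -- no stop, recompile, restart cycle.
(defn greet [] (println "hello, live system"))

;; And a macro: code that receives code and returns code.
(defmacro unless [test then else]
  `(if ~test ~else ~then))

(unless false :yes :no) ;; => :yes
```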
Batch-oriented programming, on the other hand, is in a ladder-to-the-moon kind of situation.
For these reasons, there is no doubt in my mind that a thousand years hence, Python and Java will be the content of history, but Lisp or its descendants will be part of the theory of programming.
In my personal perspective, Lisp is as important to programming, and the concrete activity of programming, as the theory of evolution is to biology.
Also, interactivity and creativity seem to be linked; Similarly to the notion of every point is a growth point (quote attributed to John Wheeler).
Lisp allows every point to be a growth point. It is this ultimate malleability and absolute languagability that makes it be said it is a building material, not a programming language.
When one goes from Lisp to another language, one feels restricted (Paul Graham, The Taste Test).
My hunch is that this feeling of restriction is not to be overlooked as some personal issue. But that it is at the very core of what fun is.
Related concepts: feeling of restriction, suffering (therefore ethics), creativity, sense of aesthetics and fun; And therefore personhood.
It is sometimes said that Lisp is the closest programming language to human thought. This is maybe because it is about ideas, where other languages are about the machine.
But another sense is possible: the way that Lisp is capable of meshing with the programmer's mind.
It is true in Lisp and in science that every point is a growth point. Lisp is able to take on any form, the way an octopus can.
Like in human language or the stuff of imagination, because there is no limit in the kind of abstraction that is possible. In some ways it offers free, creative expression.
This is noticeable when one goes to a language without macros, and one feels restricted rather than enabled by the language.
This building material that one uses to have effects on a computer, and this directly on-the-fly, is the closest we have to something like a stuff of thought.
When we modify our thought, we debug.
Lisp is the language that allows one to debug a program akin to debugging one's thought. And it is the language that allows one to create new technology (languages) at any point.
This too, is common in Lisp and in imagination.
It is not imprisoned by pre-existing boundaries, decided upon by some language designers, benevolent gods. Rather, all points in the program structure are reachable, graspable, and doing modification is thinkable. This makes it fluid like water, moldable like clay, neat like a stack of white paper.
It is infinitely (absolutely) put-togetherable - and this too is a property of both Lisp code and creative thought.
I am under the impression that the mind is capable of creating its own software on the fly; it is crafting technology wherever it decides to do so. This is an 'online', non-batch-oriented programming paradigm.
The most obvious missing aspects are subsymbolic, fuzzy, high dimensional, perceptual, 'right hemisphere'(?), 'learning'.
So a neurosymbolic (hyperdimensional) Lisp might be a good thing to spend a few years on.
The space of possible software ⊃ The space of biological designs
If computation is the physics of software, then programming is the biology of software, swirling in its primordial soup and destined for grandeur.
top down causation, spirit, software
The philosophy of programming is the philosophy of top down causation.
Computer science has found the spirit in the world.
Bureaucracy, information processing, and top-down causation are related.
A map is a kind of bureaucracy of a territory.
These are all the same:
map - territory, representation - representant, symbol - high dimensional sensor data, symbol - 'Das Ding an sich' (Kant), checkboxes - real world situation (bureaucracy).
Abstraction is to say of many things that they are one thing.
In this way, if a bureaucracy is missing a checkbox for the real-world situation, it fails. And if it modifies the world to adhere to its checkboxes, that is top-down causation.
From this perspective, biological systems, software, bureaucracies, meme-plexes, cultures, societies, and languages are all information processing systems which stabilize themselves via top-down causation.
Abstraction in programming is to create a language, in terms of which a design is easy to express. I think top down causation works best if there are low dimensional interfaces, which the system can use.
An example given by Michael Levin is that money is a single scalar value (one-dimensional), and it enables communication in a society. In developmental biology, there is the calcium concentration (gradients). This is a single value (in 3D space), on which developmental processes are hooked.
An example given by Jessica Flack is social hierarchies in primates. A one-dimensional ordering (social hierarchy) determines the high-dimensional (real-world) situations - how many dominance and submission gestures are made, who helps whom in a fight, and things like this.
Another example: termites (the insects) have a low dimensional temperature signal, which makes them build higher or lower chimneys.
Levin's epic contribution is that there exists a (cellular) bioelectric map, exposing a language, an API, that determines the 'morphogenetic goal' during development of organisms.
Also, the biological substrate is programmable via this interface.
The conjecture is that brain software also has such middle, low-dimensional language layers, through which it communicates with its high-dimensional substrates.
Jessica Flack calls that the Hourglass model of collective computing.
The other aspect, which Turing was thinking about too, is that brain software development is sort of a morphological problem as well. This time, what is developing and growing is a software, or mind technology if you will.
There should be certain design goals, which, like a bureaucracy, bully the (agentic) substrates of the nervous system into a software architecture.
Other ideas for middle layers in neuroscience:
- eye movement position: Only one is possible in a 2D space; in a neuro-Darwinistic framework, all the agents would want to be friends with eye muscle neurons at some point. Morris Bender contributed to the neurophys that says that all of neocortex has an effect on eye movement (cool lecture). It's the kind of thing you don't hear much about because of the current mainstream neurophrenology of cognitive neuroscience.
- 'locus of attention': perhaps Attention Schema Theory has some merit and brain agents would communicate via a single (or two) values of wherever the attention locus is. This would be akin to the mouse position on a computer interface; whatever is being paid attention to is 'selected', perhaps, for the rest of the brain to deal with.
- a limited global workspace as a kind of working memory funnel.
- rhythm: Whatever is in sync together is bound and has downstream effects together. (I love György Buzsáki's work on this). Also synfire chains (Moshe Abeles), a kind of ensemble; Von der Malsburg and Elie Bienenstock are also people who thought about synchronicity of subnetworks / ensembles / cell assemblies. This might be how something like global workspace is implemented, by saying something like: there are 7 rhythm slots, into which neuronal activity can go in sync.
- sugar concentration, dopamine, neurotrophic factors etc. might be a kind of currency
Modulation and Drivers
In M. Sherman's vocabulary, there is driving activation that carries messages, and modulation. The above stuff might be mostly modulation of messages.
Like painting colors on wires and saying 'blue is important now', without bothering with the information content.
I imagine the brain to be a big magical library, where books are passed around. Books can have colors, for example by being handled by somebody with color on their hands.
Some mechanisms work only on the level of the books and their properties. But others need to open them and read the information content.
In The Society of Mind, Minsky asks us to imagine the following:
In a tool shop with many tools, whenever you fix a bike, first color your hands blue. Whenever you fix car tires color your hands red. After doing this for a while, the tools in the shop are color coded with blue tools for bike fixing and red tools for car tire fixing.
A k-line, or knowledge-line (a hypothetical brain design element), is a wire that activates a bunch of resources in turn.
Interlude: contextual memory, contextual / associative memory models
[insert notes on how to implement it with Sparse Distributed Memory]
- This is similar to the key-value pairs of machine learning attention.
- This is a contextual memory:
- Give each k-line a color, each k-line activates a bunch of resources (call them locations)
- Call the color the address of these locations.
- Then experience a situation, including thought and action.
- Via plasticity, associate everything that is active together.
- In high dimensions, everything has the chance to be connected to everything else, so don't worry about missing wires between things.
- This situation will then be associated with the address. So you can retrieve it, contextually.
- Parts of a situation might activate the address and retrieve the complete situation.
- A Hopfield net is doing roughly this, with the color being basins in the attractor landscape.
- Thinking about associative memory models is a bit like thinking of this attractor landscape, the shape of the basins, and what kind of activity is 'filling' them in what ways.
- There is a blue space in the mind, and depending on the amount of excitability, it is activated either in its core (the trough of the attractor), or up to its fringes (Hebb 1949).
- In high dimensions you don't run out of colors for each situation you encounter in your 100 years of life.
(This is a counterintuitive thing about large numbers of elements; hyperspace H is seemingly infinite.)
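A rough sketch of such a contextual memory in Clojure, with a Sparse Distributed Memory flavor (heavily simplified; the dimension, the number of hard locations, the activation threshold, and the 'color' / 'situation' vectors are all arbitrary placeholders of mine): write a situation under a color address, then retrieve it again, even from a noisy version of that address.

```clojure
;; Minimal SDM-style contextual memory: random hard locations, write by
;; adding the data vector into every location close to the address,
;; read by summing the locations close to the cue and taking the sign.
(def dim 1000)
(def n-locations 200)
(def threshold 0.03) ;; a location is active if its similarity to the address exceeds this

(defn seed-hv [] (vec (repeatedly dim #(if (< (rand) 0.5) -1 1))))
(defn similarity [a b] (/ (reduce + (map * a b)) (double dim)))

(def hard-locations (vec (repeatedly n-locations seed-hv)))

(defn active [address]
  (filter #(> (similarity (hard-locations %) address) threshold)
          (range n-locations)))

(defn write [memory address data]
  (reduce (fn [m i] (update m i #(mapv + % data)))
          memory (active address)))

(defn read* [memory address]
  (let [summed (reduce #(mapv + %1 (memory %2))
                       (vec (repeat dim 0)) (active address))]
    (mapv #(if (neg? %) -1 1) summed)))

(def empty-memory (vec (repeat n-locations (vec (repeat dim 0)))))

;; a 'color' (k-line address) and a 'situation' stored under it
(def color     (seed-hv))
(def situation (seed-hv))
(def mem (write empty-memory color situation))

(defn noisy [v p]
  ;; flip each element with probability p: a partial / degraded cue
  (mapv #(if (< (rand) p) (- %) %) v))

(similarity situation (read* mem color))              ;; ~1.0
(similarity situation (read* mem (noisy color 0.15))) ;; ~1.0, survives a noisy cue
```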
If we are the genes and we build a library that is supposed to navigate an animal, using colored books as information devices, I think per this logic we are not allowed to peek into the content of the books. But there are heuristic 'outside' kinds of mechanisms that we can build in.
- labeled lines: This is a 'hardcoded' wire. There are labeled lines from the smell and taste receptors to brain stem and neocortex for instance. This sort of makes sense, you want to build some kind of basis of 'bitter is bad' for instance. We can require that the genes can make labeled lines that make sense.
- book length: If I give a book to an information processing module, I could limit the time for it to have output, or, I could have a mechanism that looks at the page length of the book or something. Presumably, such things are implementable via neurophys.
Neuronal Codes
R. Yuste: The neuronal ensemble is the neuronal letter.
G. Buzsáki: The neuronal ensemble (cell assembly) firing in gamma frequency is the putative neuronal letter. Arranged via the rules of neuronal syntax into neuronal words.
The ensembles are described well with hyperdimensional computing, and HDC/VSA is a great candidate for neurosymbolic frameworks.
Development is Software Synthesis and Neuronal Darwinism
Development goes from synaptic overproduction to synaptic pruning, and critical periods (McCulloch and Pitts).
The interpretation is that from 'high possibility spaces', what is active, 'used', is leftover. Somehow this 'being leftover' is exactly what makes neuronal areas functional.
Manfred Spitzer's analogy is an elephant that tramples through the jungle. Afterwards, you keep the trampled paths, and out comes a path system made from preferred elephant paths.
My current conclusion is that the wiring of the brain allows this jungle to be the space of possible software.
That during development, a brain is haunted by the spirits, possible software pieces, presumably made from activation patterns (ensembles). These manage, like neuronal replicators, to stabilize themselves, or be discarded.
They become part of the software system repertoire, like scripts to be executed. Or part of a memory repertoire, which is perhaps roughly the same thing.
My current neuronal darwinistic (Gerald Edelman and William H. Calvin and arguably Dan Dennett) idea is that once neuronal replicators exist, they have 2 main evolutionary drivers.
- 1: Become good tools. Highly composable building blocks - languages, used across many situations. In other words, abstractions. "Analogies" of Hofstadter. They don't plan ahead, they are just useful pieces of a software system.
- 2: Large, effective bureaucracies, up to habits and personalities. Systems of neuronal replicators with a large cognitive light cone (Levin). Since they run on an information processing system, they have the chance to plan ahead, scheming for their survival.
Arguably, if evolution hit on a way to breed mental animals on its brain substrate that are like 2, using 1 to be competent, exactly this long-term survival is the function of the brain.
Some bureaucracies (also information-bubbles, cultural memeplexes) are cancerous and life-sucking. I call those malign bureaucracies.
I have some hunch that there are information processing systems that support creativity - useful, friendly bureaucracies. And bureaucracies that limit creativity somehow (malign bureaucracies).
When we have a theory of personhood and creativity, this distinction might be obvious in some way.
Linux and FOSS tend to be friendly; big tech and proprietary software - mixed.
It is unethical to contribute to the functioning of malign bureaucracies. It would be scary if AI amplifies malign bureaucracies.
My current idea of the mind is that there exists an animal that is a user of a mostly friendly bureaucracy. And a key to a theory of the cortex would be that cortex is not the person; it's a computer system module that is used. (Large amounts of cortex can be missing and a person can be fine - why?).
I only learned what using a computer is via Linux. Controlling a computer has an almost erotic charge to me.
My idea is that algorithms are building blocks and software engineering is architecture. Software engineering is making a system work, dealing with the world; computer science is studying computation and algorithms.
The difference between biochemistry and physiology; Molecular vs. organismal.
This should be the difference between computational neuroscience and cybernetic psychology (old term from R. Heinlein). This is a scruffy AI stance to take.
In biology and software engineering, complexity is a necessary evil and not the defining character. I think that it is expressivity, control, dealing with the world, and the elusive elegance that is at the heart of it.
Programming is the trick of writing a virtual machine in terms of a simpler machine.
The simple machine now acts as if the virtual machine existed. And we can pretend it does.
The virtual machine, the interpreter, is instructed by short codes, a new language.
This is the kind of pretend that is the illusion in Dan Dennett's user illusion.
Syntax: A rule system for the combination of elementary symbols. Greek sun-tassein: arrange together.
Semantic: The study of meaning. Greek sēmaínein: signify, show, point out, indicate, signal. I like the German 'bezeichnen': 'to give a symbol'.
Interpreter (computing): A computer program that pretends to be a machine that understands a language, its instruction set (short codes).
In computing, semantics is defined by the interpreter. And the interpreter and its language are the central concept of programming.
Code: An information signal suitable for interpretation by an interpreter. Optionally this undergoes encoding and decoding steps (like transcription in genetics).
In computing, sometimes source code (human readable) is compiled to a language which is closer to machine language. For instance, Java source code is compiled to JVM bytecode. And the JVM (Java Virtual Machine) is the interpreter of JVM bytecode. (We see that compilation is a secondary concept in programming, the interpreter being the primary one.)
Code is transformed to more and more primitive languages, i.e. more and more primitive virtual machines that pretend to be able to understand a certain kind of language. All tasks are decomposed into simpler sub-tasks.
Until it bottoms out at instructions which are implemented in terms of (electrical) circuits. Thereby using physics as the bottom interpreter of computing.
Evaluation: Or interpreting, the central algorithm of the interpreter, where the input is a programming language and the output is effects, either on the information processing system (computing) or side effects (like printing to a screen).
Construction: A process where an information processing system has the side effect of some physical transformation. (Printing to a screen is a construction. You see that current computers are extremely limited.)
Repertoire: Roughly the instruction set (of a programmable constructor in constructor theory). It is the set of all effects that the interpreter can have, given any program.
The missing universal constructor (David Deutsch) is a computer that has all possible constructions in its repertoire.
An interesting aspect of the repertoire of the universal constructor is that by saying what is possible, it is a description of physics. To put it another way, it will say the syntax and semantics (possible meanings) of physical reality. Sounds strange, but how can it be otherwise?
For instance, if it is a law of physics that the speed of light cannot be exceeded, then all constructions that require faster-than-light travel would basically 'error out' for the universal constructor. If, on the other hand, a construction is possible, the universal constructor will be able to bring it about. Perhaps by building the machine that builds the machine… It will allow a technology stack that lets one express constructions, as if by 3D printer, but general.
I imagine this to be a computer with a module that is something that looks more biological, growing substrates, giving some sort of substrate a context to grow;
But so far, we don't know what universal constructors will be like.
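Since the interpreter keeps coming up as the central concept, here is a minimal toy interpreter in Clojure (a made-up arithmetic language with numbers, variables, `add`, and `mul`; nothing from any real system): the interpreter is just a function from code plus an environment to a value, and the semantics of the toy language is whatever this function does with the code.

```clojure
;; A tiny interpreter: it 'pretends to be a machine' that understands
;; a toy arithmetic language.
(defn evaluate [expr env]
  (cond
    (number? expr) expr
    (symbol? expr) (get env expr)
    (seq? expr)    (let [[op & args] expr
                         vals (map #(evaluate % env) args)]
                     (case op
                       add (apply + vals)
                       mul (apply * vals)))))

(evaluate '(add 1 (mul x 10)) '{x 4}) ;; => 41
```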
In molecular genetics, for protein-coding genes:
- The elementary symbols: the nucleotide bases A, T (which becomes U in transcription to mRNA), C, G.
- The means of arrangement (syntax proper): polynucleotide chains, e.g. GATTACA.
- The cipher: a group of three bases, like UAG.
- This makes a genetic word, a triplet, or codon.
- The interpreter: the ribosome, mapping (giving meaning to) each possible codon. (This is called the genetic code. Thank you, Sydney Brenner.)
- (non-coding genes have a different interpreter)
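To make the analogy concrete, the same shape in Clojure (the map is a tiny excerpt of the standard genetic code, and `interpret` is my own toy stand-in for the ribosome, ignoring all real molecular detail):

```clojure
;; A tiny excerpt of the standard genetic code as an interpreter table.
(def genetic-code
  {"AUG" :Met   ;; methionine, also the start signal
   "UGG" :Trp
   "UUU" :Phe
   "AAA" :Lys
   "UAA" :stop
   "UAG" :stop
   "UGA" :stop})

(defn interpret [mrna]
  ;; read the message three letters at a time until a stop codon
  (->> (partition 3 mrna)
       (map #(genetic-code (apply str %)))
       (take-while #(not= :stop %))))

(interpret "AUGUUUAAAUGA") ;; => (:Met :Phe :Lys)
```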
When you are a computer programmer and you think of the interpreter and the universal constructor, it is a short way from there to consider how far the interpreters go down.
That physical reality is the interpreter for an elementary physical language. That would be the arrangement of something like the most elementary particles or something (I'm not a physicist).
Physics would be the language of reality.
The one who understands this language, which arrangements have which outcomes (interpretations), would know how to bring about any possible transformation.
And they would know what is possible and what is impossible.
What is possible 1) conforms to the syntax 2) has meaning.
For instance, the genetic code AUGAUG is syntactically correct, but meaningless, because a start codon inside a protein coding sequence has no meaning (for the sake of argument).
Apparently, Lucretius already had this idea: that reality is like language or something.
Have to read: The Instruction of Imagination. I think they might be arguing for the case that Imagination is a brain software module (an interpreter) that is evaluating natural language.
Cyberanimism, Memes, Abstract Replicators
Joscha Bach calls it Cyberanimism.
Valentino Braitenberg called it the spirit (Geist). That is the organisation principle, the principle of living, the good idea (Dennett), or perhaps the soul of a thing.
That spirits (software) animate the physical world is what Joscha Bach calls Cyberanimism.
Is it possible to engineer software animals? One should keep an open mind what that could mean.
A spell is a technology. In computing, that is a procedure. In biology it's subprograms.
David Deutsch
: Any sufficiently understood magic is indistinguishable from technology.
(About possible worlds, in The Fabric of Reality.)
The wizard, the scientist and the hacker are creating knowledge about the magic system of the reality they run on.
Other main influences are Richard Dawkins's abstract replicator theory, and Marvin Minsky and Seymour Papert's philosophy of programming and AI.
Programming, especially when done with care and craftsmanship, has been compared to gardening, and to forms of expression like poetry and music.
I think it is not by accident that we say the idea has been planted, the thought is haunting me, …
Ideas are alive, ideas are a kind of animal. 👈
One might as well ride with one's outlier ideas. If one's intuitions are wrong, one is doomed anyway.
Art
- Art Diary
- Humble beginnings (gallery).
Setup / Questions
- How do persons develop as (brain) software on brains?
- development: a software synthesis task.
- Neuronal nets are perhaps the bricks, but a building is described in terms of architecture, not bricks.
The most relevant neurophilosopher of our time, in my opinion, is Prof. György Buzsáki.
1. Cleaning up the Cell Assemblies with the notion of the reader-centric cell assembly.
Here, I argue that Pentti Kanerva's address decoder neuron (Sparse Distributed Memory, 1988) maps to a reader-centric cell assembly.
The putative neuronal letter is an ensemble firing at gamma frequency.
In a hyperdimensional computing framework, I would map it to a (seed/basis) hypervector.
2. Introducing the notion of neuronal syntax, a rhythm encoding
Brief analogy to the genetic code:
If there is a code, there is an interpreter / interpreters.
Genetic code (Sydney Brenner):
- letter: nucleotide base
- word: codon (a triplet)
- interpreter: ribosome
Neuronal computation:
- letter: γ oscillation packet, a discrete ensemble
- word: a series of γ cycles, for instance ~7 in one θ cycle (sketched as data below)
- sentences / composites / hierarchies (called the means of combination in programming): ??? 👈
- interpreter: ??? 👈
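As a purely illustrative data sketch (my own Clojure notation and toy thresholds, not an established formalism): a neuronal 'word' written down as one θ cycle containing an ordered series of γ-cycle ensembles, and a 'reader' that only assigns meaning to spikes that arrive from the neurons it listens to, inside its time window.

```clojure
;; Purely illustrative: a neuronal 'word' as a handful of gamma-cycle ensembles
;; nested in one theta cycle, and a reader-centric 'letter' that only exists
;; relative to a reader (its listened-to neurons and integration window).
(def neuronal-word
  {:theta-cycle 1
   :gamma-slots [#{:n3 :n17 :n42}      ; each slot: one ensemble (a 'letter')
                 #{:n8 :n17 :n90}
                 #{:n3 :n55 :n71}]})   ; ... up to ~7 slots per theta cycle

(defn read-ensemble
  "A reader with a time window: collects the spikes it can hear in time,
  and reports an ensemble only if enough of its neurons fired together."
  [{:keys [listens-to window-ms threshold]} spikes]
  (let [heard (->> spikes
                   (filter #(< (:t %) window-ms))
                   (map :neuron)
                   (filter listens-to)
                   set)]
    (when (>= (count heard) threshold)
      heard)))

(read-ensemble {:listens-to #{:n3 :n17 :n42} :window-ms 25 :threshold 2}
               [{:neuron :n3 :t 5} {:neuron :n17 :t 12} {:neuron :n99 :t 14}])
;; => #{:n3 :n17}
```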
Inside Out (Book The Brain from the Inside Out 2019)
- action comes first (also Rodolfo Llinás, Peter Godfrey-Smith).
- The brain maintains its own internal dynamics. (This is a departure from the data-input-output model of current AI).
- preallocated symbols are matched to action and experience.

[The Brain from Inside Out, G. Buzsáki 2019, p 192, © Oxford University Press 2019]
speculative
This might open the door for a true neuronal memetics, where the preallocated symbols are the 'genome' and their effects are their phenotypes.
With sleep being a phase where the dynamics of genotype, phenotype and their relationships are maintained and groomed?
Neuronal animism is the hypothesis that there exists a kind of agental, memetic datastructure in the brain.
Indeed, that the primitive datastructure is an agent, is alive.
Then agentness is more primitive than even objectness.
Just an idea: The brain creates a virtual plane, a new kind of ecosystem, in which 'fleeting' software animals grow, survive, live. They have a structure and function of their own. (early/outdated notes here).
They are not all that fleeting, when we consider habits, personalities, mental technologies, …
Rafael Yuste calls the question of the neuronal code "the holy grail of neuroscience".
The question of the neuronal interpreter(s), how they relate to the control centers of the brain and the world, is the core question of neuroscience as far as I am concerned.
Peter Pesic in Polyphonic Minds makes the case for considering this in terms of music: consonance, dissonance, rhythm and so forth.
Truly fascinating.
If I understand correctly, the contemporaries of J. S. Bach would have expected that the way to make artificial persons (AGI) is by making music.
"He created exemplary fugues that served as idealized models of mental function, virtual minds that conversed or argued with themselves." (p. 163).
The relationship between music and our understanding of math and science is a topic of Pesic's: P1, P2.
Questions:
- The question of why music has character. What music theory is studying, the fact that one jingle sounds happy and one mysterious.
- Since I am into memetics and the spirits that inhabit the world, I'm intuiting that there is a kind of biology that would say how music creates non-zero-character-having, living, animal-like software entities, instantiated on brains for a while. Character then not as mere analogy but as some kind of primitive in a memetic science of software animals.
- non-zero character having is a theme of my current thinking.
- Might the relationship between music and emotion, in hindsight, seem like an obvious pointer to the missing theories of consciousness and creativity?
- Why the flash-lag illusion?
Rough Outline of Constructor Theory of Adaptation
Update Compositional Evolution:
This is an update on the question of what kind of algorithm natural selection is.
Richard Watson's Compositional Evolution is exactly such an update.
Here, I use the 'gradualist framework' and describe a hill-climbing algorithm. Watson shows that, via symbiosis or sexual recombination, natural selection can also act as a divide-and-conquer algorithm.
And things that would look unevolvable with hill climbing might be evolvable with divide and conquer.
Specifically, the point that there must be viable intermediates is massively updated by Watson.
Compositional Evolution doesn't make adaptationism, or even 'gradualism' in a wider sense, wrong in my view. It shows that 'gradualism' was more interesting than just hill climbing.
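A minimal sketch of the difference (my own toy, not Watson's actual models): with a fitness function built from two modules that only score when complete, greedy single-bit hill climbing stalls on the plateau, while recombining two individuals that each solved one module reaches the optimum in a single compositional step.

```clojure
;; Toy landscape with two 'modules' that only score when complete.
;; Greedy single-bit hill climbing stalls; recombination composes partial solutions.
(defn fitness [genome]
  (+ (if (every? pos? (subvec genome 0 4)) 1 0)    ; module A complete?
     (if (every? pos? (subvec genome 4 8)) 1 0)))  ; module B complete?

(defn best-single-flip
  "One greedy hill-climbing step: try every single-bit flip, keep only a strict improvement."
  [genome]
  (let [variants (for [i (range (count genome))]
                   (update genome i #(- 1 %)))
        best     (apply max-key fitness variants)]
    (when (> (fitness best) (fitness genome)) best)))

(defn crossover
  "Compositional move: module A's half from one parent, module B's half from the other."
  [a b]
  (vec (concat (subvec a 0 4) (subvec b 4 8))))

(def has-a [1 1 1 1 0 0 0 0])   ; solved module A only
(def has-b [0 1 0 0 1 1 1 1])   ; solved module B only

(best-single-flip has-a)           ;; => nil  (no single flip strictly improves)
(fitness (crossover has-a has-b))  ;; => 2    (both modules, in one recombination)
```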
An update on one of the deepest aspects of my world view, Darwinism.
Richard Dawkins in The Genetic Book of the Dead makes what seems at first glance a daring claim.
Forgive me, I don't have the exact quote, but it goes something like:
I predict that it is possible to breed any species to reverse its preferences so that individuals would prefer pain over no-pain.
This at first seems so out there, how would an animal prefer pain?
But the logic is crystal clear: the preferences are adaptations, too. And adaptations can only evolve if variant versions of them are, well, possible.
I.e., in the case of biology, expressible in genetics.
It hit me now that this can be said in constructor theoretic terms.
And it fits with the Constructor Theory of Information, which I won't recapitulate here.
(Colors merely aesthetic).
Gene: A piece of information that replicates and is instantiated in the genome as "genome snippet" (the second meaning of the word gene).
Knowledge: A replicator which causes itself to be stable in an environment, also called its niche. I.e., given a niche, it is a replicator.
Adaptation: The effects that a knowledge has, given a niche, that causes it to be a replicator.
Adaptation is allowed to become intricate, with many sub-parts contributing to a system of effects. I would say the means of survival and replication - called the extended phenotype (Dawkins 2016).
This typically involves an organism, instantiated via development (ontogenesis) in biology, but it is not limited to the borders of an animal's skin.
Indeed it is not limited to biological substrate at all, as in technology like beaver dams and even more insubstantially, in animal behaviour. And propensities for learning (Baldwin), psychological developmental plans (A. Sloman), etc.
The phenotype is extended, actually this is another thing sayable with constructor theory. All possible effects that genes can have are allowed to be phenotypes.
Adaptationism: The bio philosophical position that holds that design in biological systems is created by natural selection.
Natural selection: an approximate construction, whose substrates are populations of replicators and whose (highly approximate) constructor is the environment. (Marletto, Constructor Theory of Life.)
It is a mechanism that is sufficient for creating design, i.e. adaptations.
And it is the only known mechanism for creating design in biology.
Natural selection needs a population of variants of replicators, from which the ones with highest fitness (inclusive genetic fitness in biology) are selected. I.e. the ones with best design, the best effects that caused them to replicate.
For this reason, any adaptation can only be an adaptation (i.e. display design), if it is caused by a replicator for which the variants are possible. 👈
This is a subtle point that Dawkins mentions a lot in his books. Genes for something needed to have variations, natural selection acts on variations alone.
Footnote: (This is also the reason why the Gaia hypothesis, in the form of stating that earth's geology or ecosystems are adapted, is not coherent in a natural-selection view of life: there are no variations of ecosystems that are selected. Whoever supports Gaia must work around this problem satisfactorily.)
And in practice, the variants must not only be possible; biological intuition says they must have existed, phylogenetically speaking.
(The opposite would be a comical, seemingly theological bee-line of a population up the mount improbable, bypassing the need for natural selection).
And as a further constraint in biology, there must have existed an unbroken line of gradual, viable, intermediate forms (designs). (viable: Able to be a replicator, given the niche. Emphasis on failure modes: Broken development, broken reproduction strategy.)
(E.g. Dawkins's Climbing Mount Improbable, Dennett's Darwin's Dangerous Idea for a further, deeper notion of gradualism.)
This means that all aspects of a biological system that display design including aspects of interaction of sub-parts of an organism:
- Are caused by a replicator.
- Have variants, which are possible. I.e. alternative genes with alternative effects.
- (re-stating) are subject of natural selection.
Because the value judgment of an animal, say "pain receptors are bad", is a complex adaptation, variations of the genes for it exist. (Such a value-making system would typically be located in the wiring of the animal's brain, but see Damasio, Levin, others. The mechanism of pain, together with its relation to the mind, is ultimately unknown.)
Here I'm drawing what I think Dawkins has in mind:
The value judgement sub-part is necessarily evolved by natural selection, and its alternatives are possible, accessible by natural selection and, by extension, 'accessible by breeding', as Dawkins puts it.
Historical: The equivalence of breeding and natural selection was exactly the line of reasoning Darwin used in On the Origin of Species to introduce natural selection. Of course, breeding is a kind of artificial environment.
Claude mentions: consider placebo, endorphins, etc. There are already genes for modifying the pain-receptor -> value-judgement system.
This was neither necessary nor sufficient for the argument, but it gives it a bit more flesh.
Somebody like Damasio would probably disagree, on the grounds of pain being somehow a property of life at perhaps the cellular level, etc. The fundamental logic would not be debunked. Assume that pain is a property of cells; then whence cometh its adaptation?
Again we ask the same question, on a more basal scale - now the structure, system and design of cells.
More Setup / Questions
Biological software is the edge to be explored
- 'Computing', algorithms and so forth are a kind of physics. 'Software' is a kind of biology.
- 'Complexity' in biology and software is not a defining character. It is a necessary evil.
- Elegance, competence, beauty and artistic swag are what organisms and software are about.
- What is the space of possible software, how does biological, self-assembling, software fit into the picture?
- I think Minsky would have agreed: one aspect of the mind seems to be how it programs itself on the fly.
- What is the nature of elegance, competence, artistic swag, design and creativity?
- What is the philosophy of programming that unifies with biology?
software engineering is missing in neuroscience
I am interested in the mind the way a physiologist is interested in the heart. The organismic, 'adaptive domain'. For the brain, that is:
- it is a biological computer
- There is software running on that computer ('the mind').
Cognitive psychology has yet to spark my interest with a principled software engineering mindset. (If you can show me something interesting in those regards, let me know).
- Software engineering and philosophy of programming has yet to contribute to neuroscience. The best stuff is more low level or classic; Braitenberg Vehicles, Pentti Kanerva Sparse Distributed Memory, Marvin Minsky Society of Mind.
- György Buzsáki and neuronal ensemble neuroscience are very cool on the question of what kind of computer the brain is. (1)
growing a person, developmental plans
- Biology found a way to create powerful software, by letting it grow on the substrate of the brain and its wiring. And by guiding the developing software with genes via developmental plans, almost always growing a "person".
- 'Growing a person' is a biological adaptation, therefore there are genes for it. This means there are genes for developmental plans that grow a person on a nervous system. (I think this must be true by biological logic). [A. Sloman, 'Metamorphogenesis'].
- What then are the substrate (the 'nets'), the wiring (between nets, between nets and the outside) and the developmental plans? If not in detail then in spirit. They should say how AGI works.
- It must discover its own elegance, sub-programs, sub-module-like technologies, akin to inside jokes.
- It takes almost 5 years to make a 4 year old.
Assuming the mind is modular
- Multiple personality disorders are an existence proof that some modules of brain software can be reused by different persons.
- Vision develops up to around 4 and is done afterwards.
- But a second personality coming later in life can reuse this vision module.
- The mind is modular, and there are interfaces between the modules.
- The brain is not a marshmallow, and there is a lower dimensional description available.
- I.e. there exists a kind of brain operating system - the neuronal interpreter(s)? - that supports a range of possible persons. Akin to different software running on a virtual machine.
- This is not surprising, it also seems that during dreaming, a range of possible experiences and personhoods is tried out.
- 'Imagination' seems to be a good name for (one of…) this interpreter.
- The nature of these neuronal interpreters is the core question of neuroscience.
The computational primitives of the mind
- Synaptic overproduction and pruning seem to go together with 'infinite possibilities' and subsequent selection.
- A characteristic of brain software is how its datastructures allow for 'everything possible'.
- 'Only limited by imagination' means 'unlimited' in a way.
- Both Lisp and hyperdimensional computing are algorithmic layers that I consider to have a flair of this quality. Like 'the building material' that ideas might be made out of.
- Computation with large amounts of elements was explored already by von Neumann in The Computer and the Brain.
- Hyperdimensional computing is the modern computer science following these ideas.
- Hyperdimensional computing might help with formalisms for neuronal ensembles.
- A hyperdimensional Lisp is a possible approach to neurosymbolic computing. Combining my 3 main interests: Lisp, hyperdimensional computing and neuronal ensembles.
- What is the nature of sleep and dreaming?
- Why does the brain have a 40% neuron skip rate?
- What are the neuronal codes and the neuronal interpreters (Yuste, Buzsáki)?
- Following Pentti Kanerva's Sparse Distributed Memory, the inputs and outputs of the cerebellum should be made from neuronal words. (A minimal SDM sketch follows below.)
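Since Kanerva's Sparse Distributed Memory keeps coming up, here is a minimal sketch of its core mechanism (my simplification: binary vectors, a fixed Hamming radius, made-up parameter values). Each hard location is an 'address decoder' that fires when the query address is within its radius; writing adds the data to the counters of all active locations; reading sums the counters of the active locations and thresholds.

```clojure
;; Minimal Sparse Distributed Memory sketch (a simplification of Kanerva 1988).
(defn rand-bits [n] (vec (repeatedly n #(rand-int 2))))

(defn hamming [a b] (count (filter true? (map not= a b))))

(defn make-sdm [n-locations dim radius]
  {:radius    radius
   :addresses (vec (repeatedly n-locations #(rand-bits dim)))
   :counters  (atom (vec (repeat n-locations (vec (repeat dim 0)))))})

(defn active-locations
  "The 'address decoder' step: every hard location within the Hamming radius fires."
  [{:keys [addresses radius]} addr]
  (keep-indexed (fn [i a] (when (<= (hamming a addr) radius) i)) addresses))

(defn sdm-write! [sdm addr data]
  (doseq [i (active-locations sdm addr)]
    (swap! (:counters sdm) update i
           (fn [ctr] (mapv + ctr (map #(if (pos? %) 1 -1) data))))))

(defn sdm-read [sdm addr]
  (let [sums (reduce (fn [acc i] (mapv + acc (get @(:counters sdm) i)))
                     (vec (repeat (count addr) 0))
                     (active-locations sdm addr))]
    (mapv #(if (pos? %) 1 0) sums)))

;; Auto-associative use (parameters only for illustration): store a pattern with
;; itself as the address; a noisy cue should then read back something closer to
;; the original than the cue was.
(def m (make-sdm 1000 256 115))
(def p (rand-bits 256))
(sdm-write! m p p)
```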
The Associative Memory Hypothesis / Memetic Software Animals / Neuronal Animism
For the Gestalt psychologists a perception was made from wholes.
Such a gestalt is made from neuronal ensemble trajectories in G. Buzsáki's The Brain from Inside Out (2019) - highly recommended.
I think they were on to something; A part of a perception retrieves the whole.
Figure 2: Abstract associative memory structure, retrieved completely at t0, but stretching across t0–tn.
I think this can implement David Spivak's self fulfilling prophecies.
The cell assembly (ensemble) hypothesis might be that the brain is full of "self fulfilling prophecy" spaceships, which are made from activity.
They try to keep themselves alive by having the right downstream effects - including, but not exclusively, top-down modification of the incoming sensor data.
- Activate your activators
- /Inhibit your alternatives/
That they can survive or die, that they can have random wiring and subsequent selection - that makes them abstract replicators, neuronal memes that have competence without comprehension (Dan Dennett).
The neuronal ensemble memetic hypothesis is that the datastructures of brain software are made from agental memes. Implemented as neuronal ensembles, developed via neuronal darwinism (Gerald Edelman).
My idea is that this means that agentness is the most primitive aspect of neuronal datastructures. Even objectness is the memes playing the game of "looking" like something that follows the laws of physics.
I have to admit I still need to check out Karl Friston's free energy principle; it might be the same idea.
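Back to "a part of a perception retrieves the whole": here is a minimal sketch of that retrieval intuition (nearest stored whole by overlap; toy data and names of my own, and no claim that this is the brain's mechanism).

```clojure
;; Toy associative retrieval: a partial cue retrieves the stored whole
;; with the largest overlap. Only the 'gestalt completion' intuition, not a brain model.
(require '[clojure.set :as set])

(def stored-wholes
  {:evening-at-restaurant #{:candle :menu :friend :waiter :dessert}
   :morning-run           #{:shoes :sunrise :playlist :sweat}})

(defn retrieve-whole
  "Return the key of the stored whole that overlaps most with the cue."
  [cue]
  (key (apply max-key
              (fn [[_ whole]] (count (set/intersection cue whole)))
              stored-wholes)))

(retrieve-whole #{:candle :dessert})
;; => :evening-at-restaurant
```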
Development must be some kind of software synthesis (synaptic pruning is only finished around age 24).
Cajal said synaptic pruning is the settling of the cement of the brain. (Yuste, Lectures in Neuroscience 2023).
Francis Crick said dreaming is for getting rid of parasitic thought.
Some of the mysteries of neuroscience:
- Why synaptic pruning?
- Why neuron failure rate? (40%)
- Why intrinsic firing rate?
- Why are brain injuries filled with glia, not neurons?
- Some mechanism of development must prevent neurons from being added after the fact.
- What is the purpose of sleep and dream?
The 'Toy' Approach to Artificial Biological Intelligence
A theory of creativity and brain software is not figured out.
One might as well play around with outlier ideas. I think it's true that the humblest mechanisms can surprise us.
My vision is some Braitenberg-like agental entities that participate in a game-like physics. But it's a toy physics, and the rules are made up on the fly.
Computation and physics must somehow be the same thing. An algorithm is creating a mini world, with a mini-physics.
Conversely, nature is an interpreter, slavishly computing the next step (physics update), given the arrangement (syntax) of whatever the stuff of physics is. (Particles and their arrangement or something, I'm not a physicist.).
If all those things are equivalent, one might as well go with whatever seems aesthetically pleasing. Modeling neuronal networks is completely arbitrary in a way.
Also, there is a Hofstadter-like subcognitive modeling [Copycat describes the slipnet] available, which is underexplored.
Figure 3: Toy Braitenbergs
Using game-engine physics for cognitive modeling is something that is done in the space of probabilistic / Bayesian approaches. [Probabilistic Models of Cognition].
Brownian Local Explorer Resonator Particle - Blerp
When I close my eyes I am not blind. There is a shimmering there. Vaguely colorful blobs swirling. Perhaps manifesting into the hints of edges, then ebbing and flowing, washed away by some force of nature which they are not able to withstand. Perhaps no more than the ideas of an idea of an object.
Figure 4: Blerp fields with different parameters. Cyan has higher attenuation, making the elements move. Blerps are inspired by neuronal ensembles.
This is part of my current hammocking on how to make random, local, memetic, self-organising mechanisms, inspired by neuronal ensembles and biologically principled approaches to computing.
My goal is to provide useful, resourceful randomness to a hyperdimensional computing framework. (I expect that neuronal ensembles are isomorphic to hypervectors.) A minimal code sketch follows the description below.
particle-field is a directed graph (on a 2D grid).
- Each node is connected to itself and its immediate neighbours (local).
- Time is discrete; at each time step, A (activation) particles are selected (global inhibition model).
- This makes ensembles.
- They are in some ways a little bit like the gliders of this physics system.
- With adaptation (attenuation): each node has a lower chance to fire when it is active (making it move like an amoeba).
- vacuum-babble: random elements fire (analogous to an intrinsic firing rate).
- decay: random elements are erased (neuron failure rate).
  - This gives ensembles a half-life.
  - Activity must survive this decay assault; it must regenerate (or die).
- The resonator part is the idea that top-down processes constrain which nodes are especially excitable for the blerp. (How to do this is still to be figured out.)
- There would be at least a second element, a top-down element, similar to the slipnet in Hofstadter + Mitchell's Copycat [1988].
- Emphasis on internally generated dynamics (Buzsáki).
- Weights with a log-normal distribution; this makes 'hubs' / 'backbones' of activity.
- Intrinsic firing rate and failure rate are part of the algorithm.
- A brain network would have the dynamics to make stable trajectories (sequences) of hubs (ensembles, attractor basins), which are active for roughly 1 second. (Buzsáki, The Brain from Inside Out.)
- Since I want to model neuronal codes with hypervectors, I'm not sure if I need the trajectory dynamics.
  - Those are for the neuronal syntax, which is a rhythm encoding. But we don't need a rhythm encoding; we only need to get the essential concepts right.
- This is very similar to an Excitable medium.
  - In the excitable medium, all neighbours of a cell contribute equally, and there is an explicit refractory state for each cell.
  - (This happens here de facto, if attenuation is set to a high value.)
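Here is the minimal sketch promised above (toy names, toy parameters, and a square toroidal grid, all of my own choosing): local excitation plus global inhibition (keep the top A most-excited nodes), attenuation for recently active nodes, vacuum babble, and decay.

```clojure
;; Minimal blerp / particle-field sketch:
;; local drive + global inhibition (top-A) + attenuation + babble + decay.
(defn neighbors [n i]
  (let [x (mod i n) y (quot i n)]
    (for [dx [-1 0 1] dy [-1 0 1]
          :let [nx (mod (+ x dx) n) ny (mod (+ y dy) n)]]
      (+ nx (* n ny)))))

(defn step [{:keys [n A attenuation babble decay]} active]
  (let [excite  (fn [i]
                  (+ (count (filter active (neighbors n i)))   ; local drive
                     (if (< (rand) babble) 1 0)                ; vacuum babble
                     (if (active i) (- attenuation) 0)))       ; adaptation
        winners (->> (range (* n n))
                     (sort-by excite >)
                     (take A)                                  ; global inhibition
                     set)]
    ;; decay: random erasure; the ensemble has to regenerate or die
    (set (remove (fn [_] (< (rand) decay)) winners))))

(def params {:n 32 :A 40 :attenuation 1.5 :babble 0.02 :decay 0.05})

;; run a few steps from a small seed ensemble and watch its size
(->> #{10 11 42 43 44}
     (iterate (partial step params))
     (take 5)
     (map count))
```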
random?
- Called the path of least assumption by Pentti Kanerva. At the bottom, we can require randomness with impunity.
- Dan Dennett called it Darwinism, a very general idea: that complex things must be made from simpler ones.
David Deutsch
:
It [Creativity] has to be in the broadest sense an evolutionary process. It has to work by variation and selection. Or as Popper calls that in the case of science "conjecture and refutation" or "conjecture and criticism".
But we need to know the details, and the devil is in the details.
I guess that once we understand what it is, we will be able to program it.
There is an analogy here with Darwin's theory of evolution. Darwin's contribution in my view is not his scientific theory of evolution.
It is the philosophical progress that he made in inventing a new mode of explanation not just a new explanation, but a new mode of explanation.
[…]
Paraphrasing:
It is explaining the process that could make things like elephant trunks, not explaining elephant trunks. That is left to the myriad details of the process.
Science Saturday: A Plan to Dye One's Whiskers Green | John Horgan & David Deutsch
This mode of explanation is Dennett's Darwinism: that you can get design via a mechanism that is not itself designed.
This is required for a theory of life, the fundamental problem of design has to be solved.
Chiara Marletto (Constructor Theory of Life) delegates a bottom layer of generic tasks and generic resources to physics.
It is the same idea, in order to make something complicated, at the bottom you need something that is not complicated by itself.
As remarked by George C. Williams, ‘Organisms, wherever possible, delegate jobs to useful spontaneous processes, much as a builder may temporarily let gravity hold things in place and let the wind disperse paint fumes’,
These are called the primitives in a computing system. In computing, all tasks bottom out to be executed by physics, via the arrangement of circuits.
In biology, we also have to explain where the design of any circuit-like thing could come from.
Guiding idea:
Biology is mining the randomness at the bottom; This is how it leverages the resourcefulness of physical reality.
(also called noise or babble, is the same thing).
'Randomness' comes up in theories of self-organisation:
Discussion w/ Kevin Mitchell, Nick Cheney, & Ben Hartl on evolution, development, generative models
Michael Levin's favorite example is the Galton board.
This is a mechanism that leverages mathematical truth or something. It's strange, wild and simple. Very intriguing.
There is something about this 'algorithmic resourcefulness' or something, that is a feature of physics and enables biology.
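For what it's worth, the Galton board fits in a few lines: pure coin flips at the bottom, and the bell-shaped bin distribution appears 'for free'. (A toy, of course; the point is only the flavor of mining randomness at the bottom.)

```clojure
;; Galton board: each ball falls through n-rows pins, each pin a fair left/right choice.
;; The final bin is just the number of 'rights'; the histogram comes out binomial.
(defn galton [n-balls n-rows]
  (frequencies
   (repeatedly n-balls
               #(reduce + (repeatedly n-rows (fn [] (rand-int 2)))))))

(sort (galton 10000 12))
;; => counts peak around bin 6 and thin out toward bins 0 and 12
```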
Apparently, the same concept exists in physics as vacuum babble.
Levin Λ Friston Λ Fields: "Meta" Hard Problem of Consciousness
Generally, babble is one aspect of an evolutionary process, the second is selection.
Together, they implement a search process.
Where I changed my mind recently
Prof. Mark Solms on how you don't need a neocortex for feeling and some animal-like awareness.
Here is what I used to think:
This is just wrong.
Apparently, what I would label the user is there even if the neocortex is gone!
Maybe we can say nuclei of the reticular formation and the superior colliculus are the 2 "lower" control centers, and the cerebral cortex, basal ganglia and cerebellum are the 3 "derived" ("higher") control centers.
Perhaps the "user" are the 2 lower ones, a kind of minimal animal moving and feeling processing unit.
And it has the derived 3 as resources or advisors at its disposal, which happen to be information processing devices. But perhaps less 'alive' in a sense.
That you actually don't want your computer system to be alive is something I called The Living Language Problem.
- Naturally, you wonder if the reticular formation and superior colliculus already have a complete set of neuronal computation?
- In the computationally complete form, you would expect there to be a mini cerebellum, for instance.
- In the functionally complete form, you would expect there to be "cognition modules", which got further and further advised by the upper centers, using new (presumably more adaptive) principles of computation.
- In both cases, it would be more of an externalizing of cognition, so to speak, from the perspective of the lower 2 centers.
- The previous view would have been more like: the cortex, as computational system, evolves on top of some animal part
- The animal part advises this computational system and perhaps enables feeling, but ultimately the cortex is what makes a person.
- With the danger of sounding wooby-schwooby holistic; Perhaps it is the animal that makes the person, and the person has a computer system to use at their disposal.
In this context, I'm becoming a fan of Panksepp-style emotion-control systems (Panksepp 1982).
This makes a lot of sense to me, thinking about how to make more complicated Braitenberg vehicles. The obvious move is to flip between a discrete (low-dimensional) set of states, each activating a 'generalized' behaviour 'scheme'.
(Like Braitenberg's 'love', which will find light sources in the environment.)
The thalamus is more than just a relay. Talk: Thalamocortical System.
I consider Prof. Murray Sherman one of the most relevant neurophilosophers of our time.
It is the wiring that says How the brain relates to the world.
- Action and perception are totally intertwined, and the wiring reflects this
Information flow is feedforward from cortical areas, and zigzags with higher order thalamic relay nuclei.
- all cortex is making motor outputs
- Messages and modulators are 2 different kind of things, messages drive activity
- The thalamus is the driving input to all cortical areas
- Cortex cannot be understood without understanding cortex and thalamus (my conclusion)
- Consequently, the only way for a lower cortical area to drive activation in a higher area is by outputting some kind of motor data! I find this quite remarkable.
Analysing the wiring is much in the spirit of V. Braitenberg, too. Vehicles are only one example.
Cognition is musical
Gyorgy Buzsáki - Neural syntax is organized by a hierarchy of brain rhythms Dr. György Buzsáki (NYU) - Keynote Lecture: Brain-inspired Computation: by which Brain Model?
I consider Dr. György Buzsáki one of the most relevant neurophilosophers of our time.
Inside out is the idea that the brain primarily stabilizes its own dynamics. This is counterintuitive because the previous view says that a brain is a vessel filled with knowledge from the outside.
As David Deutsch is pointing out, a "vessel theory of knowledge" doesn't work. Knowledge needs to be evolutionary in principle, you need some kind of randomness at the bottom, that is matched or criticized with action and perception.
Neuronal Syntax, neuronal code, neuronal interpreters
The genetic code is defined by its interpreter, the ribosome. Its syntax is triplets of three bases, called codons. They could also be called genetic words; they say where to put the spacings between the genetic letters, to decipher the code.
Just like letters make words and there are spaces between words. Elsetextwouldbehardtounderstand.
The project of figuring out what the neuronal code is, means understanding what a neuronal letter, a neuronal word and a neuronal interpreter is.
I'm still figuring out what Buzsáki has to say, here is a paper.
Long story short:
- Neuronal ensembles (cell assemblies) firing in gamma frequency are the putative neuronal letter.
The reader centric cell assembly.
- Neuronal ensembles are best described by their reader, with a timewindow.
- Everything in cognition is relational. Nothing can mean anything, unless it has a reader, unless it has an effect. 👈
- This is a cleaning up of the cell assembly concept by Buzsáki that makes it easy to equate it to P. Kanerva's address decoder neuron (Sparse Distributed Memory).
- I wrote this down here: current stuff.
- A neuronal word is ~7 elementary ensembles firing in sequence during a theta cycle.
- How this then combines further up the hierarchy, how it supports mental navigation, zooming etc., is not figured out.
- Presumably, neuronal words represent both the complete vacation and the zoomed-in notion of that one evening, and the further zoomed-in story line of how you went to that restaurant, and so forth and so forth.
- In terms of SICP, the means of combination and the means of abstraction are not figured out.
- A neuronal interpreter or interpreters is/are a hypothetical brain mechanism that has neuronal words as input and has outputs, which might be values in output registers, side effects, memory updates, data transformations, state changes, presumably motor data outputs.
- That the input and output registers might be in the same neuronal net but across some time step is an idea already at hand from the cognition is musical guiding idea.
How neuronal codes and interpreters work, how they relate (are wired up) to the control centers of the brain, to the animal's behaviour and the world, is in some ways the central question of neuroscience.
John von Neumann (died too early), The Computer and the Brain, could have been the most relevant neurophilosopher ever.
This is the real deal, some groundwork for thinking about neuronal computation.
This is truly a line of thinking that goes between computer science and neuroscience;
And I can highly recommend P. Kanerva's Sparse Distributed Memory as a continuation of this.
It's truly mind blowing.
(I'm into associative, distributed memory models and hyperdimensional computing: latest stuff)
Neuronal computation is distributed, energy efficient and trades powerful logic for degraded arithmetic 👈
Powerful logic?
In some ways, hyperdimensional computing and associative memories allow for a symbolic processing that is more LISP than LISP.
In LISP, symbols are points (point-pointers, hehe). Whatever you want to do with the symbols needs machinery.
In hyperdimensional computing, the symbols are data themselves. And there is a rich symbol-algebra (the Vector Symbolic Arithmetic).
It allows symbolic processing in superposition. It is like programming with pieces of pointers. Kinda strange stuff. A philosophy and practical usage of this programming in superposition is largely unexplored. (latest stuff). Here is a nice overview.
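A minimal sketch of the 'symbols are data themselves' point, in the binary/XOR flavor of a vector symbolic architecture (one VSA variant of several; toy code of mine, not a library): binding is XOR, bundling is a majority vote, and a role can be 'queried' out of a superposed record by unbinding and comparing similarity.

```clojure
;; Minimal binary VSA sketch: symbols are random hypervectors,
;; bind = XOR, bundle = per-position majority, similarity = fraction of agreement.
(def dim 10000)
(defn hv [] (vec (repeatedly dim #(rand-int 2))))

(defn bind [a b] (mapv bit-xor a b))        ; self-inverse: (bind a (bind a b)) = b
(defn bundle [& vs]                         ; superposition by majority vote
  (apply mapv
         (fn [& bits] (if (> (reduce + bits) (/ (count bits) 2)) 1 0))  ; ties -> 0, fine for a sketch
         vs))
(defn sim [a b] (/ (count (filter true? (map = a b))) (double dim)))

;; a tiny record in superposition: {color -> green, shape -> circle}
(def color (hv))
(def green (hv))
(def shape (hv))
(def circle (hv))
(def record (bundle (bind color green) (bind shape circle)))

;; 'query' the record by unbinding the role; the result stays similar to the filler
(sim (bind record color) green)   ;; => clearly above 0.5 (chance level)
(sim (bind record color) circle)  ;; => around 0.5
```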
Also:
- MeTTa is a non-deterministic metagraph rewriter. I think this enables something similar.
Biological Intelligence is Intelligence that grows
Alan Turing (died too early) could have been the most relevant neurophilosopher ever.
I think Turing was onto something like:
"Growing cognition means growing a biological system, it's a kind of morphology."
He went into morphogenesis with the same goal as before, to make machines that think.
I think his intuition was that biological intelligence must self-organize, so he wanted into studying self-organization.
See Turing patterns.
Michael Levin is coming from the other end:
He says that morphology is a collective intelligence problem. Collective intelligences are biological software, it's all the same thing.
So morphology is a kind of cognition.
Biological intelligence is collective and self-organized. 👈
A modern form of this that truly goes a different route is Dave Ackley robust first and 'living computation'.
The Hourglass model of collective computation (Jessica Flack). Talk.
Outdated view:
The behaviour of simple elements makes emergent, complex behaviours (kinda the physics stance to take). In physics, you have the statistical behaviour of, for instance, gas molecules, which makes the emergent behaviours of, for example, gases. (Erwin Schrödinger talks about this in What is Life?).
Updated view:
- A coarse-grained mesoscale allows the system to communicate efficiently.
- This is a kind of information bottleneck.
- It allows the subsystems to interface,
- ? it allows short codes (same thing as program source code).
- Michael Levin also talks about this concept.
- Examples:
- Dominance hierarchies (low-dimensional ordering),
- Money (scalar value reflecting high dimensional concept of value),
- Calcium concentration (in Morphology)
Naturally, for brain computation, my idea here is that the mesoscale is the neuronal code.
Because cognition is musical, this means in some strange way that the mesoscale, the code, is made from something like rhythm.
It would say that rhythm and simultaneity, or perhaps whatever dancing in harmony is - that this is how the subsystems in the brain communicate, across scales of hierarchy.
Time, rhythm and synchronicity in neuronal computation come up in: Moshe Abeles, von der Malsburg, Elie Bienenstock, Earl Miller.
I'm sure this is an idea that keeps coming up since ancient times. And perhaps there is just something true about it.
(Doesn't mean that you need a rhythm encoding for a neuronal code implementation. But the brain is using a rhythm encoding for its implementation.)
Also:
I used to think backprop in brain doesn't make sense. But I was naive about the science. Now I think that Lillicrap is cool and nuanced. Talk. I mean seriously he is cool.
Afaik it's Hinton's opinion that the brain is doing some form of backprop.
Where I didn't change my mind recently
Neophrenology doesn't even pretend to have a brain theory (it is not even wrong)
- Why can most of a brain be missing (if early), and a person can work with that?
- Existence proof, there are children with 1 hemisphere missing that are healthy.
- The 'language centers' of such a child are missing, and the child speaks 2 languages. Clearly, something is missing in the story when we call these 'language centers'.
- The 'shape' of brain software 'on the brain' seems to be malleable. Like with an octopus: the defining character of its shape is its malleability. This is why I think "neophrenology" (derogatory), the finding of neocortical areas 'for' X and Y, is quite beside the point.
- The correct theory will say why there are such areas. And the theory will be infinitely deeper than what mainstream cognitive neuroscience has to offer.
- Arguably, current AI is based on neuroscience from the 50s - feature-detector hierarchies of McCulloch-Pitts neurons. Why is neuroscience in turn happy to be 'informed' by those nets? Perhaps it's worth spending some time on, but on the whole this won't yield a theory of the brain.
Neo-Lamarckism is more confusing than useful.
In recent times, transgenerational epigenetic inheritance prompts biologists to say something like "Lamarck was not so wrong after all.".
Lamarckism as a theory of complex adaptation is still wrong. It's confusing to bring forth the name. Just let it rest in peace. In the history of bio philosophy where it belongs.
What system knows that a muscle is used, and how to tell the muscles in the offspring to be bigger?
This by itself is a complex adaptation, making Lamarckism go plop in a small cloud of pink dust, floating prettily in the air before quickly dispersing in a gust of wind.
On its own terms: 'the use of transmitting information to the offspring made the capacity to transmit information to offspring more pronounced'.
What? How does it know how to design itself to do that? Also, this sounds almost like abstract replicator theory to me. The replicator knows how to transmit information to the offspring.
No. Transgenerational epigenetic inheritance is a form of a Baldwin effect, where the dynamism is across generations.
One should have brought forward Baldwin's name from the get-go, and there would be no confusion, but solid theoretical evolutionary biology.
It would have been, "wow, look at those transgenerational adaptation effects, Baldwin effects and extended phenotypes are so powerful, evolution learns how to learn across time and even individuals".
A more or less correct philosophy of programming
I am going through Programming with Categories (with Brendan Fong, Bartosz Milewski and David Spivak), which is very cool. I expect category theory as a whole to be a powerful tool.
There is this notion that "the types help you make correct programs". The way Bartosz says it in one of the first lectures makes it sound like it's common knowledge for software engineers.
But it is totally not!
It is not the case that software engineers agree that static typing helps with the real problems of software engineering.
The philosophy of programming diverges here roughly into a camp that has the feeling that software should be provable and mathematical. As the pinnacle of this, its correctness would be proved.
Proponents include Edsger W. Dijkstra, Leslie Lamport, John McCarthy.
And another camp that emphasizes interactivity, growing the program as you go, bottom up programming.
Proponents include M. Minsky, S. Papert, G. Sussman, Hal Abelson, Rich Hickey, Paul Graham, JCR. Licklider, Alan Kay.
This emphasizes that software is a participant of the real world. Applications are situated [Hickey 2015].
To the user and the client, the software artifact is what matters. Not the way it was made.
Unless they are an isolated calculation, shielded from the real world, like a smart contract. (In that case something like Haskell does make sense).
Sources of bugs usually stem from misunderstanding. Understanding and design is the main work of the programmer in the first place.
In this philosophy of programming, Software engineering is the art of dealing with the real world.
- Situated software is a system of sub components, that deal with edges and requirements of the world.
As a rule, software is not pristine. Just like in biology, exceptions are the rule.
This is almost the opposite of the perceived power of computing: the ability to create worlds, to make all the rules as a mastermind.
In reality, the masterminded, simple worlds are only at the core of applications (also, domain model). They are important and powerful, but not the main job of programmers, they don't comprise complete systems, and they usually are not the reason why software projects succeed or fail in the real world.
A language that is pointed at optimizing the pristine mathematical core of information processing systems is not a language that helps address the actual sources of success and failure.
In defence of the mathematical approach: one idea is that these tools help design, because they allow one to spell out the design in mathematical terms, analogously to category theory.
In this capacity, as a design guide, they are pointed at a real world concern.
My feeling is that, since exceptions are the rule, strict design and static typing have more downsides than benefits.
Because of fallibilism (Popper), programs are in a perpetual state of misunderstanding and incorrectness.
It is the speed with which one engages with this incorrectness (also called debugging) that proponents of interactivity see as the defining aspect of the power of a programming paradigm.
For G. Sussman, the actual thing programmers do is "make best guesses and subsequently debug, until some satisfactory state of correctness is achieved".
Debugging is a central concept in the philosophy of programming of the MIT AI lab (origins with Minsky and Papert).
This is essentially Popper's conjecture and criticism.
Programming, the task of creating software, is a form of conjecture and criticism.
When Papert says that "Debugging teaches children to think", this is an epistemological claim: that software, and by extension "mental technology", is actually created by debugging.
Debugging is a meta-technological principle or something. It is the technology that the mind uses to create itself. Since it must create its own software, it must execute a programming task.
You can also say creating software is evolutionary (Popper).
This is a philosophy of how software is actually made. The claim is that a Haskell programmer is doing the same thing.
It doesn't matter what the programming language is; a programmer still makes best guesses and debugs until a satisfyingly approximate correctness is achieved.
Papert: You can't think about thinking without thinking about something.
Creating software means building something, then improving that something.
And all software is created this way, be it in a self organizing task as the mind of a child, or be it artificial software programmed by a human.
Popper was saying about knowledge, in an abstract way, the same thing evolution says. And the same is true of Papert's theory of programming.
Biological Evolution | Scientific Theories | Programming |
---|---|---|
Variation and Selection | Conjecture and Criticism | Best Guesses and Debugging |
The Lisp way - REPL-driven development, bottom-up design, interactive programming - is not just another way of programming; it is aligned with what programmers actually do.
This is why I am certain that should we encounter aliens, we can ask them whether they have discovered Lisp.
This is inspired by Richard Dawkins: "Should we encounter aliens we can ask them whether they have discovered the theory of evolution".
Following David Deutsch's reasoning (The Fabric of Reality), I expect a deeper theory that unifies computation, epistemology, biology and theoretical physics.
Note: Our current Lisp is not the deepest version. We know this because we cannot program an AGI yet.
We will not be able to ask the aliens what their version of Javascript or Python is, except in the light of morbid humor.
A seminal "case for interactivity" is Bret Victor - Stop Drawing Dead Fish - he manages to do this without talking about Lisp, for better or for worse.
Also:
- The dream machine : J. C. R. Licklider and the revolution that made computing personal, Waldrop M. Mitchell
- "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022)
- ClojureScript in the Age of TypeScript — David Nolen
- Lisp
What does a philosophy of programming say about biological modes of computation?
"Math is hard for people not because it is so complex, but because it is so simple" - Minsky.
My feel is that seeing the goal of software as navigating the world is already much closer to something that has the chance to unify computer science and biology.
- A correct epistemology of where the biological software comes from (what programming actually is)
- Realistic expectations about what a biological software is actually doing in the world
Older stuff:
Michael Levin is one of the most important philosophers of our time
(future blog post)
We already know evolution is a tinkerer, engineer and designer but what are the principles, what is tinkering?
Suddenly there is this world view where what counts as good programming is also what works for evolution and complex adaptive systems.
This is a historical fact.
From the science of evo-devo, we have this view, the genetic toolbox (Evo-devo gene toolkit) of animal body plans. This kind of stuff is only roughly 15 to 20 years old. Unless you are deeper in biology, there is little chance you have heard of this.
Now Michael Levin comes and, in the narrow sense, he talks about how there is an additional dynamic layer of abstraction between genes and morphology: the cellular electric field his lab is studying. This thing is a map of the territory of morphology. It is a language to speak, and if you speak it, you can make cells grow into this or that. Without modifying the genes.
[Diagram: genetics -> (layer upon layer) -> cellular electric field (dynamic) -> morphology]
And here is the wider view: These abstraction jumps, these new layers of dynamic content coming out of a lower layer of static form is a principle of how evolution makes more capable systems.
The genetic algorithm people have stumbled on the same principle: Building block hypothesis.
[Diagram (stratified design): a higher layer - the map, the essentials, body plans; dynamic content, symbols, 'aboutness' - sits above an abstraction barrier (screens, prompts, interfaces, maps, languages, levers) over a bottom layer - the territory, the details, body parts; static form, referents.]
The whole thing is also called stratified design.
Levin mentioned this about the nature of genetics, where everything is about regulating other genes. This is again the same thing, where a more dynamic layer is built on top of a more detailed, static form.
Now evolution can tinker with the orchestration, not the form. And it is not a coincidence that the development of these higher layers in genetics overlaps with the Cambrian explosion of 500 million years ago.
See Michael Levin: Biology, Life, Aliens, Evolution, Embryogenesis & Xenobots | Lex Fridman Podcast #325. It blew my mind.
The crazy thing to me here is that this is the magical power of computer science and Lisp. To make a language, so you can speak without being concerned by details.
There is something deep in cybernetics and how the world works spanning across biology, civilization, computers and engineering. And it makes languages and levers the most powerful thing in the world or something.
There is something else here, how maybe now we are the generation of people that are both programmers and biologists. So we can think of the thoughts of good design and see it happening in biological systems.
If biologists had known more programming, I think I would have gotten a different view of something like biochemistry for instance. It would have been way more about the essentials that make it go and not about details. Biologists appreciate the essentials but then are careless about suddenly bringing in implementation details. It takes (currently, more or less) a programmer's mind to be careful about such mixtures of levels of analysis.
Good biologists have this, too. It is the hacker mentality from another perspective.
But since we don't have programming the way we have reading and writing in school, all this thinking is hazy and unrefined. It is only deep in programming engineering, design and craftsmanship that this level of analysis starts being explicit.
See Rich Hickey, Structure and Interpretation of Computer Programs, Eric Normand.
The reason I am interested in this is because I want to build minds, and it seems like maybe minds work because of this nature of abstraction jumps. What is an abstraction jump, can I fabricate one, how do I know if I see one, how do I differentiate it from something else? What do I need to do to make a computer program that makes abstraction jumps and builds its capabilities until it has ideas the way human brains do?
—
The mind is powered by abstractions. After all, I have an experience of a world, an inner world, some social stuff going on, a situation, affordances of my environment, an interface to my muscles, a map of my body sensations, levers on my attention, etc. etc.
I experience the highest layers of these interfaces of the mind to itself. It has to do with the neurons the way transistors have to do with software.
But I am not the neurons, the neurons are an ocean of detail with which I am not concerned.
Hence, it is obvious that there are at least 2 layers of abstraction in what the mind is. Minsky has 6 or 7 or something, is that enough…?
A Hippie with a computer
My goals for the world are togetherness and harmony, the end of death and suffering. I will not rest until this wound in the world is healed! Computers and programming are the most powerful tools in the world currently.
I plan to claim the heritage of the hackers of the computer age. Continuing the vision of human-computer symbiosis. Until we can put our minds into the computers and reach for the stars together.
We evolved on this planet with the rest of the animals being our small brothers and sisters. I cannot imagine being a spacefaring civilization that kills animals for food.
Joyful ideas
The best thinkers are playful. The most powerful way of thinking is when the mind is vibrating with the joy of the ideas.
The most powerful programming language, Lisp, is simple, beautiful and joyful.
My biggest intellectual hero, Marvin Minsky, has been called the world's oldest 3-year-old. His partner Seymour Papert developed the playful programming language Logo and researched juggling.
My favorite books of joyful, clear-thinking ideas:
- The Society Of Mind
- Vehicles, Valentino Braitenberg
- The Selfish Gene, Richard Dawkins
I like magic tricks, juggling, Rubik's cube speed solving, and in general, finding things that I am not good at yet. A mind palace is a joyful mind-expanding way to spend your imagination.
Curiosity and imagination are the most powerful aspects of human intelligence.
Joy is the best state to think useful thoughts. Joy is power.
A joyful programming language lets me express what I want to express in a powerful, succinct way. Without things between my ideas and the computer program.
It is not a coincidence that the most powerful language is also the most joyful.
Lisp
Lisp is not a language, it is a building material - Alan Kay
A hacker is somebody who is using a computer for the joy of using a computer.
Lisp was an elegant cleaning up of computer science back then.
McCarthy wanted to play and build/grow programs interactively on the computer, in order to think about how to build AI.
Lisp is the only language that supports dialects because it is made of pure ideas, unimpeded by implementation baggage. With Lisp, the ideas were there first, then there was an implementation.
It is an abstract description of process
. It has been called the Maxwell equations of computer science.
A Lisp program is alive, it grows as it evaluates Lisp code.
In batch-oriented languages, the program is about what you talk about in the source code.
In (on) Lisp, the Lisp program is a program about ideas, about Lisp programs. And the Lisp hacker is expressing themself not in the domain of what programs are about, but in the domain of what the ideas are.
This allows the Lisp hacker to think higher-level thoughts about how to solve their problems. Making programs that write programs. Making languages to express the kind of problem you are solving and then expressing your problem in terms of that. Seeing and feeling the program as it grows is the brainchild of the programmer. It is said that Lisp programming is 100 to 50,000 times more effective than current mainstream languages.
Because Lisp is fundamentally about code, and it is beautifully simple and elegant, Lisp code is expressed in Lisp data. It makes it trivial and common sense to write programs that write programs.
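A tiny stock illustration of the code-is-data point (nothing deep, just to make it concrete): a quoted expression is an ordinary list that can be taken apart and rewritten, and a macro is a program that receives code as data and returns new code.

```clojure
;; Code is data: a quoted expression is an ordinary list we can inspect and rewrite.
(def expr '(+ 1 2 3))
(first expr)                  ;; => +
(cons '* (rest expr))         ;; => (* 1 2 3)
(eval (cons '* (rest expr)))  ;; => 6

;; A program that writes programs: a macro returns the code that will actually run.
(defmacro unless [test then else]
  (list 'if test else then))

(unless false :yes :no)                 ;; => :yes
(macroexpand '(unless false :yes :no))  ;; => (if false :no :yes)
```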
Its fundamental core idea is interactivity, which allows for a style of exploratory programming called bottom-up programming. It is the power of joy, of toys, of scientific, childish curiosity.
It is a great and useful tool for thinking.
Lisp is where software and thought stuff meet, and this gives intimacy with (computer) process.
Where will we be able to go when we build better interactive programming environments?
See Lisp.
Clojure
The modern, practical Lisp. Here is one of my Love letters to Clojure.
I see data-oriented programming as an important evolution of how to write good Lisp code.
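A tiny example of what I mean by data-oriented (in the spirit of Hickey's talks; the domain and names are made up by me): the domain is plain maps and vectors, and generic functions do the work, instead of bespoke classes and methods.

```clojure
;; Data-oriented style: the domain is plain data, transformed by generic functions.
(def vehicles
  [{:id 1 :kind :braitenberg/love :sensors 2 :speed 0.8}
   {:id 2 :kind :braitenberg/fear :sensors 2 :speed 1.2}
   {:id 3 :kind :braitenberg/love :sensors 4 :speed 0.5}])

(->> vehicles
     (filter #(= :braitenberg/love (:kind %)))
     (map #(update % :speed * 1.1))   ; a 'behaviour tweak' is just an update on data
     (sort-by :speed))
;; => the same shape of data comes out: plain maps, no wrappers
```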
Next
Finding the next meaningful layers of dynamism and how to get more powerful, interactive programming, are some of the things that are on my mind.
How to make programming tools that mesh with your thinking?
Essentially, I want to complete Lick's Human-Computer Symbiosis.
most of the significant advances in computer technology—including the work that my group did at Xerox PARC—were simply extrapolations of Lick's vision. They were not really new visions of their own. So he was really the father of it all
It's the ultimate hacker dream to be as close to the computer as possible.
Related: Musings On Interactive Programming
Source code colors
The source code on this website has the same colors as in my Emacs:
(ns happy) ;; comments are very visible

(defn happy []
  {:happy/color :green
   :peaceful? true})
Using htmlize.el. See build-site.el.
I love looking at this heliotrope everywhere in my Clojure code.
Functions, which in Clojure are 99% pure functions, are green and peaceful, with a promise of power.
Is this not beautiful? Also: What Theme Are You Using?
Unix
The best operating system because it is not cluttered with garbage.
Its heritage, too, is interactive computing. The creators of Unix knew the freedom and joy that come with time-shared systems (they worked on Multics; see the History of Unix).
The soul of Unix is to be clean, lean and efficient.
Emacs
…It is written in Lisp, which is the only computer language that is beautiful. It is colossal, and yet it only edits straight ASCII text files, which is to say, no fonts, no boldface, no underlining. In other words, the engineer-hours that, in the case of Microsoft Word, were devoted to features like mail merge, and the ability to embed feature-length motion pictures in corporate memoranda, were, in the case of emacs, focused with maniacal intensity on the deceptively simple-seeming problem of editing text. If you are a professional writer… emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.
Neal Stephenson, In the Beginning was the Command Line
Emacs is not a computer program. Emacs is the stuff of thought.
Emacs is Lisp molded into the space between brain and machine.
Emacs is a sheet of brain-idea-software stuff that allows us to think intimately with the computer - a real tool.
Because Lisp gives you the freedom to define your own operators, you can mold it into just the language you need. If you're writing a text editor, you can turn Lisp into a language for writing text editors.
Paul Graham, On Lisp
Juggling
See here.
Claude Shannon, Seymour Papert. Some of the coolest people in the history of cybernetics and computing were into juggling. Also Richard Feynman, another joyful thinker.
Juggling is an amazing little playground into the psychology and the mechanics of learning a skill.
I can currently juggle 4 balls.
Rubik's Cube
Speedcubing has taught me that I can do anything if I follow what others have put out there on how to go about it.
Speedcubing in the extreme shows you there are 2 aspects to performance. One is the speed of movement and learning a motor skill, and the other is deliberate thinking about what one does during the performance.
With the cube, we can pause, look and think (not so with juggling or running).
The cube is impossible to solve without well thought-through technique, technique becomes an obvious citizen of the skill.
See The Quick Eval for how I am translating the intuitions gained into my practice of programming Lisp with Emacs. See the humble beginnings of my Emacs Meow Lispy screencasts. I take no compromises when it comes to using the computer masterfully, sensually. Standing on the shoulders of giants, I try to feel the aesthetics of my system to make it smooth like coconut. Removing, until there is nothing to take away. Finding the right levers to pull. The quiet competence of the computer makes the simple easy, the complex possible and the impossible thinkable. Like a spaceship responding to mere thinking.1
I spent some time learning to blind solve a while back (video from 8 years ago when I was ~20). This is a nice overlap of precise technique - with memory technique (mind palace is amazing).
Everything could still be different
Computer science is like 60 years old at this point.
We are still figuring out what software design and the soul of the computer even are.
There might be transformative ideas still left and right to be thought. This is why logic programming and alternative programming paradigms are interesting in their own right.
The Grandeur of Life
This quote gives me a shudder of epicness every time:
There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.
Charles Darwin
The writings of Richard Dawkins did the most to convey the uncompromised beauty of life to me.
Dawkins also recorded an audiobook of On the Origin of Species where he reads this passage; it makes me tear up.
Life and civilization are an epic, psychedelic rollercoaster, the craziest thing ever to happen. It is on us to make it turn out great.
More thinkers: Richard Feynman, Neil DeGrasse Tyson, Carl Sagan, Daniel Dennett and Bachir Boumaaza (a philanthropist that made my generation of computer gamers aware of the bigger picture).
You might be surprised but the Harry Potter fanfic Harry Potter And The Methods Of Rationality conveys this and a whole way of thinking with it. This book is fucking amazing.
Reclaiming The Soul
A few years ago I stumbled across the Wim Hof Method. When I do the breathing, I sometimes cry because I think of the epicness of this whole project which is life on earth.
Being a human means being an animal, being an animal means being a survival machine. We are all into this together.
I am much inspired by Andrew Huberman's recent public thinking, laying out a vast landscape of understanding the human mind and body, with amazing twists, nooks and crannies.
Because of him, I go outside in the sun first thing in the morning.
Sam Harris's meditation app made me realize what meditation could be about. It is the difference between being absorbed in thought and paying attention.
It was Anil Seth2 recently who talked about how we can now reclaim the term Soul.
The survival instinct that connects all of life, the underlying logic and machinery, the way the living substrate has an intelligence in its own right. Navigating, and regulating inner states and behaviors.
From the single-mindedness of reproducing information came endless forms most beautiful, organismic life.
An organism is a configuration of matter with one goal: to keep on going.
Call it the breath, the inner aliveness, the fire: the information processing and collective achievement of so many mindless processes that separate us from death.
The unique flavor of our consciousness is shaped into being by the logic of life and survival.
Which connects us primarily with the rest of the Tree of Life.
Why not call this most interior, this most primordial aspect of our minds, The Soul?
I love neuroscience
Thinking about how the brain works is one of the great joys in life to me.
Biology is my first passion. But I won't go back to the lab after experiencing the joy of building software (little experiments are a dopamine ride…)
I am formally educated in general biology, biochemistry, genetics, cell biology, the tree of life, etc. In my studies I emphasized neurophysiology and contemporary neuroscience.
I can recommend the Brain Science podcast, Huberman Lab, Oliver Sacks, V.S. Ramachandran, Robert Sapolsky, textbooks, general stuff like Pinker, and Sam Harris.
Want to read:
- The Expression of the Emotions in Man and Animals, Charles Darwin
- everything from Valentino Braitenberg
- Aaron Sloman
- Freud
Dreaming
I practice lucid dreaming and try to be in touch with my dreams. So I know they are stupid and don't matter. Except for sort of experiencing the mind with a different twist.
In order to solve a hard programming problem, I need to sleep on it for a night, having loaded up my brain with the problem to the point of vibrating.
Some fav books
The best ideas are laid out beautifully simply and clearly. Also, Joyful ideas.
Great programming, great philosophy and great science are alike.
They introduce common sense ideas. And then build a world of ideas that can be explored by the user.
The brain has the capability of instantly parsing a visual scene, and so it is with obvious concepts. A great communicator of ideas will present your brain with obvious concepts, using the quiet competencies of your brain to build towers and worlds of ideas.
Examples:
- The Society Of Mind
- Vehicles, Valentino Braitenberg
- The Selfish Gene, Richard Dawkins
- Consciousness Explained, Daniel C. Dennett
- Rich Hickey Talks
- The Stoics
- I guess Aristotle
- I guess Freud
- The Pale Blue Dot
Push Singh: EM-ONE
I recently learned about Push Singh's thesis under Minsky, programming some of Minsky's mental agents for common sense reasoning. One of the next things I want to do is implement it in Clojure.
EM-ONE: An Architecture for Reflective Commonsense Thinking, Push Singh.
Read by Aaron Sloman and Gerald Sussman. Isn't this super cool?
What is missing from the human-computer symbiosis
Fascinating read:
The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal.
Every programmer should read this; seriously, it is an amazing ride through the history of computers.
I feel J. C. R. Licklider was a kindred spirit of mine.
Same with the cybernetically minded people.
Lick understood the power of interactive computing and was one of the first hackers, spending time on a computer terminal just for the fun of it.
He had an amazing talent for intuiting which people and which technology would lead to his vision of computing, which is more or less our modern computing environment (with very interesting deviations).
One of the next things I want to do is go through Man-Computer Symbiosis and make a list of what is still missing.
The Stuff Of Thought
Ah, the power and spirit of Lisp. What is it? There are good reasons why it is hard to say what it is. It has to do with taste and aesthetics. But when you know it, you know it. Hence the ominous claims on its power.
The most important part is that it is about interactive programming. It's about taking the program one step further, dynamically.
Simplicity
Organizing thought is what programming is about.
Philosophy, like programming, is about organizing thought. To keep things simple, to find the essence of the problem, and to express ideas in terms of common sense reasoning.
These are hallmarks of well-organized thought.
Controlling complexity is the essence of computer programming.
Brian Kernighan
To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.
Aristotle
I like how simple this definition of truth is.
Ethics
Eliezer Yudkowsky said somewhere on Less Wrong:
Life is good, death is bad.
To do what yields more life is good;3 while to fail to do what yields more life, or to do what yields death, is bad.
Good ethics, like good philosophy and good programming, is simple.
Should we cure a 35-year-old of cancer? - Yes
Should we cure a 95-year-old of cancer? - Yes
Should we cure cancer for good? - Yes
At no point in my ethical reasoning must I be confused.
It is the habit of a great programmer to smell out confusion early and from afar. It is the primary hustle of the computer programmer, as Kernighan said, to manage complexity.
The best scientists and philosophers keep their thoughts plain and simple.
Audio Diary
I have kept a diary for ca. 10 years.
I have like 7 or 8 years of audio diary that I treasure. It follows the journey of my thinking.
Maybe at some point I will try to publish it in some form. Of course, this website is already an expression of my thinking, so it's already happening.
If for some random reason you decide to get close and personal with my raw thinking:
Joy, curiosity and spirit. Not supernatural, but substrate-independent, biological principles of organisation.

Elegance?
Pardon me, Your Honor, the concept is not easy to explain – there is an ineffable quality to some technology, described by its creators as a concinnitous, or technically sweet, or a nice hack – signs that it was made with great care by one who was not merely motivated but inspired. It is the difference between an engineer and a hacker.
Judge Fang and Miss Pao in Neal Stephenson's The Diamond Age, or, A Young Lady's Illustrated Primer
Footnotes:
That is from Isaac Asimov's Foundation (the last 2 books, I think). The spaceship is called the Far Star and has a cognitive interface via the hands.
Why not the hands?
See Being You, by Anil Seth
Also, Antonio Damasio is all about how the nervous system is built out of living tissue, which is this self-regulating, sort of steersmanship-like intelligent substrate with a relationship to its environment, with little kinds of memory, control, goals and purpose by itself.
Maybe I should call this human flourishing.
- "More life" might be mistaken for a claim about population ethics, but it is not.
- Interpreting what it is about life that matters, what it is that should be maximized, is not simplistic.
See also
- The Moral Landscape, by Sam Harris
- Rationality From AI To Zombies, by Eliezer Yudkowsky
- My goals for the world
- How I think that life is about ideas