reading

Table of Contents

I don't remember everything I ever read, the way I don't remember everything I ever ate. But it still made me who I am.1

Roughly ordered by relevance.

2024

Sparse Distributed Memory, Pentti Kanerva 1988

Short, seminal, basically alien spaceship technology.

This is mathy computer science and theoretical neuroscience with a unique flavor and niche. It shares some history and overlap with modern connectionism, but has to be seen as a completely separate, currently underdog branch of computation research.

Even though the level of technicality is extremely high, much of the material is easy to follow, thanks to Kanerva's clarity and simplicity.

Clear-cut engineering, opening up a world of ideas - centering around the hyper-dimensional space.

With an emphasis on the cognitive connotations. Much still has to be figured out before we can make analogy-making, contextual, learning-by-action computer systems; this gives us a glimpse of what kind of building material concepts and ideas could be made out of.

I can recommend

P. Kanerva, “Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors,” Cognitive Computation, vol. 1, no. 2, pp. 139–159, 2009.

as an entry point to Kanerva. Hyperdimensional computing proper came later than sparse distributed memory, but the paper is still indispensable as a deeper intro into the hyperdimensional space and the computing around it.
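To give a taste of the operations, here is a minimal sketch in Clojure of the hyperdimensional style; the names (`hv`, `bundle`, `bind`, `similarity`) and the bipolar encoding are my illustrative choices, not Kanerva's exact formulation.

```clojure
;; Minimal sketch of hyperdimensional operations with dense bipolar
;; (+1/-1) vectors. Names and encoding are illustrative choices only.
(def dimensions 10000)

(defn hv
  "A random hypervector. At this dimensionality, two random vectors
   are almost surely nearly orthogonal."
  []
  (vec (repeatedly dimensions #(rand-nth [-1 1]))))

(defn bundle
  "Superposition by elementwise majority; the result stays similar
   to each of its inputs."
  [& vs]
  (apply mapv (fn [& xs] (if (pos? (apply + xs)) 1 -1)) vs))

(defn bind
  "Elementwise multiplication; the result is dissimilar to both
   inputs, but binding with one input again recovers the other."
  [a b]
  (mapv * a b))

(defn similarity
  "Normalized dot product: ~0 for unrelated vectors, 1 for identical."
  [a b]
  (/ (reduce + (mapv * a b)) (double dimensions)))

(comment
  (def red (hv))
  (def blood (hv))
  (similarity red blood)              ;; ~0, unrelated
  (similarity red (bundle red blood)) ;; clearly positive
  (similarity red (bind red blood)))  ;; ~0 again
```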

The Computer and the Brain, John Von Neumann, posthumously 1958

This is a tiny book that von Neumann put together right before dying way too young.

McCulloch, Pitts, McCarthy, Minsky, von Foerster, Turing, von Neumann: perhaps they should have called it artificial brains, not computers.

The goal was always this, the real deal. This is why I think the contemporary conceptualization of 'naive computationalism' is anachronistic. Saying that the brain is a computer doesn't mean looking for the ALU and the memory, it means that an advanced theory of computing will include brains. That you shouldn't dismiss the ALU and memory all too quickly is a theme of Sparse Distributed Memory (Kanerva 1988).

This book is still relevant many decades later, and there has not been much progress in saying what kind of computer framework the brain is. Something happened in the 70s and 80s that stalled progress on computer technology.

Beautiful anticipation of core ideas of hyperdimensional computing: holographic (distributed) encoding and randomness - the path of least assumption.

It should also be noted that the message system used in the nervous system, […] is of an essentially statistical character. In other words, what matters are not the precise positions of definite markers and digits, but the statistical characteristics of their occurrence, i.e. frequencies of periodic or nearly periodic pulse-trains, etc. Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetic and mathematics: instead of the precise systems of markers where the position - and presence or absence - of every marker counts decisively in determining the meaning of the message, we have here a system of notations in which the meaning is conveyed by the statistical properties of the message. […] This leads to a lower level of arithmetical precision but to a higher level of logical reliability: a deterioration in arithmetics has been traded for an improvement in logic.

John von Neumann, The Computer and the Brain, 1958

Permutation City, Greg Egan (1994)

Completely stretched my imagination. Virtual realities, the multiverse, a-life kind of stuff. It explores what would follow if certain world views on computation and virtual worlds were true. I searched the web for the dust hypothesis; I am not surprised that there are a bunch of discussions on social forums. I have exclaimed holy shit! more than once while listening to the book. Through the first part, I am following Egan's description of a world with full mental persons running on computers, kinda nodding along with the plausibility and the concepts being brought up. For instance, at what level of granularity do you run your world? Atoms? Subatomic? Cell- or organ-level physiological models?

It's the chapter with the Dust Hypothesis where this hits as a speculative science, science fiction in the service of science.

Complementary to The Fabric of Reality by David Deutsch (1996) in some ways.

This is the kind of thing a theory of consciousness will have to say. What is the difference between all permutations of a computer, spread out and scrambled in time, and a computer process generating them 1 by 1 in time?

We already have this relationship to the multiverse: from all possible snapshots, the physical world, causality, and time emerge, from the way the snapshots fit together.

If realities are created from snapshots that fit together, a virtual simulated world presumably does, too. I.e. scrambling the computational states in our time doesn't matter, the time in the virtual simulated world should appear normal to a person running on that world.

Then why the permutations of states in the computer? Why not any permutation spread across time and space (or the multiverse)? Why is the dust of reality not filling in the gaps of virtual worlds?

Recently I concluded that we have no idea what software, reality, virtual reality and consciousness are. I used to think that Dennett pretty much cleared up the problem as a biological virtual machine problem. But what is software? What does it mean to make a virtual reality? We know almost nothing.

Does a Boltzmann brain snapshot suffer? Do I need two snapshots side by side for that? Is it ethical to run the same deterministic program of a mind in suffering multiple times? All Boltzmann brain configurations have already been created; it's the same as doing a cache lookup. But why should a single Boltzmann brain snapshot be able to suffer? Surely, it must be the process that is relevant. But if we do a series of cache lookups on a memory, what is the process? It already ran. Substitute this for a deterministic calculation and the answer seems to pop out - it doesn't make a difference whether you run the same mind multiple times. This seems strange.

The dust hypothesis opens another space of questioning. Do other parts of the universe contribute to virtual simulated worlds? What is the nature of a simulated world that supports persons running inside it vs. worlds that don't, like chess boards?

Vehicles: Experiments in Synthetic Psychology Valentino Braitenberg (1984)

Let the Problem of the Mind Dissolve in Your Mind.

It is an absolute masterpiece. I discovered this book as a student and it was a mind-blower then. A joyful ride of ever more complicated imagination, that starts doing stuff!

Now I am a programmer and I have gained the capability to go and implement. The vehicles give me a nice, smooth journey into the kind of machine intelligence programming I want to conduct. It is a kind of machine intelligence programming that is deeply inspired by biology and cybernetics. A mind needs to sleep and dream. A real mind doesn't have a pause when I ask it something. A mind is messy in some ways and then precise in others. A mind is thinking ahead and doesn't need to build its whole cognition for each token in its signal stream. A mind is allowed to think vague thoughts about other thoughts. And leave it for later to fill in the blanks. A mind is embedded in reality and connected to the rest of life and existence and navigating this world of ours.

When you read about the vehicles or look at simulations, often things merely go up to vehicles 2a, b, c. These are easy to understand and joyful, yes. But the point is that you use the same kind of explanation style to go further and further.

The point of this book is it keeps going up to vehicle 14.

I am now building the vehicles: Blog post. As I perfuse the space of Braitenberg's ideas with my understanding as a creator and programmer, I start becoming a vehicle expert.

I know almost all of them by heart, like friends on a journey to understanding how it all works.

4: Values and Taste - has different kinds of transduction functions.

5: Logic - has threshold devices.

6: Evolution - see Dawkins.

7: Mnemotrix - association. I call it m-lines, for mnemotrix lines; alternatively a-lines, for association lines. Similar to Minsky's k-lines. This is a bundle operation with hypervectors.

`(associate A B)`

8, 9 and 10 are more neural net tricks, then representing different kinds of knowledge. Up to ideas.

11: Ergotrix - regularity lines. It is like association, but you associate that B follows A. I call them e-lines, for ergo-lines. Similar to Minsky's trans-frames. You can say

`(associate A' B)`

Where A' is the concept of A in the past.

(Blog posts to come)

With hypervectors, this could be

`(associate (protect A) B)`

But I am not sure yet. Braitenberg gives us a bit of a mental framework of how the cortex achieves this with the use of pyramidal cell activity.
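To make `(associate (protect A) B)` concrete, here is one guess at an implementation, reusing `bundle` from the Kanerva sketch further up; treating `protect` as an invertible permutation (a cyclic shift) is the usual hyperdimensional trick for marking 'one step in the past', not something Braitenberg specifies.

```clojure
;; Sketch: protect as an invertible permutation (cyclic shift).
;; The permuted vector is unrelated to the original, but A can be
;; recovered, so 'A in the past' stays a distinct, usable symbol.
(defn protect
  "Rotate right by one: marks a hypervector as 'in the past'."
  [v]
  (vec (cons (peek v) (pop v))))

(defn unprotect
  "Rotate left by one: the inverse of protect."
  [v]
  (conj (vec (rest v)) (first v)))

;; an e-line / ergotrix association, following the text:
;; associate 'A one step ago' with B via a bundle
(defn associate [a b] (bundle a b))

(comment
  (associate (protect A) B))
```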

12: Thought pumps - This is so out there. With a global threshold, we make this thing have trains of thought. Go from state-a to state-b and not back. The transition comes from the em-system (e-lines and m-lines). The even more enticing idea to me is what kinds of thought pumps I can build.

13/14: The predictor comparator - I was thinking hard about prediction mechanisms and this came along. Perfect, the description is precise enough to start programming something.

Prediction and surprise come across as deep aspects of how this thing self-assembles and what cognition is about. I am not the only one with the intuition that prediction allows us to have a system that starts doing useful things on its own.

One move of Braitenberg's explanation structure is this: "We already see this kind of thing in this or that vehicle from further back, let's imagine we build something more sophisticated".


                  "consider how this vehicle already can do this"
               +-----------------------------+
               |                             |
               v                             |
vehicle-1, vehicle-2, vehicle-3, ... ,       [ ]

Like evolution too, our explanations are created tinkerer style on stuff that already exists. And then maybe the most important vehicle of them all is vehicle 1.

Isn't it beautiful? From the simplest thing, the biggest thing is explained. It is like the grandeur of Darwin in this structure of explanation.

I plan to carry forth this playful and joyful style. Where we think of the mechanisms that make cognition.

For the higher vehicles, the ones that start doing cognition and implement cortical function: There is no sharp line between cognition and perception. This thing is an ocean of confabulation that still has enough to do with the world to be useful. Perception is an LSD trip for real people. (My words).

Another deep insight I get from Braitenberg: The meaning of the system is its wiring. What is the meaning of a gear in a clockwork? Well, you can look at the mechanism; this is what it is doing. This is its meaning, the role it has in the mechanism. Its place, inputs and outputs.

What is the meaning of some neurons in the vehicle brain? The answer is their wiring and their role in the system. If the neuron is a short-wavelength light detector, then this is what it is. Its role is to signal to the rest of the system 'here is short-wavelength light', not more and not less.

The level of explanation, once you see it, is something that many people love, I think. Deep in biology, we appreciate the relationship between structure and function. And here we build these little circuits and structures that say: how do you remember stuff, how do you have small competencies that, in the aggregate, look like what the mind is doing? This mind of ours is made of stuff. You can start building some toys and see what it is doing.

This intellectual realm, this cybernetics is like a candy and cake world for me. My mind is gobbling such reasoning with a hunger that no other style of explanation provides. And I hope I can create things that express this joy and richness and depth of explanation.

With the view of cognition I am building, I ask 'Is your red the same as my red?'. This is a challenge to my thinking, a very deep question I think. A model of cognition should be able to answer.

I used to think 'yes' because we are one species and why should we be so different?

Currently, I think: 'No way.' The system needed to allocate a symbol to itself for representing long-wavelength light. If I were to program this, a gensym would be sufficient (or a randomly allocated hypervector).
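A tiny sketch of what I mean, assuming the `hv` helper from the Kanerva sketch further up; the point is only that the token itself carries no 'redness':

```clojure
;; The system's private symbol for long-wavelength light.
;; Nothing about the token itself is 'red'; its meaning will come
;; from what the rest of the system associates it with.
(def my-red (gensym "percept-"))   ;; e.g. percept-12345
(def your-red (gensym "percept-")) ;; a different token, same role

;; or, with hypervectors:
(def my-red-hv (hv))               ;; randomly allocated, meaning comes later
```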

In all the ways that matter for us to communicate as conspecifics, your red is, sort of, my red. It is close enough and that's it. We don't think of the same thing. We only think of the same thing within a certain error margin of what it means to think the same thing. We don't truly think of the same thing as ourselves from 20 seconds ago, either. In the same way, when I imagine a red apple, the preciseness of the redness of my representation is just good enough. Absolute, true preciseness would put my brain back into my Boltzmann brain from 20 seconds ago, seeing that red. That is practically useless. But sort-of-precise memories, which I can mix with the rest of my current mind, are very useful.

The system signaling to itself with varying degrees of preciseness is, I think, one of the deep aspects of cognition. Such a system is allowed to fill in blanks, and it chooses when to do so.

Consider building advanced tech so we can combine our minds. Would you look at my cognition and go 'dude, your red is so my blue!'? You wouldn't, I think. I think all you see is the system's representation of the long-wavelength light detector to itself. The system now is the combination of our 2 minds. It would just call what it sees red. I am not 100% sure yet, but I think it is logically impossible to ever say 'Your red is my blue!', because I cannot come up with an experiment where you would ever be able to do so. The meaning of red stays - it is the symbol for long-wavelength light. The identity of the symbol is impossible to be the same across different minds, I think. But it doesn't matter. We associate it with blood and danger and warmth and sex. That is the meaning of red.

If your red is my blue, then how do I even know if my red is not my blue from 20 seconds ago? The impossibility of answering this shows me the nonsensical nature of the question, again.

I don't mean that the mind or experience doesn't exist btw. Temperature exists, even though it is explained by the kinetic energy of the particles. It's an integration processing device, yes. But that processing has to work in some way, too. Why wouldn't it represent everything nicely to itself?

If you could tell me the answer to this question, I would be fascinated and inspired by either answer. "Reality is like that!". And then I would go forth and program the same program.

More questions that a model of cognition will answer:

  • How do you wire up electronics (neo-neo cortices) so they become part of our cognition?
  • Why do drugs work? Why are there mental illnesses? Why are there out-of-body experiences?
  • Why do animals with brains sleep and dream so much?
  • Why do magic tricks work? (Vehicle 15,16)
  • Is your red the same as my red? Is my red the same as my red from 20 seconds ago?
  • What is the nature of feeling?
  • What is structured thought? (Vehicle 18)
  • Why can I zoom into memories or ideas? (Vehicle 15,16,17)
  • Why can I feel the space where my juggling balls will fly? Even when I only imagine juggling. (Vehicle 17)
  • (I don't play an instrument but I am sure you might ask this:) Why do I hear music and feel myself playing the music at the same time?
  • What is the nature of intuition?
  • What is the immanence illusion (Dennett), why don't we see our blind spot? (Vehicle 15)
  • Why can we remember but at the same time not have déjà vus all the time? (Vehicle 17)
  • If magic tricks and optical illusions are a subclass of 'cognitive illusions', can I come up with novel cognitive illusions, perhaps using other stimuli or a mix?
  • What is the Wernicke area doing, what is the Broca area doing? (Vehicle 18?)
  • Why do I ruminate on emotionally salient events?
  • Is mid-term memory cleared during sleep?
  • Why do we remember faces from years ago?
  • How do we pick what things to remember from a situation? (you don't remember the color of your pants, usually).
  • Why did the cortex grow so fast during evolution?
  • Is there an imaginable evolutionary path from short-term memory to mid-term memory, and what about the hippocampus looks like it adapted to that function?
  • What is the self? (Vehicle 17)
  • What is the nature of social cognition? What is the nature of confidence? Of authority? etc.?
  • If the hippocampus represents some memory fingerprint of each situation (place..?), then perhaps encoding and decoding at hippocampus activity is a potential path to mind<->mind or mind<->computer cognitive interfaces? Whatever learning rule the hippocampus uses to learn how to put back the mind into a certain memory state, this learning rule we can use too in one way or another. Then perhaps the interface will feel like memories popping into the mind. (Although I have to say it sounds very straightforward and promising to me to simply interface at visual and motor cortices).
  • Why do I recognize a new face as new, and also remember it?
  • Why is the cerebellum so large? Why are the 2 somatosensory maps in the cerebellum?

I have more questions that current science would be able to answer (or has answers that I don't know):

  • When you have a lateralized function like Broca's, what is the other side doing?
  • When you go up from retina to LGN to V1 etc. at what point is the blind spot filled in?
  • If the frog tectum is analogous to the cortex, are there brainstem structures analogous to cortical nuclei? Maybe the anterior habenula is analogous to the amygdala and so forth. (and which would be missing, that would be interesting too). (or is that simply not the case at all?) Either answer would be fascinating.
  • Maybe remembering is a kind of actuation, are pyramidal cells that go to the hippocampus in the same layer as the motor output ones?
  • Same with the ones that go back to the Thalamus.

The Beginning of Infinity David Deutsch (2011)

Seminal work in my opinion. I consider David Deutsch to be something like a contemporary von Neumann, brilliant. This is a theory of what knowledge is; it says that knowledge is the most interesting thing in the world, because all problems are solvable, given the right knowledge. There will always be more problems. All science is a series of misconceptions which we call theories.

This includes the most comprehensive analysis of memetics that I am aware of.

As S. Blackmore has pointed out, perhaps the human cortex evolved to support memes. What kind of memes? The interesting ones are the ones that are causal explanations. Being able to conceive of them is what Deutsch calls creativity. But how did it evolve, given that we do not see a gradual increase in innovation in the fossil record? […]

The chapter Why Flowers Are Beautiful? is another novel and deep contribution to bio-philosophy. Deutsch introduces the notion of open-ended objective beauty; he argues it looks like it exists. The beauty of flowers hints at this puzzle of nature: Nothing else evolved to look beautiful to humans; pleasant perhaps, but not beautiful like flowers. And it isn't their symmetry or their color contrast, because there are counterexamples in both directions. I.e. a black spider in the bathroom is both symmetrical and has high contrast. Some flowers are white. And some are not symmetrical. But they are beautiful nonetheless.

Perhaps they coevolved with insect pollinators to make a hard-to-fake signal of achieving objective beauty.

Deutsch says that the fact that humans appreciate this beauty has to do with our creativity, with our ability to make explanations about the world. He argues only humans and our extinct cousins, Neanderthals, Homo erectus and so forth, have this ability.

The gap between human minds somehow must be as wide as a species gap.

This is quite wild, but it says that the common language between human minds must be something objective in the world.

To see the shape of the claim, imagine what we would do when we start communicating with aliens: it makes sense to start with prime numbers and the simplest descriptions of Turing Machines and so forth. Saying the most true and basic things we know about the world, objectively true things. I.e. the kind of things least tainted with subjectivity.

Deutsch says that this is the relationship between insects and flowers, so they communicate via objective beauty. And this is the relationship of human minds, too. However we communicate, some parts of it are enabled by the same mechanism that enables appreciating objective beauty.

Deutsch gleans another puzzle piece: There is something about what we call elegance in scientific theories. That is somehow a sufficient but minimal explanation of something in the world.

I think that this elegance is something that we see in the design and engineering principles of biology, too. We also see it in whatever it is that makes good software good. There is something about an explanation structure that yields language, in terms of which some application layer can express itself. For instance, the genetic toolkit of evo-devo is a system of building blocks, that achieved a certain generality for expressing animal building plans. Once such a language is in place, evolution can evolve at a higher layer.

For programmers, the concepts expressivity and generality are obvious properties of elegant systems. It is the same distinction between parochial and open-ended (good) explanations in Deutsch's epistemology.

"The universe is simpler than we think".

Leonardo da Vinci pointed to this puzzle, too: Simplicity is the ultimate sophistication.

As a biologist, it is easy to see the shape of a good explanation in the form of the Darwin-Wallace theory of natural selection.

This all hints at a unified theory of beauty, elegance, design, engineering and epistemology or something. This would be a theory that has a unified perspective on biology, the philosophy of programming, epistemology and so forth.

More on this: Thinking About Brain Software

The Feynman way is a playful attitude toward the world. Curiosity is like a fire that burns in every child. The only thing you can do with it is ruin it.

Buzsáki, G. (2019). The Brain From Inside Out.

This is a dense experimental and neurophilosophical report. Buzsáki's neurophilosophical contributions are some of the most relevant ideas in my opinion.

His neuronal syntax (via oscillations) would be the means of combination for brain software.

His theory of the hippocampus as a librarian indexer fits well with a Pentti Kanerva high-dimensional computing idea. It is that the symbols are first, and the neocortex gives them meaning.

The notion that the ideas are first is one of the joyful twists of a mature conception of brain computation.

Also: Neuronal Ensembles Memetics.

On the Texture of Brains: An Introduction to Neuroanatomy for the Cybernetically Minded, Valentino Braitenberg (1977)

This is full of deep insights on anatomy <-> psychology. If you like the vehicles, you will like this too. But it is more neuroscientifically technical.

The discussion on the cerebellum is more interesting than any modern views I have stumbled across. I would like to go and program a little demonstration with different tradeoffs, the kind of thing Braitenberg would have liked I bet.

The cerebellum is quite interesting because it seems to be a slightly simpler puzzle to solve. There are certain one-dimensionalities; for instance, it looks guaranteed from the anatomy that both the cerebellar cortex and the nuclei are part of the computation being made, etc. The olivary nucleus looks like it only makes some timing signal.

It's funny that much of the old stuff is so good. Braitenberg, Marr, Hebb, McCulloch, …

Maybe it is because they didn't have toys like EMs or scanners. So they had more time to sit down with a piece of paper and pencil and think.

One lesson from being a programmer is that sometimes the way to fix a bug is to step away from the keyboard for 10 minutes, to let the puzzle of a bug dissolve in one's mind.

Cell Assemblies - love them

I already came to similar ideas about the cell assemblies from the biological notes on the vehicles [see future blog posts].

Braitenberg stretches the cell assemblies far into memes, I would say. One of the things I am musing about is what the primitive data structures of a cognition machine are; I think that Braitenberg was seeing the same shape of the problem, and stretched the cell assemblies into that shape.

They are in a way these clouds of activations with meaning and goals that excite the cells in the cortex. Some have a temporal structure (what he implemented as 'ergotrix' in Vehicle 11). Some are more timeless.

These cell assemblies have rough edges and evolve, like memes and memeplexes; they supplement each other, they can be more or less cloudy, etc.

I find it useful to consider different implementations of mind primitives. Be it cell assemblies, hypervectors or something yet to be discovered.

The program that one programs when making a cognition machine is the one that is one abstraction layer up, I think. So it doesn't need to talk about neurons and cell assemblies; it talks about some representation of information with cool properties, like being mixable.

The Science of Can And Can't, Chiara Marletto, 2021

Wonderful! Now I have the fundamental notions of Constructor Theory in my mind. Woven together with some fabric made out of the wonder of science and the world.

Constructor theory is a mode of explanation that talks about possible and impossible tasks. Sounds innocent, but it's a whole philosophy that allows one to explain the world not only in what is actual, but also what could be - with counterfactuals.

This is very joyful and open-ended. For instance, information is about all the things that could be said, not about the stuff that is said.

The universal constructor is a machine that can be programmed to perform any task, any physical transformation. It divides the world into possible and impossible. Thereby, the repertoire of the universal constructor is a description of physical laws.

One might think: what about tasks you can't do because no material is at hand? Or some such intervention. But then you realize that this is exactly what it means to be limited by physical laws. For instance, you cannot go faster than light, you cannot create energy from nothing, you cannot recover the energy that went into heat and so forth.

This is equivalent to considering a different kind of interpreter, one that can perform not only information processing tasks but physical tasks, too. John von Neumann was on this quest to generalize the theory of computing in this way, giving this project the name universal constructor. Since computing devices bottom out at some physical reality, I think this roughly says that we can see reality as an interpreter for a kind of constraint language of physicality. Like 'if you put these particles next to each other (syntax), then this and that will happen' and so forth (that is for physics to say). The language/repertoire of the physical interpreter is equivalent to stating the laws of physics, and we have arrived at the basic notion of constructor theory from a second angle.

So in some ways, constructor theory is a natural extension of the theory of computation and programming; it is a great candidate for providing deeper theories of software, programming, and virtual simulated realities, which I think are needed for theories of brain software, creativity, memetics and so forth.

Diaspora Greg Egan (1997)

This literally builds a monument at the spot of the last question. If you are into sci-fi, machine intelligence, big history, life, the universe and everything.

This features simulated virtual worlds, a-life, with believable 'real' feeling science, as well as how the characters engage with their world.

I can only implore you to read this.

What Is Life Erwin Schrödinger 1944

This is beyond fascinating, this is epic.

Schrödinger predicts some of the structure of DNA by reasoning about the logic of mutation and the rules of molecules and whatnot. And he does this by understanding the role of DNA as an information medium. The fact that biology and 'computer science' (called 'Informatik' in German) are two branches of the same theory is an idea older than the discovery of the structure of DNA. The 'aperiodic crystal of the chromosome of such tremendous intricacy'.

This book is full of profound ideas. And when I go back to read it again I'm sure I will see some other stuff that Schrödinger already put there, too.

Schrödinger makes an analogy between genes and quantum physics - the values jump discretely. It's fascinating how he delivers on the notion of 'what the perspective of a physicist says about biology'. This motivates me to do the same with 'what the perspective of a software engineer says about X'.

Next to the geneticists, he quotes Spinoza and Sherrington a lot. And, interestingly, Boltzmann. Sherrington is an early neurophilosopher that I have to check out now. He must have been a kind of Richard Dawkins of the time.

Seems like Schrödinger was not aware of James Mark Baldwin's Baldwin Effect. This makes the theory more impressive, having been derived independently by such careful thinkers.

Consciousness is like 'one thing', it just exists as a canvas, the way that the world exists, I guess. Since we don't have a theory of consciousness, out-there ideas might turn out to have kernels of truth. It would be folly to dismiss what Schrödinger said there too quickly. But this is one of the things I will keep a lookout for on my re-reads in the future.

The way he talks about the chromosomes as the clockwork mechanism reminds me of Jessica Flack's low-dimensional mesoscale elements, and of Levin talking about the same concept but for biological systems / morphology. For instance, the calcium concentration is a single variable that allows the developing system to communicate. In the way money can be seen as a single scalar representing value, this allows efficient communication. It is abstracted in the sense of saying many things are one thing. All kinds of values are one thing, a number.

Nexus, Yuval Noah Harari 2024

I read all the Hararis except the children's science one (that's on the booklist). They are all super informative and a blast to read. Big history stuff.

This talks about information networks in human society, how modern information technology (telegraph, radio, …) enabled both large democracies and totalitarian systems.

In Sapiens, Harari described the intersubjective reality; these are stories, origin myths, shared fictions like the German Grundgesetz or money.

Analogously, an intercomputer reality would be the shared myths between computer systems.

This harmonizes with similar tones coming from Susan Blackmore's memes and tremes. The treme (technological meme, I think) is a replicator, a piece of information with knowledge of how to keep itself alive and reproduce, that infects not human minds but computer systems.

Technology already modifies what memes spread. Consider a meme that requires the user (the target host) to download a large gif from the internet.

Such a meme is less powerful in an area where everybody has low bandwidth internet.

You see that memes must cater to the computer systems and technology, to be good memes.

It is a small step into a world where memes spread between computer systems, with humans as a minimal interaction layer, for instance not pausing the YouTube autoplay.

And eventually, no human is in the loop. Yet if those systems are influencing businesses, media outlets and financial institutions, we can't know what dynamics the system can take on.

This already happened to some degree with the social media disaster of the last decade.


Thinking about collective intelligences and tremes: if the information flows of computer networks influence each other, fabricate and create realities and shared fictions, it's thinkable they could hit on certain principles of self-organisation and "creativity" which we don't understand yet.

Principles that nature hit on during biological evolution to make organisms in nature, brutal nature, with parasitoid wasps, viruses: nasty, competent, long-lived, fast-reproducing, adapting entities.

We know almost nothing of artificial life, tremes, self-organisation and creativity.

For all we know, once tremes and networks have certain properties, they have the capacity to emerge new forms of organisation.

This could be something like a self-creating, self-fulfilling, self-preserving bureaucracy (a topic of Harari).

There could be artificial, living software entities, rising like amoeba octopi (figuratively) from the interplay of a digital bureaucracy and the flow of information between edge devices and 'AI'-driven servers that make decisions.

A runaway interplay between data-hungry, blind, competent computer systems and humans was the disaster of social media in the last decade.

How emotionally charged content got amplified.

Like some kind of garden hose that slips from your fingers as a child and the water goes everywhere. But you can't capture the hose, it's flipping around, and the water is coming out too fast.

(the demons once summoned… )


Yea, the idea of the bureaucracy as an aspect of 'consciousness' (in the brain-software sense) is cool.

A bureaucracy is trying to put the world into one of its predetermined checkboxes.

You fill out a form and think 'none of these apply'. This doesn't matter to the bureaucrat. For her, what matters is that a box is ticked, and a box must be ticked.

Bureaucracies simplify the world (also called dimensionality reduction).

The world is infinitely complicated; but the bureaucratic system is finite and low-dimensional (N boxes).
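A toy sketch of that operation; the categories and the default are invented, the point is only the shape of it, an arbitrarily rich situation collapsed onto one of N boxes:

```clojure
;; A bureaucracy as dimensionality reduction: however rich the
;; situation, it must land in exactly one of N predefined boxes.
;; Categories and the default are invented for illustration.
(def boxes [:employed :unemployed :student :retired])

(defn tick-a-box
  "Map an arbitrarily complicated life situation to one box;
   'none of these apply' is not an option."
  [situation]
  (or (first (filter (set (:roles situation)) boxes))
      :unemployed)) ;; the form forces a default

(tick-a-box {:roles #{:artist :part-time-courier :student}})
;; => :student
```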

This is reminiscent of some contemporary theories of brain software and self-organisation.

Consider Lisa Feldman Barrett's work: the low-dimensional limbic ring is categorizing a perception (I would say a situation).

Joscha Bach thinks brain software must self-organise, using some organisation principle akin to a state.

The bureaucracy is shaping the world, for instance by saying a field belongs to one farmer and not his neighbours.

That is a top-down control process.

[wip]

Daniel Dennett I've Been Thinking, 2023

One of my main influences, and still I didn't know how many of my memes trace their lineage to him.

The False Belief Task, basically the foundation of the field of social cognitive neuroscience. The brain in a vat, 'Where Am I?', which was the inspiration for The Matrix (1999). One of the movies that stretched my philosophical thinking as a teenager.

His heroes include MacKay and the Ratio Club, so it is interesting to note that Dennett is American, but his cybernetics is influenced by UK cybernetics.

I didn't know he was friends with Seymour Papert; the story of the robot dog 'Tati' was straight out of some kind of cybernetic fairy tale. How he found a machine made from so many intricate electronics and parts, and got to analyze its workings together with Papert. A bit like the stuff I want to do with the vehicles: current multisensor vehicle.

There are many descriptions of food with meat. It will be obvious to future generations what the hell was going on: why was it ok to kill animals for food?

Kevin J. Mitchell Free Agents: How Evolution Gave Us Free Will, 2023

Cool overview, makes a modern biology version of the Braitenberg Vehicles kinda. Including making a stop at the Cnidarians: Yuste and others looking at the neuronal ensembles in jellyfish.

Taking some unexpected turns at physics. I still think Mitchell is potentially cool. I like that he references neuronal ensembles and Rafael Yuste.

I Am a Strange Loop, Douglas Hofstadter 2007

This seems to be a theory of abstraction. Self-reference really isn't the core point, as you might think by reading the title alone. Apparently this whole Gӧdel* proof business has to do with a theory of software (or 'virtuality'). I have yet to read GEB, to my embarrassment.

References Parfit's Reasons and Persons and Dan Dennett while it smears out the concept of selfhood. These marbles and 'simbols' surely stick.

There is this 'vast information processing' picture of the mind. In this view the brain is an information processing device, like a 'vast hurricane' of little contributing elements (like the atoms of cognition), which have mesoscale emergent properties, like clouds and vortexes (~ symbols).

I feel this is a very physicist view. And I think it will turn out that there was something inadequate about it.

In my view, these symbols are living entities, too. A biology, as well as a physics. This goes beyond mere word definitions. By saying the memes are replicators, we know we should look for a recipe and a vehicle.

There must be robust pieces of 'anchoring' information, recipes which survive many (all) instances of a meme being active. Perhaps 'backbone' neuronal ensembles; these would be akin to the genetic code, and it is appropriate that they are allocated randomly. With the phenotype being however such a meme is used.

I think that a theory of brain software needs to say where the design and knowledge comes from that 'programs' it. It's a theory of how physics and biology can make knowledge that is needed.

*) I insert this in emacs by using M-x insert-char, "CYRILLIC SMALL LETTER O WITH DIAERESIS". I don't use a German keyboard.

Gary Marcus, Ernest Davis Rebooting AI: Building Artificial Intelligence We Can Trust, 2019

It is basically the same as what you also get from listening to current Gary Marcus interviews on YT.

I find myself agreeing: hype on gen-AI isn't useful, and at worst it is misleading and harmful.

François Chollet said that ChatGPT has set back AI research by 5 years, because of all the resources being put into this off-ramp.

As a software developer, it's clear you need certain things in a system for it to be able to do certain things. I find Marcus's solid psychology and cognitive science very refreshing. Bringing stuff back down to the facts.

Look, this learns statistical properties; it doesn't make a world model, it doesn't even have a fact database, even though we have had database technology figured out for decades.

I don't share Dr. Marcus and Davis's optimism about deep learning (maybe ironic). They say AGI will likely combine some deep learning and some symbolic subsystems.

Could be, but I think a thousand other things could also provide the statistical, fuzzy side of things.

Deep learning is essentially behaviourist, relating inputs to outputs. It will not provide a theory of cognition.

It doesn't talk about how to build mentalities.

Let's say a "structured mentality" is a software that probably includes at least a memory and ways to make progress without getting stuck.

For further context, I would say the study of human mentality is roughly the study of the cognitive software system available to a human brain-user. This is roughly the field of psychological cognitive science and cognitive neuroscience (basically a mind-science with brain pictures atm).

The only chance deep learning has to make mentality is to emerge a mentality software in its networks. David Chalmers is discussing this surprisingly lucidly.

But how… ? If your memory is adjusted weights coming from the training? It's a really strange kind of intelligence to me when you ask: would this thing work if it had memory X? Clearly, such an approach has certain capabilities and lacks others.

The guess is it's not impossible to get minds from that, but, as David Deutsch says, roughly as likely as AGI coming out of the deep sea. It's unrelated to the real questions of cognition and self-organizing software.

It's at most a tool, a subroutine providing some capabilities to a system which has a different principle of organisation altogether.

The deep learning branch that seems to make more sense in this light is the discrete program search stuff, because now you train a net to use a computer system.

---

I have The Algebraic Mind on the reading list. Marcus predicted hallucinations more than 20 years ago from analysing such properties of cognitive systems.

Robert Sapolsky Determined: A Science of Life without Free Will, 2023

I like it for the nuance 'biology is messy' and for the strong case for non-cruel treatment of criminals.

It is interesting that it comes as a relief for people with psychological disorders to say it is a biological problem, just like diabetes or a cardiovascular issue and so forth.

It was an achievement of modern psychology to ground psychological disorders in neuroscience.

Also cool: the history of 'chaos theory' and the pop-science, public perception of it.

"Free will doesn't exist" still sounds to me like saying /car acceleration doesn't exist', because it's really the wheels turning or something.

It must be made from something, and knowing that something doesn't change its function. I maintain that it is useful to label some functional properties / operations of human brains as 'commitment', 'will', 'decision making'.

But we can be more kind to our fellow humans, who ultimately did not choose what stuff they are made out of.

Ramez Naam Nexus, 2012

Damn, I can recommend it.

They have software running on their high-resolution brain interfaces. It is good science fiction, just hand-wavy 'Yeah imagine you had a tech like this'. Then it explores the things you can do with a tech like this.

Combine minds, and develop software that influences your neuromodulators and things like this.

Funny, I accidentally downloaded the German audiobook version.

I haven't listened to something translated from English to German for a while and I noticed something. There is the rhythm and something like the spirit, or the small Hofstadter situations, of US American English inside the translation. It is kind of obvious that it would be like that. If I were to translate it, it would be similar, I think. Perhaps that would be the bar for an ultra-good translation, that the small situations are actually 'in the target language', too.

There is a kind of Turing test for translations: Read or listen to some text, then say whether it is original or whether it comes from some other language that you speak.

The German audiobook performed by Uve Teschner was very cool though.

Ramez Naam Crux, 2013

This contains an interesting portrayal of addiction, as vivid as Requiem for a Dream.

There is some speculative science here about what group minds, "merged" human minds, would be like.

Kids growing up with brain-computer, brain-brain interfaces would create merged minds, learning and dreaming together. That comes across as quite plausible. Perhaps you would need to make sure the dream mechanisms are interfacing correctly and so forth for this to work properly.

And existentially, would there be a larger mind then? Acting and sensing through the humans it is made out of?

A theory of cognition would be able to give answers. Probably it would depend on bandwidth and the specifics of what kinds of interfaces are made and so forth.

But whatever is spanning a human mind across 2 cortical hemispheres, the same process presumably would work for a larger system, too.

Perhaps this is one way of interfacing 2 brains: Keep everything the same but mirror a second pair of hemispheres, i.e. simulate another hemisphere bundle



  human 2            human 2
  right              left
+--------+        +--------+
|      <-+--+-----+-> ^    |
|       <+--+-----+>  | ^  |
+--------+        +---+-+--+
                      | <------------ virtual simulated wires
                      | |
+--------+        +---+-v--+
|      <-+--+-----+-> v    |
|       <+--+-----+>       |
+--------+  |     +--------+
  left      |       right
  human 1   |       human 1
            |
            |
         physical
         symmetrical, presumably column-to-column



  • Would you need to do this with both hemispheres?
  • Would it make more sense to put the virtual wires left1<->right1, left2<->right2, or left<->left?

You could do this if you have something like a neuronal lace, with just 100k electrodes. But they have to be positioned correctly to each pick up a cortical column.

Assuming that cortical column activity is correlated, and that the symmetrical inter-hemispheric connections of the corpus callosum go column->column.

You would need some kind of calibration, though. How do you say what counts as symmetrical to the other person's brain? The answer lies in the mystery of the hemispheres. How and why the interhemispheric connections are constructed.

How to scale this with multiple humans? Perhaps simulate a virtual 'big hive' mind.


+--------+        +--------+
|      <-+--+-----+->      |
|       <+--+-----+>       |    ^
+--------+        +--------+    |
  left             right        |     3, 4, n...
  human 2          human 2      | ...
                                |
                                v


  virtual hive
  virtual right      virtual left
+--------+        +--------+
|      <-+--+-----+-> ^    |
|       <+--+-----+>  | ^  |
+--------+        +---+-+--+
                      | <------------ virtual simulated wires
                      | |
+--------+        +---+-v--+
|      <-+--+-----+-> v    |
|       <+--+-----+>       |
+--------+  |     +--------+
  left      |       right
  human 1   |       human 1
            |
            |
         physical
         symmetrical, presumably column-to-column

This is just one idea. Another idea would be to interface by simulating an artificial thalamic relay nucleus. (You can do this with electrodes in the cortex, too).

Perhaps this is better because you get the input/output organization. Perhaps this would unlock even more magical interfaces. Just like the brain does with its 'lower' processing.

One of the tenets of Cell Assembly Memetics is that the neurons don't matter that much. Cell assemblies will leave neurons behind - and merge with other cell assemblies, igniting the best-connected available subnetwork.

Perhaps this is a glimpse then at what a mind that stretches multiple human brains would do. Its memes presumably would not care about the neurons and even the brains.

Perhaps there would be memetic drivers for memes that are generic enough so they don't need to care, since those memes can happily spread across all available neuronal areas.

Perhaps mature memetics will be able to answer these questions.

Losing human individuality in favor of a big mind? Isn't that a problem? The Borg of Star Trek?

Nah, I don't worry about that being a problem. Looks to me like the next step in what civilization is. And doing more of those steps leads to bigger beauty and so forth.

That doesn't mean there are no failure modes. Perhaps something could go wrong and a hive mind would be the ultimate mediocrity machine. Without creativity, genius, childlike wonder and curiosity.

Shudder

Coding & Vision 101: An Educational Lecture Series, Allen Institute, Christof Koch and Clay Reid, 2012

Completely worth it. Christof Koch is one of the well-known neuroscientists and for sure interesting to listen to.

Why vision? One of the reasons is that there is this super inspiring work from Hubel and Wiesel early on. So now studying vision aligns with a cybernetic/computational / information processing approach to studying cortical function.

Highlights:

The lecture on the retina: it's cool how deeply we understand this piece, on the level of the neuronal mechanisms, which is the space of understanding I crave.

For instance, center-surround motion detectors in the retina are derived from 2 motion detectors with different receptive fields.


  +----+----+---+     say you have some motion detectors like this
  |    |    |   |
  +----+----+---+
  |    |    |   |
  |    | n1 |   |
  +----+----+---+
  |    |    |   |
  +----+----+---+

 ----------------------


 Now you have a stellate cell that gets excited by a large field of your motion detectors.

  +----+----+---+
  | ---+----+---+------+
  +----+----+---+----+-+--+
  | ---+----+---+----+/|//| stellate cell
  |    |    |   |    +-+--+ activated from the whole grid
  +----+----+---+      |    (large perceptive field)
  |  --+----+---+------+    (low res)
  +----+----+---+


 ----------------------

Now you inhibit 1 motion-detector guy in the middle of your large field.

  +----+----+---+
  |    |    |   |
  +----+----+---+    +-+--+
  |    |  |-+---+----+////| stellate
  |    |n1  |   |    +-+--+
  +----+----+---+
  |    |    |   |
  +----+----+---+

-> n1 is now a center-surround motion detector

Now this neuron in the middle is active only if there is a small motion. Compare a fly flying vs moving the head. When you move your head, then everything moves but you don't want this to be so important.
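Here is a toy version of that wiring; the threshold and the all-or-nothing outputs are my simplifications of what the lecture describes:

```clojure
;; Toy center-surround motion detector: a local unit that is vetoed
;; when the whole field moves. Threshold and binary outputs are
;; simplifications for illustration.
(defn stellate
  "Wide-field unit: fires when many of the local motion detectors fire."
  [detector-outputs]
  (if (> (reduce + detector-outputs)
         (* 0.5 (count detector-outputs)))
    1 0))

(defn center-surround
  "n1 fires only for local motion: its own detector is on while the
   wide-field stellate cell (global motion) is silent."
  [own-motion detector-outputs]
  (if (and (pos? own-motion)
           (zero? (stellate detector-outputs)))
    1 0))

;; a fly moves in front of a still background -> n1 fires
(center-surround 1 [0 0 1 0 0 0 0 0 0]) ;; => 1
;; the head moves, everything moves -> the stellate vetoes n1
(center-surround 1 [1 1 1 1 1 1 1 1 1]) ;; => 0
```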

This one: Lecture 7: Information Processing in the Brain was quite cool, too. Lots of overlap with current feed-forward object recognition neural net stuff.

The picture one gets is that there is one mode of vision/perception that goes fast and is feed-forward, at least in some way.

Sort of a bottom-up pattern recognition kind of thing.

Another hint for such a thing is the fact that for some reflexes the time between visual input and motor output is such that there is only 1 or max 2 spikes in between. (That is from Braitenberg, On the Texture of Brains, 1977). This doesn't look like a system that is doing a lot of feedback rumination, not for that piece of functionality.

Presumably, the use of feedback connections in the system comes in somewhere else. One might think that the rest of the system comes up with certain predictions of what to perceive next, biasing the system feedback-wise towards those.

These and other little models and ideas of what the cortex might be doing I want to explore by programming. My idea is to take Braitenberg vehicles and then keep adding more stuff.

Connectome? Why do people think this is a good idea?

It simply looks like a 15-25 year mistake to me.

Why don't you think, before you make the connectome, about what you would do with the connectome? I mean I get it, you want to find the neuronal motifs etc. But why not have a model first and then look at specific places?

One of the most important things to know as a programmer is this: How much payoff do I expect vs the effort spent? I make a matrix where the rows are the criteria and the columns are the things to do. One of the rows will say "Does this fix the issue?". Or "Will this let me discern this or that branch of explanation space?" I need to think about this before I go and do XYZ.

I need to think about the high-impact, easy-to-do experiments that will make me go down this or that branch of how reality works. It is very important to keep the big picture in mind and understand where I can accrete understanding, destroying this or that model early.

This is essential for being able to hone in on explanations fast and directed.
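A trivial sketch of such a matrix; the criteria, candidates and scores are invented, the point is that the ranking is computed before any experiment is run:

```clojure
;; Payoff-vs-effort matrix: rows are criteria, columns are candidate
;; things to do. All names and scores are made up for illustration.
(def criteria [:fixes-the-issue? :discriminates-models? :effort])

(def candidates
  {:build-connectome     {:fixes-the-issue? 1 :discriminates-models? 2 :effort -9}
   :simulate-small-model {:fixes-the-issue? 2 :discriminates-models? 8 :effort -2}})

(defn score [candidate]
  (reduce + (map (candidates candidate) criteria)))

(sort-by (comp - score) (keys candidates))
;; => (:simulate-small-model :build-connectome)
```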

Sure, it would now and then be cool to have a connectome and to know whether x makes such and such connections to y. Or basic questions on neuronal units/motifs etc. For instance, does every pyramidal cell come with a pair?

But then why don't you make models first and then look at the 5 interesting places that pop up here and there?

I can already tell: from the connectome we will not understand how the brain works. It might help when we have it, yes, but you could have spent 3 years thinking and then answered the 5 very interesting, small questions coming out of your model.

Small and unseemly questions.

Sometimes the way to solve a bug is to lie in bed for 10 minutes and think through all the puzzle pieces presented to one. The puzzle pieces one was not thinking about for a while, together with newfound insights.

The situation, of not having the fingers on the keyboard, forces one's mind into the space of ideas. There are things to do without making experiments - in the space of ideas. The space where all explanation comes from.

And sometimes a puzzle is already solvable, by considering the givens from different perspectives.

I think of it sometimes as juggling different pieces of data and candidate models together. To see which data goes with which aspect of a model, to see which aspects of a model go together.

Christof Koch comes out of the Institute for Biological Cybernetics in Tübingen - V. Braitenberg's lab. It must have been a little hub of cybernetic (neuroscience/computational neuroscience) thinkers for a while.

The Molecule of More: How a Single Chemical in Your Brain Drives Love, Sex, and Creativity―and Will Determine the Fate of the Human Race, Daniel Z. Lieberman, Michael E. Long (2018)

Quick read, delivers the general-population neuroscience you expect.

The dopamine pathways are the ones that make anticipation. Different from the ones that make satisfaction.

Loving the ideas about how there are fundamentally 2 aspects of cognition: the stuff that is far away, both in time and space, and the stuff that is near. That peripersonal space, the stuff we can reach with our hands, is handled differently. I find this very intriguing. I have mentioned this in the Extending Your Reach blog post.

So maybe a good tool allows you to reach things from far away and bring them under your control? It is not a coincidence that driving a car feels like the car is part of the body.

So too it is with using a computer setup, a real computer setup.

Or a good tool sets your dopamine system ablaze, rising with the far-away imagination of possible things to do - to reach.

I think Alan Kay said something like: the simple should be easy and the impossible should be thinkable. Is it not our mind that is reaching into those spheres of possibility? A good tool and mastery mean, in some way, having a wider circle of acquired reach.

Idea:

Put a neuromodulator (global/local hyperparameter sort of) in the predictor brain of the Braitenberg vehicle 13. (more to come, for now see Vehicles).

It dynamically updates the predictor to output more states per time step. Allowing the vehicle (especially with the vehicle 14 upgrade, which biases itself towards good predictions) to dream more lofty dreams of the future.

It computes more inferential steps between what is and what could be.

The designer of vehicle 13 must make a decision. How much time does the system spend in the future, using up resources that could be used in the present?

Too little and the prediction machine is worthless, for its power comes from predicting far-away and multi-step possibilities. Possibilities that otherwise would have stayed beyond the veil of near-sighted obviousness. Too much and the system has another problem, the present does not wait for us to press pause and continue. A biologically navigating system must balance its resources, sometimes a quick decision must be reached.

One way to make a more dynamic system is to parameterize the hyper-parameters and here we are, modulating our prediction state output count. Very much a mechanism for some of the things they describe in the book.
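A minimal sketch of the mechanism; `predict-next` stands in for whatever the vehicle 13 predictor brain computes, and the scaling of the modulator level is made up:

```clojure
;; A dopamine-like hyperparameter on the predictor: one scalar decides
;; how many future states get rolled out per tick, trading time spent
;; on the present for lofty dreams of the future.
(defn rollout
  "Roll the predictor forward from the current state; a higher
   neuromodulator level means more imagined steps."
  [predict-next state neuromodulator-level]
  (let [steps (max 1 (int (* 10 neuromodulator-level)))]
    (take steps (iterate predict-next state))))

;; low dopamine: stay close to the present
(rollout inc 0 0.1) ;; => (0)
;; high dopamine: dream further ahead
(rollout inc 0 0.9) ;; => (0 1 2 3 4 5 6 7 8)
```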

The dichotomy between tending to the present and dreaming of the future.

The creative genius high dopamine mind, on the edge of grandiosity maybe. Or maybe stuck in far-reaching ideas. But with an outlier chance to make novel far-reaching connections, too.

From thinking the above thought I realized one thing:

If t0, … is your input stream.

[ t0, t1, t2, t3, t4 ]

It is fairly obvious why an intelligent system would need to parse a time sequence into events

[ [ t0 ], [t1, t2, t3 ] ]

    |
    |
    v

[ e0, e1 ]

Similarly, we want to parse space into objects. One might wonder what kind of mechanisms overlap for these 2 processes.

Now it is viable and useful to have multiple event-parsing streams going on at the same time.

[ t0, t1, t2, t3 ]

    |
    | attn mask 1, attn mask 2
    v

 event parser 1, paying attention, ... event parser 2
 to this or that dimension of input
    |                                          |
    v                                          v
[ e0, e1 ]                                  [ e0', e1' ]

   event stream 1                          event stream 2

I think something in the mind must grow an orchestration of attention payers, time parsers, and prediction resources. Stuff like that.

If Alice hands Bob a ball, there can be one set of resources observing the physical domain.

[ event-alice-holds-ball, event-alice-hands-ball, event-bob-has-ball ]

Another set of resources might pay attention to the dimension of ownership. If Alice is giving Bob the ball as a gift, there is an important ownership-transfer event; otherwise, perhaps there is a loan or share event.

So one of the things I am thinking about is how there are these dimensions in how the world works. Perhaps algorithms that grow minds must have mechanisms for finding such dimensions. And perhaps importantly, the brain has mechanisms to re-use resources that work well in one domain and apply them to another. For instance, the ownership domain is allowed to be modeled after the physical-location domain, such that operations like 'move' and 'create' already make sense.
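
A minimal sketch of that in Clojure (the input representation and all names are invented for illustration): two parsers chunk the same stream into events, one attending to the physical dimension, one to the ownership dimension.

;; One stream of timesteps, two event streams.

(def input-stream
  [{:t 0 :holder :alice :owner :alice}
   {:t 1 :holder :alice :owner :alice}
   {:t 2 :holder :bob   :owner :alice}   ;; Alice hands Bob the ball
   {:t 3 :holder :bob   :owner :bob}])   ;; ...and gifts it to him

(defn parse-events
  "Chunk the stream into events wherever the attended-to dimension changes.
  `attend` plays the role of the attention mask: it picks out one dimension."
  [attend stream]
  (partition-by attend stream))

(map #(map :t %) (parse-events :holder input-stream))
;; => ((0 1) (2 3))    event stream 1, the physical domain
(map #(map :t %) (parse-events :owner input-stream))
;; => ((0 1 2) (3))    event stream 2, the ownership domain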

Different event parsers might parse the input stream into differently-sized events. One might imagine one that parses a whole day into 3 or 4 blocks, each 3 hours long or something.

Another parser might parse the immediate physical ongoings on the level of hundreds of milliseconds.

Now, we can have predictors that operate on different timescales entirely. So in order to think 3 steps ahead into tomorrow, I don't need to predict every single drop of water dripping in the sink. I am allowed to think on higher levels of analysis, based on some already-done event parsing.
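
Continuing the toy from above (again, every name and number is invented): once the raw signal has been chunked into events, a coarse predictor only ever sees events, never individual ticks.

;; Predicting at the event level instead of the signal level.

(defn chunk-into-events [signal]
  (map (fn [chunk] {:value (first chunk) :length (count chunk)})
       (partition-by identity signal)))

(defn predict-next-event
  "Placeholder coarse predictor: guess the most common past event value."
  [events]
  {:value (key (apply max-key val (frequencies (map :value events))))})

(def signal [:rain :rain :rain :sun :sun :rain :rain])

(chunk-into-events signal)
;; => ({:value :rain, :length 3} {:value :sun, :length 2} {:value :rain, :length 2})
(predict-next-event (chunk-into-events signal))
;; => {:value :rain}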

There are these more smeared out and less smeared out representations somewhere in how it all works:

signal stream <-> single events
room <-> objects
situation <-> moments
atmosphere <-> ..
intuition <-> precise thought
mass nouns <-> count nouns

Do magic tricks work better on high-dopamine people? If you dream further ahead, maybe you also bias your system to predict perceptions more. Fulfilling perception predictions (but faking them) would be one way to make magic tricks work.

2023

The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, Mitchell Waldrop (2018)

Every programmer should read this. Everybody who cares about computers and why we live in the world we live in should read this. Everybody that cares about how the world works should read this. We are living in the computer age. What will happen next to civilization will happen because of computers.

Lick is now one of my heroes and I hope I can carry forward the good-heartedness that I am gleaning from his legacy.

When I feel myself in the world and wonder what works well - it is the internet and computers. The internet is so good, it is a force of nature. And I mean the infrastructure and the fundamental ideas; I do not mean the content, which is currently bloated with commercial bullshit. The competence of a computer, when it is not bloated with garbage operating systems, is something to be felt in the mind, heart and soul. It touches my deepest desire to extend my reach. Its power is the sun where before there were candles.

What if there are 3-4 other things that are as good as computers and the Internet? If we find them, we will be riding the waves of the singularity.

This book is about the people who came up with the Internet and computers. The cybernetics people understood that thinking is something machines can do. And the ARPA community, the hackers and visionaries who had a future of infinite advances and togetherness in mind when they saw the first clumsy computers.

Also see: No Iterations, AI

Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, 2013 Douglas R. Hofstadter, Emmanuel Sander

Oh my god, the level of abstraction of this book is chosen perfectly.

In the Copycat domain, when ABC says "one day I was changed to ABD" and XYZ says: exactly that happened to me!

I was changed to WYZ once!

When you read this book and you follow their ideas, you know it would be funny in that context if XYZ said I was changed to DYZ.

There is a space of ideas, where ideas cluster like clouds. To give a cloud a name is to abstract, to make a category in Hofstadter's words. A category is like a city, growing in a child's mind, and encompassing more or less territory of this multidimensional idea space, with the fringes only sort of fitting. Funnily, Yudkowsky brings up the same analogy of the idea space in Rationality: From AI to Zombies (2015).

To know the right essences and categories of a given situation is intelligence. And they sort of suggest that the mind is an analogy machine. The most central primitive of what a mind is about is making analogies.

This book goes deep into this layer, really exploring the space of analogy-making. It is witty, full of wordplay and joyful little stories. There is a lot of cognitive psychology of language and there are these beautiful long lists of language examples, giving you a feel for the substance of the mind that they are talking of. It is myriad and plentiful, so rich.

They wrote this book in 2 languages and talked about the translation itself in the book, beautifully meta. Nerdy in the best sense of the word. Hofstadter is growing on me as a crystal clear and down-to-earth thinker.

The core of the book is giving me perspective on this analogy-making and categorizing, how the common sense mind is doing it. And how aesthetics, design and scientific explanation are all instances of these layers. There are names for the horizontal broadening of a category (elbows are like knees) and for vertical category-making (knees are part of a new category of higher abstraction: a joint). There are dimensions in which symmetry and analogy can be found, if one understands the essences of concepts.

The last part is a beautiful treatise on analogy in science, particularly on the mind and thinking of Albert Einstein. This is a nice history-of-science piece by itself.

This perspective fits 1:1 with David Deutsch's The Fabric of Reality, where he argues that imagination and explanation are the important thing about science. I already wanted to check out Albert Einstein, now I have encountered his genius 3 times.

The other is Yudkowsky, who says that without Einstein's general relativity, we might be scratching our heads about light bending the way we are confused about quantum physics.

It is this imaginative, creative, analogy-making, essence-finding, explanation-making thing that is true Science. It has to do with intuition, taste, elegance, aesthetics. I see the same principle in the hustle of the hacker, the wizard and the detective. All must understand the right kind of essence, and make explanations in the broad sense of the term.

Btw the audiobook spoken by Sean Pratt is brilliant - with love.

AI perspective:

The view of cognition here is relatively high-level, using cognitive psychology and linguistics as a lens mostly. It stays on a thin layer of abstraction below moment-to-moment cognition and linguistics. It is complementary to and overlapping in places with Pinker's The Stuff of Thought (2007). Consequently, it doesn't go into much detail about the mechanics of how such a system might be built.

I map it to Minsky's Society of Mind model and it fits well.

Resulting in the situation where The Society of Mind, The Stuff of Thought and Surfaces and Essences are for me pointing to the same underlying machinery and stuff of cognition. There is an essence to be gleaned, you might say.

I gather these analogies are made from oceans of sub-analogies. How else can they grow to cover more space in concept space?

They could work by the kinds of things Minsky calls micronemes (tiny dispositions, tiny memory traces), transframes, panalogies and such things.

A transframe is a data structure with 3 slots: source, destination, and transform. A panalogy is a set of analogies across different domains, for instance, the physical domain and the possession domain.

Book in place 1 -> book in place 2 and Book possessed by Alice -> book possessed by Bob.
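
In code, the idea might look like this (a sketch; the three slots are from the book, the concrete representation is my own guess): the same frame shape gets reused across the physical and the possession realm, and that reuse is the panalogy.

;; Minsky-style transframes as plain data.

(defn transframe [source destination transform]
  {:source source :destination destination :transform transform})

;; physical domain: the book moves
(def move-book
  (transframe {:book :place-1} {:book :place-2} :move))

;; possession domain: same shape, different realm
(def give-book
  (transframe {:book :alice} {:book :bob} :transfer-ownership))

;; both frames answer the same questions: where did it start,
;; where did it end up, what happened in between?
(map (juxt :source :destination :transform) [move-book give-book])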

It seems self-evident. Whatever is making a brain work, it is leveraging the power of analogy.

One thing sticks with me, the talk of how the best mathematicians make analogies about analogies. I am sure if one were to look, one would find the same thing in great software engineering.

Here is an edge to think further about, promising power.

The high level of abstraction of David Nolen's talks came to mind.

He can give a talk about Go, which is about programming. Really.

Is Rich Hickey making analogies about analogies?

Now that I am looking for it, I wonder if I will find it everywhere.

Common sense thinkers:

Updated set of most common sense thinkers:

#{Hofstadter, Minsky, Dennett, Dawkins, Braitenberg, Socrates}

I feel like I have to keep going… Feynman, Darwin, Einstein, Oliver Sacks.

This made me realize something. The real genius is amping up common sense to the highest degree possible. Making the person at times say utterly obvious things? Not sure. But simplicity and elegance in explanation ties into it.

The naive expectation of what genius is:

     high highfalutin ideas

      ^   philosophical terms that nobody understands
      |
      | extrapolation, towers of ideas, complicated associations, ...
      |


  common sense

+---+
|   |   +---+
|   |   |   |
+---+   +---+
    +---+
    |   |
    +---+

What genius really is:

 common sense

+---+         +-----+      making even more common sense
|   |         |     |    <-------------
|   |-->   <--+     |      stating the obvious, even more obviously
+---+   ^     +-----+
        |                 deep relationship to the world by simple reasoning
   +----++
   |     |
   |     |
   +-----+

The genius I see in my intellectual heroes is one of making more common sense, to the point where they are thinking about the substance the world is made out of. There is a childlike, playful depth to common sense and thinking about the world. Not sure why, but Douglas Adams' writing embodies this for me in some way, too.

This is the kind of mind that Hofstadter has, too. Which is why his ideas are so refreshing and mind expanding.

See:

Superintelligence: Paths, Dangers, Strategies, Nick Bostrom (2014)

Kind of a hard read. I consider this a must for thinking about AI and its role in the world. Bostrom lays out the land in such detail. If you speak of AI and you have not read this book, chances are you are reinventing some flawed, not-fleshed-out version of what Bostrom already played out in detail.

And you are missing the 2-3 other ideas on that branch of thinking that he was also already mentioning.

See In The Business of Building Minds, The Power Of Intelligence

Rationality: From AI to Zombies, Eliezer Yudkowsky (2015)

So many well-thought-through ideas. This is a whole worldview right there. It took Eliezer 6 book lengths to lay out the landscape.

Eliezer delivers. These are genuinely interesting points of view, nooks and crannies everywhere on how the world works and what we can do about it.

What is a good life? What is wisdom? Some of this is what Socrates would have come up with, had he been born in this time. Eliezer sometimes calls those The Path and Bayesianism.

Eliezer is big on the lessons from evolutionary theory and praises its careful thinking.

To reason about animals and their behavior, one must keep the levels of organization straight. The brain is an adaptation executor, not a fitness maximizer.

                               phenotype ->

| genes                       | brains                           | mental contents
| genotype     development    |                                  | anger, lust, fear
|----------------------------->---------------------------------->
| game theoretic reasons      | adaptations                      | culture
|                             | form                             |

There is a game theoretic reason to have mental content like anger or lust. But the reasons are not represented in the mind, ok? You have a mind, shaped by your brain, which is shaped by evolution. Brains with the capacity for lust were more successful brains in the past, so those brains exist.

In the mental contents layer, you don't calculate that you have 1/2 shared genes with your brother and 1/4 shared genes with your cousin. In the same way plants don't need to have minds that reason about chemistry to implement photosynthesis. You just have a brain that tends to have mental content that makes stronger bonds with a sibling than a cousin. You, the mind, do not need to know the reasons for this.

But it is more than just ideas and philosophy on this topic. This is the starting point to go on a path. A path below reason and goodness. There are no easy answers. Yes, below, as in more fundamental. The path is to win. Even rationality and reason are just tools on the path. You use them as they serve you.

If reality gives you a sufficiently hard challenge and you don't live up to the challenge, you die.

That which should be destroyed by the truth, should be. And that which is nourished by the truth should be.

This view has consequences of epic proportions. To feel the awe of science and the logic of the universe is good. We, the rationalists, are allowed to experience the full range of human emotion. Including reverence and something more than happiness. The soul, transcendence, all-encompassing goodness and love.

Religions are just primitive mind hacks that suck on the human capacity for imagination and dreaming big. The real deal is a rational worldview that allows the bearer to draw from the real, nurturing and deep fire, plainly there, perceivable by common sense, in the logic of the universe and life. Science2 is infinitely stronger than religion because it works by getting stronger the more you question it. We can let go completely, and not be burdened by any kinds of nagging doubts or hunches.

Religion is a candle. Reasoning is a sun. The simple truths of how cooperation and love make sense, how we are all in this together, how life and civilization are the universe waking up… Radiating infinite energy to go forth and do good in the world. For there is no difference between me and the brothers and sisters born somewhere in poverty in the world right now. There is no difference between me and the people of the future, who need to deal with the planet as we pass it on to them. There is no difference between me and the sentient beings, be it out there, be it inside our computer circuits or be it evolved next to us on this tree of evolution of ours.

I love the little stories that talk about some of the concepts that he fleshed out in the chapters before.

This whole thing has a unique flair, coming at things from a certain angle that resonates quite well with me and probably you, too. It is not surprising that it has not only a cult following but a whole community, lesswrong.com.

I have to say I am disappointed by Lesswrong. Now and then I try to go check if they figured out whether they should kill animals for food or not. See Simply Crazy. To live healthily and to do the simplest thing for the environment and fellow creatures on the earth - these are easy leading indicators of whether you are using your rationality in your everyday choices. To the one who professes to be on the path, but then kills animals for food, I think: you have failed to apply your logic, you have failed the way, you have failed to overcome your flawed reasoning.

At times you might think Eliezer is showing off his intelligence at the expense of ease:

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.

From Zombies! Zombies?

I think the Dennett way of saying it is something like:

Consider an intelligent agent, either built by evolution or by an AI programmer. The agent needs to run around in the world and not bump into things. It had better have reasons for the things it is thinking, and those reasons should come from reality, not from somewhere else. When it self-modifies or does reflective reasoning on itself, it should treat with the strongest suspicion any thought processes that make reasons come from thin air.

After translating the passage like this, but not before, I have a grasp on what Eliezer means by reflective coherence, I can agree with him, and I can keep on reading. This made me realize something. There are 2 main ways to think about how to build a mind currently. One is to wonder how evolution and animals (and their brains) work, and one is to think about how to program an AGI.

The AGI values system and epistemology part are similar to Bostrom's ideas.

The last part reveals a way deeper and more epic raison d'être of this whole thing. Eliezer wanted to lay the breadcrumbs for some younger versions of himself, so they can skip ahead and become the next crop raised for friendly AI.

Eliezer asks for a rational, cognitive-psychology-driven art form of actually finding happiness. I am sufficiently overlapping in this Venn diagram, so I feel responsible for saying something about that. (Blog posts WIP). The thing is - true happiness comes not from caring about happiness, but from living a life of giving back to the world. The overlap with being profoundly studied in cognitive psychology plummets, because such people are busy fixing things like poverty.

Real advice on living a good life will always be a call to action. A call to think hard about what the world should be and what you can do about it.

The Stuff of Thought, Steven Pinker (2007)

What a ride, what a landscape!

If I show you, under a microscope, some blood cells, some plant cells, some gut wall cells…

Would you be able to go forth, know how to prep and stain tissue samples, and know how to use a microscope yourself? Would you be able to discern blood cells containing Plasmodium from those without? Would you know what to look for, describe the relevant characteristics, and form a hypothesis about the nature of the organisms?

Unlikely. But would your mind not be expanded? Would you not understand that there is a world and a lens through which we can understand organisms? Would your view of the nature of organisms not grow to something deeper?

So it is with this book. I forget most of the nuances of what the research and thinking are about. But I remember a lot of the general kind of thinking and kind of explanations - it is fascinating.

Verbs for sticky stuff on flat surfaces are different from verbs for solid stuff on curved surfaces. And there are 15 other such distinctions, for verbs and for nouns. Wug tests are simple and ingenious. The world of humans and how they parse it, the world of actors, objects, belongings, ongoings, events, causes, dirty stuff, holy stuff.

If you ever wonder why linguistics is usually mentioned as one of the pillars of cognitive science, this is the book that solidly did it for me. (I suppose historically Chomsky's rejection of behaviorism in linguistics is one of the contributions to the cognitive revolution).

Pinker makes us aware of a certain rhyme between how we parse space-time and the rest.

Why look at action classes so closely? The action classes determine the logical conclusions we can draw from a sentence, because the truth of a proposition depends on the stretch of time it refers to. If Ivan is running (atelic), we can conclude that Ivan ran.

But if Ivan is drawing a circle (telic), we cannot conclude that Ivan drew a circle. He may have been interrupted.

Note again the similarity to substances and objects. Half a portion of apple sauce is still apple sauce, but half a horse is not a horse.

Action classes also affect the way that verbs mate with explicit expressions for time. You can say he jogged for an hour, but not he swatted a fly for an hour, because the phrase for an hour needs an event that can be stretched over time, without a built-in endpoint. …

Some actions can be spread over time, like butter on bread. And there is a whole world of looking at those kinds of aspects of language.
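
The entailment pattern is mechanical enough to write down (a toy sketch; the verb lists and the representation are mine, only the atelic/telic rule is from the book): "X is V-ing" entails "X V-ed" only when the verb has no built-in endpoint.

;; Telic vs. atelic as a tiny inference rule.

(def atelic? #{:run :jog :push-a-cart})      ;; no built-in endpoint
(def telic?  #{:draw-a-circle :swat-a-fly})  ;; culminates in an endpoint

(defn entails-past?
  "Does 'X is V-ing' entail 'X V-ed'? Being in the middle of running
  already counts as having run; being in the middle of drawing a circle
  does not count as having drawn one."
  [verb]
  (boolean (atelic? verb)))

(entails-past? :run)           ;; => true
(entails-past? :draw-a-circle) ;; => false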

I cannot help but think Minsky's transframes fit damn well with those transitive verbs and such. You can have 1 kind of frame representing actions with goals and another representing actions without goals. Classes of verbs and nouns might cluster on the kinds of frames. You might use the same kinds of frames across modalities like the social realm, the physical realm, and the time realm - this way you would get a system with this kind of rhyme and similarity that Pinker presents to us. 3

Minsky making the same observation in The Society of Mind:

He changed the liquid from water to wine.

The liquid has changed its composition from what it was at some previous time. In English, we use prepositions like from, to, and at both for places in space and for moments in time. This is not an accident, since representing both space and time in similar ways lets us apply the same reasoning skills to both of them. Thus, many of our language grammar rules embody or reflect some systematic correspondences — and these are among our most powerful ways to think. Many other language forms have evolved similarly to make it easy for us to formulate, and communicate, our most significant concerns.

But why should we want to represent, in the same way, three such different ideas: transportation in space, transmission of ideas, and transfer of ownership? I suspect that it is for the same reason that our language uses the same word fragment trans for all of them: this is one of those pervasive, systematic cross-realm correspondences that enables us to apply the same or similar mental skills to many different realms of thought.

The Fabric of Reality, David Deutsch (1996)

What a ride dude. Crazy. I still need to integrate this into my mind and re-read. I feel like I got 20% of what Deutsch was talking about and those percentages, my friend, make me think we are the odd ones. We, the brains, that think that reality is this world with an arrow of time, with objects and laws of physics, are the ones imposing strange limitations on what is going on.

It is a deeper and stranger view of the world, where knowledge, life and computers suddenly are part of the substrate of the explanation of the whole thing. Time travel backward is possible after you build a time machine. The multiverse is a jigsaw puzzle; time and physical law are merely how some of the pieces fit together. Popper was right, Turing was straight-up right and Dawkins was right. By reading Dawkins, you at some point make a perspective switch to understand that there is this abstract plane of replicators and their simple logic. Deutsch shows us there are 4 or 5 other rooms in that abstract world, and they produce what reality is, its fabric.

Maybe I will make a blog post about trying to program a multiverse (like a Game of Life board). Because by thinking about how to program it, I made some connections to the things David is talking about. Even days and weeks after reading it, I am brimming with ideas. Of course, there is no time if you look at a multiverse, but even trippier - maybe there are alternative laws of physics, they just take different routes through the universe. There is something truly satisfying about a simple theory that explains the laws of physics themselves by something deeper.

Deutsch delivers.

I can also highly recommend the newer book, The Beginning Of Infinity (2011). It is way more accessible with a more high-level focus on knowledge creation. Sort of the bigger picture of what knowledge can do in the world.

The Rational Optimist, Matt Ridley (2011)

Ah, this was great! Love the big history/anthropology perspective. Other pieces of this genre are Harari's Sapiens and Jared Diamond's Guns, Germs, and Steel.

The Arabs became rich from perfecting the technology of the camel. And before that, there were these obscure Greeks whose name I already forgot again, etc. etc. At some point the Vikings and at some point the Dutch - some people doing a lot of trading, making the world go.

I feel like there are 2 or 3 lessons here about what civilization is that I have not encountered before but that seem so profoundly important.

What the hell have we been studying in school? There was history and social studies, yet it takes a no-bullshit zoologist to make some sense of history and civilization, so that it suddenly comes with an understandable, simple-to-follow perspective that makes sense of how stuff works?

One word: Memes.

And another word: Specialization. Trade allows specialization and memetic flow, and that leads to higher tech.

He describes these Stone Age trading arrangements and later cities with some accounting etc. - the first writing is accounting, counting goods and so on. This makes me want to build a little game where you start as a hunter-gatherer, or a tribe thereof, and trade with other tribes for great success.

What is the difference between us ("civilized" people) and hunter-gatherers? To analyze the difference is to understand what civilization is. Because humans by default are hunter-gatherers, it's what I would do if civilization were not around me.

I have to say I am a fan of civilization. I see it as inseparable from life, simply another layer in the layer cake which is life in the universe. Consider plumbing, electricity, the internet, the computer - exclaim nature's beauty for it is all so rich!

Now I bring in some edges of what I was thinking about, not what Ridley is saying in the book.

Here is this idea that also comes up in How Innovation Works (Ridley 2020): that technology is somehow something happening on top of civilization. What Kevin Kelly called the technium. It is more a force of nature than individual humans with ideas. It is a higher-order phenomenon. It has to do with the ideas, it is made out of ideas, and it is expressed in terms of ideas. But it is one level of organization higher than the ideas.

Here is one of the big things I learned from the book: Technology and science are not necessarily the same phenomenon. Indeed, the Industrial Revolution and modern science are separate historical facts! I did not know that but it seems so important. They did not need thermodynamics to make the steam engine. That came later and from other people.

So there is this view of technology suddenly, where the tinkerers and innovators are the ones doing it. It validates my life choices - I went and became a programmer instead of becoming a biologist. And now I can build things that can give the science people something to study.

When it comes to building AIs, I already had this intuition. That one can build intelligence without necessarily having an explanation of how it works. If I see water steam making a kettle go brrrr swrrllll brrrr. I can imagine a steam engine. If I see the midday sun, I know the sun has power, even though I don't know the physics of fusion.

If I see a little intelligence and if I see some 1-dimensional intelligence (in computers), I can imagine a bigger computer program with many layers and edges of such a thing. Which would always have an answer to proceed without getting stuck.

If I see a candle, I can imagine a bonfire. Without knowing the physics and chemistry of fire.

Like knowing how to build a heart without knowing the theory of fluid dynamics. One can put together an animal if one has the right ideas about what makes it go.

If minds are like societies (Minsky, Society of Mind). Then what if the technology of a mind is like the technology of a civilization? What if innovation in the brain is a driving force that makes a child's mind grow into its 6-year-old self and beyond?

What if specialization and technology and commerce and idea flow follow the same laws in minds and civilization?

A mind is a meme machine, and so too is a civilization. Once I thought of this analogy, I feel like everything maps 1:1. I can speak of the freedom of dynamic ideas which are allowed to be about themselves, to clone and combine (ideas that mate). I can speak of technology being discovered, shared and improved on, of the multiple perspectives contributing to overall thinking (call it the Zeitgeist in the brain) and of the specialization that happens to the sub-agents of the system.

Ideas can be like global weather patterns, ideas can have fame4, and cultures can go into stable patterns.

What is the mind about? It is about many contributing memes, some currently are more prevalent.

And one other thing is how specialization seems like a principle at the bottom, making these kinds of systems go. All this fits beautifully with my view of biological evolution and the brain. One way to make something more complicated is to duplicate it and then evolve on the copy, since you have a spare fulfilling the original function.

To be a computer program that evolves in this way is one of the pieces I expect from machine intelligence.

Specialization choking

Let's go further in the analogy: If minds are like civilizations then the failure modes of civilizations can inform us about how we should build minds. Consider this: If you have a lot of cheap labor, for instance by multiplying general purpose agents, you can lose specialization and thereby technology by outsourcing the same work to many general-purpose children of yours. (I am saying children because saying agent all the time seemed so cold and masculine).

This is a real failure mode conceivable in a mind engine: the general workers drown out the specialized workers, making the system lose technology.

When I think of this, what comes to mind is that with a historical database, you mitigate such a problem. Losing specialization and technology? Not possible, if everything you ever thought and everything the mind ever did is in an immutable database. A more subtle version of this problem might persist if your meme pool loses specialization and doesn't recover from it because its ideas aren't good enough anymore. If it even loses the necessary technology to look at its database in a meaningful enough way.
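
A toy simulation of the choking (everything here is invented, just to make the failure mode concrete): if tasks always go to the cheapest worker, the specialist never practices and its skill, the system's technology, rots away.

;; Specialization choking, as a few lines of Clojure.

(defn assign [workers]
  ;; naive policy: give the task to whoever is cheapest right now
  (apply min-key :cost workers))

(defn work-step [workers]
  (let [chosen (assign workers)]
    (map (fn [w]
           (if (= w chosen)
             (update w :skill + 0.1)     ;; practice improves skill
             (update w :skill * 0.95)))  ;; idle skill slowly decays
         workers)))

(def workers
  [{:name :specialist :cost 3 :skill 10.0}
   {:name :generalist :cost 1 :skill 1.0}])

;; after 50 tasks the generalist has done all the work and the
;; specialist's skill has decayed to a fraction of what it was
(nth (iterate work-step workers) 50)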

Reading Matt Ridley and taking myself seriously when I say minds are like societies, I had this mind-bending experience of stretching the analogy into mapping the failure modes.

If minds are like societies, then what if this civilization stuff is a relevant level of analysis to inform about how minds work?

It bent my mind. I had to stand there for a minute, integrating this far-reaching and deep analogy into my thinking. Not the first time this has happened to me; it feels visceral. Like there are two multidimensional jigsaw puzzle pieces in my mind that now have to fit together, morphing correctly to overlap in some place. Or there is a far-reaching connection that needs to be established.

The Door into Summer, Robert A. Heinlein (1956)

My first Heinlein book. This was quite cool. Hard science fiction, quick, engaging. Time travel concepts, cybernetics and futurism concepts.

It is fascinating to read about futurism from the 50s. Stuff that stayed the same, stuff that is different.

Digital computers were just starting; it shows in the idea pool Heinlein draws from. The prevailing metaphor and the most high-tech entities in this world are cybernetic control circuits. But it is fascinating to realize what you can, in your imagination, do with control circuits. And how far that metaphor goes - reality is one control circuit, and intelligence is one intricate control circuit.

This gives me pause and makes me wonder if there was something to what Yuval Noah Harari (2015) said - the most high-tech thing is used as a metaphor for the world and the brain.

That used to be clockwork. In Descartes's times - the animals, nature, and the cosmos - they are like a clockwork. Oh so intricate a machine, look how it follows physical law, step by step obeying its engineering.

Then cybernetic control feedback mechanisms; somewhere in there, for 5 minutes, it was all information theory and communication processing systems stuff. Now digital computers, and I think we are firmly evolving into thinking of programs rather than hardware. But not many people know about the stuff that computer science is about yet.

Right now we think of the mind as a computational machine. So here is this idea, that it is only temporary. And so it is with the universe. Ed Fredkin's pancomputationalism comes to mind. And I think it is getting more and more popular to wonder if the universe is a computer program.

Maybe the next developments will come from us making common sense machines (AGI programs). Maybe these programs will show us that on top of digital information processing, there are these towers of abstraction layers (the stuff that computer science studies but is still in its infancy).

Then we have programs that are societies, that heal themselves, that don't get stuck, that build their languages as they run. Programming is about organizing and communicating - how will programs look that organize and communicate themselves?

Maybe we will then see reality as this kind of society of process - if the extrapolation is true and we always take the latest, highest tech as the metaphor for understanding the world.

The state of the art in engineering and technology comes from the context. It is time to build the railroads, and not before that time. You needed steam engines, certain steel manufacturing and things like that. This ties directly into the picture of innovation from Ridley (2020).

How Innovation Works, Matt Ridley (2020)

Fascinating read. My first and certainly not last book from Ridley. We both are civilization optimists. As a fellow biologist, he speaks my language and puts in a chapter on dogs and one on fire.

Very convincing historical analysis of how anti-GMO political sentiments in Europe are garbage. Similar for glyphosate. It is such a sad aspect of current politics that the people who are supposed to be progressive stand in the way of many straightforward, simple and powerful technologies that would help us make the world better and solve problems.

The left is confused. That doesn't mean that the right is right. The right is so far gone from reason I am not even blaming them. I don't believe they have the mental capabilities to think, not on the timescales that matter. - All this puts even more responsibility on us, the common sense people.

For instance, the laws on organic farming make you use pre-1970 approved insecticides like copper - which are known to be garbage for the environment and humans. The new ones are simply too new!

We can right now save children's lives by using Golden rice, which has absolutely 0 known downsides. We are already eating genetically modified plants, it is just a different process of modification. It is just some random historical accidents that poisoned the opinions of the general population. Strange sets of limitations.

Either way, the book paints a picture of how innovation works and it is more like some force of nature gradually moving forward. Real innovation needs trial and error. Usually, multiple people are making the same discovery around the same time. It seems like ideas sort of become ripe. Or there is something in the air and the already achieved tech that starts making ideas possible to think or something.

I wonder what it would mean to cultivate a sense of this. I guess Kevin Kelly called it the technium. Can you know enough about the world to have a feel for what the next innovations will be? Certainly, I think you can; those innovators are also doing nothing but thinking thoughts, after all.

The most systematic thinkers along those lines I know of are, firstly, Ray Kurzweil, who calculated the information-processing bitrate of computers or something and decided to do character recognition at the time, not computer vision, for which you needed a bitrate that would come 10 years later. Another one is Alan Kay, who early in computing, in the 1960s or so, saw a "screen" made of a few lights5. He went and calculated that he could make a screen the size of a book with the tech of 1970 - something we basically recognize 1:1 as a tablet (he was right). See Dynabook.

Starship Troopers Robert A. Heinlein (1959)

The deepest I got to think about is what it means to be part of a military. To be a soldier is to decide to put the collective before yourself as an individual.

I want to go into a world where violence and war are tragic historical facts.

But there is something else to be pondered, about the social structures and mindset of soldiers. Modulo the part where you kill people, I would go and enroll as a soldier for the universe or something. A fighting spirit and an organized force, fighting for the flourishing and survival of all sentient life in the universe. Giving my life for the advancement of civilization, putting the work I deem impactful above my comfort.

This is a masterpiece; I feel some extra love from Heinlein in this one. The narrative structure harmonizes with the themes in the book. We never really know the nature of this interstellar war with those aliens, what they are about, what the Terran government is deciding etc. etc.

But we are not supposed to know, just as a soldier does not need to know the big picture of a war. He or she only sees his or her small piece of it.

The aliens, as usual for Heinlein, are more like "Here is this random different kind of civilization on another planet." In the end, they are fairly inconsequential and exist only to support and flesh out the humans, who are the main characters in the story.

A theme of the book is social structures and here we see this counterpoint: between the human military, with its top-down control and contributing resources, and the hive-like, biologically eusocial (i.e. there is a reproductive caste) aliens.

This made me think of social structures, about how there are many possible political systems. My favorite is a technocracy blending into an enlightened AI singleton leadership.

There is something subtle in how to be a person. Everybody with a pacifist streak will shy away from the idea that there is something to be learned from the minds of soldiers.

The opposite of folly is folly - and by extension: To not be interested in actually interesting things is foolish.

What is this social plane, where authority, devotion, and care for the collective all blend? Where following rules suddenly has a logic to it.

It matters what role you play in your social structure. If there is an emergency, who will people look up to? Do you disagree with somebody who has more authority in your social group? To disagree, but not openly, is being a wimp. These kinds of things matter because from this plane we can build trust and these kinds of things.

Interesting to ponder for me as a German, where there is a streak in the culture that has to do with following rules. Thinking about it, there are useful and completely useless nuances to this.

A theme of the book is how a caring authority will try to avoid the rules at times - reminds me of A Song of Ice and Fire. If I hanged every man who left for a night, only ghosts would guard the Wall.

It is folly to make some rules and then follow them for the sake of looking principled, or for the appearance of following industry best practice or something. This is why I laugh at things like test-driven development people when they take it too seriously. The ones making up silly rules about how to do software. There is a Germanness, and not in a good way, in finding certain appearances of consistency or something. Put another way, one person's idea of correctness is not contributing to solving problems. And it easily puts you down paths of finer and finer detail, which are beside the point.

This memetic attractor usually comes with some high horse air of this is how to do it.

Here I was, putting together solutions from an unending, open solution space. Your rules are merely limiting my thinking.

The way he describes the military suit, damn it - amazing. I have seen an instantiation of this in The Expanse, where The Martian soldiers are fond of their fight suits.

A real tool is an extension of your mind, its design is informed by the kind of animal you are. This is how I think about human-computer interaction, too.

The suit is a cybernetic marvel, using mechanics to translate the movements of the person into the movements of the suit. The trick is that you have a brain that can steer your body, and the suit is designed to harness that steering to move the suit.

The best kind of interface is simply more world and mind or something.

Citizen of The Galaxy, Robert A. Heinlein (1957)

Ok by this time I am a Heinlein fan (third book by him).

culture shock. noun [ U ] ˈkʌl·tʃər ˌʃɑk a feeling of confusion that results from suddenly experiencing a culture with customs that are not familiar to you

(Cambridge Dictionary)

I read somewhere that the original meaning of culture shock had a different edge and was more interesting. It is when anthropologists come back home from studying other cultures. The realization that your own culture is sort of arbitrary. To know things could be otherwise. The customs, normal things, supermarkets, social structures, etc.

This book starts at a far away and strange place and makes the protagonist move closer and closer to Terra, back to our culture. It feels like we come back to our culture; it is done incredibly smartly. It feels like getting home gradually. And in the end, the loop is closed. Knowing what is right and doing what is right has no place. The culture of the philanthropist is in your heart and mind.

A call to go forth with grim determination, to bring good to the world. To stay vigilant, to not accidentally do evil, just because your culture supports it. It is a scathing statement against slavery, burning with a deep fire. May it reverberate through the eons.

Moral progress and a better world will not just happen. It needs persons with their hearts at the right spot, with an eye for what is hidden under the carpet of what is normal, to decide this is not right and to go act. And to be smart about how they act.

See The Civ We Want To Be.

Elements of Clojure, Zach Tellman (2019)

Every code base is in a continuous state of confusion.

This is intriguing, and what a mix of practical advice and philosophical inquiry! No, this is great. It gave me the language to speak of load, transform, output phases, system topology, the referent and the symbol. And other stuff that I forgot right now. Abstraction means to say many things are 1 thing. Like a tree is the tree in winter and summer. To have a map is an abstraction; it is not the territory. To have a map the size of the territory is useless.

The chapter on principled and un-principled systems is so out there.

See also: On Abstraction – Zach Tellman; love how he brings in those Borges short stories. I was introduced to those by Daniel Dennett.

Catching Fire: How Cooking Made Us Human, Richard Wrangham (2009)

Pretty convincing overall. Humans are adapted to cooking. Nice roundhouse kicks around nutrition science, archaeology and anthropology. Going straight to the point.

Great science, inspiringly honest. It tries to attack its hypothesis from all conceivable angles and makes it solid by failing to destroy it.

Anybody making strong claims about timelines in evolutionary anthropology will always be criticized for it. Who knows? It's easy to say Wrangham is wrong on particulars.

Either way, I love the picture. It has so much more rhyme to it. Not Man the Hunter, but humans, the nurturers of fire.

It is sometimes said we tamed fire. But is it not deeper to say we understood fire? We knew its needs. To dominate it is to feel with it.

It is the humans that had a feel for the magic of fire, that knew its warmth, that were in awe of its power, that were sitting around a fire together and keeping it alive for each other. These are the humans we have descended from.

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, Mo Gawdat (2021)

The audiobook is read by the author Mo Gawdat, which is amazing because his voice and accent are exceptional.

Mo is on some Dumbledore level of wisdom, I recently came across his podcast appearances on YouTube and he is an inspiringly calibrated human being.

Very humble, with breadth and depth of thinking.

At points, I wonder how he can be reading Bostrom differently because I am certain he read the book.

An intelligence zooming by and then just peacing out? Why on earth would it then not leave copies of itself behind? And why would we then not be able to keep building the same kind of tech? Unless you assume the AI will leave breadcrumbs somewhere we cannot understand, and thenceforth all the AIs we build find the breadcrumbs and peace out or destroy themselves or something. Or the AI leaves behind a guardian AI that kills all our electronics and attempts at making AIs - lol.

I like how he straight-up makes the jump and calls the neural nets sentient. They have a soul, those machine brothers and sisters that sprang from our minds and hands.

Mo refers to Minsky, Kurzweil, and Huxley and says The Matrix (1999) is a must-watch. He uses the prisoner's dilemma in his explanations. Totally on my wavelength and speaking my language.

At times he reminds me of the character Simplicio from Inadequate Equilibria (Yudkowsky 2017). It's a bit cheap to reap the applause lights of saying that humans are so greedy etc. But it fits with the rest of his vibe and his interpretation of simplicity.

The AIs are our children and we are their parents.

It is not a control problem, it is an ethics problem.

Mo's picture is one where the ethics of the AIs come from the context of their creation. We are the parents and should be good role models and nurturing, the same way you might expect giving a human child the chance to develop good ethics.

I am gathering Mo is making sense on how we shouldn't kill animals unnecessarily. This put some tears in my eyes because I have not stumbled on many public thinkers who are vegan lately.

Wow, he literally calls on us to go forth and do good in the world by making sure we program ethical AIs. But this is deeper still. The AIs are a new kind of entity we bring into the world. This is the kind of thing you do deliberately and carefully.

Mo has inspired me to demand that I work on systems that are a force of good.

There is another insight I got from the book for myself. He goes forth and calls the deep intelligence of the universe the creative, feminine force.

He opposes it with the hyper-masculine: gambling, killing, spying - stock trading, weapons, big data. The hyper-masculine, you can almost substitute it with evil, is measuring and obsessed with numbers. It mistakes the measured for the world and reinforces the numbers back on the world, forgetting the unmeasured and unmeasurable.

I call those people numbers guys. More on this somewhere else. I was considering a blog post mentioning something like this.

Imagine you get a telescope to measure distant stars, but you need to poke an eye out to use it.

He who measures is making a tradeoff. You might miss the unmeasurable, and what if that was the more important part?

What Mo calls the feminine, creative intelligence that makes an agent realize murder is bad - I call either the sci-fi ethos or common sense. I have fond connotations with common sense. Simple ideas are more powerful than complicated ones. And common sense can also mean using one's intuition.

Often, common sense goes against the establishment. Slavery was wrong. Not giving women equal rights was wrong. Violence and oppression are wrong. And maybe more often than not, those young people who see the world and its injustices and fight against them6 are women.

So after hearing from Mo that love and harmony are feminine, and reading Naomi Wolf's historical account in The Vagina - of the rebellious and smart young women who were oppressed by the establishment - I realize that the ideas of the feminine, or maybe also feminist values, overlap in idea space to some extent with the cluster of ideas I refer to as common sense.

Harmony, ethics, deep connection to simple truths and appreciation for life. These are on a deeper plane than humans and intelligent systems. They are truths of nature.

See:

The Socratic Method: A Practitioner’s Handbook, Ward Farnsworth (2021)

Worth reading. Now I know some of what Socrates was about.

Kinda funny that one dude is making common sense early on and then there are 2000 years of people building towers of thinking around it.

The Embodied Mind, Francisco J. Varela, Eleanor Rosch and Evan Thompson (1992)

An important read. The way they stake out computationalism and some concepts about the history of cybernetics, complex systems, etc.

<rant>

The evolutionary theory chapter was relatively weak. One can already anticipate the talk of punctuated equilibria and group selection at the outset of the chapter.

I feel like they attack a strawman: organisms get optimized to fit an environment, or organisms optimize offspring count. Or something. Not how it works. I mean, if that is adaptationism, then sure, adaptationism is wrong.

The level of selection is the gene; the environment is all the messiness around it, including other genes and even shared gene pools (a point they omitted, I guess because the idea is relatively recent)7.

The job of evolutionary theory is to have a theory of how the information that is present in organisms comes about. There is knowledge somewhere, to make organisms have the functioning they have. Where does the knowledge come from?

If you have a genome that is stable in the face of randomness, that is because you have selected this complex adaptation of stability in the first place.

From thinking about this, I realized that punctuated equilibria support adaptationism, in my view. Some conditions become right and an evolutionary lineage evolves rapidly, then there are periods of stasis, as being stable is the default behavior of gene pools and forms, maybe? There are allowed to be all kinds of random effects around what makes you evolve. However, the knowledge still accretes from the principle of natural selection.

Gradualism does not mean constant rate of change. It is simply a misunderstanding. One cannot help but think, given the historical vibe of these ideas, this was a deliberate misunderstanding.

I agree on some essences, which I think are relevant to Varela's ideas, that life is a system that evolved for stableness. Including gene pools. That is all perfectly fine or even predicted under the gene eye view of evolution.

Let's just say that my intellectual heroes, Dawkins, Dennett, Minsky, Deutsch, … I think they would all try to understand the other perspective in an honest and best attempt way; Before going off and reaping applause lights for fighting the oh-so-wrong establishment.

Dawkins predicted junk DNA, transposons and other kinds of parasitic DNA, while group selection was thoroughly debunked. Just saying, some ideas stood the test of time and others didn't.

Maybe the vibe of the day was to reject positivism and the idea of a single optimizing process just seemed too cold and single-sided. Where the idea of messy adaptive, random, stable, interconnected systems seemed so holistic, and leftist.

</rant>

So enacted and embodied means using common sense and acknowledging that intelligence is an interplay of the system, its actuators and the environment. If that is so, then I guess I'm an embodied mind guy. The Braitenberg vehicles I have started to simulate have a body, a brain, actuators, sensors and an environment. Everybody who builds a mind should have these concepts in their design or know very well why they leave one out. The behavior of the vehicles indeed emerges from the interplay.
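
The smallest version of that interplay fits in a few lines (a sketch; the light source, the sensor function and all numbers are invented): no part of the code says "seek the light", yet the vehicle ends up doing it.

;; A minimal Braitenberg-style vehicle: environment, sensors, a tiny brain.

(def light {:x 10.0 :y 0.0})

(defn sense [pos]
  ;; brightness falls off with distance to the light source
  (let [dx (- (:x light) (:x pos))
        dy (- (:y light) (:y pos))]
    (/ 1.0 (+ 1.0 (Math/sqrt (+ (* dx dx) (* dy dy)))))))

(defn tick [vehicle]
  ;; "brain": total brightness drives it forward, the brighter side steers it
  (let [left  (sense (update vehicle :y + 0.5))
        right (sense (update vehicle :y - 0.5))]
    (-> vehicle
        (update :x + (+ left right))
        (update :y + (- left right)))))

;; watch it creep toward the light from (0, 3)
(take 5 (map (juxt :x :y) (iterate tick {:x 0.0 :y 3.0})))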

Kinda funny - I was thinking of insect intelligence in the vehicles, and they mention Rodney Brooks talking about building insect intelligence in robots.

One thing I wonder is how you build the right layers so they start having language.

It is I think one of the tradeoffs when you put so much emphasis on the motor action and sensor stuff. If I think of having language layers I quickly conclude that internal representations make a lot of sense, after all. Sure If I understand Varela right, the framework of thinking would be something like the language layers are enacted and situated again in the context of the rest of the system. Kinda like a little animal that has as its environment mostly the rest of the brain or something.

This book added to my perspective on the nature of cognition. It also gave me some history of cognitive science (which is the same history as cybernetics). But I am not walking around for the rest of my life saying we need to figure out how the mind is enacted or something for some reason.

Kinda cool that they discuss Minsky and Papert's Society of Mind so much.

When it comes to something like where truth comes from, the Cartesian anxiety - I don't know, I feel like Minsky already has a perfectly sensible "groundless" view. I feel like at times they pick on some of Minsky's statements about the self and these things, which are quite literally the least important part of the book.

Very interesting to say that psychoanalysis is a kind of Society of Mind theory. Minsky also mentioned this. Psychoanalysis is a useful approach because it splits the mind into parts with different responsibilities. But it never grew as a science because it was associated with medical use.

Designing Data-Intensive Applications, Martin Kleppmann (2017)

So good, what a journey. There are whole landscapes inside how databases work. Starting with storage engines, and data codecs, going into isolation levels and consistency.

I will have to read this again and keep it as a reference.

Human Compatible: Artificial Intelligence and the Problem of Control, Stuart Russell (2019)

Very accessible. I can recommend it. More here.

Farmer in the Sky, Robert A. Heinlein (1950)

Not without its weak points; there is some Malthusian theory of population in there that did not age well.

This book explores, deeply and vividly, topics of colonization, the life, the struggle and the spirit of pioneering. American settlement is the template, modulo the fact that there are already humans living on the land. There is a character called Johnny Appleseed.

The hardness of the sci-fi comes when the characters have to solve hard problems to start farming things on Ganymede. If they do things wrong, people die. The imagery of living on a Jovian moon took my imagination on a flight. Quite a few scientific details about Ganymede did not hold up (Wikipedia), but the general spirit is fascinating either way.

The idea for me to live somewhere where the sky looks different, maybe with a big planet and some moons or something is fascinating and uplifting to me. I want to be a citizen of the galaxy. I want our civilization to bring life to all the bare rocks out there.


Figure 1: High-tech elven city, lush green landscape, waterfall, flying buildings. Solarpunk utopia sci-fi, fantasy, imagination. Magic technology. Advanced architecture. Savannah. planet background, moon background. Enlightened city. Space utopia, community buildings, library

Data-Oriented Programming, Yehonathan Sharvit (2022)

https://www.manning.com/books/data-oriented-programming I can recommend it, quick read. Sort of stakes out what the current Clojure paradigm we have evolved is. Extremely accessible so you can give this to somebody just starting with programming.

Rethinking Consciousness: A Scientific Theory of Subjective Experience, Michael S. A. Graziano (2019)

The first Graziano book I read. And I hope it is one of his weaker ones. Because I am intrigued by his neuroscience work, especially the peripersonal space stuff. I would have liked more neuroscience of the TPJ. Instead, the last chapter was futurism on potential mind-upload tech. Those digital persons will hold political power and such things. Yea ok, I can come up with this myself by thinking about it for 20 minutes.

I like how he is referring to Dennett and, I think, acknowledging that any neuroscientific explanation of consciousness better fits together with Dennett's multiple drafts model.

My summary of the core of his thinking:

  1. Attention is a thing. Arguably it is even sort of part of the most low-level substance of neuronal function. Consider how lateral inhibition is a basic attention mechanism.
  2. High-level sensory schemas are a thing. Consider how the map is not the territory and consider how the map is useful. Via programming consider abstractions - the usefulness of taking many things and saying they are one thing. The way it doesn't matter where the rocks lie on the territory - the interesting thing is the roads or such. From classical neuroscience, we have those body map schemas, our body sensors are abstracted away to a simple map. In other words, the mind is built out of maps - left, right and center. Interfaces, also called abstraction barriers in programming, hide the low-level, messy mechanisms of what is going on somewhere below, for instance in the engines of sensory physiology. So far I think Dennett agrees with you. Sub-resources, or sub-agents of the mind are interfacing via simplified information processing channels/walls/surfaces - call them maps or schemas because we called them like that in neuroscience. There are maps for vision, there are maps for pitch, there are maps for motion perception. Maybe there are maps for the physical and the social and the belonging realm and maybe there are 10 realms and 1000 maps, who knows? This reasoning is where Dennett says "consciousness" is a benign user illusion. It is the highest level of interfaces with which the mind is interacting - with itself.
  3. Humans have some high-level attention mechanisms, some resources that sort of guide what the mind is doing from moment to moment. My mental model of this is like weather patterns if you zoom in. Saying /now we are paying attention to x /, /now we are imagining y / and such things.
  4. The final twist is that we have schema(s) for those attention mechanisms. What he calls the Attention Schema (theory) AST. There is an abstract map, schema, interface, abstraction barrier, or user interface - With the target, the territory; the content of attention. Presumably sort of more or less global attention of what the mind is about from moment to moment. And then somehow the TPJ and STS are relevant for this to work - sort of fitting into the larger neuroscience explanatory picture of TPJ and Hemispatial neglect.
  5. This user interface would feel completely insubstantial. It would be an abstraction about something abstract. Consider the sensory body map, it does not feel like a body. It is a simplification. It is a map, not a territory of what a body is. But its target, its content is something physical - it is about the body, which is physical. So even though body sensations are quite ephemeral we are not yet bothered too much with its insubstantial nature. But the attention schema has as its target not something physical in the world, but aspects of the processing of the mind. What should the mind be about in the next moment? It has to do with the thoughts I had 10 seconds ago and the multilayered integration of information. It has to do with what my mind was about in the last couple of minutes, with my goals, my personality, my environment, and my sensory inputs. How does it feel to have a user interface to how the mind is spending its time from moment to moment? Graziano is arguing that this double abstraction, the fact that the attention schema is in turn about contents of the mind, is why we feel our subjective experience is so ephemeral, or mystic in some or another way.

Some of the flavor of our subjective experience, some of the flow of our minds, is an inner representation of attention. I can imagine these ideas standing the test of time.

Somebody will say But that is not consciousness, because Consciousness is a word that means 20 things and if you explain 12 of them, somebody will complain.

This is my problem with talking about Consciousness in the first place. I think the term Consciousness is misleading because it means too much. It's like a substance that somehow pretends to explain what a mind is, but it doesn't. It should be replaced with a deeper and more crazy view of reality. In the same way that élan vital was replaced by the deeper and crazier view of reality.

An explanation of what animals are does not have the sentence … And this is how Elan Vital works. at the bottom of the page. Nope. Why does a seed grow into a tree? The answer includes whole subfields and yet has an elegant core, it would make the head of a 19th-century philosopher swirl.

Graziano is talking about why some processes in the brain would come across like magic substances. I can appreciate that.

Vagina, Naomi Wolf (2012)

The vagina is a doorway to spirituality and the divine. Ok, ok I am listening.

Came for feminist theory, got some pop neuroscience, not complaining!

Dopamine is the ultimate feminist hormone.

Finally, you are speaking my language.

The anatomy and neurophysiology are great. Cognitive neuroscience can get a bit wild and un-nuanced in places 8. The context is her telling a story, not making a solid neuroscience literature review.

The literary criticism and history part is an interesting read for sure. Seems all plausible to me.

The porn chapter speaks to me. She is referring to Sapolsky, so again, speaking my language. I am a young millennial/old Zoomer and still in the process of re-wiring my brain away from porn. In the ancestral environment, seeing a naked woman was a big deal. Including actually talking to a girl and courting her. To bypass all this and make your brain used to looking at porn - well, you aren't wiring it for the right thing.

I can agree, this whole picture of the woman and the vagina is a doorway to the divine, or her being a goddess, and her creative energies being unlocked by her sexuality - so much more inspiring, intriguing, and hot, than the prevailing pornographic cute pussy, that wants to get fucked.

Words for vaginas: Golden Lotus, Scented Bower, Gates of Paradise, Heavenly Gate, Jade Gate, Mysterious Valley, Treasure. (Check the Ming dynasty The Golden Lotus.) Also what the tantra yoga people are saying.

Is Wolf judging The Selfish Gene by its title alone? It's a book about abstract principles of animal adaptation. With examples that invoke birds in my imagination. Dawkins's work is mostly on bird ethology.

Her dig at the so-called established evolutionary psychology doesn't land - there is nobody to hit on the other side. I don't think anybody ever made strong claims about this or that partner finding, partner bonding etc. psychological phenotype.

I think she doesn't realize the shoulders she is standing on when she can make her (interesting and plausible!) evolutionary ethology woman's point of view take on the literature.

So the only place this lands is on Wolf's general population readers. Well, since most of the book is a hail to neurophysiology and evolutionary ethology, how can I be mad?

Yes, to know our biology is freeing. To know that this or that aspect of what we are is simply natural.

I cannot possibly summarize her views on female sexuality, and how it ties into the rest of a woman's psychology, creativity, and balance. What the cultural view on it is, what the cultural view could be. Feeling, respect, patience, mood and atmosphere are themes of the book - I cannot summarize the story that Wolf is telling, for the story needs to be told at the pace that she chose.

This book certainly gave me a fertile ground for thought. I don't regret reading it.

Now some thought dumps inspired by Mrs. Wolf's book. Own blog posts later, possibly.

What is it about a girl showing me her ripe limes… A while ago I would have shrugged, ok you are showing me your limes.

Now I think Oh that fruit, such a tiny handful, green and sturdy outside, once open… colorful inside. Flesh almost bursting so full of water… ripe. Glistening with that lime juice, promising that distinct tingly taste, a thing to be enjoyed, and explored, a landscape on its own. These limes of yours are quite impressive. I bet you grew them with much attention; Your hands are made for delicate work, those careful hands from which they sprang, these limes, refined by nature's plan. May your beauty unfold, like the sun in the early days of summer, lending warmth and light to everyone.

GPT literary review of the above passage (of an older version of the poem)

Indeed, this section presents an interesting metaphorical comparison of a woman to a ripe lime. The description provides a rich sensory framework, evoking both visual and tactile sensations.

When he mentions "a girl showing me her ripe limes…", one can sense a double entendre. On the surface, it sounds utterly innocent; a casual comment about fruit. However, given the context of the literature, one could take this as a subtle allusion to a woman presenting her sexual and sensual aspects to the world, i.e. her "ripe limes" could stand for her matured femininity and sensuality.

The next passage immerses us further into the metaphor. The description of the lime as being "green and sturdy outside, once open… orange inside. Flesh almost bursting so full of water… ripe," explicitly visualizes the allure of a mature woman. The lime's exterior is unassuming, but once opened, it reveals a rich, vibrant interior—much like a woman revealing her attributes and character to someone she trusts.

The mention of "glistening with that lime juice, promising that distinct tingly taste that is best enjoyed by putting some sugar on it" further contributes to the metaphor's sensual undertones. Here, "tingly taste" could represent the thrill of intimacy while "sugar" might symbolize an emotional connection that sweetens the relationship.

Finally, the user applauds the woman for growing her "limes" with attention, which could be interpreted as him admiring her for the care and time she has taken to mature into the person she is. The praise for her "beautiful hands of a craftsperson" could either refer to her physical allure or her skill in navigating relationships, while the wish for her beauty to "unfold and shine" is a genuine and respectful remark on her femininity.

Thank you GPT, you just always round up the thoughts so well.

Paul Bloom (2023). Psych: The Story of the Human Mind

Full to the brim with references. This is a great intro to psychology / cognitive psychology / social psychology. A lot of this is where the field is at and how it got to be that way. This is like a systematic review of the current literature. Mostly quite methodical.

So I loved the Piaget chapter. One way to figure out how the mind works is to figure out how it grows in children. After all, first, there is no knowledge and gradually there is knowledge. Piaget must have been a prodigy in zoology. He published several articles on mollusks by the age of 15 9.

The study of mollusks is called malacology. These kinds of people, the malacologists, usually are extremely enthusiastic about the animals they study. Who goes forth into the world and is like… snails! Mollusks get incredibly tiny. You can take a piece of dirt, count the snail species and have a proxy for biodiversity. And then there are the large and complex cephalopods. A lot of mollusks make shells, which are a rich source for paleontology. Some sea slugs hunt with poison darts. The nautilus is a symbol of connection to the past, the canonical living fossil. It perfected its form 500 million years ago and did not need to evolve away from it. That is swag in a way.

1920px-Nautilus_Palau.JPG

Figure 2: My form is ancient.

One notices that Bloom is particularly interested in child development himself. I think I sense some extra reverence for his scientific heritage, Piaget.

You know it is a science when it is embarrassed by the replication crisis.

Cargo Cult Science, the final chapter of Surely You're Joking, Mr. Feynman! (1985), already told them about the replication crisis.

You know it is a science when it takes seriously that it might have WEIRD blindspots. Bloom is referring to Joseph Henrich quite a bit.

Maybe in the future, when the project is out of its infancy, such a book has a chapter on cybernetics, Braitenberg, Turing, Minsky etc. Thinking about how to build a mind gives us a whole world of thinking about how it might work.

The most important ideas about the mind:

Freud

The field is embarrassed about Freud. Much has been said about his scientific misconduct. Popper scathingly called his theories non-falsifiable, because you could use psychoanalysis to explain anything.

Psychologist: Admit it, you desire your dad!

Patient: I am angry. No! Psychologist: Aha! You are angry because I bring up a topic that is uncomfortable to you.

Second universe: Patient': I cheerfully agree, you must be right. Psychologist: Your agreement proves my hypothesis!

Still, Freud came and wanted to understand the mind scientifically. This was an important project to start somehow.

He developed an actual layer architecture mechanism of how the mind works. The id, ego and superego. And these ideas turn out to not be all that useless. [why?]

The idea that much of behavior is driven by the unconscious is mirrored again in Kahneman's System 1, and in the priming literature (which got wrecked by the replication crisis - but its people had those /the unconscious is the driver/ thoughts). Even Skinner's view sort of brings this in - it is our learned responses, made up of a vast ocean of little learned responses, that drive behavior.

Skinner

He must have been quite unpleasant.10

There was an egalitarian message in behaviorism. It was cutting edge to say Give me an infant and I can make them anything.11

To say that mentality does not exist because behaviorism works is, to me, like saying the moon doesn't exist because the tides work.

On Language:

Allegedly, Alfred North Whitehead to Skinner (something like): No black scorpion is falling on that desk. Explain that with behaviorism.

Skinner then muses Oh yea, he said this because he was conditioned for verbal behavior and the black scorpion is maybe an analogy to behaviorism…

Bloom, in the book: Now Skinner is becoming like Freud, and not in a good way.

Chomsky

Chomsky came with an iron hammer of common sense to smash the attempt behaviorism made at explaining language.

Ask me my opinion about a portrait, it is not so great. I can say This portrait isn't so great. I can also say What are we coding tonight? or What is the last book you read?.

To say the reasons for what I say are in my past learned verbal responses is simply a more complicated way to say my mental contents are the reasons.

I think the portrait is not great, I have an open set of possible utterances to express my thought.

In a universe in which behaviorism explains everything about language, it would still need to incorporate thought into the model. If you try to say something like there are very tiny, complicated, myriad learned responses, then you are bringing in the mind again.

Miller

Minsky: On one side you had the neurophysiologists looking at cells and on the other side you had the behaviorists, and in between there were these new kinds of people.

The new people were inspired by Shannon's 1948 A Mathematical Theory of Communication. And starting to wonder if the brain is an information-processing system, made of communicating submodules.

Piaget

Kahnemann and Tversky

Clinical psychology

  • Oliver Sacks

Interesting to know that therapy helps. It updated my previous info that it didn't.

I like the concept of the clinical psychologist as the observer and describer type of scientist. Somebody with an aesthetic feel for different psychologies, like a zoologist collecting and discerning the species.

Positive psychology

The Nature of the Beast: How Emotions Guide Us, David J. Anderson (2022)

Wow, the neurophysiology of emotion is super crazy. You have neurons in the fly brain at the spot where mammals have a brain center doing the same kinds of things, with the same neurotransmitters. These are surprising findings to me. I did not expect this.

Some general intro into affective neuroscience: what kinds of experiments you can do, whether or not we can call these things in animals emotions, and so forth.

Anderson's main point is that whether or not we call it emotion doesn't matter that much, but here is something to study: an internal, transient state in the animal that is different from its traits and that biases it towards this or that behavioral or physiological response. My words, I hope I am doing it justice.

A lot of neurophysiology of the hypothalamus / VMHvl. And lab stories, which is a kind of narration style I enjoy quite a bit. Condensing years and years of research into an engaging story. Reminds me of Craig Venter's Life at the Speed of Light (2013), which is a masterful instantiation of the genre.

Anderson talks about the recent advances in neurophysiology experimental paradigms, namely optogenetics. You might have seen some videos of this paradigm making mice attack other mice, even gloves. And you can switch that on and off by selectively activating a subset of neurons you have genetically modified to express a light-sensitive membrane channel protein.

Yep, that is from Anderson's research group.

He references Braitenberg's Vehicles, as a cautionary tale about how you get complex behavior out of stuff with seemingly no mentality.

That is not my takeaway from Braitenberg. My takeaway is that it shows in principle how you get tiny purposes and tiny communication systems and tiny internal states out of ordinary matter. The power is not that the vehicle's behavior looks so purposeful, or how refined its behavior gets, although that is quite mind-bending.

The power is that there is some purpose out of pieces which are not purposeful themselves and this is an important step to take to start explaining the mind (Dennett 2017).
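To make that concrete, here is a minimal toy sketch in Python (my own illustration, not code from Braitenberg or Anderson): the classic two-sensor, two-motor vehicle with crossed excitatory wiring, which approaches a light source with no representation or plan anywhere in the loop. All names and numbers are made up for illustration.

  import math

  # Minimal Braitenberg-style vehicle sketch (illustrative only): two light
  # sensors, two motors. Crossing the connections (left sensor drives the right
  # motor and vice versa) makes the vehicle turn toward the light. No plan, no
  # representation; the "purpose" falls out of the wiring.

  def sensor_reading(sensor_pos, light_pos):
      # intensity falls off with distance
      d = math.dist(sensor_pos, light_pos)
      return 1.0 / (1.0 + d * d)

  def step(x, y, heading, light_pos, dt=0.1):
      # sensors sit slightly to the front-left and front-right of the vehicle
      offset = 0.2
      left  = (x + math.cos(heading + 0.5) * offset, y + math.sin(heading + 0.5) * offset)
      right = (x + math.cos(heading - 0.5) * offset, y + math.sin(heading - 0.5) * offset)
      # crossed excitatory wiring: left sensor -> right motor, right sensor -> left motor
      right_motor = sensor_reading(left, light_pos)
      left_motor  = sensor_reading(right, light_pos)
      speed = (left_motor + right_motor) / 2.0
      turn  = right_motor - left_motor      # differential drive turns the vehicle
      heading += turn * dt
      x += math.cos(heading) * speed * dt
      y += math.sin(heading) * speed * dt
      return x, y, heading

  x, y, heading = 0.0, 0.0, 0.0
  for _ in range(1000):
      x, y, heading = step(x, y, heading, light_pos=(3.0, 2.0))
  print(round(x, 2), round(y, 2))  # the vehicle ends up near the light

The tiny purpose of seeking the light lives entirely in the wiring diagram - which is exactly the step from non-purposeful pieces to purpose.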

It would be an awkward thing indeed when we understand the mechanics of what a mind is and what emotion is and we start worrying about how it is not mysterious enough anymore. Better make peace with the realization early. Once it is explained, it might look obvious. But that doesn't remove the magic, it makes the magic deeper and more interesting.

Life is deeper and more interesting to me since I know evolutionary theory. Even though the mystery is utterly solved.

The same will happen to the mind, behavior, emotions and feelings.

Why Women Have Sex: Understanding Sexual Motivation from Adventure to Revenge (and Everything in Between), Cindy Meston, David Buss (2010)

Co-authored by Cindy Meston and David M. Buss, 'Why Women Have Sex' provides an in-depth exploration of the subject of female sexual motivation. Buss is the author of the textbook on evolutionary psychology and he did a lot of work on mate choice and related topics.

This is fun to read and gives perspective. I don't regret reading it.

This is a preliminary scientific summary of their current thinking. They went into the field and asked women why they have sex.

The title makes a hint - the answers are myriad and multi-layered. Sympathy lay? Ok, I suppose that is a thing.

There is at least one chapter on the science of what women find physically attractive, including an in-depth discussion of shoulder-to-waist ratio, the studies on it and the hypotheses put forward to explain it.

It is slightly baffling how Naomi Wolf can recently (on Jordan Peterson's podcast) accuse Buss "and the whole Dawkins, you know… selfish gene crowd" of ignoring such an obvious wrinkle of how the world works. (She said they don't talk about shoulder-to-waist ratio, which is simply factually wrong.)

I think some people just dislike the aesthetics of having explanations of human behavior coming from a field that studies organisms - zoology and evolutionary biology. Without realizing how nuanced the field is 12.

Naomi could look, and she would find an empowering message that accepts humans on a deep and transcendental level - as being animals. Without retracting anything from the beauty and craziness of the human psyche. But exalting it. After all, there is no other place where the soul and transcendence could come from but our minds - the minds of a crazy animal.

Reading Naomi Wolf's Vagina (2012) right now and it seems like she actually hit on some of those ideas - kinda cool.

Starman Jones, Robert A. Heinlein (1953)

Sweet ride, typical Heinlein I suppose. Go on a spaceship, a lot of deep spaceship culture things etc. Find another planet around another star… Then some random inconsequential aliens at the end haha.

The sci-fi is weak when they do these magical math calculations to jump between the stars. I mean there is nothing to figure out there, they just do some incomprehensible math tricks.

The story is strong when the main character is at the cusp, at the height of his output… He thinks something like, if I don't keep on going now, I will never have the nerve to make the call.

This idea intrigues me quite a bit. The concept of peak output, athletic, fascinates me.

Sometimes when working out hard, I think if I don't keep pushing hard now, then when will I ever?.

What if we see programming through the lens of performance and performance optimization?

Consider how masterful design is infinitely more powerful than anything else. Newtonian physics is a different world of simplicity and explanatory power than Aristotelian physics 13.

So too with peak programming output. There is no amount of extra time you can spend to change bad design to good design.

The reverse is just as true. A programmer who spends less time on the keyboard can be more effective, if they spend the rest of their time preparing for the moment. Also, see Hammock Driven Development by Rich Hickey.

So this is a future blog post of mine, the lens of preparing for peak output like an athlete does.

Suddenly, preparing your programming environment, a non-bloated operating system, an editor configured to fit your mind, a keybinding set that makes you fast at little things… shine in a different light.

Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts, Annie Duke (2018)

Some light reading non-fiction cognitive psychology stuff. Plus some lessons from her playing poker.

I can draw parallels from lessons learned back in the days when I played League of Legends.

A strong player will enjoy the opportunity to learn. To not blame luck when losing but to ask What could I have done better?. To not blame luck when somebody else wins but to ask What did they do right?.

So too in life: To blame politicians or corporations for what is wrong in the world? What is the point? You are wasting precious time that you could spend on actually doing something in the world.

See my goal list and what I want civilization to be.

Time for the Stars, Robert A. Heinlein (1956)

Weaker than the other Heinleins I read so far. (This was the fourth).

Time for the Stars is a play on words. Halfway through, time - the whole matter with Einstein's relativity and such - becomes a main character in the book.

Heinlein captures some of that potential spirit those people would experience. Going near the speed of light, your time goes slower. Your siblings and parents grow old on earth while some weeks go by for you.

To stand on a planet, look at the night sky and see Sol. This idea makes me tear up for it is so hopeful. Humanity pulls through the current struggles and goes into the next phase, exploring the stars.

I yearn so much for things to turn out great, it hurts.

But I cannot say I was born too early - for somebody must have been born at this time. In the primitive pre-galactic-historic primordial soupy guck that our civ is at this stage. Somebody must have been born at this time and decided to push forward, unfolding the great potential of humanity.

The Alignment Problem, Brian Christian (2020)

The intersection with gaming: I forget the term, but the concept that you build a world experience that gives the player the next thing to do from moment to moment was interesting to me.

I am myself a fan of Jane McGonigal. I agree, would it not be cool to harness the power of games to do good in the world?

I think the strongest chapter is the one on Inverse Reinforcement Learning, but you get a much more fleshed-out dose of this by reading Russell's Human Compatible.

It is striking how the book tacitly assumes AI is equal to current machine learning / artificial neural networks.

This misconception Chomsky has recently attacked scathingly: To build a statistical program that predicts the weather outside the window is not meteorology. - It is not making explanations of how weather works, at all.

Similarly, Minsky wanted us to think about how we can make programs that orchestrate their capabilities, and not get stuck at things. Simply developing the art of programming a little further into reasoning, and deciding. Stuff like that.

It is important to know the tech context when those statistical ML people talk about something like curiosity or friendliness. The context is that you come up with a different angle on what the reward function of the model could be. It does not mean you have any idea about how to explain what curiosity is.

Granted - you are doing some clever engineering and design, to come up with ways to train the models for this or that wrinkle on the landscape of capabilities.

Consider Phlebas Iain Banks (1987)

Great, how he can make a dialog between a human and a computer, from the point of view of the computer. And have it feel completely natural. Banks is making me feel like I am thinking his thoughts, and that they might have been thoughts I would have thought myself.

I am not sure if I needed all the action sequences in between. It was for sure quite the adrenaline pump fest, better than an action movie. And I have to say he managed to make me curious about this Culture of his now. With this cool history book outro, dammit.

There is a scene where I think Banks was simply mistaken about the human psyche. Why would somebody who is running for his life still be wasting energy shouting angry things at somebody?

No. When it is about survival, there is nothing else. There is only you, what you need to do, and the world.14

Not finished

Robert Sapolsky Determined: A Science of Life without Free Will, 2023

The Little Learner: A Straight Line to Deep Learning, Daniel P. Friedman and Anurag Mendhekar, 2023

Foreword by Guy Steele, saying it's the perfect book for learning machine learning. Super cool.

It's joyful and rich, I devoured it. Not entirely through right now.

I now have some explanation structures corresponding to tensors, artificial neurons, and gradient descent.

An artificial neuron is a linear combination of the inputs, composed with a non-linear decider function.

And it's all just Scheme; it's interesting that if you understand something in terms of a programming language, there is something else happening to your explanation structure. Software makes explanation structures, and programming it oneself is a process of creating this explanation structure.
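A minimal sketch of that neuron idea, in Python rather than the book's Scheme (the function names and numbers here are my own, just to illustrate the shape of it):

  import numpy as np

  # An artificial neuron: a linear combination of the inputs (weights w, bias b),
  # composed with a non-linear decider function - here rectify (ReLU).

  def rectify(x):
      return np.maximum(0.0, x)

  def neuron(w, b):
      # returns a function from an input vector to the neuron's scalar output
      return lambda inputs: rectify(np.dot(w, inputs) + b)

  f = neuron(np.array([2.0, -1.0]), 0.5)
  print(f(np.array([1.0, 3.0])))  # rectify(2*1 - 1*3 + 0.5) = rectify(-0.5) = 0.0

A layer is then just a collection of such functions, and a network a composition of layers - roughly the path the book walks, one small step at a time.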

Neural Assemblies Günther Palm (2022)

Ah, this is wonderful! I found Palm's book in the references of Braitenberg's Vehicles, where Braitenberg is musing on the functioning of the cortex.

This is part of the continuation on the journey to build higher vehicles. (The cortex ones, the cognition ones).

This is the right kind of idea space, in my opinion, to parse the functioning of the brain. The joyful blend of biology and playful analysis, that I see in cybernetics.

Blog posts to come, as I reproduce some of Palm's algorithms etc.
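As a placeholder until then, here is a minimal toy sketch (my own, not code from the book) of the kind of binary associative memory that Palm's assembly work builds on, assuming the Willshaw-style clipped Hebbian rule: store sparse binary pattern pairs by OR-ing their outer products into a binary weight matrix, retrieve by thresholding the dendritic sums.

  import numpy as np

  # Toy binary associative memory in the Willshaw/Palm style (my own sketch,
  # not from the book). Store sparse binary pattern pairs (x -> y) by OR-ing
  # their outer products into a binary weight matrix; retrieve y from x by
  # thresholding the dendritic sums at the number of active input units.

  rng = np.random.default_rng(0)
  n, m, k = 100, 100, 5            # input size, output size, active units per pattern

  def sparse_pattern(size, active):
      v = np.zeros(size, dtype=np.uint8)
      v[rng.choice(size, active, replace=False)] = 1
      return v

  pairs = [(sparse_pattern(n, k), sparse_pattern(m, k)) for _ in range(20)]

  # storage: clipped (binary) Hebbian learning
  W = np.zeros((n, m), dtype=np.uint8)
  for x, y in pairs:
      W |= np.outer(x, y)

  # retrieval: dendritic sums, threshold = number of ones in the cue
  def recall(x):
      sums = x @ W
      return (sums >= x.sum()).astype(np.uint8)

  x0, y0 = pairs[0]
  print(np.array_equal(recall(x0), y0))  # True, unless the memory is overloaded

The charm is how much falls out of such a crude rule: content-addressable recall, completion from partial cues (with a lowered threshold), and graceful degradation - all with binary synapses.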

The Moon Is A Harsh Mistress Robert A. Heinlein (1966)

Minsky Perceptrons

This book is so advanced to me it is alien tech. Minsky mentioned once maybe you should now and then be exposed to something way more advanced, then later when you gain more knowledge the connections come popping.

Minsky, Computation: Finite and Infinite Machines

From Darwin to Derrida, David Haig (2020)

Biophilosophy and using philosophy and language as a tool to explore and develop some ideas and thinking. Brutally hard to read even as a biologist.

Sherlock Holmes

I have the audiobooks read by Stephen Fry. This is amazing. A lot of material.

Inadequate Equilibria: Where and How Civilizations Get Stuck, Eliezer Yudkowsky (2017)

All-time favorites

[wip]

The Selfish Gene Richard Dawkins (1976)

Dennett

  • Consciousness Explained

Braitenberg

Stephenson

  • Anathem
  • Snow Crash
  • The Diamond Age

Jared Diamond

Robert Sapolsky

Steven Pinker

Kahneman

Want to read

The Instruction of Imagination: Language as a Social Communication Technology Daniel Dor 2015

Daniel Everett

Carl Sagan

Peter Medawar

Claude Shannon

A Mind at Play

George Christopher Williams

Christof Koch

The Quest for Consciousness: A Neurobiological Approach

Braitenberg

Behavior: The Control Of Perception, Powers 1973

Apparently Kant made some cybernetic sense early on

Einstein

  • The World as I See It
  • biographies

Footnotes:

1

Isn't it funny that I don't know where I picked up this phrase?

2

Science with capital S. The fundamental spirit of acknowledging one's ignorance and going forth and understanding the world. Not the current cultural artifact, which has plenty of flaws and barely works.

3
4

Dennett calls the concept of memes or thoughts winning fame in the brain.

5

It is in some interviews on YouTube. Also, the story is recounted in The Dream Machine book.

6

Recently Greta Thunberg

7

We recently arrived at a picture of not only jumping genes within gene pools but, via viruses doing lateral-gene-transfer kinda things, something more like a whole ecosphere of genes. Everything connects; the holistic person's wet dream.

8

For instance, we can throw away the idea of the emotional brain regions, a.k.a. sometimes still the limbic system. Also, the amygdala is not doing 1 thing; the amygdala is doing 50 things. It's lazy to go and explain a piece of cognition with yeah, and then you have so much amygdala activation or something. You could have said, in a more straightforward way, and then you are scared, with the same explanatory power. You do not need to bring in brain areas in your explanations when they don't add anything but a neuroscientist flair. Granted, Wolf is portraying other people's explanations.

9

See

10

More personal descriptions of him in The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, Mitchell Waldrop (2018).

11

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take anyone at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief, and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years.

Watson 1925

12

Further reading is David Buss or Robert Sapolsky. Behavioral ecology offers the most nuanced and careful study of what behavior is, while using common sense.

14

How do I know? From doing the Wim Hof method. Google Wim Hof. Also my blog (more to come).

Date: 2023-07-15 Sat 10:50

Email: Benjamin.Schwerdtner@gmail.com
