critic-of-pure-bogus
This website is made without the use of generative AI (except in paragraphs marked as such).
Thought feed on the ongoing accelerating enshittification of the internet and everything.
Credentials: I am just a software engineer.
Conclusions
- Gary Marcus describes how current gen AI just doesn't work that well, and how it can be used by bad actors to spread misinformation and erode democracy.
- We are in the age of the information bubbles ('memepools' if you like).
- Yuval Noah Harari (Nexus, 2024) describes how information processing systems are powerful, and that it is not a given that they are a force for good.
- AI, computing and deep learning provide awesome new approaches to science and engineering, which we totally don't want to miss out on
- But gen AI in text and art is contributing to commercial bullshitting (SEO) of the internet, general loss of quality, enshittification
- Malignant information processing sucks our lives, brainrots adults and children, and is pointed against human creativity.
Make "Not made with gen AI" a quality badge, similar to fair trade.
Those who choose not to contribute to garbage should be able to show it.
- We want useful information processing. If tech causes harm without benefits, away with it.
- The press and information flows are essential aspects of democracy; flooding the internet with bogus (viral bullshit) erodes democracy.
- It would be useful to have some form of watermarking, which labels text or art as human-made.
- I would like to see Germany or Europe implement some system that allows me to identify myself as a person.
- AI regulation should incentivize research on actually useful AI.
- Current chat bots don't have the requisite 'cognitive structure' to take a command like "Don't spread misinformation".
- Artists need to be compensated when their work is used to train models and make money.
A similar issue exists for GPL-licensed code.
From the GNU GENERAL PUBLIC LICENSE Version 2:
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.
This means that LLMs trained on GPL code are a free product and should be licensed with GPL.
I don't think that
that in whole or in part contains or is derived from the Program
can be misinterpreted to mean that putting it into a statistical database is fine.
We know that LLMs even store text verbatim.
- This is a blatant violation of the spirit of free software.
- Embarrassing and pernicious, considering the history of computing and the infrastructure on which these companies are making their fortunes.
- Ilya Sutskever says current AI runs on the data of the internet, "the oil of AI".
- But taking licensed oil without asking, and then creating the notion of "job xyz soon not needed anymore": not nice.
AI, intelligence, personhood
- My fear is that garbage "AI" does damage without working that well.
- "True AI", AGI would be a person, because it would have to have a universal dynamism in it's own mental technology.
- It would be capable of understanding and therefore explanatory memes.
- It would participate in our culture as person.
- AGI technology will come from a breakthrough in the philosophy of programming.
- Everything that has to do with big data centers is making negative progress on AGI, because it is causing more confusion than clarity.
Thought Feed
A thought feed is a website with snippets of text, produced diary-style. The coherence between snippets might be low. The idea is that it reflects the ongoing thought process of a person.
Currently, artists are being ripped off, and the world is flooded with unethical garbage AI-generated content. Google search is full of commercial bullshit (SEO is a discipline - Jesus). Code quality goes down from generated code.
It's not the case that all deep learning is garbage. For instance, AlphaFold seems to be an impressively useful tool.
The world is flooded with bogus. We have to create spaces where this is kept out; we have to do something like watermark the information we put out there. Like "this is not made with gen AI".
Brainrotting the Ability to Have Opinions
I notice this myself:
Instead of forming an opinion, I paste into an AI chat nowadays. This is not good. This means that we are "outsourcing" and therefore losing the ability to form aesthetic judgments.
Especially now that we have a wave of 20-year-olds who should be growing their prefrontal cortex; who should be growing their ability to represent the structure and design of systems, and to form aesthetic judgments about them.
Not even mentioning the social-emotional domain.
If those things are outsourced to chat windows now, it would constitute brainrot.
Apparently Plato wrote this down: a story of Theuth (Thoth), the Egyptian god of writing, giving the "gift" of writing to King Thamus.
These are millennia-old ideas, and they resonate with the questions we have now, to the point of goosebumps.
Excerpted here, Thamus and Theuth:
Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Phaedr. Yes, Socrates, you can easily invent tales of Egypt, or of any other country.
"You give your disciples not truth, but only the semblance of truth," in other words, bullshit.
Writing, making a map of a terrain, is like the shadows in Plato's cave. Because of fallibilism, writing always has non-zero error. The alternative would be the map of the Borges story, where the map makers make a map the size of the country.
Mistaking the shadows for the world is the same misconception that makes somebody think that 'doing math' is the same as 'math benchmarks', how naive.
Angela Collier recently mentioned how producing 'hypothesis etc.' text is not the same as doing science.
Programmers have experienced the "this tool will replace x" story for some years now. The answer here is the same, producing text and source code is not programming.
Programming is about design and dealing with the real world. It is an AI-complete problem (an AI that solves it would be a person).
This passage gives goosebumps in the light of the looming problems of information apocalypse.
bogon: A memetic piece of information, replicating akin to a computer virus. It is usually generated by a computerized process. It usually contains information that is not easily recognized as fake by a human reader (fluent bullshit).
See also treme, meme, bullshit, slop.
Interestingly, I think that Plato was not talking about memory in any of the narrow senses of contemporary cognitive science. I think that when he says memory, he means taking in a context, sort of having a life.
An ongoing common-sense understanding of the world that grows with experience (Iain McGilchrist).
In that sense, memory is rather synonymous with mentality or the mind.
"They will not use their memory", like when you don't use your sense of aesthetics anymore.
"Memory" also means "embed" and "put into perspective" with "the rest of my experience".
I would add to this the "executive control": being able to have the free will of making a judgment, i.e. using one's memory as the basis for decisions.
My worry is that the chat windows are brainrotting both memory and executive control.
In this way, the current AI chats are creating the very world in which their existence is exalted.
If we use the AI chats for these tasks, we lose the ability to produce creative text and to judge text. My logic is "use it or lose it".
Btw, is this the passage that they quote to say Plato said that 'writing is bad because it erodes memory'? If so, what a misleading pseudo-simplification.
This passage foreshadows still open topics of neuro-epistemology. It is an anachronism to interpret the term 'memory' in some narrow information storage medium sense, which is a naive contemporary conceptualization that will pass.
Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews
We have studies now that LLM chat interaction can create false memories. Outsourcing "what you know" to a stochastic database of internet text.
Plato really was onto something when he said this thing with deteriorating memory. Again, my interpretation is that for him memory is more like what we call mind.
You could also try to make a cognitive model work that is completely centered on memory and retrieval. In this direction there is Hofstadter, for whom cognition is basically perceiving analogies (or situation analysis). This fits with associative memory models, but I stop here; the rest of my blog is about this.
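As a toy sketch of what "cognition as associative retrieval" could mean mechanically: represent situations as feature vectors and recall the most analogous stored memory by cosine similarity. This is my own illustrative reduction (all names and vectors invented), not Hofstadter's actual model:

```python
import math

# Toy associative memory: situations are feature vectors,
# retrieval = the stored memory most similar to the query.
# Illustrative sketch only, not a claim about real cognition.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical memories with hand-made feature vectors.
memories = {
    "getting lost in a new city": [1.0, 0.2, 0.0],
    "debugging unfamiliar code":  [0.9, 0.3, 0.1],
    "baking bread":               [0.0, 0.1, 1.0],
}

def recall(situation):
    """Return the stored memory most analogous to the situation vector."""
    return max(memories, key=lambda m: cosine(memories[m], situation))
```

A novel situation then "reminds" the system of the closest stored one, e.g. `recall([0.85, 0.35, 0.15])` retrieves "debugging unfamiliar code".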
Layering in Deep Learning Where It Is Useful
I am a fan of deep learning in domains where there is obvious use, like medical applications and research. AlphaFold and the antibiotics research are very powerful examples.
It is not great for writing scientific papers. The reasons have to do with the nature of creativity and its complected-ness with the real world.
The problem is that producing text and doing science are different things.
The companies are currently saying that they will build agentic systems around LLMs that go forth and create this feedback with the real world.
In order to do this, they would have to solve the old problems in AI, common sense and such things.
It is a huge claim to make that one knows how to solve it. It is said common sense and AGI are 'a hundred Nobel Prizes away'. To then say you 'internally have a theory of AGI' is an immense claim.
In my personal opinion, the kind of stuff that actually makes sense is niche currently. And I think there are years or decades of neuro-philosophy and software philosophy ahead before we know how to make intelligence like ours.
Layering in Deep Learning into Software Development
How does gen AI tech actually fit into software engineering and computing?
Currently, the answers are so naive.
They remind me of the huge brick 'mobile phones' right before there were modern mobile phones.
Why should generating code be a good usage of gen AI?
Producing a software system is more like writing a novel, or doing science.
Is it capable of doing this? No. Code quality is going down, unsurprisingly.
… In the coming trough of disillusionment:
The real answer is that you find the niches in your software stack where AI can do something useful. For instance google translate is very useful IMO.
Then we will reap the tremendous power of this tech and weave it into both software and how we make software.
If you can generate code with gen AI, you should be writing a compiler for your problem instead.
You are not solving the right problem if the current AI is able to solve your problem.
There are places where coding actually breaks down to a text manipulation task. This is where a text processing system is useful.
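To illustrate the "write a compiler for your problem instead" point above: if code is regular enough that an LLM could churn it out, a small deterministic generator can produce it from a spec, reviewable and repeatable. A minimal sketch (the spec format and all names are my own invention):

```python
# Minimal sketch: instead of prompting an LLM to write a pile of
# near-identical data classes, generate them from a declarative spec.
# SPEC and all class/field names are hypothetical examples.

SPEC = {
    "Point": ["x", "y"],
    "Color": ["r", "g", "b"],
}

def generate_class(name, fields):
    """Emit Python source for one simple data class."""
    params = ", ".join(fields)
    body = "\n".join(f"        self.{f} = {f}" for f in fields)
    return f"class {name}:\n    def __init__(self, {params}):\n{body}\n"

# The 'compiled' output: deterministic, diffable, no prompting involved.
source = "\n".join(generate_class(n, fs) for n, fs in SPEC.items())
print(source)
```

Changing the spec regenerates all classes consistently, which is exactly the property prompt-generated boilerplate lacks.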
Also, it is useful to have a dynamic chat window interface to Stack Overflow answers. Often, I can produce code to interface with something like a browser API. It's useful for this.
This should give reason to pause:
Anthropic says "please don't use AI" in job applications.
This is one of the companies that contribute to the current problem of AI-generated applications. It is exactly this burden on society that down-to-earth people like Gary Marcus and Angela Collier have warned about.
How many AI-generated slop applications do hiring people look through nowadays before they see real ones? It's a burden. But hey, you can use AI to filter. This probably makes it even worse, since the inter-computer reality (Harari) of these systems is probably such that whatever they rate as good is also roughly what you get out of using them.
We currently don't know how to make cat-level intelligence; let alone human-level intelligence. Even deep learning people are saying so: Yann LeCun.
Funny, LeCun's problem statement goes right back to the initial problems of AI. Yes, common sense (or "world modeling") is the core of the problem.
The current wave of "high performing in benchmarks" is a bit like saying that the grades in school have to do with the person's creativity or intelligence.
It is wrong there and it is wrong here.
In terms of Iain McGilchrist, it is a left-hemispheric delusion. It is mistaking the map for the terrain. It is thinking that some symbolic representation of a "test" says something about the high-dimensional activity of personhood.
The AI hype mistakes doing science for "writing scientific papers".
This is why they can delude themselves into thinking any of this tech is PhD level, or top 1% programmer. While from another perspective, it's dumber than a 4-year-old.
It is not even a cat. Hell, I'm not sure it is even "bee-level" intelligence.
It is the same mistake as thinking school grades say what the intelligence of a person is. It is just as wrong and misanthropic here as there.
It is mistaking the fake for the real. It is mistaking the stars reflected in a pond for the night sky.
How to be more intelligent than AI? By having a life, in the Iain McGilchrist sense.
This is related to common sense; Daniel L. Everett calls something similar cultural dark matter of language.
The surrounding context-giving perspective that we use to embed a thing into our life, that we update with experience.
LLMs are a natural language processing, data science technique. They are good for being some sort of mirror of language; they are powerful for some things like translation (if you don't care about the human touch). This is the current state of this tech; everything else is speculative.
The funny thing is I hear many research-level people saying something like
"This is good to solve task xyz. But I can't use it in my work, because I have to do things nobody else was doing before."
Well, guess what;
In software engineering, in art, and in craft we are dealing with the real world. "We are not making hamburgers here" is truer in a wider set of situations, I think, than researchers realize.
Software engineering is always practical research.
- Nobody builds the thing I'm building; else there'd be a compiler for it.
I need to understand what the problem is, what the user wants, what the customer wants, what I can do to deliver value. This sometimes means not writing software.
Show me the AI tool that is prompted with "you are a software engineer" that goes on to tell you that you don't need software for the job.
Dealing with the real world is related to doing original research.
Prompt kiddies are now gaining their first experience with code bases:
Welcome, software engineering is about managing complexity.
It is like the senior devs said: "this thing doesn't help with the big problems." The big problems of software are design and design tradeoffs. The problem is not 'can I write code that does xyz'.
Code quality is going down because of gen AI tools.
What better time than now, what better place than here?
Unite and raise your voices, gen AI critics of the world!
Bulshytt: (1) In Fluccish of the late Praxic Age and early Reconstitution, a derogatory term for false speech in general, esp. knowing and deliberate falsehood or obfuscation. (2) In Orth, a more technical and clinical term denoting speech (typically but not necessarily commercial or political) that employs euphemism, convenient vagueness, numbing repetition, and other such rhetorical subterfuges to create the impression that something has been said.
― Neal Stephenson, Anathem (quote: https://www.goodreads.com/quotes/8741909-bulshytt-1-in-fluccish-of-the-late-praxic-age-and)
Anathem is one of my favorite books. Stephenson is coining alternative words with alternative etymologies. One of many aspects that make this world rich and beautiful. This is an alternative world with alternative science history!
Gary Marcus has called LLMs a fluent spouter of bullshit.
"Fluent Bullshit" or Bulshytt is a good name for generative AI content, including "hallucinations".
Hallucination
dictionary: https://www.merriam-webster.com/dictionary/hallucination:
computing : a plausible but false or misleading response generated by an artificial intelligence algorithm
But hallucination is the modus operandi of generative AI. In a deep sense, all its outputs are hallucinated.
Powerful computer technologies are Linux, the internet, programming languages, Git.
When these technologies are brought to the user in the right way, they enable what we experience to be "the modern world".
The internet is so powerful and solid; it feels like a force of nature rather than a human artifact.
Powerful tech outshines garbage like the sun outshines a candle.
The only thing missing is that AI systems talk to each other, producing bulshytt catered to AI. Then they become tremes (S. Blackmore), replicators that replicate on information processing systems. In the inter-computer reality (Y. Harari 2024), the shared fiction created between computers.
Neal Stephenson has that in Anathem already, too. "Bogons" or something like that. TODO: find the quote, which had clear memetic connotations iirc.
The warning is that this might break the internet.
Once there is enough garbage, you won't trust anything on the internet anymore. Then, it becomes mere bogus, or total garbage.
This concept was already called information apocalypse (Sam Harris with guests).
It is unethical to contribute to bogus. It is one of the great cultural problems of our time to fight the bogus.
I am a fan of ordinary, useful, solid information processing, like GPS. This used to be AI ("GOFAI").
I feel like currently the term AI is used in a place where computing would fit better. For example, somebody would say "we need to build AI for Germany". You need to build better computing.
It is useful to make progress on solid information processing, optionally introducing fuzziness, high dimensionality, and learning. Aka neurosymbolic computing.
Slop
AI slop, commonly referred to simply as slop, is low-quality media—including writing and images—made using generative artificial intelligence technology. Coined in the 2020s, the term has a derogatory connotation akin to "spam".
Commercial Bulshytt: marketing speech. It is bulshytt in the service of selling stuff.
Gen AI Critics on YouTube (selected)
Angela Collier is refreshingly down to earth and non-bullshit.
Thanks to Freya Holmér, you are not alone.
Brainrot
"Outsourcing thinking" is a terrible idea, leading to Brainrot.
Memetics of 'Information Bubbles'
- the information flows supporting modern memeplexes rely on large exposure on social media. (assumption)
Irrational memes (Deutsch 2011) do well when they suppress criticism. In the meme-human interaction, the meme utilizes psychological properties and weak points of humans.
For instance, instilling social fear of criticizing a cult's ideas. Hooking into tribal ingroup-outgroup inborn instincts. Etc.
- A green beard (Dawkins 1976) is a (thought experiment) biological signal in the adaptive domain of a gene, signaling its presence to either itself or its geneplex, genes in close linkage disequilibrium (that is, they travel together).
- For instance, say a butterfly gene results in a conspicuous green beard on the butterfly organism; then conspecifics could choose to help individuals carrying the green beard preferentially. The point is, the green beard is not an adaptation per se. But it could bootstrap its own adaptiveness via altruism between its carriers.
- Green beard is actually used, for example, to make sense of some yeast biosignaling (see the Wikipedia article).
- I think it is the case that irrational memes signal ingroup-outgroup information, similar to green beards.
- There is also the red flag, an immediate, leading signal about a person or situation.
- If irrational memeplexes incorporate green beards, those can be used as red flags by a competing irrational memeplex.
- The problem is changing the environment to one where criticism is the criterion of good memes. We need to get away from a regime of irrational memetics.