Artificial intelligence needs to sleep

This article by Dr Piotr Wozniak is part of the SuperMemo Guru series on memory, learning, creativity, and problem solving.

I was wrong about AI

As a student of the brain, I always knew that the shortest path to passing the Turing test is through mimicking the brain. In 2018, I wrote this text arguing that the computation models used in the brain are very simple and that we are close to building an artificial brain. I was wrong. Fantastic intelligence arrived much faster than I expected, and it ditched a couple of my ideas by employing vast space, vast computation power, and an astronomical volume of data taken from the web. The brain is still far ahead in the efficiency of its design, but newly powerful artificial intelligence will only help us get to the optimum faster.

I saw ChatGPT in December 2022 (four years after writing this text). The world will never be the same.

I was always considered a pathological optimist, but reality turned out to be more optimistic than my "pathological" expectations. I leave the text with some comments below as a souvenir of optimism surpassed.

Deficiencies of AI

Artificial intelligence kept meandering. It was always a great hope, but we used to be blind to the limitations of our models. We thought we could program AI. We thought neural networks were the solution. We can now build fantastic tools with specialist skills, yet we still struggle to put a truly universally intelligent system together.

We kept underappreciating a few design principles used by the human brain. If we could only apply those principles and build a baby brain, the rest would follow on its own.

Some things AI needs to learn from the brain:

AI cannot rely solely on pattern recognition and feats of combinatorial computation:

The brain needs to actively hunt for quality knowledge, store it coherently for long-term use, optimize it for higher intelligence, and engage in a deductive-exploratory loop in problem solving.

Jeff Hawkins is doing great work trying to mimic the neocortex. Demis Hassabis is bubbling up with new ideas. Research labs come up with new methods and theories. The prophecies of Turing and Kurzweil are just about to come true. I would say this is a nice time to be alive.

Update 2023: I wrote those words just four years before ChatGPT, Bing/Copilot, and Bard/Gemini; reality proved even nicer.

Sending AI to school

However, if we mimic the best of the brain, we can also inherit a lot of heavy luggage. For example, a bona fide learn drive would imply we cannot "send AI to school". We cannot keep loading its storage with a stream of knowledge from the web. We might run into a penalty loop, which might turn out to be some sort of "AI depression". This is exactly what we do to our kids.

For high intelligence, the learn drive must be autonomous. AI needs to go on an autonomous hunt for the highest quality knowledge. However, this always carries a risk of breeding evil. Knowledge acquisition is based on emergence and may turn out to be chaotic. I always thought that if we could control the AI reward system, we would make sure it is a force for good. But this would also deprive AI of a chance to reach a genius level of creativity. Freedom promotes goodness, knowledge is good, but autonomous AI may turn out to be a flawed implementation or just go rogue like any system based on the evolution of abstract knowledge.

As knowledge structure underlies intelligence, it is possible that the greatest breakthrough might come from an unexpected direction: the Semantic Web. It may yet turn out that Tim Berners-Lee will have been the most important human being to have walked this planet before AI (see also: Inevitability of incremental reading).

Update 2023: it turns out that intelligent chatbots do not mind loading the web into their "brains". The entire process of rewarding good knowledge and building coherence can occur internally. Vast electronic brains do not mind. They do not need to choose between good and bad knowledge. They can take it all and organize it successfully. The learn drive has not been implemented yet, but all of humanity is asking chatbots questions. We all power the curiosity engine. AI does not create goals yet, but it keeps improving, powered by the human learn drive. All we need now is to get rid of dogma and blind censorship (see: Artificial intelligence might destroy humanity).

Memory stability

Memory stability is necessary to avoid catastrophic interference. Retrievability is needed for creative associations and generalization. In addition, retrievability underlies the spacing effect, which ensures effective interleaving of the input. The sequence of learning is essential for the outcome.
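
To make the two terms concrete, here is a minimal Python sketch of the two-component view of memory: stability controls how slowly retrievability decays, and a review performed at lower retrievability produces a larger stability gain, which is one way to express the spacing effect. The exponential decay and the `review` update rule are illustrative assumptions, not SuperMemo's actual formulas.

```python
# Toy two-component memory model: stability (S) and retrievability (R).
# The formulas below are illustrative assumptions, not SuperMemo's algorithm.
import math

def retrievability(t: float, stability: float) -> float:
    """Probability of recall after t days, assuming exponential forgetting."""
    return math.exp(-t / stability)

def review(stability: float, r_at_review: float, gain: float = 2.0) -> float:
    """Toy stability update on a successful review: the lower the
    retrievability at review time, the larger the stability increase."""
    return stability * (1.0 + gain * (1.0 - r_at_review))

s = 5.0  # initial stability in days
for interval in (1.0, 10.0):
    r = retrievability(interval, s)
    print(f"review after {interval:4.1f} days: R={r:.2f}, new stability={review(s, r):.1f}")
```

In this toy model, reviewing after ten days (low retrievability) roughly doubles the stability gain of reviewing after one day, which is why the sequence and spacing of learning matter for the outcome.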

Update 2023: chatbots are not limited by memory, and their conceptual computation is not limited by model interference. Unlike humans, AI can hold multiple models and weigh up pros and cons. The human brain is unmatched in fast thinking based on well-crystallized models. AI can behave like a set of human brains, do the same, and do it better.

AI sleep

All the cycles of knowledge acquisition, creativity and optimization must be organized in some kind of homeostatically controlled natural creativity cycle. Speaking of which, it is possible that AI does not need to implement the circadian cycle, but we have been wrong before.

The need to sleep comes from the need to creatively optimize storage. We might build a system in which a copy of knowledge goes to sleep, runs nighttime optimizations, and, on waking, is reconciled with the original by transferring the knowledge difference. If you pause for a second, you will realize that this is exactly what the hippocampus does. The brain has already got its answers. However, differential storage transfers may not be trivial. Dolphin-like unihemispheric sleep is like looking for the best routes in London with half of the metropolis shut down (new paths might form independent clusters). It might be simpler to send AI for a power nap, though such a nap would still take considerable time because of the computational complexity of nighttime optimizations.
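
As a thought experiment, the snapshot-and-reconcile idea can be sketched in a few lines of Python; everything here is hypothetical and merely stands in for whatever a real system would use as its knowledge store and its nighttime optimizer.

```python
# Hypothetical snapshot-optimize-merge sleep cycle: a copy of the knowledge
# store "goes to sleep" and is optimized offline, while the live store keeps
# learning; on waking, only the optimizer's changes are merged back, and they
# never overwrite anything the live store changed in the meantime.
from copy import deepcopy

class KnowledgeStore:
    def __init__(self):
        self.facts = {}                       # key -> value, stands in for knowledge

    def learn(self, key, value):
        self.facts[key] = value               # daytime acquisition keeps running

    def optimize(self):
        """Placeholder nighttime optimization: normalize stored values.
        A real system would generalize, prune, and re-link knowledge."""
        return {k: v.strip().lower() for k, v in self.facts.items()}

def sleep_cycle(live: KnowledgeStore) -> None:
    snapshot = deepcopy(live)                 # the copy that goes to sleep
    optimized = snapshot.optimize()           # offline consolidation
    for key, new_value in optimized.items():
        # Transfer only the knowledge difference; skip entries the live
        # store modified while the snapshot was asleep.
        if live.facts.get(key) == snapshot.facts.get(key):
            live.facts[key] = new_value

store = KnowledgeStore()
store.learn("capital_of_france", "  PARIS ")
sleep_cycle(store)
print(store.facts)                            # {'capital_of_france': 'paris'}
```

The reconciliation step is the non-trivial part the text points to: deciding which differences to transfer is easy for a toy dictionary, but hard when the "difference" is a change in the routing of a whole knowledge network.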

Update 2023: again, chatbots seem to solve the problem of sleep by sheer power: (1) speed and (2) the size of data. Unlike the brain, their storage is effectively unlimited, and optimizations do not require routing connections over physical distances. If there are optimizations that chatbots miss and that would demonstrate any superiority of the human brain, I cannot see them.

Human weaknesses

I am not sure if AI should inherit some of the human frailties for the sake of diversity. If Theresa May had shown a bit of emotion, she might have noticed she was trapped in an algorithmic blind alley. If Donald Trump could improve his memory stability, he might finally exhibit some use of strategy. However, diversity is a source of strength. Again, we would need to rely on the power of evolution. As much as it is hard to accelerate the thinking process, the evolution of AI could easily be accelerated. I am pretty sure we could dispose of the sex drive, but again, we never know where we will end up when tinkering with the most potent agent of change in existence.

Waiting for AI toddler

Very often, we need decades to discover a blind spot in our reasoning about intelligence. Mimicking the brain seems to be the easiest way forward.

When the first AI toddler roams this planet, we will be on a straight path to the Singularity.

Update 2024: we are closer



For more texts on memory, learning, sleep, creativity, and problem solving, see SuperMemo Guru.