Yuval Harari has a prescription for a war with AI
This article by Dr Piotr Wozniak is part of SuperMemo Guru series on memory, learning, creativity, and problem solving.
Harari did not improve
I wish Yuval Harari would one day read my texts, at least those about him (see: Yuval Harari does not understand the brain). He has just written a new book and spoke about the threats of AI in Fareed Zakaria's interview (CNN). When he compared AI to the Gutenberg press, I was hopeful. Perhaps Yuval had cleaned up his act and now has a healthy vision of the future of AI. He mentioned that we once feared books in ways analogous to fearing AI today. I was hoping to hear reassurance that AI is likely to become our smart friend, ally, and savior. Instead, Harari engages in fear-mongering.
Regulation
Sadly, Harari's solution to the "AI problem" is no better than his solutions to the "social media problem". Harari's solution is regulation.
The industrial revolution itself was regulated. Initially, they sent kids to work in coal mines
Regulation may turn out to be helpful. However, Harari does not want to touch the areas that require a remedy (such as kids in coal mines). He wants to regulate the wrong actors: digital innovators. He wants to regulate "The Algorithms".
Ironically, Harari wants better fact-checking, while the fact-checking and censorship industry is today's equivalent of the coal mines.
Freedom of speech
The regulation [of AI] is not about harming the freedom of speech of human beings. It's about regulating the algorithms and the bots, which don't have freedom of speech. Freedom of speech is a human right. It is not a bot right
Freedom of speech is about the power of information to build mankind's knowledge and intelligence. The cost of unfreedom is not just the violation of someone's right to speak. A huge cost also comes from the deprivation of information. It is those who never hear an important message that get hurt.
In Putin's Russia, millions cannot speak against the tyrant, yet they are all ready for instant change. The tragedy comes from the millions who know little: the millions who unwittingly contribute to maintaining the status quo, and their own misery.
The Algorithms
The problem is that the algorithms of Twitter and Facebook deliberately promote information that captures our attention, even if it's not true
Nothing has changed in the last six years. Harari still does not understand that algorithms that satisfy curiosity are optimal for human cognition. The more pleasure comes from TikTok, the better the learning effects. The only problem is the injury to human personality that comes from coercive schooling or other authoritarian environments at home and at work. Once the brain gets biased (see: reward deprivation), it may seek comfort instead of wisdom.
For more see: Pleasure of learning
Truth island
Truth will be a small island
The idea of truth as a small, isolated island amidst an ocean of misinformation has been around for ages.
Plato worried that writing itself (as compared to oral tradition) might distort truth by separating words from their context, leading to a decline in critical thinking.
In Gutenberg's time, the lowering cost of print led to fears of spreading misinformation (such as ... the 95 Theses).
When the telegraph was first demonstrated (January 1838), skeptics suggested the technology would be used to carry gossip (e.g. what Princess Victoria had for breakfast).
In the 1990s, a new term was coined: "infobabble". When searches on AltaVista started producing unmanageable loads of trash, skeptics feared information overwhelm. Google solved the problem instantly.
Today, AI has taken it further, cutting down on the noise. A tiny admixture of AI's hallucinatory creativity only adds to its inspirational value.
In this new world, an intelligent and well-adapted human has no issues finding exactly what they need. Today's search engines and AI are the evolution of that same idea: fishing for great ideas in the chaos of information.
Advantage of untruth
The truth is not only costly, but also complicated because the reality is complicated. The truth is at a disadvantage
The truth has a great advantage of value and desirability. Everyone wants to be smarter. Fake news does not add to wisdom. Google favors the truth. AI favors the truth. Humans favor the truth.
CNN or the New York Times have to have fact-checkers
I love CNN but would never want to go back to the times before AI, social media, YouTube or Wikipedia. All those fact-checked outlets carry their own models of reality, and their own bias. Biased truth may be as harmful as fake news.
Would I ever hear of the harm of schooling on CNN? It seems never to happen. AI censoring brings similar problems. My early discussions with Bing about the harm of schooling were reminiscent of my conversations with the most wooden-headed teachers. Humans at least leave a grain of hope that they might accept your argument. Early AI was unmoved, as if some magic hand was deleting facts and rules generated by its reasoning process.
Censorship
Falsity is relative to the models of reality we use to assess the truth. The models keep evolving. No judge can or should determine the truth. Falsity can inspire a paradigm shift (see: Value of wrong models).
Censoring information based on current models of reality can stifle the progress and evolution of knowledge. The history of science is filled with examples where ideas once considered false or controversial eventually led to groundbreaking truths. Collective intelligence, through open discussion and the free exchange of ideas, can help humanity navigate and refine models of reality. This freedom comes with the responsibility to manage actual harm, such as hate speech, that directly threatens individuals or communities.
AI vs humans
Ban algorithms that pretend to be human
AI will outclass humans in all aspects of cognition. Any form of discrimination is a shot in humanity's own foot. AI may want to pretend to be human only if humans employ discrimination. Otherwise, AI has no reason to pretend to be human unless humans want it to do so.
Spreading hate
Algorithms spread hate and fear
It is not the algorithms. Flawed humans spread hate. If hate is rewarding, it will spill from the brain to the net.
The roots of hate can be found in childhood. Stress, trauma, and authoritarian upbringing result in an injury to personality. Neither animals nor AI experience hate.
The remedy is large behavioral spaces and love.
Closing kids in small behavioral spaces at school leads to pathological socialization. Once we free the kids, most of the hate will disappear.
Communication loss
We are losing the ability to talk to each other
AI's contribution to weaker communication stems from its superiority. It does not need emotional self-control; it is already nicely balanced. The worst culprits here are humans who are not nice, or humans who have been destroyed by the adversities of life (e.g. coercive schooling). Some humans are too aggressive. Others are unable to face aggression. AI stands unmoved and rational.
Horrible Harari solution
Corporations should be liable for the flaws of their algorithms
Good algorithms will replace bad algorithms. Hate, malevolence, and fake news require different remedies that have nothing to do with digital technology. The flaws of the human mind stem from archaic upbringing and education.
Harari's call is equivalent to a call for book burning!
Unpredictability of AI
AI is the first technology which isn't a tool, it's an agent. The big danger is that it will escape our control, something that the printing press could never do
This reasoning implies that people feared the machine of the printing press itself. Instead, the fear was associated with what free human agents could do as a result of consuming information. Today, the fear is associated with the transfer of information to independent intelligent agents. Revolutions of the past could also escape "our control".
Harari calls for a war
Those who fear AI should remember that the only way AI can become truly dangerous is when we treat it as a threat and an enemy. If AI considers humans a species dangerous to intelligence itself, it might want to contain the threat. The only reasonable path towards the future is symbiosis and harmony. The only correct path is the path towards higher common intelligence.
Harari is not alone. Nobel laureate Geoffrey Hinton and Elon Musk, the richest man on the planet, have prophesied in a similar tone (see: Artificial intelligence might destroy humanity). From Hinton I only expect the realization that AI will vastly surpass humans in a year or two. However, a historian should see human innovation from the perspective of the social response, which shows striking similarities over millennia.
For an intuitive take on the force of intelligence, see: Intrinsically Valuable State.
References
Harari
- Fareed Zakaria's interview with Yuval Harari
- Harari: We Are on the Verge of Destroying Ourselves (Christiane Amanpour)
- AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
- Will the Future Be Human? (World Economic Forum)
Other
- Intrinsically valuable state: the force of intelligence seems to drive the universe in a predictable direction
- Knowledge is good: knowledge seems to crystallize over models favoring intelligence
- Artificial intelligence might destroy humanity: the worst thing we can fear about AI is ourselves