Myth: Our brains can get hacked by algorithms
Brain hacking is a myth
Myth: Evil forces can hack into brains using psychographic algorithms
Fact: Data from Facebook is as good at swinging elections as it is at selling shampoo to a bald guy
Can algorithms get into your brain?
A powerful meme is making the rounds in the news these days. Allegedly, social media companies can now use their data to hack into our brains and affect our decisions. They can even swing elections. The tool they use for such exploits is psychographics. The power of psychographics is a myth, but the myth is very strong: a couple of my smartest colleagues consider it plausible.
It is true that a butterfly effect can swing elections. It is true that humans are irrational. However, the myth takes its carrying power from a serious misunderstanding of how the brain works, and how we model reality.
In this text I would like to explain why humans are so unpersuadable, and why this is a fantastic quality that underlies our collective intelligence.
Criticism of my text
My article about Yuval Harari has produced some interesting feedback:
You are wrong when criticizing Harari: Yuval Harari does not understand the brain. You speak of "The manipulation by ideas, not by algorithms. The algorithms only assist in strengthening models by supplying knowledge". I think you should broaden your knowledge to be able to judge. Search for "Cambridge Analytica investigation". It has been proven that Analytica used Facebook data, and psychological profiling to tip individual ad-hoc choices in Brexit, Trump election, and a number of other campaigns by exploiting and targeting hidden psychological fears, traumas, or preferences. This is not "supplying knowledge", it's hacking
Those words might indeed be used as evidence of brain hacking. However, this is not brain hacking by algorithms, but brain hacking by ideas again. Cambridge Analytica seems to have succeeded in convincing a large number of people that their methods, rooted in state-of-the-art psychology, are indeed effective. They did not need their psycho-algorithms. All they needed was plain old-fashioned marketing. They got into people's belief systems with that marketing.
News media are great at parroting the same meme with high fidelity. Awfully misleading science stories make the rounds in the news and grip the public. It then takes years of mythbusting to uproot those viral stories. Brain hacking has been around for years, but Brexit and the election of Trump gave it a new life. Today, many people consider the myth plausible.
The web is good news
Fake news is a reality. So are false claims in advertising. The most important things to remember from this text are:
- The internet makes us smarter (see: my proof)
- A healthy human brain is very resistant to persuasion and coercive learning, let alone hacking
Deniers of anthropogenic climate change and proponents of creation science prove that even perfectly coherent, high-quality scientific knowledge will not swing minds driven by powerful beliefs. Those beliefs do not need to be based on vested interest. They do not need to be religious. We are all "religious" about our beliefs in one way or another.
Brain algorithms keep guard over our beliefs
At the core of the futility of brain hacking is the natural human resistance to persuasion. This is a very powerful force. Every single losing electoral campaign has learned that in practice. Once the cards are on the table, most of the time, only a fraction of voters are persuadable. For those who tip with the arrival of new knowledge, the direction of change is predetermined by their belief systems. Only those with weak models or weak knowledge are of value in electoral campaigns in which well-crystallized forces clash. New brain-hacking tools are not mature enough to compete with the old tools. In addition, when they do improve, the improvement usually works for the better: new tools get better when they actually increase the knowledge of the voter. In the future, the electorate will only get smarter.
Modelling reality
The human brain has an immense power to spawn and protect coherent models of reality. All our behavior is determined by this abstract generalization rather than by the actual reality. At any given moment, our overall model of reality is confronted with sensory input data and the map of activations in the brain. The activations and sensory data are integrated using the model of reality to determine the next decision or the next action.
To ensure the efficient function of the brain and the body, the model of reality should be coherent and consistent. It is also helpful if it correctly reflects reality. Incorrect modeling leads to errors, incl. death. A child may believe it can outrun a train. This belief may provoke risky behavior, or it may lead to a prompt correction of the error in the model.
Two layers of model protection
We have evolved a number of imprinted algorithms for protecting models stored in memory. Those mechanisms work at the neural level (e.g. confirmation bias), and at the social level (e.g. resistance in coercive learning). A common myth says that those protections are based on faulty algorithms. Cognitive biases, incl. the confirmation bias, are considered errors of the mind. In reality, protection of models leads to diversity of models in a population, which leads to a competition between models, and the evolution of models. The crystallization of collective knowledge undergoes processes that are very similar to those that happen in individual brains (competition, interference, forgetting, revaluation, crystallization, stabilization, etc.). The entire evolutionary process underlies human collective intelligence, and the progress of mankind.
The two essential layers of model protection ensure model stability:
- brain level: generalization is an inherent property of neural networks. We build models that tend to reject inconsistent information. The result is a healthy phenomenon known as the confirmation bias
- social level: social interaction leads to group polarization. This means that a discussion tends to stabilize in one of two extreme states: either (1) a meeting of minds (to confirm models) or (2) intellectual combat (which favors collecting evidence to confirm models)
Generalization can be faulty, but it improves fast thinking and makes further generalizations easy. The bistable interaction between models at the social level leads to tribalism (the formation of clans and coalitions), accelerates generalization, favors coherent learning, and contributes to a clash of models that may, in extreme cases, lead to revaluation. In matters of high weight, on rare occasions, people change their minds under the pressure of overwhelming evidence to the contrary. A toy simulation of the brain-level mechanism follows below.
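For readers who prefer code, here is a minimal toy sketch of the brain-level mechanism (entirely my own illustration: the update rule, learning rates, and thresholds are invented assumptions, not a model from the literature). An agent discounts evidence that conflicts with its current model in proportion to how crystallized the model already is; as a result, a strong model stabilizes and becomes nearly impervious to contrary data:

```python
import random

def update_belief(belief: float, strength: float, evidence: float):
    """Update a belief in [0, 1] with one piece of evidence in [0, 1].

    Consistent evidence is absorbed at full weight; inconsistent
    evidence is discounted in proportion to model strength. This is
    a cartoon of the confirmation bias, not a validated model.
    """
    consistent = abs(evidence - belief) < 0.5
    weight = 1.0 if consistent else (1.0 - strength)  # discount contrary data
    belief += 0.1 * weight * (evidence - belief)
    # every absorbed observation crystallizes the model a little further
    strength = min(1.0, strength + 0.02 * weight)
    return belief, strength

belief, strength = 0.8, 0.5          # a fairly strong prior model
for _ in range(200):
    belief, strength = update_belief(belief, strength, random.random())
print(f"belief={belief:.2f}, strength={strength:.2f}")
```

In this cartoon, strength saturates near 1.0 after a few hundred observations and the weight of contrary evidence approaches zero: the model has crystallized, and only the overwhelming, repeated evidence mentioned above could still move it.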
Pain of broken models
Breaking up models is unpleasurable. If the model is strong, and its valuation is high, the pain may be significant. This results in an "emotional attachment" to personal beliefs. This seemingly irrational phenomenon is part of the model protection scheme.
Breaking up a faulty model will often lead to a rich compensation when a new model can be established. This is why it is much easier to transition between models than to break up an existing model with nothing new in its place. The breakup of a strong model may lead to a painful realization: "I have been wrong all along" or "I have wasted my youth/life on a wild goose chase". This is also a reason why a wrong model based on limited data is better than no model. A wrong model can serve as a skeleton of generalization upon which inconsistencies (errors) can be discovered. When a model is missing, coherent learning is inhibited, and the progress stalls. If there is a great deal of data that supports several alternative models, keeping all models in memory may be unavoidable, however, that approach is costly and error-prone. An early choice in favor of one of the models may accelerate progress (incl. falsification of the choice).
Smart people are stubborn
The smarter the individual, the stronger the algorithmic processes of generalization, and the stronger the socially-driven protection mechanisms. It is much easier to persuade an individual whose knowledge is less extensive, less stable, and less crystallized. Adult populations around the world may seem highly persuadable. The factors that weaken the mind are numerous: separation anxiety in daycare, limits on freedom at school, learned helplessness, depression, rat race, bad health, etc. The two key forces that lead to the weakening are the suppressed learn drive (less learning), and incoherent learning. Without the continuous flow of new knowledge, memory structures wither due to forgetting and interference. Instead of lifelong learning, we may experience the loss of the joy of living.
In the name of healthier and happier populations, we need to protect the healthy force of the learn drive and lifelong free learning, and develop tolerance for diversity of opinion. Debating points of view is healthy; however, incessant critical bombardment of weaker models may lead to their breakup without a chance to put in place new structures that could underlie recovery. Tolerance facilitates coexistence. The clash of models is welcome. However, the optimum scenario in a clash of models is no different from that of solo learning: it is always recommended to tackle one issue at a time. It is better to conclude one debate before commencing another.
We need to celebrate strong models (points of view), even if they differ from ours. When someone tells you "you are the most stubborn person in the world! You are impervious to argument", take it as a potential sign of your own high intelligence. Models do not need to be correct (see: Value of wrong models). We learn the myth of "the only correct model" at school, where only "one truth" is acceptable. That "one truth" often turns out to be just a point of view, or a specific interpretation, e.g. of a historical fact. Diversity of models needs to be protected and cherished. Unfortunately, we do not seem to have a brain algorithm for protecting diversity. Tolerance needs to be acquired by learning, in the same way as skepticism (i.e. immunity to fake news). Possibly, it is the modern connected world that necessitates that one extra layer of protection for models. We evolved in conditions of lesser social connectivity. Diversity is precious, but in a connected world, it can turn out to be overwhelming.
With a bit of self-discipline and training, we can develop a degree of tolerance for diversity. This may prove helpful in methodically resolving contradictions between diverse models: one at a time. My favorite model to clash with is the model of "good school".
I apologize for my incorrigibility
If you happen to read some of my texts, you may have the impression that I am impervious to argument too. I am proud of it. When I get blasted, I double down. Each time I hear of my stubborn stance, I look for good examples in which I was easily convinced by a coherent argument. Invariably, I hear back: "That's not a good example. This is obvious". In other words, I do not reject obviously valid claims. I reject those that clash with my models, and I am happy my models are strong, even if they may occasionally turn out wrong. If you see my error, let me know.
Books used to be better at hacking
To say that microtargeting algorithms can hack the brain isn't much different from saying that books can hack the brain. Psychographic hacking does not seem preposterous only because we have not yet learned all the pros and cons of the accelerated free flow of information on the net. If you claim that a psychographic profile is a big step ahead of paper books, I claim the opposite. Books have the better hacking power. While algorithmically supplied fake news has its falsification available just a click away, a book can monopolize the brain for weeks. Before the Reformation, a book was a treasure. If you had one, it was largely all the knowledge you had. Whatever the author opted to tell you was the only revelation of reality available (beyond the same old goat-market gossip).
The peak of brain hacking might have taken place in the times of Goebbels. Propaganda was the fake news of the past. Goebbels did not need microtargeting. An information monopoly worked much better than today's algorithms. It is the web that is actually the best tool to arm populations against propaganda. See: On freedom of education and freedom of information.
Stability of deep belief
The naked truth and the whole world of science cannot change inner beliefs. Climate change denial will continue when it is derived from beliefs that are often rooted in livelihood. Creation science will thrive, and an army of Richard Dawkinses won't make a dent. The backlash against immigration will continue. It is rooted in the quest for stability, predictability, safety, and the comfort of racial and cultural sameness. If brain hacking seems to work, it is only because of those inner beliefs that are immutable on the scale of a lifetime. Fake news feeds on the confirmation bias. A creationist with access to fake news will only become a smarter creationist. He will amass vast knowledge, including valid scientific knowledge, to keep his creationist models alive.
Imagine you are trying to convince a creationist that evolution is a scientific fact. I tried many times and failed. Now imagine you get help from a magic hand that tells you everything about the person you are trying to convince: personal habits, likes on Facebook, favorite YouTube videos, etc. How does that help you swing that voter to believe in biological evolution? The chances are just about zero, and the help from data would not even register under a microscope.
How can microtargeting swing an election that the Access Hollywood tape could not? A boldfaced and direct "Hillary got 6 months to live" might have more subliminal power.
Vulnerability to fake news
Vulnerability to fake news is a matter of insufficient training. During long years of schooling we are trained to uncritically accept and internalize the only truth provided by the teacher or the curriculum. This has disastrous consequences for our ability to generalize. Exposure to fake news is like vaccination. Healthy skepticism is trainable.
Psychographic profiling can indeed help identify vulnerable populations. A scientist or a CEO may be a poor target for a far-fetched falsehood. However, there is a long road from being duped by fake news to changing one's convictions, e.g. political leanings.
If we want to minimize the impact of fake news or brain hacking, we should address the vulnerability, i.e. that part of the population that has been injured by years of schooling and/or years of the rat race. If fake news is a falsehood vaccine, fake news is also its own worst enemy. It raises social awareness and resistance. The only information that is safe in the modern world is information based on the truth.
Public resistance to hacking
Yuval Harari, the popularizer-in-chief of the concept of brain hacking, will admit that it is a historic rule that we always fear manipulation by new media. Those fears are as old as the media themselves. Our current stance on brain hacking follows the wave of publicity for Carr's "Is Google Making Us Stupid?", and is reminiscent of prior complaints about the impact of print, the press, the telegraph, radio, TV, the web, etc. Our resistance is invariably strengthened, to the point that no one today would question the value of books or the radio. Fears of brain hacking will get rejuvenated when we are about to attach sensors to our scalps, get filmed all day long, or have our thoughts read in a brain scanner. Today's example might be the outcry against cookies. Several times per day we need to grant sites permission to plant cookies on our computers. If cookies are too much hacking, imagine scalp electrodes.
Capturing Facebook data is unanimously considered unethical. It is also widely considered illegal. What was possible yesterday may be impossible tomorrow. Google will come up with a TrustRank. Facebook and Twitter will fall (or reform), and we will live in a world that goes one step further in making good use of information.
Brain hacking isn't much more than just fancy marketing. The psychographic approach is at least half a century old. Targeted marketing has been growing in strength for more than a century. Psychological warfare is as old as written records: ancient warlords used decapitated heads to demoralize the enemy.
Invariably, as propaganda tools improve, so do our defenses. For popup adverts we have popup blockers. For TV adverts, we have advert removal in digital video. Even SuperMemo filters out advertising when you import texts from the web. On the other hand, brain hacking has its benefits. We like it when advertisers or websites understand us and serve information to our needs. When you happen to click a photo of a lady in a bikini, you may be flooded with more, and then it becomes creepy. Perhaps this is what inspired Cambridge Analytica's Chris Wylie to say "we know you better than your own wife".
Algorithms in service of creativity
If you look at the YouTube recommendation system, you can say it is a variant of brain hacking. However, this is a fantastic system in which the hacking is based on ideas. YouTube checks what you like to watch and presents its suggestions. This is very similar to neural creativity. On YouTube, you are served with picks from the entire YouTube library. In neural creativity, you are served with portions of knowledge that you yourself opted to master. On YouTube, you still need to make choices. In neural creativity, once you Go neural, you have no influence on the flow of knowledge. It is semantic, partially stochastic, and beyond your control.
You have a variant of "hacking" in Wikipedia too. At the bottom of articles, you have links to related articles. You can expand your knowledge along a semantic network defined by Wikipedia. This is again a variant of brain hacking in which ideas determine the influence. You will be fooled by YouTube and Wikipedia too, in proportion to your vulnerability. Vast knowledge and heavy exposure to fake news make a great protective shield.
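As a toy illustration of "hacking by ideas" (entirely my own sketch; the titles, tags, and scoring are invented for demonstration, and real recommenders are vastly more elaborate), content-based recommendation can be as simple as ranking unseen items by their semantic overlap with what you already chose to watch:

```python
# Toy content-based recommender; all titles and tags are invented.
from collections import Counter

LIBRARY = {
    "Neural creativity":     {"brain", "learning", "memory"},
    "Confirmation bias":     {"brain", "beliefs", "psychology"},
    "History of propaganda": {"politics", "media", "history"},
    "Spaced repetition":     {"learning", "memory", "software"},
}

def recommend(watched, top_n=2):
    """Suggest unwatched items whose tags best match the viewing history."""
    profile = Counter(tag for title in watched for tag in LIBRARY[title])
    scores = {title: sum(profile[tag] for tag in tags)
              for title, tags in LIBRARY.items() if title not in watched}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["Neural creativity"]))
# -> ['Spaced repetition', 'Confirmation bias']
```

The algorithm can only rank what the library holds against what you already opted to engage with. The ideas do the influencing; the final click is still yours.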
Harari himself knows a hermetic method against brain hacking. He goes for a 60-day silent meditation retreat in India and cuts himself off from the noise. Incidentally, he did so in 2016. He learned of Trump's election only in January 2017. If he is not hackable, why should you be? If you read these words, you are most likely well beyond the reach of brain hackers. At least for a while.
Social media
Social media experts allegedly use all their power to create algorithms that keep you hooked. They are successful: their algorithms attempt to maximize learning. As learning is pleasurable, social media can deliver a solid dose of pleasure. That's great.
Opponents of social media say: "How can you expose a child to a team of experts who want to hack its brain? The child has no chance". In reality, the child has one powerful advantage: it owns its own brain. Unhappy with the performance of the algorithms, it will quit and look for a better reward. The situation becomes pathological only if we expose the child to reward deprivation. The prime cases are screen limits and compulsory schooling.
Social media augur a better world with a more efficient flow of information. All the pathology resides in the brain damaged primarily by limits on freedom. Adding limits by coercively restricting social media will only add to reward deprivation and craving. See: On freedom of education and freedom of information
Can the Trump hack be reproduced?
Donald Trump's electorate in 2016 was impervious. He could indeed shoot someone without much effect. He will carry a great deal of that support into 2020.
Now that we know "all the truth" about hacking and all the lies that fooled the public, can we be sure of the result of the 2020 election, or of a second Brexit referendum? The latter will likely be a success, and the success will stem from two years of heavily educating the British public on the implications of Brexit. In addition, the electorate will be younger and more energized. Still, the margin of progress is microscopic, and a big chunk of it is due to the deaths of older voters who favored Brexit. If we can substitute the truth for the lies and still be unsure of the outcome, we can see it as proof that we are naturally resistant to brain hacking, and even resistant to absorbing high-quality knowledge based on the truth.
Marketing products is a different matter: there, the value at stake and the buyer's knowledge are small, and information is highly asymmetric. Moreover, product purchases correlate well with past purchases. The only difference between old-fashioned advertising and targeted advertising is that we all like the latter more: it makes more sense. Social data may correlate with political views, but it is less reliable in detecting vulnerabilities. In this, it is similar to old-fashioned advertising. It often misses the target, and when it does, the brain's natural resistance algorithms may produce the effect opposite to the one intended.
It makes sense to target undecided voters because their knowledge may be limited, or balanced on the edge of a decision. The same voters will often be attacked from many directions, and the net outcome will depend on prior knowledge. Again, this is not a hackable factor. Future campaigns will do far better by crafting a good viral message and a good, realistic program behind it.
It is possible to change elections by changing the turnout, i.e. without changing beliefs. All we need is to change the enthusiasm. This is far more efficient. In a rare study published in Nature, we learned that a message on Facebook might have made voters 0.4% more likely to vote in the 2010 election. This was not a case of brain hacking, though. Not even fake news. It was largely a case of the social influence of close friends. As such, it was a welcome case of social interaction for a better cause.
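For scale, a back-of-the-envelope sketch shows how a tiny per-person turnout lift can still add up to a large absolute number of votes (the audience size is my own hypothetical assumption for illustration, not a figure from the study, and I read "0.4%" as 0.4 percentage points):

```python
# Back-of-the-envelope arithmetic; the audience size is hypothetical.
audience = 60_000_000   # assumed number of users shown the message
lift = 0.004            # a 0.4 percentage-point increase in turnout
extra_votes = audience * lift
print(f"Extra votes: {extra_votes:,.0f}")   # Extra votes: 240,000
```

Note that such a nudge works through enthusiasm and the social influence of friends, not through changed beliefs, which is exactly the distinction drawn above.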
Before a myth turns into ridicule
We already had brain hacking in the form of subliminal messages in the 1950s. Like Cambridge Analytica, James Vicary spawned the phenomenon with his marketing tricks. Marketers and political campaigns still use subliminal messages, but to a thinking man they are just a marketing hoax or a plain joke.
When someone tells you that playing Mozart to a womb will make a baby smarter, you grin. When you hear of learning in sleep with a sleep-detector mask, you grin. When someone tells you about brain hacking, you may fall for it. The voodoo is new. It is a good meme. It swept the news around the world. If Anderson Cooper speaks about it, it must be true! It will take some time for the meme to fall to the same level of ridicule as the Mozart effect and polyphasic sleep. If Steve Bannon fell for the scheme, it is no shame. Cruz and Carson tried it and admitted it was a waste of money. But Donald Trump won, and that gave the meme a new breath of life. Most of the population still believes it.
The CIA's Stargate Project on remote viewing, terminated only in 1995, cost $20 million. That was nothing other than money spent on clairvoyance. It was based on myths that stem from the same lack of basic understanding of how the brain works. The program injected a whole horde of psychics with a great dose of material to boost their credibility. When bigwigs don't dig the brain, a monumental waste is around the corner. And the damage may extend into decades. Myths are easy to spawn and hard to kill.
Cambridge Analytica
The Cambridge Analytica story contains all the ingredients of a dark, gripping novel. It involves inflated marketing, conspiracy theory, bloated egos, abuse of trust, political greed, otherworldly coincidences, interpersonal feuds, mind hacking, bribes, secretive billionaires, lawsuits, Ukrainian girls, and more. In reality, the balloon of Analytica was inflated by wild marketing and had to pop. CEO Alexander Nix admitted that the claim of working on Brexit was just a PR slip, and that their actual involvement was nil. Similarly, after the failures with the Cruz and Carson campaigns, their work for Trump did not even rely on psychographic data. Post factum, Nix insisted he had been telling journalists that repeatedly, but once the meme was out of the bottle, nobody could control it.
Nobody knows more about electorate hacking than Eitan Hersh. His comment on the Analytica plot is concise:
"Let’s start with FB data, use it to predict personalities, then use that to predict political views, and then use that to figure out messages and messengers and just the right time of a campaign to make a lasting persuasive impact” ...sounds like a failed PhD prospectus to me
In a BBC interview, Rick Tyler of Cruz campaign was even more concise. He just used an expletive to describe the efficiency of the Cambridge Analytica technology.
If you miss Jon Stewart's top-IQ humor, here comes a worthy successor with a 6-minute summary: Electronic Brainwashing. What I write in this text, Trevor Noah knew all along! Pay attention at 1:25 to how Analytica's Christopher Wylie makes a marketing pitch for brain hacking. If you believe that, you have been had.
I would summarize Cambridge Analytica as a company with a meteoric rise and an abrupt bust. It was all bark and no bite. There were similar efforts before, and there will be more in the future. The coincidence of Trump's win, Russian hacking, and Brexit will breathe new life into those attempts in the future. However, the public will be that one bit more skeptical and the defenses will be much stronger. Brain hacking will transfer money from the pockets of believers to the pockets of future hucksters.
Conclusion
Did this article convince you? I hope not. You came to read it with your mind set. You either believe in brain hacking, or you poke fun at the ridiculous idea. If you are a believer, I will not convince you. The reason is the same one for which hacking is impossible. By not convincing you, I provide more evidence against brain hacking. My only hope of making a dent in your reasoning comes from the negligible chance that some of the facts I presented herein made an impression. If an undeniable truth collided with your models, you might experience temporary displeasure. This might spark more reading. If you find another author who provides you with an unbiased narrative, over time, conversion is remotely possible. This is not a conversion by brain hacking. This is a conversion by ideas.