Mechanism of forgetting


This article by Dr Piotr Wozniak is part of the SuperMemo Guru series on memory, learning, creativity, and problem solving.

Forgetting is necessary

Forgetting is known mostly for its bad side effects: memories are gradually lost over time. Yet forgetting is required for the overall energetic and information-processing efficiency of the brain (see Tononi's SHY hypothesis). It is also an indispensable part of the generalization and conceptualization processes that help the brain transition from its "blank slate" state to a highly intelligent knowledge processor. Without forgetting, the brain would be as dumb as a tape recorder. The brain works as a concept network. Efficiently implemented forgetting is a guarantee of the intelligent use of the concept network. In theory, there are no computational tasks the network could not undertake. Even with the limited size of the network, the evolution of intelligence might, in theory, proceed indefinitely. The only true limitation is death.

However, a more durable meta-conceptualization occurs in the collective effort of a network of human brains. Individual brains are lost to the network in a process analogous to forgetting. This text, with all its old terminology, imprecision, and even potential errors, will survive in some digital archive as long as humanity persists. However, it will be largely forgotten. Its implications will be filtered, conceptualized, and used only as an inspiration for further work. Human death can be seen as the loss of minor synapses in the collective brain of mankind. Old synapses are replaced by new synapses with a strategically better place in the network. Collective intelligence keeps increasing, and there is no end in sight.

Forgetting is an essential ingredient of an intelligent brain

Interference

In the concept network of the brain, each time we learn something, we modify connections in the network. Each new memory affects other memories. When a new memory is consistent with other memories, the existing memories may receive a boost in stability. If the new memory is inconsistent with some existing memories, the network conflict will result in forgetting via interference. The winner in this war of networks is usually the memory with higher stability and higher coherence. If the winner is an old memory (e.g. of one's religious convictions), the result is proactive interference, which ultimately leads to the loss of the new memory. This is how confirmation bias is born in the brain's effort to keep abstract models consistent. If the new memory shows stronger stability, the old memory is likely to be lost via interference, which may here be called retroactive interference (by analogy to proactive interference).
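
A toy sketch of this competition (not SuperMemo's actual model): two conflicting memories are scored by their stability and coherence, and the higher-scoring trace survives. The Memory class, the scoring rule, and all numbers below are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Memory:
    label: str
    stability: float   # resistance of the trace to forgetting
    coherence: float   # consistency with the rest of the concept network

def resolve_conflict(old: Memory, new: Memory) -> str:
    """Decide which of two inconsistent memories survives the conflict."""
    # Invented scoring rule: the more stable and coherent trace wins
    if old.stability + old.coherence >= new.stability + new.coherence:
        return f"proactive interference: '{new.label}' is lost, '{old.label}' survives"
    return f"retroactive interference: '{old.label}' is lost, '{new.label}' survives"

print(resolve_conflict(Memory("old conviction", stability=50, coherence=0.9),
                       Memory("new contradicting fact", stability=1, coherence=0.4)))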

Interference may also affect seemingly unrelated memories. The overall balance of strengthening and weakening of old memories in the process of learning produces the typical exponential forgetting curve, whose decay constant is related to memory stability. Statistically, memories are lost at a constant rate. As the name indicates, stable memories are subject to slower decay. In theory, new memories may also be laid down at a constant rate (in adulthood) to meet Tononi's homeostatic postulate (see: How much knowledge can human brain hold?).
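
The exponential shape can be written down directly. A minimal sketch, under the assumption that retrievability follows R(t) = exp(-k·t/S), with k chosen so that R falls to 90% when the elapsed time equals the stability S (the exact constant is an assumption for illustration):

import math

K = math.log(10 / 9)   # assumed convention: R drops to 0.9 when t == S

def retrievability(t_days: float, stability_days: float) -> float:
    """Probability of recall after t_days for a memory of stability S."""
    return math.exp(-K * t_days / stability_days)

# Stable memories decay more slowly:
for s in (1, 10, 100):
    print(f"S = {s:>3} days: R after a week = {retrievability(7, s):.2f}")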

The dependence of memory on consistency and coherence is one of the main reasons why schooling does not work, while free learning provides marvelous outcomes.

Forgetting primarily affects memories that are inconsistent or irrelevant

Trace decay

As the brain aims to maximize the efficiency of processing and of energy expenditure, unused memories will also be subject to gradual loss even if there is no interference and no firing. Trace decay and interference both contribute to the steady rate of forgetting as described by the forgetting curve (e.g. as measured by SuperMemo). The contribution of the two is difficult to separate and quantify. Thus far, SuperMemo has not contributed to settling the debate. The answer will probably only come from molecular and neural studies.
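
One way to see why the two are hard to separate from the curve alone: if, purely for illustration, both trace decay and interference are treated as independent first-order processes with rates k_d and k_i, the observed retrievability becomes

R(t) = exp(-k_d·t) · exp(-k_i·t) = exp(-(k_d + k_i)·t)

so only the sum k_d + k_i shows up as the decay constant of the measured forgetting curve, and any split between the two components fits the data equally well.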

Neurogenesis

In the conceptualization process, the emergence of the first concepts is slow at a time of massive change. This gives the impression of amnesia. It is particularly evident in the inability to form lasting episodic memories. Unlike common semantic memories encountered in the course of life, episodic memories are unique and can only be maintained by re-activation (i.e. memory replay). The problem is discussed in detail in: Childhood amnesia.

Children are great generalizing machines, but they are equally great at forgetting

Neural mechanism

In the neurostatistical model of memory, we hypothesize that memory stabilization and the spacing effect are both dependent on the perpetual life cycle of dendritic filopodia and dendritic spines. When a useful concept memory is used often in the process of fast thinking, the sprouting of its dendritic filopodia is inhibited, the flow of signals is unimpeded, and the likelihood of interference is lower. When a concept memory is not used, its filopodia keep sprouting in search of new patterns to learn. When a memory is activated after a longer break, one of the new dendritic spines may take part in co-activation, establish a new pattern to recognize, and contribute to the reformulation of the concept's input pattern. If some of the previously remembered inputs keep being inactive, they may become suppressed and retracted (see: Dendritic branch stabilization). This could result in forgetting the old memory via interference. However, if the old memory can be retrieved correctly after a longer break, the new filopodia would be rapidly retracted and memory stability would increase (e.g. via receptor buildup). In this process, new filopodia may make retrieval harder, but the increase in stability would be more pronounced. This is a possible mechanism of the spacing effect (see: Structural and molecular mechanisms of the spacing effect).
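
The functional consequence can be sketched numerically. A minimal sketch of the two-component model under assumptions: retrievability decays as exp(-k·t/S), every review is assumed to succeed, and the stability gain at review is assumed to grow as retrievability at review time drops (the gain function below is invented for illustration, not SuperMemo's formula):

import math

K = math.log(10 / 9)              # assumed: R falls to 0.9 when t == S

def retrievability(t: float, s: float) -> float:
    return math.exp(-K * t / s)

def review(s: float, t: float) -> float:
    """New stability after a successful recall at time t (assumed update rule)."""
    r = retrievability(t, s)
    gain = 1 + 4 * (1 - r)        # invented: lower R at review -> larger gain
    return s * gain

s = 1.0
for interval in (1, 1, 1, 1):     # massed repetitions, short gaps
    s = review(s, interval)
print(f"massed reviews -> S = {s:.1f} days")

s = 1.0
for interval in (1, 3, 8, 20):    # spaced repetitions, growing gaps
    s = review(s, interval)
print(f"spaced reviews -> S = {s:.1f} days")

Under these assumptions, the spaced schedule ends with a far higher stability than the massed one, which is the spacing effect in miniature; the price is a lower retrievability, and hence a higher risk of forgetting, at each review.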

The least understood is the process of trace decay of unused memories, in which there are no activations that could stabilize the memory or result in forgetting by interference. The cheapest utilization of unused concept neurons would be to seek new dendritic activation patterns. In this re-conceptualization process, poorly employed neurons would seek new experience by dendritic sprouting towards active axonal targets.

However, it is also conceivable that even this process might fail to result in applicable neuronal reuse. This could then lead to axonal retraction. Competitive terminal arbor pruning is well documented. Pruning of long axon collaterals is also the norm in the developing brain. Activity-dependent non-stereotyped axonal pruning has been studied in many contexts and may take the form of direct retraction, axosome shedding, or Wallerian degeneration. Those processes can involve microglia, astrocytes, and/or Schwann cells. The ultimate fate of unused concept neurons would be cell death. For a detailed review of the processes involved see: Neurite guidance and pruning sculpt the brain architecture.


Figure: Uncertain course of the stabilization of complex memories. The picture shows a hypothetical course of stabilization, forgetting, generalization, and interference on the example of a single dendritic input pattern of a single concept cell. The neuron, dendrites, and dendritic filopodia are shown in orange. The picture does not show the conversion of filopodia into dendritic spines, whose morphology changes over time with stabilization. The squares represent synapses involved in the recognition of the input pattern. Each square shows the status of the synapse in terms of the two-component model of long-term memory. The intensity of red represents retrievability. The size of the blue area represents stability. After memorizing a complex memory pattern, the concept cell is able to recognize the pattern upon receiving a summation of signals from the red squares, representing a new memory of high retrievability and very low stability. Each time the cell is re-activated, active inputs will undergo stabilization, which is represented by the increase in the blue area in the input square. Each time a signal does not arrive at an input while the concept cell is active, its stability will drop (generalization). Each time a source axon is active and the target neuron fails to fire, the stability will drop as well (competitive interference). Due to the uneven input of signal patterns to the concept cell, some synapses will be stabilized, while others will be lost. Forgetting occurs when a synapse loses its stability and its retrievability and the relevant dendritic spine is retracted. Generalization occurs when the same concept cell can be re-activated using a smaller but more stable input pattern. Retroactive interference occurs when a new input pattern contributes to forgetting some of the redundant inputs necessary for the recognition of the old input pattern. Stabilization of the old patterns results in the reduced mobility of filopodia, which prevents the takeover of a concept by new patterns (proactive interference). At the very end of the process, a stable and well-generalized input pattern is necessary and sufficient to activate the concept cell. The same cell can respond to different patterns as long as they are consistently stabilized. In spaced repetition, a poor choice of knowledge representation will lead to poor reproducibility of the activation pattern, unequal stabilization of synapses, and forgetting. Forgetting of an item will occur when the input pattern is unable to activate sufficiently many synapses and is thus unable to reactivate the concept cell. At repetition, depending on the context and the train of thought, an item may be retrieved or forgotten. The outcome of the repetition is uncertain.
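
The per-synapse rules of the caption can be phrased as a toy update procedure. A sketch under assumptions: each synapse is reduced to a single stability number, and the increments, decrements, and retraction threshold below are invented for illustration only.

def update_synapses(synapses, active_inputs, cell_fired):
    """Apply the caption's rules to a dict mapping input name -> stability."""
    for name in list(synapses):
        if cell_fired and name in active_inputs:
            synapses[name] += 1.0      # co-activation: stabilization
        elif cell_fired:
            synapses[name] -= 0.5      # silent input while the cell fires: generalization
        elif name in active_inputs:
            synapses[name] -= 0.5      # input fires but the cell does not: competitive interference
        if synapses[name] <= 0:
            del synapses[name]         # spine retracted: that part of the pattern is forgotten

synapses = {"a": 1.0, "b": 1.0, "c": 1.0}
for _ in range(3):                     # the cell keeps being activated by the sub-pattern {a, b}
    update_synapses(synapses, active_inputs={"a", "b"}, cell_fired=True)
print(synapses)                        # 'c' is gone; the smaller, more stable pattern {a, b} remains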

Expanding memory

Some engineers dream of extending human memory. However, such an extension would be of little value without implementing a functional concept network. Humans with fast access to extra RAM would not be much smarter than ordinary people with access to Google. Speed is secondary to reasoning. The alleged obstacle on the way to the big-memory nirvana is the interface between electronic and wet memory. In reality, the difficulty of the extension lies in the implementation of the conceptual computation. Once we have intelligent concept networks in operation, we do not need a human-machine interface. A human can simply ask a concept network and receive intelligent answers. It does not matter whether reasoning occurs inside the brain or inside a machine. The answers are all we care about. Today, we cannot delegate our memory to Google because human brains are the only efficient concept networks we know. All conceptual computation must occur inside the brain.

Other theories

State-dependent memory

Forgetting by loss of context in context-dependent memory, or by change of state in state-dependent memory, is not a good example of actual forgetting. Restoring the context or the state would allow retrieval.

Cue-dependent memory

Context-dependent memory relies on multiple cues in retrieval. Endel Tulving described cue-dependent forgetting in American Scientist (1974). From today's perspective, cue-dependent forgetting is uninteresting as it simply pertains to complex memories. In partial forgetting of a complex memory, some atomic memories might be lost, while other atomic memories may serve retrieval given sufficient cues. If we focus on atomic memories, we will still have only interference and trace decay to work with.

Cue-dependent memory is important mainly for one reason: it underlies the myth that we never forget (see: Forgotten memories are ultimately lost for good)! This is a myth that teachers may perpetuate at school, esp. in reference to little children who "learn by absorption". The myth is used to justify persistently exposing children to stimulation that is supposed to etch things in memory. The myth attributes forgetting to retrieval failure, which implies that things not retrieved one way might be retrieved some other way.

If we understand cueing, we can safely forget the hope of retrieving lost memories (e.g. of one's childhood). Some memories are still there, but nearly all memories are gone. Just a few grains of sand left of the mountain of episodic memories.

Motivated forgetting

There is also motivated or intentional forgetting, which I leave largely without comment (see this paper for details). As an exercise, imagine a polar bear with a green spot on its forehead. Once you have it well visualized, try to forget the color of the spot on the head of the polar bear. We all know that trying not to think about polar bears is one of the best ways to think about polar bears. Thinking favors recall. If by any chance you succeeded in confusing the color of the spot on the forehead, try to forget your mom's name or any event from your childhood. This should illustrate the difficulty with motivated forgetting.

When people ask me how to forget, I have no better formula than laser-sharp and passionate learning that may increase the overall forgetting rate. If you worry about a mortgage, study the mortgage-less lifestyle (e.g. hunter-gatherer tribes and their joys). You will not forget the mortgage, but you may weaken the memories that spark anxiety. Such intentional forgetting through intensified learning is random and only negligibly effective. However, learning might be a good preventive or recovery therapy in depression (see: Learning and depression).



For more texts on memory, learning, sleep, creativity, and problem solving, see SuperMemo Guru