Learntropy

This text is part of: "I would never send my kids to school" by Piotr Wozniak (2017)

Learntropy is the attractiveness of any signal from the point of view of the learn drive system.

Lectures can be boring or attractive. Learntropy expresses this attractiveness from the point of view of an individual learner.

Neuroscientists have noticed that the hippocampus responds to the level of entropy of the signal delivered to the senses (e.g. vision or hearing). Shannon entropy is a measure of the average amount of information carried by a signal sent through a channel. In the context of learning, the signal may take the form of a lecture, book, web page, picture, conversation, social interaction, exploration, etc.
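For reference, the Shannon entropy of a discrete source with message probabilities p(x) is defined as:

```latex
H(X) = -\sum_{x} p(x) \log_2 p(x)
```

The less predictable the messages, the higher the entropy, and the more information each message carries on average.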

If we look closely at the relationship between signal entropy and its ability to reward the learn drive system, we notice that the interaction depends on prior knowledge, encoding, emotional coloring, neural pre-processing, processing speed, and more. In other words, it is highly imprecise to invoke the term entropy in the context of efficient learning. We need an analogous term: learntropy.

While entropy has a precise mathematical definition, learntropy would probably best be measured by the response of the reward system to the act of learning based on the analyzed signal. Just as entropy depends on the probability of individual messages, learntropy will depend on the rewarding power of signal components such as words, pictures, sentences, shapes, etc. That rewarding power will be highly correlated with probability, but the estimate of probability will depend, among other things, on prior knowledge.
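To make the analogy concrete, purely as an illustrative formalization and not a definition from the text: where Shannon entropy weights the surprisal of each message by its probability, a learntropy-like quantity L would weight the rewarding power r_i of each signal component by a subjective probability p_i estimated from prior knowledge K:

```latex
L = \sum_{i} p_i(K)\, r_i
```

All symbols here are hypothetical placeholders; the text does not commit to any specific formula.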

For good learning there is a reward. However, there is also bad learning: a decoding failure carries a penalty. If a student makes an effort to decode a message and fails, he is penalized. This is how frustration is born. This is how the dislike of learning begins. If learntropy is low, the reward is small, the penalty is high, and the net result may be negative. If we take negative reward signals into account, learntropy, unlike Shannon entropy, could actually assume negative values.
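A minimal Python sketch of this net-reward arithmetic, under assumptions of my own (the component list, the decodable set, and the reward/penalty constants are all hypothetical): each component the learner can decode adds a reward, each decoding failure subtracts a penalty, and a signal dense in undecodable components scores negative, something Shannon entropy can never do.

```python
def learntropy_score(components, decodable, reward=1.0, penalty=2.0):
    """Net reward for processing a signal; may be negative.

    components -- pieces of the signal (words, pictures, sentences, ...)
    decodable  -- set of components the learner can decode
    reward     -- gain for each successfully decoded component
    penalty    -- decoding failure penalty for each failed attempt
    """
    score = 0.0
    for c in components:
        if c in decodable:
            score += reward   # successful decoding rewards the learn drive
        else:
            score -= penalty  # decoding failure: frustration, a net loss
    return score / len(components)

lecture = ["gradient", "descent", "backpropagation", "loss"]

# A novice who decodes only one term gets a negative score ...
print(learntropy_score(lecture, {"loss"}))        # -1.25
# ... while a prepared learner gets a positive one.
print(learntropy_score(lecture, set(lecture)))    # 1.0
```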
