🧠 Brain vs. AI: Forgetting Mechanisms
Task tracking is an evil that leads us to burnout. While the brain tries to forget, task tracking forces it to remember. That's self-harm! The brain has robust forgetting mechanisms for good reasons, and evolution never expected us to hack them with tools like Jira or a simple task list. Have you ever forgotten to add a task to the list, or to check the list itself? Thank your brain: it is still trying to protect you from overwhelm and an unpleasant way of living. We reliably remember the tasks tied to things we actually desire. So, here's to healthy forgetting!
While researching how forgetting functions in AI models, I dived deep into the topic, and I'd like to share my findings.
How is forgetting valuable in AI model training and decision-making?
Efficiency: removing outdated, irrelevant, or incorrect information.
Adaptability: discarding specifics tied to one context so the underlying knowledge can be applied in new contexts.
Load reduction: simplifying processes and improving speed and efficiency.
Regulation: managing wrong decisions.
Balance: preventing obsessive decisions.
Leverage: moving information between different layers.
Evolution: enabling consistent improvement over time.
Neuroscientists suggest seven processes of forgetting:
Decay Theory explains short-term forgetting: information fades when it is not consolidated into long-term biological memory. It's like text fading in an old book. Decay in AI models (LSTMs and other RNNs, GPT) reflects the fading importance of temporary data unless it is continually reinforced or deemed significant.
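To make this concrete, here is a minimal NumPy sketch of an LSTM-style forget gate, the part of the architecture that lets old cell state fade unless the current input argues for keeping it. The dimensions and weights are made up for illustration, and the input and output gates are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions; in a real model these weights are learned in training.
hidden_size, input_size = 4, 3
rng = np.random.default_rng(0)
W_f = rng.normal(size=(hidden_size, hidden_size + input_size))  # forget-gate weights
b_f = np.zeros(hidden_size)                                     # forget-gate bias

def forget_step(c_prev, h_prev, x_t):
    """One forget-gate step: scale the old cell state toward zero
    unless the current input signals that it should be kept.
    (The input-gate term that would add new content is omitted.)"""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f @ z + b_f)   # values in (0, 1): 0 = forget, 1 = keep
    return f_t * c_prev            # decayed cell state

c = np.ones(hidden_size)           # pretend cell state (the "memory")
h = np.zeros(hidden_size)
for t in range(5):
    x = rng.normal(size=input_size)
    c = forget_step(c, h, x)
    print(f"step {t}: mean |cell state| = {np.abs(c).mean():.3f}")
```

Because the gate outputs values strictly between 0 and 1, unreinforced memory shrinks a little at every step, which is exactly the fading-text behavior described above.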
Synaptic pruning leads to forgetting unused information in long-term memory. It's like pages disappearing from a book. Network pruning in AI models (CNNs, DNNs, GPT) makes them faster and less computationally demanding, which suits real-world deployment.
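Here is a minimal sketch of one common pruning criterion, magnitude-based pruning: the smallest weights are treated as little-used synapses and zeroed out. The sparsity level is illustrative:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights, keeping the strongest
    (1 - sparsity) fraction, much like discarding unused synapses."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(42)
w = rng.normal(size=(8, 8))                       # a toy weight matrix
pruned, mask = prune_by_magnitude(w, sparsity=0.75)
print(f"nonzero weights kept: {mask.sum()} of {w.size}")
```

In practice the network is usually fine-tuned after pruning so the surviving weights compensate for the removed ones.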
Interference Theory holds that new information can block the recall of old information, while old information can likewise hinder the memory of new. It's like having a single bookmark that you either place in a new book or leave in an old one. AI neural networks (FNNs, CNNs, RNNs) adapt to new data, sometimes forgetting or diminishing previously learned patterns, a phenomenon known as catastrophic forgetting.
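The toy NumPy sketch below shows the interference effect: a tiny linear model learns task A well, then training on task B alone overwrites it, and the error on task A climbs back up. Both tasks are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Generate a toy regression task with a known weight vector."""
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(X)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

task_a = make_task(np.array([1.0, -1.0]))   # "old" knowledge
task_b = make_task(np.array([-2.0, 0.5]))   # "new" knowledge

w = np.zeros(2)
w = train(w, *task_a)
print("task A error after learning A:", round(mse(w, *task_a), 4))

w = train(w, *task_b)                        # keep training on B only
print("task B error after learning B:", round(mse(w, *task_b), 4))
print("task A error after learning B:", round(mse(w, *task_a), 4))  # grows
```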
Retrieval Failure prevents information overload by making specific memories accessible only with the right cues (aka the tip-of-the-tongue phenomenon). It's like a bookmark gone missing from a book. AI models (Transformers and GPT variants) produce more relevant and streamlined outputs by requiring specific prompts or cues.
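Scaled dot-product attention, the core mechanism in Transformers, can be read exactly this way: a query is the retrieval cue, and only queries close to a stored key unlock that key's value. A toy NumPy sketch with hand-picked keys and values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A tiny "memory": each row of `keys` is a retrieval cue,
# and each row of `values` is the content that cue unlocks.
keys = np.array([[1.0, 0.0],    # cue for memory 0
                 [0.0, 1.0]])   # cue for memory 1
values = np.array([[10.0],
                   [20.0]])

def retrieve(query):
    """Scaled dot-product attention: the closer the query is to a key,
    the more of that key's value is retrieved."""
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ values

print(retrieve(np.array([1.0, 0.0])))   # sharp cue: strongly recalls memory 0
print(retrieve(np.array([0.3, 0.3])))   # vague cue: a blurred mix, like tip-of-the-tongue
```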
The remaining three biological processes of forgetting are Repression/Suppression, Neurodegenerative Disorders, and Sleep Deprivation. It's less obvious how these map onto AI, but I will share some ideas.
How AI experiences 'bad emotions' and neurodegenerative disorders
Directed Forgetting (the deliberate face of repression and suppression) helps humans erase undesirable memories under the influence of emotions. This is comparable to how emotional intelligence co-pilots our cognitive intelligence. In AI, 'bad outcomes' can be flagged by penalizing actions in Reinforcement Learning (RL), similar to how negative emotions inform human choices. While not a direct analog of directed forgetting, RL does involve storing past experiences and adjusting future actions based on them. This could help AI adapt quickly in dynamic environments like games.
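A minimal sketch of that penalty mechanism in tabular Q-learning (toy numbers, a single state, terminal-step updates only): repeated negative rewards drive an action's value down until the agent effectively unlearns its preference for it:

```python
import numpy as np

q = np.array([1.0, 1.0])   # Q-values for 2 actions in one toy state
alpha = 0.5                # learning rate

def update(action, reward):
    """Q-learning-style update for a terminal step (no bootstrapping):
    a negative reward pushes the action's value down, so the agent
    gradually 'forgets' its initial preference for that action."""
    q[action] += alpha * (reward - q[action])

for _ in range(5):
    update(action=0, reward=-1.0)   # action 0 keeps ending badly
    update(action=1, reward=+1.0)   # action 1 keeps paying off

print("Q-values:", q)               # action 0 is now clearly devalued
print("greedy choice:", int(np.argmax(q)))
```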
Neurodegenerative Disorders are conditions like Alzheimer's disease that lead to memory loss due to the death of neurons.
Obviously, we would want to avoid implementing this in AI, as it represents a destructive loss of function. However, studying these disorders could help us understand how to maintain the "health" of artificial neural networks over time, particularly as they are exposed to new data or experiences.
Neurodegenerative disorders are a stark reminder of the transient nature of individual memory and knowledge. This underscores the importance of communal knowledge, culture, and societal memory. In every generation, individuals contribute to this societal memory through creative works, scientific discoveries, and other means, ensuring that a portion of their knowledge and experiences is preserved and can benefit future generations. Meanwhile, the information that isn't passed on and is eventually lost opens up cognitive 'space' for new ideas and innovation.
Applying this concept to AI suggests exciting areas of exploration. For example, we might consider how AI models can share and preserve knowledge or 'clear out' space to make room for new learning. Should we use algorithms to strategically forget old data, or allow random 'memory loss' for innovation? This links to federated learning, where decentralized data builds collective knowledge, similar to how individuals contribute to societal knowledge.
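To sketch that federated angle, here is a minimal federated-averaging step over hypothetical clients, each holding a locally trained toy weight vector; the numbers and client sizes are invented:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine locally trained client weights into
    a shared model, weighting each client by how much data it saw,
    loosely analogous to individuals adding to societal memory."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# Three hypothetical clients with locally trained (toy) weight vectors.
clients = [np.array([1.0, 2.0]), np.array([0.5, 1.5]), np.array([2.0, 1.0])]
sizes = [100, 50, 150]
print("aggregated model weights:", fed_avg(clients, sizes))
```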
Finally, the Sleep Deprivation forgetting mechanism makes no sense to emulate in AI, and it does humans no favors either. So keep your sleep hygiene in order and get healthy sleep, unless you want to lose today's short-term memories.