Machine Un-Learning: Why Forgetting Might Be the Key to AI

In neurobiological terms, forgetting happens when the synaptic connections between neurons weaken or are eliminated over time; as new neurons develop, they rewire the circuits of the hippocampus, overwriting existing memories (New Atlas).

Let’s take a simplified example: a child who speaks English and is learning Spanish will use relevant cues from English (nouns, verb tenses, sentence building) and apply them to Spanish, while simultaneously forgetting the parts that aren’t pertinent (accents, mumbling, intonation).

LSTMs aid in this process by helping a neural network 1) forget/remember, 2) save, and 3) focus; these roles correspond to the LSTM’s forget, input, and output gates, sketched below.

Elastic Weight Consolidation (EWC) is an algorithm published in March 2017 by researchers at Google’s DeepMind that mimics a neuroscience process called synaptic consolidation.
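
For reference, here is the gate arithmetic of a single LSTM step. This is a minimal NumPy sketch; the weight names (Wf, Wi, Wg, Wo) are illustrative labels, not any particular library’s API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step. W is a dict of weight matrices and biases
    (illustrative names; real libraries fuse these into one matrix)."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(W["Wf"] @ z + W["bf"])  # forget gate: what to erase from memory
    i = sigmoid(W["Wi"] @ z + W["bi"])  # input gate: what new content to save
    g = np.tanh(W["Wg"] @ z + W["bg"])  # candidate memory content
    o = sigmoid(W["Wo"] @ z + W["bo"])  # output gate: what to focus on / expose
    c = f * c_prev + i * g              # cell state: long-term memory
    h = o * np.tanh(c)                  # hidden state: what the network outputs
    return h, c
```

EWC itself adds a quadratic penalty that pulls parameters important to an old task (as measured by the diagonal Fisher information) back toward their old values. A minimal PyTorch sketch of that penalty, assuming `fisher` and `star_params` were already computed after training on task A (the helper name and the lambda value are illustrative):

```python
import torch

def ewc_penalty(model, fisher, star_params, lam):
    """Quadratic EWC penalty: parameters with a large Fisher value
    (important to task A) are anchored near their task-A optimum."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - star_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Training on task B then minimizes, e.g.:
# loss = task_b_loss + ewc_penalty(model, fisher_a, params_a, lam=1000.0)
```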

In the chart below, you can see what happened when the researchers applied EWC to Atari games: the blue line is a standard deep-learning process, and the red and brown lines are aided by EWC.

In the fall of 2017, the AI community was humming over a talk by Naftali Tishby, a computer scientist and neuroscientist at the Hebrew University of Jerusalem, and the evidence he presented for what he called the information bottleneck theory.

During the fitting phase, the network learns to label its training data; during the compression phase, a much longer process, it “sheds information about the data, keeping track of only the strongest features” (Quanta), the ones most relevant to helping it generalize.
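
For reference, Tishby’s idea can be written in one line: find a representation T of the input X that is as compressed as possible while staying predictive of the label Y, with β setting the trade-off. This is the standard statement of the information bottleneck objective; the talk’s exact formulation may differ:

```latex
% Information bottleneck: compress X into T (small I(X;T)) while
% preserving what T says about Y (large I(T;Y)); beta trades the two off.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```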

Neural modularity helps organisms evolve to learn new skills without forgetting old skills

Video summary of Ellefsen, Mouret, and Clune (2015).

The Chemical Mind - Crash Course Psychology #3

Artificial Neural Networks - Unsupervised learning

A network of Gaussian nodes connected to a perceptron being trained in two steps.
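
The two-step training described here matches the classic radial-basis-function recipe: an unsupervised pass places the Gaussian nodes, then a supervised pass trains the perceptron on their activations. A minimal sketch assuming k-means for the unsupervised step (the video’s exact method may differ):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Perceptron

def gaussian_features(X, centers, width):
    """Activation of each Gaussian node: squared distance to its
    center passed through a Gaussian."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# toy data with a nonlinear (XOR-like) decision boundary
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# step 1 (unsupervised): place the Gaussian nodes with k-means
centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_

# step 2 (supervised): train the perceptron on the Gaussian activations
clf = Perceptron().fit(gaussian_features(X, centers, width=1.0), y)
```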

After watching this, your brain will not be the same | Lara Boyd | TEDxVancouver

In a classic research-based TEDx Talk, Dr. Lara Boyd describes how neuroplasticity gives you the power to shape the brain you want.

MIT 6.S094: Recurrent Neural Networks for Steering Through Time

This is lecture 4 of course 6.S094: Deep Learning for Self-Driving Cars, taught in Winter 2017.

Decay and interference | Processing the Environment | MCAT | Khan Academy

Learn about decay and interference in human memory. Created by Carole Yue.

High-Accuracy Neural-Network Models for Speech Enhancement

In this talk we will discuss our recent work on AI techniques that improve the quality of audio signals for both machine understanding and sensory perception.

The Use of Hebbian Cell Assemblies for Nonlinear Computation

Christian Tetzlaff et al. (2015), Scientific Reports.
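
The Hebbian rule the paper builds on is simple to state: a connection strengthens when its pre- and postsynaptic units are active together. A minimal sketch of the basic rule (the paper’s cell assemblies rest on richer, stabilized variants):

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    """Plain Hebbian rule: weights grow where pre- and postsynaptic
    activity coincide ("cells that fire together wire together")."""
    return w + eta * np.outer(post, pre)

# toy usage: co-activating two patterns strengthens the mapping between them
rng = np.random.default_rng(0)
pre, post = rng.random(5), rng.random(3)
w = hebbian_update(np.zeros((3, 5)), pre, post)
```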

Lec-7 Associative Memory Model

Lecture Series on Neural Networks and Applications by Prof. S. Sengupta, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur.
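
The textbook associative memory of the kind such lecture series cover is the Hopfield network: patterns are stored with a Hebbian rule and recalled by iterating updates from a corrupted probe. A minimal sketch, assuming the standard binary (+/-1) formulation (the lecture’s exact model may differ):

```python
import numpy as np

def store(patterns):
    """Hebbian storage: sum of outer products of +/-1 patterns,
    with the diagonal zeroed (standard Hopfield prescription)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    """Iterate sign updates until the state settles on a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# store two patterns, then recall the first from a one-bit-corrupted probe
P = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
              [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
W = store(P)
noisy = P[0].copy()
noisy[0] = -noisy[0]                 # flip one bit
assert (recall(W, noisy) == P[0]).all()
```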

Remembering of drawing process with temporal autoassociative memory of spiking neurons

The task is to remember the drawing process: the spatial pixel structure as well as the drawing speed and pixel order (the temporal structure).