In a recent study published in the journal Nature Neuroscience, researchers from the MRC Brain Network Dynamics Unit and the Department of Computer Science at the University of Oxford have shed new light on how the brain adjusts connections between neurons during learning. This discovery could not only influence research on learning in brain networks but also inspire faster and more robust learning algorithms for artificial intelligence.
Learning essentially consists of identifying the components responsible for an error in information processing. In artificial intelligence, this is done through backpropagation, where a model's parameters are adjusted to minimize output errors. Many researchers believe the brain employs a similar principle.
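To make the machine-learning side of this comparison concrete, the sketch below shows backpropagation on a tiny two-layer network. It is a minimal, self-contained NumPy illustration (the network size, toy data and learning rate are assumptions for the example, not taken from the study): the output error is traced backwards through the layers, and each weight is nudged in the direction that reduces that error.

```python
import numpy as np

# Toy two-layer network trained with backpropagation:
# the output error is propagated backwards to assign blame
# to each weight, and weights move down the error gradient.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))            # 4 toy inputs, 3 features each
y = rng.normal(size=(4, 1))            # 4 toy target values
W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 1))
lr = 0.05

for step in range(300):
    h = np.tanh(x @ W1)                # hidden activity
    y_hat = h @ W2                     # network prediction
    err = y_hat - y                    # error at the output
    # Backpropagation: the chain rule pushes the error back through the layers.
    dW2 = h.T @ err
    dh = err @ W2.T * (1 - h ** 2)     # derivative of tanh
    dW1 = x.T @ dh
    # Gradient descent: adjust the weights to reduce the output error.
    W2 -= lr * dW2
    W1 -= lr * dW1

print("final squared error:", float((err ** 2).mean()))
```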
However, the biological brain outperforms current machine learning systems. For example, we can learn new information by seeing it just once, while artificial systems require hundreds of repetitions to integrate the same information. Furthermore, we can assimilate new data while retaining previously acquired knowledge, whereas in artificial neural networks, learning new information often interferes with existing knowledge and degrades it quickly.
These observations have driven researchers to identify the fundamental principle the brain uses during learning. They examined sets of mathematical equations describing how the activity of neurons and the strength of synaptic connections change. Analysing and simulating these information-processing models revealed that they rely on a learning principle fundamentally different from that of artificial neural networks.
In artificial neural networks, an external algorithm modifies synaptic connections to reduce output errors. In contrast, the researchers propose that the brain first settles neuron activity into an optimally balanced state, which they call a prospective configuration, and only then adjusts its synaptic connections. This would be an efficient feature of how brains learn: it reduces interference with existing knowledge, which in turn speeds up learning.
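The sketch below illustrates this "settle first, then learn" idea with a small energy-based network of the kind the authors analyse (a predictive-coding-style formulation); the architecture, toy data and learning rates are illustrative assumptions, not the paper's exact setup. Neuron activity is first relaxed toward a configuration consistent with both the input and the desired output, and only afterwards are the synaptic weights updated from those settled activities.

```python
import numpy as np

# Sketch of "settle activity first, then adjust weights" in a small
# energy-based (predictive-coding-style) network. Sizes, rates and the
# toy data are illustrative assumptions, not the study's exact model.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))            # input, clamped
y = rng.normal(size=(1, 1))            # desired output, clamped while learning
W1 = rng.normal(scale=0.1, size=(5, 3))
W2 = rng.normal(scale=0.1, size=(1, 5))

for step in range(100):
    a1 = W1 @ x                        # start hidden activity at its prediction
    # Phase 1: relax neuron activity toward a balanced ("prospective")
    # configuration consistent with both the input and the target.
    for _ in range(50):
        e1 = a1 - W1 @ x               # mismatch at the hidden layer
        e2 = y - W2 @ a1               # mismatch at the output layer
        a1 += 0.1 * (W2.T @ e2 - e1)   # activity moves to lower the total error energy
    # Phase 2: only now adjust the synaptic weights, using the settled activities.
    e1 = a1 - W1 @ x
    e2 = y - W2 @ a1
    W1 += 0.05 * e1 @ x.T
    W2 += 0.05 * e2 @ a1.T

print("output after learning:", (W2 @ W1 @ x).item(), "target:", y.item())
```

Because the weight update here is driven by the settled activities rather than by an error signal pushed back from the output alone, the changes stay local to the neurons that actually need to change, which is one way to picture the reduced interference described above.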
In their study, the authors give the example of a bear fishing for salmon. In an artificial neural network model, losing one sense (such as hearing) would also degrade what the network knows through another (such as smell), which is not what happens in the animal brain. Their mathematical theory shows that letting neurons settle into a prospective configuration reduces this kind of interference between pieces of information during learning.
The lead researcher, Professor Rafal Bogacz, and the first author of the study, Dr. Yuhang Song, highlight the current gap between these abstract models and our detailed knowledge of the anatomy of brain networks. They expect future research to bridge this gap and to reveal how the prospective configuration algorithm is implemented in anatomically identified cortical networks.