Associative memories in the brain receive and store patterns of activity registered by the sensory neurons, and are able to retrieve them when necessary. Due to their importance in human intelligence, computational models of associative memories have been developed for several decades. They include auto-associative memories, which store data points and retrieve a stored data point s when provided with a noisy or partial variant of s, and hetero-associative memories, which can store and recall multi-modal data. In this talk, I present a novel neural model for realizing associative memories, based on a hierarchical generative network that receives external stimuli via sensory neurons. This model is trained using predictive coding, an error-based learning algorithm inspired by information processing in the cortex. To test the capabilities of this model, I perform multiple retrieval experiments from both corrupted and incomplete data points. In an extensive comparison, I show that this new model outperforms popular associative memory models, such as autoencoders trained via backpropagation and modern Hopfield networks, in both retrieval accuracy and robustness. In particular, when completing partial data points, our model achieves remarkably high accuracy on natural image datasets such as ImageNet, even when only a tiny fraction of the pixels of the original images is presented. In more detail, I start by providing an intuitive and historical introduction to predictive coding and machine learning, and conclude by discussing the possible impact of this work on the neuroscience community, showing that our model provides a plausible framework to study the learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.
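
To give a concrete flavor of the learning rule discussed in the talk, below is a minimal sketch of predictive coding on a toy two-layer generative model, written in plain NumPy. This is only an illustration, not the hierarchical model from the talk: all names, sizes, and hyperparameters are assumptions chosen for readability. Latent activities are first relaxed to minimize the sensory prediction error; the weights are then updated with a local, Hebbian-like rule driven by that same error.

    import numpy as np

    # Illustrative sketch: a two-layer predictive coding network.
    # Latent activity x predicts the sensory layer s via generative
    # weights W and a nonlinearity f. Inference relaxes x to reduce
    # the prediction error; learning updates W with a local rule.
    # (Sizes and learning rates are arbitrary toy values.)

    rng = np.random.default_rng(0)
    n_latent, n_sensory = 16, 64
    W = rng.normal(0.0, 0.1, size=(n_sensory, n_latent))

    def f(x):                # activation function
        return np.tanh(x)

    def df(x):               # its derivative
        return 1.0 - np.tanh(x) ** 2

    def infer(s, W, steps=50, lr_x=0.1):
        """Relax latent activities to minimize 0.5 * ||s - W f(x)||^2."""
        x = np.zeros(W.shape[1])
        for _ in range(steps):
            eps = s - W @ f(x)                  # sensory prediction error
            x += lr_x * (df(x) * (W.T @ eps))   # gradient step on the energy
        return x

    def learn(s, W, lr_w=0.01):
        """One predictive-coding training step on a single pattern s."""
        x = infer(s, W)
        eps = s - W @ f(x)
        W += lr_w * np.outer(eps, f(x))         # local, Hebbian-like update
        return W

    # Store a pattern, then retrieve it from a corrupted cue by
    # re-running inference and reading out the model's prediction.
    s = rng.normal(size=n_sensory)
    for _ in range(200):
        W = learn(s, W)

    cue = s + 0.3 * rng.normal(size=n_sensory)  # noisy variant of s
    x_hat = infer(cue, W)
    print("reconstruction error:", np.linalg.norm(s - W @ f(x_hat)))

In this toy setting, retrieval amounts to re-running inference on a corrupted cue and reading out the model's prediction W f(x); the hierarchical model discussed in the talk stacks several such layers and handles partial as well as noisy cues.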
About the speaker:
I (Tommaso Salvatori) am currently a PhD student at the University of Oxford. I did my BSc and MSc in theoretical mathematics at La Sapienza University of Rome, where I focused mainly on abstract algebra and geometry. My present focus is to investigate possible applications of computational neuroscience methods in modern machine learning. Among the many possible methods, I believe that predictive coding, an influential theory of information processing in the brain, is a promising candidate to improve contemporary AI. On the theoretical side, my research aims to find similarities and differences between predictive coding and backpropagation, and to investigate how these differences can be used to improve deep learning models. On the more applied side, I am also interested in building large-scale implementations of predictive coding networks, which are at present not yet used in industrial applications.