Representing meaning in computers has taken many forms. Neural networks, for example, represent meaning as patterns of activation across their constituent neurons, often called hidden representations. How the brain represents meaning remains an open question, but measurements of neural activity via brain imaging (e.g., fMRI, EEG) have been used to approximate the brain's representation of meaning.
In this talk I will describe and compare the hidden representations of two very different neural networks, both to each other and to the human brain. I will show that neural networks trained on very different tasks nonetheless form strikingly similar representations, and that these representations also resemble the brain's representation of the same stimuli. Building on this result, I will show that the newly learned representation of a foreign word can be detected in the human brain using EEG. Together, these findings reveal a convergence of meaning representations across multiple learning paradigms, including human learning.
Alona Fyshe is an Assistant Professor in the Computing Science and Psychology Departments at the University of Alberta. She received her BSc and MSc in Computing Science from the University of Alberta, and her PhD in Machine Learning from Carnegie Mellon University. Alona uses machine learning to analyze brain images collected while people read, which allows her to study how humans represent the meaning they encounter in text, and how they combine words to understand higher-order meaning. She also studies how computers learn to represent meaning when trained on text or images. By connecting meaning representations in computer models to those in the human brain, Alona advances both our understanding of the brain and machine learning research.