Current deep learning models exploit the power of continuous, dense-vector embeddings of discrete tokens such as words. Traditional symbolic methods of AI, NLP, and linguistics exploit the power of structured discrete representations. I will present models that exploit dense-vector embeddings of representations that possess continuous structure learned from data. The continuous structured representations of natural and formal language expressions learned by these models enhance performance and interpretability relative to the unstructured hidden representations of standard deep learning models.
Paul Smolensky is a partner researcher in the Deep Learning Group and part-year Krieger-Eisenhower Professor of Cognitive Science at Johns Hopkins University.
His work focuses on the integration of symbolic and neural network computation for modeling reasoning and, especially, grammar in the human mind/brain. This work created: Harmony Networks (a.k.a. Restricted Boltzmann Machines); Tensor Product Representations; Optimality Theory and Harmonic Grammar (grammar frameworks grounded in neural computation); and Gradient Symbolic Computation. The work up through the early 2000s is presented in the two-volume MIT Press book with G. Legendre, The Harmonic Mind.
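To give a concrete sense of the Tensor Product Representations mentioned above: a discrete structure is embedded as a sum of filler-role bindings, each binding being the tensor (outer) product of a filler vector and a role vector; with orthonormal roles, a filler can be recovered exactly by contracting with its role. The sketch below uses made-up random embeddings purely for illustration, not vectors from any actual model.

```python
import numpy as np

# Tensor Product Representation sketch: encode the sequence "A B C"
# as T = sum_i f_i (outer) r_i, where f_i is the filler (symbol)
# embedding at position i and r_i is that position's role vector.
rng = np.random.default_rng(0)
d_f, n_roles = 8, 3

# Illustrative filler embeddings for the symbols A, B, C.
fillers = {s: rng.normal(size=d_f) for s in "ABC"}

# Orthonormal role vectors (one per position), so unbinding is exact.
roles = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))[0]

# Bind: sum of outer products of each filler with its positional role.
T = sum(np.outer(fillers[s], roles[i]) for i, s in enumerate("ABC"))

# Unbind position 1 by contracting T with its role vector:
# T @ r_1 = sum_i f_i (r_i . r_1) = f_B, since the roles are orthonormal.
recovered = T @ roles[1]
assert np.allclose(recovered, fillers["B"])
```

The whole structure lives in a single dense tensor, yet each symbolic constituent remains linearly recoverable, which is what makes such representations both neural-network-friendly and interpretable.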
Before assuming his position at Johns Hopkins, he was a professor in the Computer Science Department of the University of Colorado at Boulder. Prior to that, as a postdoc and then Research Professor at UCSD, he was, with G. Hinton, D. Rumelhart, and J. McClelland, one of the founding members of the Parallel Distributed Processing Research Group, which produced the bible of the previous wave of neural network modeling, the two-volume 'PDP books'. He holds a BA and a PhD in (mathematical) physics, from Harvard and Indiana University respectively.
He received the fifth David E. Rumelhart Prize for Outstanding Contributions to the Formal Analysis of Human Cognition in 2005.