Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., wrongly concluding that “The book on the table is blue” implies “The table is blue”). Second, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.
Speaker Biography
Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate, he studied computational linguistics at Yale. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.