Many tasks in natural language processing require both human-annotated data and special-purpose models for each domain and language. In this work, we develop models and associated inference techniques that have milder data requirements and broader applicability across domains and languages. We develop our methods in the context of three specific problems: proper name modeling, named-entity recognition, and cross-document coreference resolution. Our technical approach is distinguished by combining ideas from representation learning and Bayesian inference.
Speaker Biography
Nick Andrews is a PhD candidate in Computer Science at Johns Hopkins University, co-advised by Jason Eisner and Mark Dredze. His research focuses on extracting structured information from unstructured natural language with minimal human supervision. His technical interests are in generative modeling, neural networks, and Markov chain Monte Carlo methods for approximate inference. He holds Bachelor’s degrees in Mathematics and Computer Science from Virginia Tech.