If you’re a layperson who gets your news from the public relations departments of major industry research centers, you may think that machine translation is solved, having reached “human parity” sometime in the past few years. But the reality is quite different. While translation accuracy is, by some definitions and in certain narrow settings, indistinguishable from that of humans, claims of human parity rest on an impoverished definition of human capability. This talk will explore three lines of work whose collective goal is to provide neural machine translation systems with a few abilities that come quite naturally to us but are less natural in the modern translation paradigm, namely translating under supplied constraints, producing diverse translation candidates, and evaluating output more robustly.
Speaker Biography
Matt Post is a research scientist at the Human Language Technology Center of Excellence at JHU, with appointments in the Department of Computer Science and at the Center for Language and Speech Processing. He spends most of his time doing machine translation, but he has also worked on text classification, grammatical error correction, and human evaluation, and is interested in most topics in natural language processing. He is the Director of the ACL Anthology and has for many years helped to organize the annual Conference on Machine Translation (WMT). He spent the 2017–2018 academic year working with Amazon Research in Berlin.