There has been a renewed focus on dialog systems, including non-task-driven conversational agents (i.e., “chit-chat bots”). Dialog is a challenging problem since it spans multiple conversational turns; to further complicate matters, each turn carries many contextual cues and admits many valid responses. We propose that dialog is fundamentally a multi-scale process, since context is carried over from previous utterances in the conversation. Deep learning dialog models based on recurrent neural network (RNN) encoder-decoder sequence-to-sequence architectures lack the ability to maintain temporal and stylistic coherence across a conversation. João’s thesis focuses on novel neural models for topical and stylistic coherence and on their evaluation.
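To make the critique concrete, below is a minimal sketch (not João’s thesis code) of the standard RNN encoder-decoder sequence-to-sequence setup the abstract refers to, written in PyTorch with toy, assumed dimensions. The encoder compresses a single input utterance into one final hidden state, and the decoder generates the response conditioned only on that state, which illustrates why longer-range, multi-turn context and style are easily lost.

```python
import torch
import torch.nn as nn

class Seq2SeqDialog(nn.Module):
    """Bare-bones GRU encoder-decoder for single-turn response generation."""
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        # Encode the input utterance into a single final hidden state;
        # all earlier conversational context must fit into this vector.
        _, h = self.encoder(self.embed(src_tokens))
        # Decode the response conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.embed(tgt_tokens), h)
        return self.out(dec_out)  # per-step logits over the vocabulary

# Toy usage with random token ids (batch of 2; source length 5, target length 6).
model = Seq2SeqDialog()
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 1000, (2, 6))
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 6, 1000])
```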
Speaker Biography
João is a final-year PhD student at the University of Pennsylvania, advised by Lyle Ungar. His PhD research focuses on Natural Language Generation, particularly deep learning methods for non-task-driven conversational agents (chatbots) and the evaluation of these models. His research also includes work on word and sentence embeddings, word and verb predicate clustering, and multi-scale models. He is broadly interested in Natural Language Processing, Time Series Analysis, and Deep Learning.