Knowledge graphs such as NELL, Freebase, and YAGO provide a means to address the knowledge bottleneck in artificial intelligence domains such as natural language processing, computer vision, and robotics. State-of-the-art knowledge graphs have accumulated large numbers of beliefs about real-world entities using machine reading methods. Current machine readers have been successful at populating such knowledge graphs by means of pattern detection, a shallow form of machine reading that leverages the redundancy of large corpora to capture language patterns. However, machine readers still lack the ability to fully understand language. In pursuit of the much harder goal of language comprehension, knowledge graphs present an opportunity for a virtuous circle: the accumulated knowledge can be used to improve machine readers; in turn, advanced reading methods can be used to populate knowledge graphs with beliefs expressed in complex and potentially ambiguous language. In this talk, I will elaborate on this virtuous circle, starting with building knowledge graphs, followed by results on using them for machine reading.
Speaker Biography
Ndapa Nakashole is a post-doctoral fellow at Carnegie Mellon University. Her research interests include machine reading, natural language processing, machine learning, and data mining. She works with Professor Tom Mitchell on using machine learning to build computer systems that intelligently process and understand human language. She received her PhD from Saarland University, Germany, and her MSc and BSc from the University of Cape Town, South Africa.