Robots are an increasingly important part of our world and our economy, and they are being trusted with more and more complex tasks while working alongside and interacting with ordinary people. As such, it is particularly important that robots can be taught new skills on the fly by people who are not robotics experts, but are instead experts in their own domains. Enabling this requires a wide variety of tools, including new user interfaces, methods for learning tasks from demonstration, and new algorithms for intelligent execution of skills.
In this work, we discuss how we can allow end users to create more complex task plans incorporating reasoning and perception through the Behavior Tree-based CoSTAR interface. CoSTAR is the 2016 KUKA Innovation Award-winning interface for collaborative robot instruction. A 35-person study with novice users showed that CoSTAR was an effective and usable system for authoring robot task plans.
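To make the Behavior Tree structure behind an interface like CoSTAR concrete, here is a minimal sketch of how a sequence node can gate an action on a perception check. This is an illustrative toy in Python, not CoSTAR's actual API; the class names and the stubbed perception and motion calls are all assumptions.

```python
# Minimal behavior-tree sketch: a sequence node "ticks" its children in
# order and fails fast, which is how perception checks can gate actions.
# All names here are illustrative, not CoSTAR's actual API.

from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Node:
    def tick(self) -> Status:
        raise NotImplementedError

class Sequence(Node):
    """Succeeds only if every child succeeds, left to right."""
    def __init__(self, children):
        self.children = children
    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status  # propagate FAILURE or RUNNING
        return Status.SUCCESS

class Condition(Node):
    """Wraps a perception predicate, e.g. 'object detected'."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self) -> Status:
        return Status.SUCCESS if self.predicate() else Status.FAILURE

class Action(Node):
    """Wraps a robot skill, e.g. a grasp motion."""
    def __init__(self, skill):
        self.skill = skill
    def tick(self) -> Status:
        return Status.SUCCESS if self.skill() else Status.FAILURE

# Stand-in stubs for hypothetical perception and motion modules.
def detect_object(): return True
def move_to_object(): return True
def close_gripper(): return True

# A pick task: only attempt the grasp once the object is actually seen.
pick = Sequence([
    Condition(detect_object),
    Action(move_to_object),
    Action(close_gripper),
])
print(pick.tick())  # -> Status.SUCCESS with these stand-in stubs
```

Because a sequence fails as soon as any child fails, the perception condition acts as a guard: the motion skills never run unless their preconditions hold, which is what makes trees like this safe to hand to end users.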
However, these plans are limited in how they can adapt to new environments on their own. New algorithms that combine perceptual abstractions with learning and motion planning allow the robot to adapt better both to new environments and to new tasks. In particular, we see that combining learned skill models with tree search allows for robust adaptation to new environments. The secret to humanlike generalization is to combine low-level motion policies with high-level task planning; it amounts to giving robots some measure of common sense.
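As one way to read "learned skill models with tree search" concretely, the sketch below runs a depth-limited search in which each skill exposes a learned success predictor and a learned forward model. The score/predict interface and the scoring rule (the product of predicted success probabilities) are assumptions made for illustration, not the exact algorithm from the talk.

```python
# Depth-limited tree search over learned skill models (illustrative sketch).
# Each skill is assumed to expose two learned functions:
#   skill.score(state)   -> probability the skill succeeds from `state`
#   skill.predict(state) -> predicted next state if the skill is executed
# The search expands skill sequences and returns the plan with the highest
# product of predicted success probabilities.

def plan(state, skills, goal_test, depth=3):
    """Return (best_score, best_plan) from `state` within `depth` steps."""
    if goal_test(state):
        return 1.0, []
    if depth == 0:
        return 0.0, []
    best_score, best_plan = 0.0, []
    for skill in skills:
        p_success = skill.score(state)
        if p_success <= 0.0:
            continue  # prune skills the model says cannot succeed here
        next_state = skill.predict(state)  # predicted effect, not executed
        sub_score, sub_plan = plan(next_state, skills, goal_test, depth - 1)
        score = p_success * sub_score
        if score > best_score:
            best_score, best_plan = score, [skill] + sub_plan
    return best_score, best_plan
```

Because every expansion consults the learned score, branches the models consider infeasible are pruned immediately, which is what keeps the search tractable in realistic environments.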
Together, these components allow for powerful behaviors that can adapt to new environments. Finally, we explore how the robot can generalize and reason about these high-level task decisions by using learned models. These models allow us to combine and execute lower-level behaviors by giving the robot an “imagination”: the ability to predict the effects of any individual policy on the world before acting.
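A minimal sketch of what such an “imagination” buys: given an assumed learned forward model that maps a state and an action to a predicted next state, candidate plans can be rolled out and vetted entirely in prediction before anything runs on the real robot. All names below are hypothetical.

```python
# Evaluating candidate plans purely in "imagination" (illustrative sketch).
# `forward_model` is an assumed learned predictor mapping (state, action)
# to the predicted next state; nothing here touches the real robot.

def imagine_rollout(forward_model, state, plan):
    """Predict the sequence of states produced by executing `plan`."""
    states = [state]
    for action in plan:
        state = forward_model(state, action)  # predicted effect, not real
        states.append(state)
    return states

def choose_plan(forward_model, state, candidate_plans, goal_test):
    """Return the first plan whose imagined outcome reaches the goal."""
    for plan in candidate_plans:
        final_state = imagine_rollout(forward_model, state, plan)[-1]
        if goal_test(final_state):
            return plan  # only this plan is handed to the real controller
    return None  # no candidate is predicted to succeed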
This combination of user interfaces, imitation learning, and new planning and machine learning algorithms will allow ordinary end users to create more complex, powerful, and understandable task plans for collaborative robots.
Speaker Biography
Chris Paxton received his B.S. in Computer Science with a minor in Neuroscience from the University of Maryland, College Park, in 2012. He then came to Johns Hopkins University in Baltimore, MD, where he held the Intuitive Surgical Fellowship from 2012 to 2014. His first project applied machine learning to electronic medical records for early prediction of septic shock; he then shifted focus to creating and executing task plans for collaborative robots. He worked on the original version of CoSTAR and led the team that won the 2016 KUKA Innovation Award in Hannover, Germany. Chris is interested in how we can let robots solve problems the way people do, both because it will help us build more useful systems and because it tells us something about ourselves. He plans to continue his research post-graduation as a research scientist with NVIDIA.