The field of reinforcement learning is concerned with the problem of learning efficient behavior from experience. In real-life applications, gathering this experience is time-consuming and possibly costly, so it is critical to derive algorithms that can learn effective behavior with bounds on the experience necessary to do so. This talk presents our successful efforts to create such algorithms via a framework we call KWIK (Knows What It Knows) learning. I’ll summarize the framework, our algorithms, their formal validations, and their empirical evaluations in robotic and video-game testbeds. This approach holds promise for attacking challenging problems in a number of application domains.
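As a rough illustration of the KWIK idea (not drawn from the talk itself), the sketch below shows a toy "memorization" learner for a deterministic function on a finite input set: for each query it either returns a prediction it can stand behind or explicitly answers "I don't know," and the number of "I don't know" responses is bounded by the number of distinct inputs. The class and variable names are illustrative assumptions, not code from the speaker's work.

```python
# Illustrative sketch only: a toy learner in the KWIK (Knows What It Knows)
# style. Names and the memorization strategy are assumptions for illustration.

UNKNOWN = "I don't know"  # the learner's explicit "don't know" response


class MemorizationKWIKLearner:
    """Predicts f(x) only for inputs it has already observed; otherwise it
    admits uncertainty. The number of UNKNOWN responses is bounded by the
    number of distinct inputs, which plays the role of the KWIK bound here."""

    def __init__(self):
        self.memory = {}

    def predict(self, x):
        # Either return a prediction known to be correct, or admit ignorance.
        return self.memory.get(x, UNKNOWN)

    def observe(self, x, y):
        # After answering UNKNOWN, the learner receives the true label.
        self.memory[x] = y


if __name__ == "__main__":
    learner = MemorizationKWIKLearner()
    true_f = {0: "left", 1: "right", 2: "left"}  # hidden target function

    for x in [0, 1, 0, 2, 1]:
        guess = learner.predict(x)
        if guess == UNKNOWN:
            learner.observe(x, true_f[x])  # experience is gathered only when needed
        print(x, "->", guess)
```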
Speaker Biography
Michael Littman joined the Computer Science Department at Brown University as a full professor after ten years (including three as department chair) at Rutgers University. His research in machine learning examines algorithms for decision making under uncertainty. Littman has earned multiple awards for his teaching and research. He has served on the editorial boards of the Journal of Machine Learning Research and the Journal of Artificial Intelligence Research. In 2013, he was general chair of the International Conference on Machine Learning (ICML) and program co-chair of the Association for the Advancement of Artificial Intelligence (AAAI) conference; he also served as program co-chair of ICML 2009.