Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve superhuman performance on various tasks. Ensuring that they are safe—that they do not, for example, cause harm to humans or act in a racist or sexist way—is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we can and should address now.
In this talk I will discuss some of my recent efforts to develop safe machine learning algorithms, particularly safe reinforcement learning algorithms, that can be applied responsibly in high-risk settings. I will focus on the article “Preventing undesirable behavior of intelligent machines,” recently published in Science, describing its contributions, our subsequent extensions, and important areas of future work.
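To illustrate the kind of guarantee at issue, below is a minimal Python sketch of a Seldonian-style safety test: the algorithm returns a candidate solution only if, with confidence at least 1 − δ, a measure of undesirable behavior stays below a tolerated threshold, and otherwise reports “No Solution Found.” The function name safety_test and the use of a one-sided Student's t confidence bound are illustrative assumptions, not the paper's exact construction.

    import numpy as np
    from scipy import stats

    def safety_test(harm_samples, threshold, delta=0.05):
        # Upper one-sided (1 - delta) confidence bound on the mean of
        # the harm measure, using a Student's t interval (an
        # illustrative choice of concentration bound).
        n = len(harm_samples)
        mean = np.mean(harm_samples)
        half_width = (stats.t.ppf(1 - delta, df=n - 1)
                      * np.std(harm_samples, ddof=1) / np.sqrt(n))
        return mean + half_width <= threshold

    # A Seldonian-style algorithm deploys a candidate only if the
    # safety test passes; otherwise it returns "No Solution Found".
    harm_samples = np.random.uniform(0.0, 0.1, size=500)  # hypothetical data
    if safety_test(harm_samples, threshold=0.2):
        print("deploy candidate")
    else:
        print("No Solution Found")

The design point is that the burden of proof is inverted: absent statistical evidence that the behavioral constraint holds, the algorithm declines to return a solution rather than risking unsafe behavior.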
Speaker Biography
Philip Thomas is an assistant professor at UMass. He received his PhD from UMass in 2015 under the supervision of Andy Barto, after which he worked as a postdoctoral research fellow at CMU for two years under the supervision of Emma Brunskill before returning to UMass. His research focuses on creating machine learning algorithms, particularly reinforcement learning algorithms, that provide high-probability guarantees of safety and fairness. He emphasizes that these algorithms are often applied by people who are experts in their own fields but not in machine learning and statistics, so the algorithms must be easy to apply responsibly. Notable accomplishments include the publication of a paper on this topic in Science, titled “Preventing undesirable behavior of intelligent machines,” and testimony before the U.S. House of Representatives Task Force on Artificial Intelligence at a hearing titled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.”