We are undergoing a revolution in data. As computer scientists, we have grown accustomed to constant upheaval in computing resources – quicker processors, bigger storage and faster networks – but this century presents the new challenge of almost unlimited access to raw information. Whether from sensor networks, social computing or high-throughput cell biology, we face a deluge of data about our world. We need to parse this information, to understand it, to use it to make better decisions. In this talk, I will discuss my work to confront this new challenge, developing new machine learning algorithms based on infinitely large probabilistic graphical models. In principle, these infinite representations allow us to analyze sophisticated and dynamic phenomena in a way that automatically balances simplicity and complexity – a mathematical Occam’s Razor. Our computers, however, are inevitably finite, so how can we use such tools in practice? I will describe how my approach leverages ideas from mathematical statistics to develop practical algorithms for inference in infinite models with finite computation. I will also show how combining a firm theoretical footing with practical computational concerns gives us tools that are useful both within computer science and beyond, in domains such as computer vision, computational neuroscience, biology and the social sciences.
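The abstract does not spell out the speaker's specific methods, but a standard textbook illustration of "inference in infinite models with finite computation" is the Chinese restaurant process prior used in Dirichlet process mixture models: the prior allows unboundedly many clusters, yet a sampler only ever instantiates the clusters actually occupied by the data. The minimal Python sketch below shows this idea; the function name sample_crp and its parameters are illustrative assumptions, not the speaker's implementation.

import random

def sample_crp(n_customers, alpha, seed=0):
    """Sample a partition of n_customers points from a Chinese restaurant
    process with concentration alpha.  The prior places mass on partitions
    with arbitrarily many clusters, but the sampler only stores the tables
    that are actually occupied -- finite computation over an infinite model.
    (Illustrative sketch only, not the speaker's algorithm.)"""
    rng = random.Random(seed)
    counts = []          # counts[k] = number of customers seated at table k
    assignments = []     # table index assigned to each customer
    for i in range(n_customers):
        # Existing table k is chosen with probability counts[k] / (i + alpha);
        # a brand-new table is opened with probability alpha / (i + alpha).
        weights = counts + [alpha]
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)     # open a new table
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

if __name__ == "__main__":
    tables, sizes = sample_crp(n_customers=100, alpha=2.0)
    print("occupied tables:", len(sizes), "sizes:", sizes)

Even with a hundred data points, only a handful of tables are ever represented in memory, which is the sense in which an infinite representation can be manipulated with finite resources.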
Speaker Biography
Ryan Adams is a Junior Research Fellow in the University of Toronto Department of Computer Science, affiliated with the Canadian Institute for Advanced Research. He received his Ph.D. in Physics from Cambridge University, where he was a Gates Cambridge Scholar under Prof. David MacKay. Ryan grew up in Texas and completed his undergraduate work in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. He has received several awards for his research, including the Best Paper Award at the 13th International Conference on Artificial Intelligence and Statistics.