Machine learning is subject to the limits of computation, and advances in algorithms can open up new possibilities for machine learning. The problem of nearest neighbour search arises commonly in machine learning; unfortunately, despite over 40 years of research, prior sublinear algorithms for exact nearest neighbour search suffer from the curse of dimensionality, that is, an exponential dependence of query time complexity on either the ambient or the intrinsic dimensionality. In the first part of this talk, I will present Dynamic Continuous Indexing (DCI), a new family of exact randomized algorithms that avoids exponential dependence on both the ambient and the intrinsic dimensionality. This advance enables us to develop a new method for generative modelling, known as Implicit Maximum Likelihood Estimation (IMLE), which I will present in the second part of the talk. IMLE can be shown to be equivalent to maximum likelihood under some conditions, and it simultaneously overcomes three fundamental issues of generative adversarial nets (GANs), namely mode collapse, vanishing gradients and training instability. I will illustrate why mode collapse happens in GANs and how IMLE overcomes it, and present empirical results on image synthesis. I will close with a brief discussion of another approach I introduced, known as Learning to Optimize.
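To give a concrete picture of the idea behind IMLE and of the role nearest neighbour search plays in it, the sketch below is an illustrative toy example, not the speaker's implementation: roughly, a pool of samples is drawn from the generator, each data point is matched to its nearest generated sample, and the matched samples are pulled towards their data points, so every data point attracts some sample and no mode can be dropped. The generator architecture, data, hyperparameters, and the brute-force matching step are all placeholder assumptions; at scale, the matching step is exactly the kind of nearest neighbour search that DCI is designed to accelerate.

```python
import torch

# Hypothetical toy generator: maps latent codes z in R^8 to points in R^2.
generator = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

# Toy multi-modal data: points scattered around several cluster centres.
data = torch.randn(256, 2) + torch.randint(0, 4, (256, 1)).float() * 4.0

for step in range(1000):
    z = torch.randn(512, 8)          # draw a pool of latent codes
    samples = generator(z)           # generate candidate samples

    # Match each data point to its nearest generated sample.
    # (Brute force here; DCI targets this step when the pool is large.)
    with torch.no_grad():
        dists = torch.cdist(data, samples)   # pairwise distances, 256 x 512
        nearest = dists.argmin(dim=1)        # nearest sample index per data point

    # Pull each matched sample towards its data point: because every data
    # point gets a match, no region of the data can be ignored (no mode collapse).
    loss = ((samples[nearest] - data) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```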
Speaker Biography
Ke Li is a Ph.D. candidate at UC Berkeley advised by Prof. Jitendra Malik. He is interested in a broad range of topics in machine learning, and also enjoys working on computer vision, natural language processing and algorithms. He is particularly passionate about tackling long-standing fundamental problems that resist a straightforward application of conventional techniques. He received his Hon. B.Sc. in Computer Science from the University of Toronto and is grateful for the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).