Latent variable models are widely used in applications to automatically recover simple underlying signals from noisy high-dimensional data. The challenge in estimating such models stems from the presence of hidden (unobserved) variables, and the local search methods typically used for this task (e.g., expectation-maximization, or EM) generally lack basic performance guarantees such as statistical consistency and computational efficiency. In this talk, I will discuss recent developments in linear algebraic methods for learning certain classes of latent variable models, including parameter estimation for hidden Markov models and structure learning of latent variable tree models. Unlike the local search heuristics, the proposed linear algebraic methods come with statistical and computational efficiency guarantees under mild conditions on the data distribution. Central to the new techniques is a characterization of the models in terms of low-order moments (e.g., averages, correlations) of observable variables, which are readily estimated from data.
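As a concrete illustration of the moment-based viewpoint, the following minimal sketch follows the general style of spectral HMM learning (as in the work of Hsu, Kakade, and Zhang): it estimates low-order moments of observation pairs and triples from data, then derives "observable operators" that can score observation sequences. All function names here are illustrative, and this is a simplified sketch under stated assumptions, not the exact algorithm presented in the talk.

```python
# Illustrative sketch of moment-based (spectral) HMM learning.
# Assumptions: discrete observations coded as integers 0..n_obs-1,
# each training sequence has length >= 3, and the true HMM satisfies
# the usual rank conditions so the pseudoinverses below are meaningful.

import numpy as np

def estimate_moments(sequences, n_obs):
    """Empirical low-order moments from the first three observations.

    P1[i]        ~ Pr[x1 = i]
    P21[i, j]    ~ Pr[x2 = i, x1 = j]
    P3x1[x, i, j] ~ Pr[x3 = i, x2 = x, x1 = j]
    """
    P1 = np.zeros(n_obs)
    P21 = np.zeros((n_obs, n_obs))
    P3x1 = np.zeros((n_obs, n_obs, n_obs))
    n = 0
    for seq in sequences:
        x1, x2, x3 = seq[0], seq[1], seq[2]
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2, x3, x1] += 1
        n += 1
    return P1 / n, P21 / n, P3x1 / n

def spectral_hmm(P1, P21, P3x1, n_states):
    """Observable operators (b1, binf, {Bx}) from the estimated moments."""
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :n_states]                    # top left singular vectors of P21
    UP21 = U.T @ P21
    b1 = U.T @ P1
    binf = np.linalg.pinv(UP21.T) @ P1
    B = [U.T @ P3x1[x] @ np.linalg.pinv(UP21) for x in range(P1.shape[0])]
    return b1, binf, B

def joint_prob(obs, b1, binf, B):
    """Pr[x1, ..., xt] ~ binf^T B_{xt} ... B_{x1} b1."""
    v = b1
    for x in obs:
        v = B[x] @ v
    return float(binf @ v)

# Usage sketch (placeholder data; real input would be HMM observation
# sequences): estimate moments, form operators, score a short sequence.
rng = np.random.default_rng(0)
data = rng.integers(0, 4, size=(5000, 3))
P1, P21, P3x1 = estimate_moments(data, n_obs=4)
b1, binf, B = spectral_hmm(P1, P21, P3x1, n_states=2)
print(joint_prob([0, 1, 2], b1, binf, B))
```

Note that every quantity the procedure touches is an observable moment or a linear algebraic transform of one, which is what yields consistency guarantees without the local optima that plague EM-style search.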
This talk is based on joint work with Anima Anandkumar, Kamalika Chaudhuri, Sham Kakade, Le Song, and Tong Zhang.
Speaker Biography
Daniel Hsu is a postdoctoral researcher at Microsoft Research New England. From 2010 to 2011, he was a postdoc with the Departments of Statistics at Rutgers University and the University of Pennsylvania, supervised by Tong Zhang and Sham M. Kakade. He received his Ph.D. in Computer Science from UC San Diego in 2010, where he was advised by Sanjoy Dasgupta, and his B.S. in Computer Science and Engineering from UC Berkeley in 2004. His research interests are in algorithmic statistics and machine learning.