Sparsity has been a driving force in signal and image processing and machine learning for decades. In this talk we will explore sparse representations based on dictionary learning from two perspectives: over-parameterization and adversarial robustness. First, we will characterize the surprising phenomenon that dictionary recovery can be facilitated by searching over the space of larger (over-realized, i.e., over-parameterized) models. This observation is general and independent of the specific dictionary learning algorithm used. We will demonstrate it in practice and provide a theoretical analysis by tying recovery measures to generalization bounds. We will further show that an efficient and provably correct distillation mechanism can be employed to recover the correct atoms from the over-realized model, consistently providing better recovery of the ground-truth model. We will then switch gears toward the analysis of adversarial examples, focusing on the hypothesis class obtained by combining a sparsity-promoting encoder with a linear classifier, and show an interesting interplay between the flexibility and stability of the (supervised) representation map and a notion of margin in the feature space. Leveraging a mild encoder-gap assumption on the learned representations, we will provide a bound on the generalization error of the robust risk under L2-bounded adversarial perturbations, as well as a robustness certificate for end-to-end classification. We will demonstrate the applicability of our analysis by computing certified accuracy on real data and comparing with other alternatives for certified robustness. This analysis will shed light on how to characterize this interplay for more general models.
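To make the over-realization idea concrete, here is a minimal, illustrative sketch (not the speaker's exact experimental setup or distillation procedure): synthetic signals are generated from a ground-truth dictionary with k atoms, dictionaries of increasing size p >= k are learned with an off-the-shelf algorithm, and recovery is scored by matching each true atom to its closest learned atom. The helper `recovery_score` and all sizes are our own choices for illustration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n, k, dim, sparsity = 2000, 20, 50, 3

# Ground-truth dictionary with unit-norm atoms (rows)
D_true = rng.standard_normal((k, dim))
D_true /= np.linalg.norm(D_true, axis=1, keepdims=True)

# Sparse codes with `sparsity` nonzeros per signal, plus small noise
codes = np.zeros((n, k))
for i in range(n):
    idx = rng.choice(k, size=sparsity, replace=False)
    codes[i, idx] = rng.standard_normal(sparsity)
X = codes @ D_true + 0.01 * rng.standard_normal((n, dim))

def recovery_score(D_learned, D_ref):
    """Mean best-match absolute cosine similarity of each true atom
    against the learned atoms (a simple proxy for recovery quality)."""
    D_learned = D_learned / np.linalg.norm(D_learned, axis=1, keepdims=True)
    sims = np.abs(D_ref @ D_learned.T)   # (k, p) pairwise similarities
    return sims.max(axis=1).mean()

# Exactly-realized (p = k) vs. over-realized (p > k) dictionary sizes
for p in (k, 2 * k, 4 * k):
    model = MiniBatchDictionaryLearning(n_components=p, alpha=0.1,
                                        max_iter=200, random_state=0)
    model.fit(X)
    print(f"p = {p:3d}  recovery = {recovery_score(model.components_, D_true):.3f}")
```

In this kind of toy experiment, comparing the recovery score across p is one simple way to probe whether larger (over-realized) models contain atoms closer to the ground truth; the talk's distillation mechanism for extracting the correct atoms from such models is not reproduced here.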
Speaker Biography
Jeremias Sulam is an assistant professor in the Biomedical Engineering Department at JHU, and a faculty member of the Mathematical Institute for Data Science (MINDS) and the Center for Imaging Science (CIS). He received his PhD in Computer Science from the Technion-Israel Institute of Technology in 2018. He is the recipient of the Best Graduates Award of the Argentinean National Academy of Engineering. His research interests include machine learning, signal and image processing, representation learning, and their applications to the biomedical sciences.