Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. We address this fragility from the network topology perspective. Specifically, we enforce appropriate forms of sparsity to serve as an implicit regularization in robust training. In this talk, I will first discuss how sparsity mitigates robust overfitting and leads to superior robust generalization. Then, I will present the beneficial role sparsity plays in certified robustness. Finally, I will show that sparsity can also function as an effective detector to uncover viciously injected Trojan patterns.
Speaker Biography
Tianlong Chen is a fourth-year Ph.D. candidate in Electrical and Computer Engineering at the University of Texas at Austin, advised by Dr. Zhangyang (Atlas) Wang. Before coming to UT Austin, Tianlong received his Bachelor's degree at the University of Science and Technology of China. His research focuses on building accurate, efficient, robust, and automated machine learning systems. Recently, Tianlong has been investigating extremely sparse neural networks with undamaged trainability, expressivity, and transferability, as well as the implicit regularization effects of appropriate sparsity patterns on data efficiency, generalization, and robustness. Tianlong has published more than 70 papers at top-tier venues (NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, etc.). He is a recipient of the 2021 IBM Ph.D. Fellowship Award, the 2021 Graduate Dean's Prestigious Fellowship, and the 2022 Adobe Ph.D. Fellowship Award. Tianlong has conducted research internships at Google, IBM Research, Facebook Research, Microsoft Research, and Walmart Technology.