Neural networks have become a crucial element in modern artificial intelligence. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify trustworthiness properties such as safety and robustness. In this talk, I will first introduce the problem of neural network verification and the challenges of guaranteeing the behavior of a neural network under given input specifications. Then, I will discuss bound-propagation-based algorithms (e.g., CROWN and beta-CROWN), which are efficient, scalable, and powerful techniques for the formal verification of neural networks, and which also generalize to computational graphs beyond neural networks. My talk will highlight the state-of-the-art verification techniques used in our α,β-CROWN (alpha-beta-CROWN) verifier, which won the 2nd International Verification of Neural Networks Competition (VNN-COMP'21), as well as novel applications of neural network verification.
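To make the bound propagation idea concrete, the sketch below shows interval bound propagation, a simpler precursor of CROWN-style methods. It is not the CROWN or beta-CROWN algorithm itself, and the toy network, weights, and input region are hypothetical, chosen only for illustration:

# A minimal sketch of interval bound propagation (IBP), a simpler
# precursor to CROWN-style linear bound propagation. The toy network,
# weights, and input region below are hypothetical.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)   # positive entries of W
    W_neg = np.minimum(W, 0.0)   # negative entries of W
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])

# Input specification: an L-infinity ball of radius 0.1 around x0.
x0 = np.array([0.5, 0.5]); eps = 0.1
lo, hi = x0 - eps, x0 + eps

lo, hi = interval_relu(*interval_affine(W1, b1, lo, hi))
lo, hi = interval_affine(W2, b2, lo, hi)

# If the certified lower bound exceeds 0, the output property
# "f(x) > 0 for all x in the input region" is formally verified.
print(f"certified output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")

Running this prints a sound (though loose) certified output interval; CROWN and beta-CROWN tighten such bounds by propagating linear relaxations through the network rather than plain intervals.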
Speaker Biography
Huan Zhang is a postdoctoral researcher at CMU, supervised by Prof. Zico Kolter. He received his Ph.D. degree from UCLA in 2020. Huan’s research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Huan was awarded an IBM Ph.D. Fellowship, and he led the winning team in the 2021 International Verification of Neural Networks Competition. Huan received the 2021 AdvML Rising Star Award sponsored by the MIT-IBM Watson AI Lab.