Summer 2022
Institute for Assured Autonomy & Computer Science Seminar Series
June 21, 2022
Abstract: Marzyeh Ghassemi focuses on creating and applying machine learning to understand and improve health in ways that are robust, private, and fair. Improvements in health improve lives, yet we still do not fundamentally understand what it means to be healthy, and the same patient may receive different treatments across hospitals or clinicians as new evidence is discovered or individual illness is interpreted. Ghassemi will discuss her work on training models that do not learn biased rules or recommendations that harm minorities or minoritized populations. Her Healthy ML group tackles the many novel technical opportunities for machine learning in health and works to make important progress through careful application to this domain.
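The abstract above centers on models whose behavior differs across patient subgroups. As a purely illustrative aside (not material from the talk), the short Python sketch below shows one common way such gaps are audited: comparing a model's true-positive rate across demographic groups. The labels, predictions, and group assignments are made up.

```python
# Illustrative sketch: auditing a clinical classifier for group-level
# performance gaps. All data below is hypothetical.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def tpr_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate between groups
    (one simple proxy for an equal-opportunity violation)."""
    rates = {g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    values = list(rates.values())
    return rates, max(values) - min(values)

# Toy example: predictions for two demographic groups "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = tpr_gap(y_true, y_pred, groups)
print(rates, gap)
```

A large gap of this kind is one signal that a model has learned rules that serve some groups worse than others.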
Speaker Biography: Marzyeh Ghassemi is an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology. She is also a faculty member in the Vector Institute for Artificial Intelligence and holds positions as a Canadian Institute for Advanced Research AI Chair and a Canada Research Chair. Ghassemi holds additional MIT affiliations with the Jameel Clinic and the Computer Science and Artificial Intelligence Laboratory. She is the recipient of a Herman L. F. von Helmholtz Career Development Professorship, was named a CIFAR Azrieli Global Scholar, and is one of MIT Technology Review’s “35 Innovators Under 35.” Previously, she was a visiting researcher with Alphabet’s Verily and an assistant professor at the University of Toronto. Prior to completing her PhD in computer science at MIT, Ghassemi received an MSc in biomedical engineering as a Marshall Scholar at Oxford University and BS degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University.
CS Seminar Series
July 7, 2022
Applications often have fast-paced release schedules, but adoption of software dependency updates can lag by years, leaving applications susceptible to security risks and unexpected breakage. To address this problem, we present UPGRADVISOR, a system that reduces developer effort in evaluating dependency updates and can, in many cases, automatically determine which updates are backward-compatible versus API-breaking. UPGRADVISOR introduces a novel co-designed static analysis and dynamic tracing mechanism to gauge the scope and effect of dependency updates on an application. Static analysis prunes changes irrelevant to an application and clusters relevant ones into targets. Dynamic tracing needs to focus only on whether targets affect an application, making it fast and accurate. UPGRADVISOR handles dynamic interpreted languages and introduces call graph over-approximation to account for their lack of type information and selective hardware tracing to capture program execution while ignoring interpreter machinery. We have implemented UPGRADVISOR for Python and evaluated it on 172 dependency updates previously blocked from being adopted in widely-used open-source software, including Django, aws-cli, tfx, and Celery. UPGRADVISOR automatically determined that 56% of dependencies were safe to update and reduced by more than an order of magnitude the number of code changes that needed to be considered by dynamic tracing. Evaluating UPGRADVISOR’s tracer in a production-like environment incurred only 3% overhead on average, making it fast enough to deploy in practice. We submitted safe updates that were previously blocked as pull requests for nine projects, and their developers have already merged most of them.
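To make the static-pruning step described above concrete, here is a minimal, hedged sketch of the underlying idea, not the actual UPGRADVISOR implementation: given an over-approximated call graph and the set of functions changed by a dependency update, keep only the changed functions the application can reach, so dynamic tracing has far fewer targets to check. All names and the toy call graph are hypothetical.

```python
# Sketch: prune dependency changes to those an application could exercise.
from collections import deque

def reachable(call_graph, entry_points):
    """Over-approximate the set of functions reachable from the app."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def prune_update_targets(call_graph, entry_points, changed_functions):
    """Changes in the updated dependency that the application may reach.
    Only these 'targets' would need to be checked dynamically."""
    return set(changed_functions) & reachable(call_graph, entry_points)

# Toy call graph: app.main -> dep.parse -> dep.helper; dep.unused is never called.
call_graph = {
    "app.main": ["dep.parse"],
    "dep.parse": ["dep.helper"],
}
changed = {"dep.parse", "dep.unused"}
print(prune_update_targets(call_graph, ["app.main"], changed))  # {'dep.parse'}
```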
Speaker Biography: Yaniv David is a postdoctoral researcher at Columbia University working with Junfeng Yang. His research focuses on improving the reliability and safety of software. He is broadly interested in program analysis, systems, and machine learning. He received his PhD from the Technion, where he was advised by Eran Yahav.
CS Seminar Series
July 14, 2022
Over the past few decades, Mixed Reality has emerged as a technology capable of enriching human perception by generating virtual content that consistently coexists and interacts with the real world. Although this content can be delivered through any of the senses, vision-based applications have drawn particular attention from the research community. This Mixed Reality modality has proven particularly valuable in guiding users during tasks that require the manipulation and alignment of real and virtual objects. However, correctly estimating the depth of virtual content remains challenging, and errors frequently lead to inaccurate placement of the objects of interest.
This talk introduces fundamental concepts of visual perception and their relevance to the design and implementation of Mixed Reality applications. It explores how our visual system uses multiple cues to gather information from the environment and estimate the depth of objects, and why reproducing these cues is particularly challenging when creating Mixed Reality experiences. In addition, it demonstrates how integrating these concepts can enhance users' perception of this technology. Finally, it showcases how these fundamental concepts can be transferred to medical applications and discusses how they can shape the future of healthcare.
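As a small, self-contained illustration of one classic depth cue from the perception literature (not code from the talk), the sketch below computes depth from binocular disparity for an idealized stereo rig; the focal length, baseline, and disparity values are made up.

```python
# Depth from binocular (stereo) disparity for a pinhole stereo rig:
# with focal length f (pixels) and baseline b (meters), a point with
# pixel disparity d lies at depth Z = f * b / d. Values are illustrative.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A virtual object rendered with too little disparity appears farther away
# than intended, which is one source of the placement errors noted above.
print(depth_from_disparity(focal_px=1000.0, baseline_m=0.064, disparity_px=16.0))  # 4.0 m
```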
Speaker Biography: Alejandro Martin Gomez is a postdoctoral fellow in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University. Before joining Johns Hopkins, Alejandro completed his Ph.D. in Computer Science at the Technical University of Munich, from which he graduated summa cum laude. His research interests include the study of fundamental concepts of visual perception and their transferability to medical applications that involve augmented and virtual reality. His work has been published in some of the most prestigious journals and conferences in these fields, including the IEEE International Symposium on Mixed and Augmented Reality, the IEEE Conference on Virtual Reality and 3D User Interfaces, and IEEE Transactions on Visualization and Computer Graphics. Alejandro has also served as a mentor and advisor for several students and scholars at the Technical University of Munich, Johns Hopkins University, and, more recently, the Friedrich-Alexander University of Erlangen-Nürnberg. In addition, he participates in several editorial activities and has served as a program committee member for the International Symposium on Mixed and Augmented Reality in 2016, 2018, and 2021.
Institute for Assured Autonomy & Computer Science Seminar Series
July 19, 2022
Abstract: Neural networks have become a crucial element in modern artificial intelligence. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness with respect to safety and robustness. In this talk, Huan Zhang will first introduce the problem of neural network verification and the challenges of guaranteeing the behavior of a neural network given input specifications. He will then discuss bound-propagation-based algorithms (e.g., CROWN and beta-CROWN), which are efficient, scalable, and powerful techniques for the formal verification of neural networks and which also generalize to computational graphs beyond neural networks. The talk will highlight state-of-the-art verification techniques used in his α,β-CROWN (alpha-beta-CROWN) verifier, which won the 2nd International Verification of Neural Networks Competition, as well as novel applications of neural network verification.
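For readers unfamiliar with bound propagation, the following hedged sketch shows interval bound propagation (IBP), a simpler relative of the linear bound propagation used in CROWN and beta-CROWN: elementwise input bounds are pushed through affine and ReLU layers to obtain sound output bounds. The two-layer network and the input box below are illustrative only, not from the talk.

```python
# Interval bound propagation through a toy two-layer ReLU network.
import numpy as np

def affine_bounds(lower, upper, W, b):
    """Sound output bounds of x -> W @ x + b over the input box [lower, upper]."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius
    return out_center - out_radius, out_center + out_radius

def relu_bounds(lower, upper):
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy network: y = W2 @ relu(W1 @ x + b1) + b2, with illustrative weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

x_low, x_up = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
l, u = affine_bounds(x_low, x_up, W1, b1)
l, u = relu_bounds(l, u)
l, u = affine_bounds(l, u, W2, b2)
print(l, u)  # certified output range; if it stays within the spec, the property holds
```

Verifiers like CROWN tighten these bounds with linear relaxations of the ReLU, but the overall flavor of propagating bounds layer by layer is the same.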
Speaker Biography: Huan Zhang is a postdoctoral researcher at Carnegie Mellon University supervised by Professor Zico Kolter. Zhang received his PhD from the University of California, Los Angeles, in 2020. His research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Zhang was awarded an IBM PhD Fellowship and led the winning team in the 2021 International Verification of Neural Networks Competition. He also received the 2021 New Frontiers in Adversarial Machine Learning Rising Star Award, sponsored by the MIT-IBM Watson AI Lab.
CS Seminar Series
July 21, 2022
Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. We address this fragility from the network topology perspective; specifically, we enforce appropriate forms of sparsity to serve as an implicit regularization in robust training. In this talk, I will first discuss how sparsity fixes robust overfitting and leads to superior robust generalization. Then, I will present the beneficial role sparsity plays in certified robustness. Finally, I will show that sparsity can also function as an effective detector to uncover maliciously injected Trojan patterns.
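As a concrete, purely illustrative example of one sparsity form the abstract alludes to, the sketch below performs unstructured global magnitude pruning, producing a binary mask that can be held fixed during subsequent robust training; the weight matrix and sparsity level are made up.

```python
# Unstructured global magnitude pruning: zero out the smallest-magnitude
# weights and keep a fixed binary mask. Illustrative values only.
import numpy as np

def magnitude_prune_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Binary mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(np.floor(sparsity * weights.size))
    if k == 0:
        return np.ones_like(weights)
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

W = np.array([[0.8, -0.05, 0.3], [-0.01, 1.2, -0.4]])
mask = magnitude_prune_mask(W, sparsity=0.5)
W_sparse = W * mask  # in robust training, the mask would be re-applied after each update
print(mask)
print(W_sparse)
```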
Speaker Biography: Tianlong Chen is currently a fourth-year Ph.D. candidate in Electrical and Computer Engineering at the University of Texas at Austin, advised by Dr. Zhangyang (Atlas) Wang. Before coming to UT Austin, Tianlong received his bachelor's degree at the University of Science and Technology of China. His research focuses on building accurate, efficient, robust, and automated machine learning systems. Recently, Tianlong has been investigating extreme sparse neural networks with undamaged trainability, expressivity, and transferability, as well as the implicit regularization effects of appropriate sparsity patterns on data efficiency, generalization, and robustness. Tianlong has published more than 70 papers at top-tier venues (NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, etc.). He is a recipient of the 2021 IBM Ph.D. Fellowship Award, the 2021 Graduate Dean's Prestigious Fellowship, and the 2022 Adobe Ph.D. Fellowship Award. Tianlong has conducted research internships at Google, IBM Research, Facebook Research, Microsoft Research, and Walmart Technology.
Institute for Assured Autonomy & Computer Science Seminar Series
August 16, 2022
Abstract: Fueled by massive amounts of data, models produced by machine learning algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems, such as self-driving cars and aviation, where an adversary can cause serious consequences. Interest in this area of research has exploded in recent years. In this talk, Somesh Jha will emphasize the need for a security mindset in trustworthy machine learning and cover some lessons learned.
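To give a flavor of why adversarial manipulation is a concern in such settings, here is a hedged, toy illustration (not from the talk) of a fast-gradient-sign-style perturbation against a tiny linear "malware detector"; the weights and feature vector are hypothetical.

```python
# Toy evasion attack on a linear classifier with illustrative parameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.7])   # model weights (hypothetical)
b = -0.2
x = np.array([0.9, 0.1, 0.4])    # a sample the model flags as malicious

p_clean = sigmoid(w @ x + b)

# For a linear model, the gradient of the positive-class logit w.r.t. the
# input is just w; stepping against its sign lowers the detection score.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

print(f"clean score {p_clean:.2f} -> adversarial score {p_adv:.2f}")
```

Even this one-step perturbation pushes the score below the usual 0.5 threshold, which is the kind of failure mode that motivates a security mindset in ML deployments.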
Speaker Biography: Somesh Jha received his BTech in electrical engineering from the Indian Institute of Technology Delhi and his PhD in computer science from Carnegie Mellon University under the supervision of Professor Edmund Clarke, a Turing Award winner. Currently, Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin–Madison. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and the analysis of malicious code. Recently, Jha has focused his interests on privacy and adversarial machine learning. He has published several articles in highly refereed conferences and prominent journals and has won numerous Best Paper and Distinguished Paper Awards. Jha is a fellow of the American Association for the Advancement of Science, the ACM, and the Institute of Electrical and Electronics Engineers.