View the recording >>

Gerald M. Masson Distinguished Lecture Series

September 28, 2023

Abstract: Now that a significant fraction of human knowledge has been shared through the internet, scraped and squashed into the weights of large language models, do we still need embodiment and interaction with the physical world to build representations? Is there a dichotomy between LLMs and “large world models”? What is the role of visual perception in learning such models? Can perceptual agents trained by passive observation learn world models suitable for control? To begin tackling these questions, Stefano Soatto will first address the issue of controllability of LLMs and propose a simplistic definition of “meaning” that reflects the functional characteristics of a trained LLM. He will show that a well-trained LLM establishes a topology in the space of meanings, represented by equivalence classes of the trajectories of an underlying dynamical model. He will then describe necessary and sufficient conditions for controllability in such a space of meanings. He will highlight the relation between the meanings a trained LLM induces on the set of sentences that can be uttered and the “physical scenes” underlying the sets of images that can be observed. Lastly, he will show that popular models ostensibly used to represent the 3D scene (neural radiance fields, or NeRFs) can at most represent the images on which they are trained, not the underlying physical scene; however, composing a NeRF with a latent diffusion model or another inductively trained generative model yields a viable representation of the physical scene.
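
To make the dynamical-systems framing concrete, here is a toy sketch (not Soatto's formal construction): an autoregressive model viewed as a discrete-time system whose state is the token prefix, with "meanings" as equivalence classes of prompts that drive the system to the same continuation. All names and rules below are illustrative.

```python
# Toy sketch: an autoregressive language model as a discrete-time
# dynamical system, with "meanings" as equivalence classes of
# trajectories. Purely illustrative; not the speaker's formalism.

from collections import defaultdict

def next_token(prefix: tuple) -> str:
    """Stand-in for a trained LLM's next-token function. Deterministic
    for simplicity; a real model returns a distribution."""
    rules = {
        ("2", "+", "2", "="): "4",
        ("two", "plus", "two", "is"): "4",
        ("2", "+", "3", "="): "5",
    }
    return rules.get(prefix, "<eos>")

def rollout(prompt: tuple, steps: int = 3) -> tuple:
    """Evolve the system: repeatedly append the next token to the state."""
    state = prompt
    for _ in range(steps):
        tok = next_token(state)
        if tok == "<eos>":
            break
        state = state + (tok,)
    return state

# Two prompts are "equivalent in meaning" (in this toy) when they
# drive the system to the same continuation.
classes = defaultdict(list)
for prompt in [("2", "+", "2", "="), ("two", "plus", "two", "is"),
               ("2", "+", "3", "=")]:
    continuation = rollout(prompt)[len(prompt):]
    classes[continuation].append(prompt)

for meaning, prompts in classes.items():
    print(meaning, "<-", prompts)
```

Running this groups the two phrasings of "two plus two" into one equivalence class and the third prompt into another, which is the intuition behind a topology on the space of meanings.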

Speaker Biography: Stefano Soatto is a professor of computer science at the University of California, Los Angeles and a vice president at Amazon Web Services, where he leads its AI labs. Prior to joining UCLA, he was an associate professor of biomedical and electrical engineering at Washington University in St. Louis, an assistant professor of mathematics at the University of Udine, and a postdoctoral scholar in applied science at Harvard University. Before discovering the joy of engineering at the University of Padova under the guidance of Giorgio Picci, Soatto studied classics, participated in the Certamen Ciceronianum Arpinas, co-founded the jazz fusion quintet Primigenia, skied competitively, and rowed single scull for the Italian National Rowing Team. Many broken bones later, he now considers a daily run around the block an achievement. Soatto received the Siemens Best Paper Award at the 1998 Conference on Computer Vision and Pattern Recognition with the late Roger Brockett; the Marr Prize at the 1999 International Conference on Computer Vision with Jana Kosecka, Yi Ma, and Shankar Sastry; and the Best Paper Award at the IEEE International Conference on Robotics and Automation 2015 with Konstantine Tsotsos and Joshua Hernandez. He is a fellow of the Institute of Electrical and Electronics Engineers and of the ACM. At Amazon, Soatto is now responsible for the research and development leading to products such as Amazon Kendra (search), Amazon Lex (conversational bots), Amazon Personalize (recommendation), Amazon Textract (document analysis), Amazon Rekognition (computer vision), Amazon Transcribe (speech recognition), Amazon Forecast (time series), Amazon CodeWhisperer (code generation), and most recently Amazon Bedrock (foundation models as a service) and Titan (GenAI). Prior to joining AWS, he was a senior advisor at NuTonomy, the first company to launch an autonomous taxi service in Singapore (now known as Motional), and a consultant for Qualcomm from the inception of its augmented and virtual reality efforts. From 2004 to 2005, Soatto co-led the UCLA/Golem Team in the second DARPA Grand Challenge with Emilio Frazzoli and Amnon Shashua. Soatto received his PhD in control and dynamical systems from the California Institute of Technology in 1996.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

October 17, 2023

Abstract: How do we make machine learning as rigorously tested and reliable as any medication or diagnostic test? ML has the potential to improve decision-making in health care, from predicting treatment effectiveness to diagnosing disease. However, standard retrospective evaluations can give a misleading sense of how well models will perform in practice. Evaluation of ML-derived treatment policies can be biased when using observational data, and predictive models that perform well in one hospital may perform poorly in another. In this talk, I will introduce new tools to proactively assess and improve the reliability of machine learning in health care. A central theme will be the application of external knowledge, including review of patient records, incorporation of limited clinical trial data, and interpretable stress tests. Throughout, I will discuss how evaluation can directly inform model design.
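
For context, a standard correction for the observational bias the abstract mentions is inverse propensity weighting. The sketch below, on synthetic data, contrasts a naive treated-versus-untreated comparison with an IPW estimate; it is a generic illustration of the problem, not the speaker's method.

```python
# Minimal sketch of inverse propensity weighting (IPW) for evaluating
# a treatment effect from observational data. Synthetic and purely
# illustrative; not the speaker's specific approach.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Observational data: clinicians treat sicker patients more often.
severity = rng.uniform(0, 1, n)            # confounder
propensity = 0.2 + 0.6 * severity          # P(treat | severity)
treated = rng.uniform(size=n) < propensity
# Outcome: treatment helps (+0.3); severity hurts.
outcome = 0.3 * treated - 0.5 * severity + rng.normal(0, 0.1, n)

# Naive comparison is biased because treated patients are sicker.
naive = outcome[treated].mean() - outcome[~treated].mean()

# IPW: reweight each observation by the inverse probability of the
# action it actually received, then compare the weighted means.
w = np.where(treated, 1 / propensity, 1 / (1 - propensity))
y1 = (np.where(treated, outcome, 0.0) * w).mean()
y0 = (np.where(~treated, outcome, 0.0) * w).mean()

print(f"naive: {naive:+.3f}  ipw: {y1 - y0:+.3f}  truth: +0.300")
```

The naive estimate lands well below the true effect, while the reweighted estimate recovers it, illustrating why retrospective evaluations can mislead.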

Speaker Biography: Michael Oberst is an incoming assistant professor of computer science at Johns Hopkins and is currently a postdoc in the Machine Learning Department at Carnegie Mellon University. His research focuses on making sure that machine learning in health care is safe and effective, using tools from causal inference and statistics. His work has been published at a range of machine learning venues (NeurIPS, ICML, AISTATS, KDD), including work with clinical collaborators from Mass General Brigham, NYU Langone, and Beth Israel Deaconess Medical Center. He has also worked on clinical applications of machine learning, including work on learning effective antibiotic treatment policies (published in Science Translational Medicine). He earned his undergraduate degree in statistics at Harvard and his PhD in computer science at MIT.

View the recording >>

Computer Science Seminar Series

October 19, 2023

Abstract: The security and architecture communities will remember the past five years as the era of side channels. Starting from Spectre and Meltdown, time and again we have seen how basic performance-improving features can be exploited to violate fundamental security guarantees. Worse still, the rise of side channels points to a much larger problem, namely the presence of large gaps in the hardware-software execution contract on modern hardware. In this talk, I will give an overview of this gap, in terms of both security and performance. First, I will give a high-level survey of speculative execution attacks such as Spectre and Meltdown. I will then talk about how speculative attacks are still a threat to both kernel and browser isolation primitives, highlighting new issues on emerging architectures. Next, from the performance perspective, I will discuss new techniques for microarchitectural code optimizations, with an emphasis on cryptographic protocols and other compute-heavy workloads. Here I will show how seemingly simple, functionally equivalent code modifications can lead to significant changes in the underlying microarchitectural behavior, resulting in dramatic performance improvements. The talk will be interactive and include attack demonstrations.
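
A faithful Spectre demonstration requires native code and careful microarchitectural control, but the underlying side-channel idea (secret-dependent execution time leaks information) can be illustrated with a toy timing attack on a non-constant-time comparison. This is a generic illustration, not one of the speaker's attacks; the secret and alphabet are made up.

```python
# Toy timing side channel: an early-exit string comparison whose
# running time grows with the length of the matching prefix, letting
# an attacker recover the secret one character at a time. Not a
# speculative-execution attack; purely an illustration of the idea.

import time

SECRET = "m4sson"  # hypothetical secret, for the demo only

def insecure_check(guess: str) -> bool:
    """Early-exit comparison: time grows with the matching prefix."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(1e-4)  # exaggerate per-character work for the demo
    return True

def best_next_char(prefix: str, length: int) -> str:
    """Pick the candidate character whose guess takes longest to check."""
    timings = {}
    for c in "abcdefghijklmnopqrstuvwxyz0123456789":
        guess = (prefix + c).ljust(length, "_")
        t0 = time.perf_counter()
        insecure_check(guess)
        timings[c] = time.perf_counter() - t0
    return max(timings, key=timings.get)

recovered = ""
for _ in range(len(SECRET)):
    recovered += best_next_char(recovered, len(SECRET))
print("recovered:", recovered)  # prints "m4sson" with high probability
```

Spectre-class attacks follow the same recipe at a much lower level: instead of wall-clock comparison time, they measure cache-access latencies left behind by speculatively executed instructions.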

Speaker Biography: Daniel Genkin is an Alan and Anne Taetle Early Career Associate Professor at the School of Cybersecurity and Privacy at Georgia Tech. Daniel’s research interests are in hardware and system security, with particular focus on side channel attacks and defenses. Daniel’s work has won the Distinguished Paper Award at IEEE Security and Privacy, an IEEE Micro Top Pick, and Black Hat Pwnie Awards, as well as top-three paper awards at multiple conferences. Most recently, Daniel was part of the team that performed the first analysis of speculative and transient execution, resulting in the discovery of Spectre, Meltdown, and their follow-ups. Daniel has a PhD in computer science from the Technion (Israel Institute of Technology) and was a postdoctoral fellow at the University of Pennsylvania and the University of Maryland.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

October 24, 2023

Abstract: The field of natural language processing has seen rapid growth in recent years, driven by advances in machine learning. As a result, NLP models have become increasingly large and complex, capable of performing a wide range of tasks and understanding many languages. This talk will discuss the evolution of NLP models from specialized to generalist, drawing on the speaker’s personal experience working in the field for over ten years. Topics covered will include: building massively multilingual and large neural machine translation systems (M4); infrastructure and system research enabling the construction of such systems (GPipe, GShard); foundational machine learning research that has made state-of-the-art performance more predictable (a.k.a. scaling laws); deployment and organizational challenges in the large language model space; and more. The talk will conclude with a discussion of the future of NLP and the potential of LLMs to unlock new frontiers.
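
For context, the scaling laws the abstract refers to are usually stated as empirical power laws relating test loss to model size, data, and compute. The form below is the commonly cited one (cf. Kaplan et al., 2020), given here as background rather than as the speaker's exact formulation.

```latex
% Commonly cited power-law form of neural scaling laws
% (cf. Kaplan et al., 2020). L = test loss, N = parameters,
% D = training tokens, C = compute; N_c, D_c, C_c and the
% exponents \alpha_N, \alpha_D, \alpha_C are empirically fitted.
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The practical payoff is predictability: fitting these curves on small models lets practitioners forecast the loss of much larger training runs before committing the compute.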

View the recording >>

Gerald M. Masson Distinguished Lecture Series

November 2, 2023

Abstract: Transparency—enabling appropriate understanding of AI models or systems—is considered a pillar of responsible AI. The AI research community and industry have developed an abundance of techniques and artifacts in the hope of achieving transparency, including transparent model reporting, model evaluation, explainable AI, and communication of model uncertainty. Meanwhile, the human-computer interaction community has taken human-centered approaches to these topics, building on its long-standing interests in design to support user understanding and appropriate mental models. In this talk, Q. Vera Liao will give an overview of common approaches and lessons learned from HCI research on AI transparency. With the recent rise of large language models and LLM-infused systems, she will also reflect on their unique challenges in providing transparency and discuss open questions.

Speaker Biography: Q. Vera Liao is a principal researcher at Microsoft Research Montréal, where she is part of the Fairness, Accountability, Transparency, and Ethics in AI group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI, with an overarching goal of bridging emerging AI technologies and human-centered design practices. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Liao has authored more than 60 peer-reviewed research articles and has received many paper awards at ACM and AAAI (Association for the Advancement of Artificial Intelligence) venues.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

November 13, 2023

Abstract: Accelerators are everywhere in computer systems research today, and in this context, it is easy to see “smart” Network Interface Cards—which embed some programmable compute power in the network interface of the server—as simply another chapter in this story. In this talk, Justine Sherry will discuss how SmartNICs do more than simply “accelerate” network-intensive computing. Instead, she argues that SmartNICs radically reorient the control of data movement within the server in a way that is necessary for server hardware to scale with the deluge of network traffic in data centers. She will illustrate a few exemplary systems—Pigasus IDS, the Ensō NIC protocol, and the KOPI OS architecture—from her own research group, as well as exciting work in other labs, all of which reorient server data movement using programmable NICs.

Speaker Biography: Justine Sherry is an associate professor at Carnegie Mellon University. Her interests are in software and hardware networked systems, and her work includes middleboxes, field-programmable gate array packet processing, measurement, cloud computing, and congestion control. Her research has been awarded the VMware Systems Research Award, the Applied Networking Research Prize, a Google Faculty Research Award, the ACM Special Interest Group on Data Communication (SIGCOMM) Doctoral Dissertation Award, the David J. Sakrison Memorial Prize, and paper awards at the USENIX Symposium on Operating Systems Design and Implementation, the USENIX Symposium on Networked Systems Design and Implementation, and ACM SIGCOMM. She is a member of the ACM International Conference on emerging Networking Experiments and Technologies steering committee, the DARPA Information Science & Technology Study Group, and the SIGCOMM Committee to Aid Reporting on Discrimination and Harassment Policy Violations. Most importantly, Sherry is always on the lookout for a great cappuccino. She received her PhD (2016) and MS (2012) from the University of California, Berkeley, and her BS and BA (2010) from the University of Washington.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

November 14, 2023

Abstract: Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Imandra, we have pioneered the application of formal verification to financial markets, where firms like Goldman Sachs, Itiviti, and OneChronos already rely upon Imandra’s algorithm governance tools for the design, regulation, and calibration of many of their most complex algorithms. With a focus on financial infrastructure (e.g., the matching logics of national exchanges and dark pools), we will describe the landscape and illustrate our Imandra system on a number of real-world examples. We’ll sketch many open problems and future directions along the way.
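
Imandra models are written in IML, a subset of OCaml, and are verified formally rather than merely tested. The Python toy below only illustrates the kind of matching-logic property (price-time priority, no trading through a limit price) that such tools reason about; it is not Imandra's model or API, and all names are illustrative.

```python
# Toy sketch of a matching-logic property of the kind formal tools
# like Imandra verify over exchange models. This version merely
# *tests* the property on an example; a formal tool proves it for
# all inputs. Illustrative only.

from dataclasses import dataclass

@dataclass
class Order:
    oid: int
    side: str     # "buy" or "sell"
    price: float  # limit price
    qty: int
    time: int     # arrival order

def match_once(buys, sells):
    """Match the best buy against the best sell under price-time priority."""
    buys = sorted(buys, key=lambda o: (-o.price, o.time))
    sells = sorted(sells, key=lambda o: (o.price, o.time))
    if buys and sells and buys[0].price >= sells[0].price:
        qty = min(buys[0].qty, sells[0].qty)
        # Trade prints at the resting (earlier-arriving) order's price.
        resting = min(buys[0], sells[0], key=lambda o: o.time)
        return (buys[0].oid, sells[0].oid, resting.price, qty)
    return None

def no_limit_violation(trade, buys, sells):
    """Property: neither side trades through its own limit price."""
    if trade is None:
        return True
    buy_id, sell_id, px, _ = trade
    buy = next(o for o in buys if o.oid == buy_id)
    sell = next(o for o in sells if o.oid == sell_id)
    return sell.price <= px <= buy.price

buys = [Order(1, "buy", 10.1, 100, 0), Order(2, "buy", 10.3, 50, 1)]
sells = [Order(3, "sell", 10.2, 80, 2)]
trade = match_once(buys, sells)
print(trade, no_limit_violation(trade, buys, sells))
```

The difference a formal approach makes is universal quantification: instead of checking `no_limit_violation` on a few hand-picked books, the property is proved (or a counterexample is produced) for every possible order book.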

Speaker Biography: Grant Passmore is the co-founder and co-CEO of Imandra Inc. Passmore is a widely published researcher in formal verification and symbolic AI and has more than fifteen years of industrial formal verification experience. He has been a key contributor to the safety verification of algorithms at Cambridge, Carnegie Mellon, Edinburgh, Microsoft Research, and SRI. He earned his PhD on automated theorem proving in algebraic geometry from the University of Edinburgh, is a graduate of UT Austin (BA in mathematics) and the Mathematical Research Institute in the Netherlands (master class in mathematical logic), and is a life member of Clare Hall, University of Cambridge.

View the recording >>

Computer Science Speaker Series

December 5, 2023

Abstract: In an interconnected world, effective policymaking increasingly relies on understanding large-scale human networks. However, there are many challenges to understanding networks and how they impact decision-making, including (1) how to infer human networks, which are typically unobserved, from data; (2) how to model complex processes, such as disease spread, over networks and inform decision-making; and (3) how to estimate the impacts of decisions, in turn, on human networks. In this talk, I’ll discuss how I’ve addressed each of these challenges in my research. I’ll focus mainly on COVID-19 pandemic response as a concrete application, where we’ve developed new methods for network inference and epidemiological modeling, and have deployed decision-support tools for policymakers. I’ll also touch on other network-driven challenges, including political polarization and supply chain resilience.
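
As a minimal illustration of challenge (2) above, modeling a spread process over a network, here is a toy SIR (susceptible-infected-recovered) simulation on a random contact graph. The structure and parameters are illustrative, not the speaker's methods or data.

```python
# Toy SIR epidemic on a random contact network: a minimal sketch of
# network-based epidemiological modeling. Illustrative parameters only.

import random

random.seed(42)
N, EDGE_P = 200, 0.03              # nodes, edge probability (Erdos-Renyi)
BETA, GAMMA, STEPS = 0.2, 0.1, 60  # infection rate, recovery rate, steps

# Build a random undirected contact network as an adjacency list.
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < EDGE_P:
            adj[i].add(j)
            adj[j].add(i)

state = {i: "S" for i in range(N)}
state[0] = "I"  # seed one infection

for _ in range(STEPS):
    new_state = dict(state)
    for node, s in state.items():
        if s == "I":
            # Each infected node may infect its susceptible neighbors...
            for nb in adj[node]:
                if state[nb] == "S" and random.random() < BETA:
                    new_state[nb] = "I"
            # ...and may recover this step.
            if random.random() < GAMMA:
                new_state[node] = "R"
    state = new_state

counts = {s: list(state.values()).count(s) for s in "SIR"}
print(f"after {STEPS} steps: {counts}")
```

In practice the contact network is the hard part: it is unobserved and must be inferred from data (challenge 1), and the resulting dynamics feed decision-support tools whose interventions then reshape the network itself (challenge 3).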