Details:

WHERE: B-17 Hackerman Hall, unless otherwise noted
WHEN: 10:30 a.m. refreshments available, seminar runs from 10:45 a.m. to 12 p.m., unless otherwise noted

Recordings will be available online after each seminar.

Schedule of Speakers


Please note this seminar will take place at 11:00 a.m. Refreshments will be served at 10:45 a.m.

Zoom link >>

Computer Science Seminar Series

“A Symbiotic Ecosystem for AI and Human-Centered Privacy”

Abstract: As AI increasingly permeates our daily lives, it brings immense benefits but also introduces critical privacy challenges, such as data misuse, surveillance, and loss of control. In this talk, Yaxing Yao will share his vision of an ecosystem where AI and privacy are not adversaries, but allies, and mutually reinforce and enrich one another. Rather than treating privacy as an obstacle, this ecosystem positions it as a foundational element of AI’s development, fostering a balanced environment that benefits both technology and society. Yao will introduce three fundamental relationships that drive this symbiotic ecosystem: 1) mutual benefit, wherein AI empowers users to understand and manage their data usage; 2) co-adaptation, wherein AI dynamically adapts to diverse privacy needs across contexts; and 3) ecosystem balance, wherein AI is properly anchored within regulatory frameworks and public policies to ensure users’ privacy. These three relationships redefine AI as a respectful partner in our digital lives—one that supports human-centered values and upholds our autonomy and privacy. Yao will discuss how his research contributes to the advancement of this ecosystem and how he expands its impact to privacy literacy development among families and children via community-based research efforts. Finally, he will discuss challenges and open questions in achieving this vision.

Speaker Biography: Yaxing Yao is an assistant professor in the Department of Computer Science at Virginia Tech. His research lies at the intersection of human-computer interaction, privacy, and accessibility, focusing on exploring privacy issues in user interactions with computing systems and developing solutions that empower users to understand and control their privacy. He has published in top human-computer interaction venues (e.g., the ACM Conference on Human Factors in Computing Systems, the ACM Conference on Computer-Supported Cooperative Work and Social Computing) and privacy/security venues (e.g., the USENIX Security Symposium, the Symposium on Usable Privacy and Security) and has received multiple paper awards, a Google PSS Faculty Research Award, and two Meta Research Awards. Yao’s work has influenced public policy, including the opt-out icon in the California Consumer Privacy Act. He also founded Kids’ Tech University, a program that engages K-12 students in privacy research through weekly design workshops and summer camps. Yao’s research is generously supported by the NSF, Google, and Meta.

Please note this seminar will take place in 228 Malone Hall at 12:15 p.m. Lunch will be served at 12 p.m.

Zoom link >>

Computer Science Seminar Series

“Probabilistic Experimental Design for Synthesizing DNA”

Abstract: In this talk, Eli N. Weinstein studies how to efficiently manufacture samples from a generative model in the real world. He shows how computational methods for approximate sampling can be adapted into experimental design methods for efficiently making samples in the laboratory. He also develops tools to rigorously evaluate the quality of manufactured samples, proposing nonparametric two-sample tests with strong theoretical guarantees and scalable algorithms. He applies these methods to DNA synthesis, since the cost of DNA synthesis is considered a fundamental technological driver of progress in biology and biomedicine. He demonstrates manufacturing ~10^17 samples from a generative model of human antibodies at a sample quality comparable to that of state-of-the-art protein language models. These samples cost roughly a thousand dollars to make (~$10^3), while using previous methods they would cost roughly a quadrillion dollars (~$10^15).

Speaker Biography: Eli N. Weinstein is a postdoctoral research scientist at Columbia University advised by David Blei. He also serves as the director of machine learning research at Jura Bio, a biotechnology startup focused on genetic medicine. Weinstein’s research is in probabilistic machine learning, with an emphasis on causal inference and experimental design. His applied work focuses on biology, especially therapeutics. He completed his PhD in biophysics at Harvard University in 2022, advised by Debora Marks and Jeffrey Miller and supported by a Hertz Foundation Fellowship. Previously, he received an AB in chemistry and physics with highest honors, also from Harvard, advised by Adam Cohen. His work has been published at venues including the Conference and Workshop on Neural Information Processing Systems, the International Conference on Machine Learning, the International Conference on Artificial Intelligence and Statistics, and the Journal of Machine Learning Research, and has received Best Paper Awards from the New England Statistical Society in 2021 and 2023 and the Molecular Machine Learning Conference in 2024.

Zoom link >>

Computer Science Seminar Series

“Curious Embeddings, Hazy Oracles, and the Path to Safe, Cooperative AI”

Abstract: Cooperation through safe and trustworthy communication and interaction is fundamental to how human teams accomplish complex tasks. Yet, despite significant—and sometimes revolutionary—advances in AI, we have barely begun to unlock the potential of safe, cooperative AI. This may stem from our limited understanding of how multimodal, large-scale AI models function; the one-sided nature of contemporary, fully-supervised AI approaches; or social concerns about human-AI collaboration. In this talk, Jason Corso will delve into these layers of inquiry, beginning with a principled exploration of what the embeddings in large-scale foundation models reveal about the underlying problem and data, including new results disentangling sample-size from Bayes error and decision-boundary complexity. He will then introduce the concept of the human collaborator as a “hazy oracle”—a fallible partner rather than an omniscient information source—and establish a framework for modeling human-supplied error during collaboration. Building on these foundational insights, Corso will conclude with applications of these ideas to foster safe and effective human-AI collaboration in the health sciences.

Speaker Biography: Jason Corso is a professor of electrical engineering and computer science and robotics at the University of Michigan and a co-founder of and chief scientist at the AI startup Voxel51. Corso received his PhD and MSE degrees in computer science at the Johns Hopkins University in 2005 and 2002, respectively, and a BS degree in computer science with honors from Loyola College in Maryland in 2000. He is the recipient of a departmental 2018 Outstanding Achievement Award, a 2015 Google Faculty Research Award, a 2010 Department of Defense Army Research Office Young Investigator Award, a 2009 NSF CAREER Award, and a 2011 University at Buffalo Exceptional Scholar Award for Young Investigators. In 2009 he became a member of DARPA’s Computer Science Study Group, and in 2003 he received a Link Foundation Fellowship in Advanced Simulation and Training. Corso has authored more than 150 peer-reviewed papers and hundreds of thousands of lines of open-source code on topics including computer vision, robotics, data science, machine learning, AI, and general computing. He is a member of the Association for the Advancement of Artificial Intelligence, the ACM, and the Mathematical Association of America, and is a senior member of the Institute of Electrical and Electronics Engineers.

Please note this seminar will take place in 228 Malone Hall at 12:15 p.m. Lunch will be served at 12 p.m.

Zoom link >>

Computer Science Seminar Series

“Reliable AI-Augmented Algorithms for Energy”

Abstract: Modern AI and machine learning algorithms can deliver significant performance improvements for decision-making under uncertainty, where traditional, worst-case algorithms are often too conservative. These improvements can be potentially transformative for energy and sustainability applications, where rapid advances are needed to facilitate the energy transition and reduce carbon emissions. However, AI and ML lack worst-case guarantees, hindering their deployment to real-world problems where safety and reliability are critical. In this talk, Nico Christianson will discuss his recent work developing algorithms that bridge the gap between the good average-case performance of AI/ML and the worst-case guarantees of traditional algorithms. In particular, he will focus on the question of how to robustly leverage the recommendations of black-box AI “advice” for general online optimization problems, describing both algorithmic upper bounds and fundamental limits on the tradeoff between exploiting AI and maintaining worst-case performance. He will also highlight some recent steps toward leveraging uncertainty quantification for risk-aware decision-making in these settings, as well as experimental results on energy resource management in high-renewables power grids.

Speaker Biography: Nico Christianson is a final-year PhD candidate in computing and mathematical sciences at the California Institute of Technology, advised by Adam Wierman and Steven Low. Christianson’s research broadly focuses on decision-making under uncertainty, with a specific emphasis on developing new algorithms to enable the reliable and safe deployment of modern AI/ML tools to real-world sustainability challenges such as energy resource operation and carbon-aware computing. His work is supported by an NSF Graduate Research Fellowship and a PIMCO Data Science Fellowship. Christianson has interned at Microsoft Research (Redmond) and collaborated with industry partners including Beyond Limits and Amazon. Previously, he received an AB in applied mathematics from Harvard University.

Past Speakers


View the recording >>

Computer Science Seminar Series

November 14, 2024

Abstract: Causal knowledge is central to solving complex decision-making problems across engineering, medicine, and cyber-physical systems. Causal inference has been identified as a key capability to improve machine learning systems’ explainability, trustworthiness, and generalization. After a brief introduction to causal modeling, this talk explores two key problems in causal ML. In the first part of the talk, we will focus on the problem of root-cause analysis (RCA), which aims to identify the source of failure in large, modern computer systems. We will show that by leveraging ideas from causal discovery, it is possible to automate and efficiently solve the RCA problem by systematically using invariance tests on normal and anomalous data. In the second part of the talk, we consider causal inference problems in the presence of high-dimensional variables, e.g., image data. We show how deep generative models, such as generative adversarial networks and diffusion models, can be used to obtain a representation of the causal system and help solve complex, high-dimensional causal inference problems. This approach enables both causal invariant prediction and evaluation of black-box conditional generative models.

Speaker Biography: Murat Kocaoglu received his BS degree in electrical and electronics engineering with a minor in physics from the Middle East Technical University in 2010, his MS from Koç University in Turkey in 2012, and his PhD from the University of Texas at Austin in 2018. Kocaoglu was a research staff member at the MIT-IBM Watson AI Lab at IBM Research in Cambridge, Massachusetts from 2018 to 2020. He is currently an assistant professor in the Elmore Family School of Electrical and Computer Engineering, the Department of Computer Science (by courtesy), and the Department of Statistics (by courtesy) at Purdue University, where he leads the CausalML Lab. Kocaoglu received an Adobe Data Science Research Award in 2022, an NSF CAREER Award in 2023, and an Amazon Research Award in 2024. His current research interests include causal inference, deep generative models, and information theory.

View the recording >>

Computer Science Seminar Series

November 12, 2024

Abstract: In the age of big data and AI, we are witnessing an erasure of voices from underrepresented communities, a phenomenon that can be described as an “ideocide”—the systematic annihilation of the ethical frameworks and data of marginalized groups. Drawing inspiration from anthropologist Arjun Appadurai’s concept of the “Fear of Small Numbers,” Ishtiaque Ahmed argues that modern AI systems—which overwhelmingly prioritize large datasets—inadvertently silence smaller, non-dominant populations. These systems impose an ethical monoculture shaped by Western neoliberal ideologies, further marginalizing communities who are already underrepresented in data-driven systems. Based on his twelve years of ethnographic and design work with communities in Bangladesh, India, Pakistan, the U.S., Canada, and beyond, Ahmed will explore how the exclusion of these “small data” sets undermines the diversity of ideas and ethics, leading to biased and unjust AI systems. This talk will outline how this silence represents not just a technical gap but a profound ethical failure in AI, one that needs urgent addressing through pluriversal, community-based approaches to AI development. Ahmed will further demonstrate how collaborative, co-designed technologies with marginalized communities can resist ideocide, allowing for the inclusion of multiple ethical and cultural perspectives to create more just, inclusive, and ethical AI systems.

Speaker Biography: Syed Ishtiaque Ahmed is an associate professor of computer science at the University of Toronto and the founding director of the Third Space research group. His research interest is in the intersection of human-computer interaction and artificial intelligence. Ahmed received a PhD and master’s degree from Cornell University, and bachelor’s and master’s degrees from the Bangladesh University of Engineering and Technology. Over the last fifteen years, he has studied and developed successful computing technologies with various marginalized communities in Bangladesh, India, Canada, the U.S., Pakistan, Iraq, Turkey, and Ecuador. Ahmed has published over 100 peer-reviewed research articles and has received multiple Best Paper Awards in top computer science venues, including the ACM Conference on Human Factors in Computing Systems, the ACM Conference on Computer-Supported Cooperative Work and Social Computing, the International Conference on Information & Communication Technologies and Development, and the ACM Conference on Fairness, Accountability, and Transparency. He has received numerous honors and accolades, including the International Fulbright Science and Technology Award, the Intel PhD Fellowship, the Institute of International Education Centennial Fellowship, a Schwartz Reisman Fellowship, the Walter Massey Fellowship, the Connaught International Scholarship for Doctoral Students, a Microsoft Research AI & Society Fellowship, a Google Award for Inclusion Research, and a Meta Research Award. His research has also received generous funding support from all three branches of the Canadian Tri-Council (the Natural Sciences and Engineering Research Council of Canada, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada), the U.S. NSF and National Institutes of Health, and the Bangladesh Information and Communication Technology Division. Ahmed was named a Future Leader by the Computing Research Association in 2024.

View the recording >>

Computer Science Seminar Series

November 8, 2024

Abstract: The broad agenda of Fei Miao’s work is to develop the foundations for the science of embodied AI—that is, to assure safety, efficiency, robustness, and security of AI systems by integrating learning, optimization, and control. Miao’s research interests span several technical fields, including multi-agent reinforcement learning, robust optimization, uncertainty quantification, control theory, and game theory. Application areas include connected and autonomous vehicles (CAVs), intelligent transportation systems and transportation decarbonization, smart cities, and power networks. Miao’s research experience and current ongoing projects include robust reinforcement learning and control, uncertainty quantification for collaborative perception, game-theoretical analysis of the benefit of information sharing for CAVs, data-driven robust optimization for efficient mobile cyber-physical systems (CPS), conflict resolution in smart cities, and resilient control of CPS under attacks. In addition to system modeling, theoretical analysis, and algorithmic design, Miao’s work involves experimental validation on real urban transportation data, simulators, and small-scale autonomous vehicles.

Speaker Biography: Fei Miao is a Pratt & Whitney Associate Professor in the School of Computing and courtesy faculty in the Department of Electrical and Computer Engineering at the University of Connecticut. She is also affiliated with the Pratt & Whitney Institute for Advanced Systems Engineering. Before joining UConn, Miao was a postdoctoral researcher in the General Robotics, Automation, Sensing, & Perception Lab and the Penn Research In Embedded Computing and Integrated Systems Engineering Center with George J. Pappas and Daniel D. Lee in the Department of Electrical and Systems Engineering at the University of Pennsylvania. Miao earned her PhD—as well as the Charles Hallac and Sarah Keil Wolf Award for the best doctoral dissertation—in electrical and systems engineering in 2016, along with a dual master’s degree in statistics from the Wharton School at the University of Pennsylvania. She received her bachelor of science degree from Shanghai Jiao Tong University in 2010, with a major in automation and a minor in finance.

View the recording >>

Computer Science Seminar Series

October 17, 2024

Abstract: Prohibitive pretraining costs make pretraining research a rare sight; analyzing, using, and fine-tuning pretrained models, however, remain broadly accessible. This talk focuses on one option for improving models in a scientific way, in small, measurable steps. Specifically, it introduces the concept of merging multiple fine-tuned or parameter-efficient fine-tuned models into one and discusses what we currently understand about merging, how it works, more recent methods, and how iteratively merging models may allow collaborative continual pretraining.

Speaker Biography: Leshem Choshen is a postdoctoral researcher at the Massachusetts Institute of Technology and IBM who aims to study model development openly and collaboratively, allow feasible pretraining research, and evaluate efficiently. To do so, they co-created model merging, TIES merging, and the BabyLM Challenge. They were chosen for postdoctoral Rothschild and Fulbright fellowships and received a Best PhD Thesis Award from the Israeli Association for Artificial Intelligence, as well as a Blavatnik Prize for Computer Science. With broad natural language processing and machine learning interests, Choshen has also worked on reinforcement learning, understanding how neural networks learn, and Project Debater, the first machine system capable of holding a formal debate (as of 2019), which was featured on the cover of Nature.

View the recording >>

Computer Science Seminar Series

October 15, 2024

Abstract: Massive efforts are under way to develop and adapt generative AI to solve any and all inferential and design tasks across engineering and science disciplines. Framing or reframing problems in terms of distributional modeling can bring a number of benefits, but also comes with substantial technical and statistical challenges. Tommi S. Jaakkola’s work has focused on advancing machine learning methods for controlled generation of complex objects, ranging from molecular interactions (e.g., docking) and 3D structures to new materials tailored to exhibit desirable characteristics such as carbon capture. In this talk, Jaakkola will cover a few research vignettes along with their specific challenges, focusing on diffusion and flow models that surpass traditional or alternative approaches to docking, protein design, or conformational ensembles. Time permitting, he will highlight general challenges and opportunities in this area.

Speaker Biography: Tommi S. Jaakkola is the Thomas Siebel Professor of Electrical Engineering and Computer Science in the Massachusetts Institute of Technology’s Department of Electrical Engineering and Computer Science and the MIT Institute for Data, Systems, and Society; he is also an investigator at the MIT Computer Science and Artificial Intelligence Laboratory. He is a fellow of the Association for the Advancement of Artificial Intelligence and has received numerous awards for his publications. His research covers how machines can learn, generate, or control—and do so at scale in an efficient, principled, and interpretable manner—from foundational theory to modern design challenges. Over the past several years, Jaakkola’s applied work has focused on molecular modeling and design.

View the recording >>

Computer Science Seminar Series

October 10, 2024

Abstract: Large-scale pretraining has become the standard solution to automated reasoning over text and/or visual perception. But how far does this approach get us to systems that generalize to language use in realistic multi-agent situated interactions? First, Alane Suhr will talk about existing work in evaluating the spatial and compositional reasoning capabilities of current multimodal language models. She will then discuss how these benchmarks miss a key aspect of real-world situated interactions: joint embodiment. Suhr will discuss how joint embodiment in a shared world supports perspective-taking, an often-overlooked aspect of situated reasoning, and introduce a new environment and benchmark for studying the influence of perspective-taking on language use in interaction.

Speaker Biography: Alane Suhr is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Also affiliated with the Berkeley Artificial Intelligence Research Lab, Suhr researches language use and learning in situated, collaborative interactions. This includes developing datasets and environments that support such interactions; designing and evaluating models that participate in collaborative interactions with human users by perceiving, acting, and using language; and developing learning algorithms for training such models from signals acquired in these interactions. Suhr received a BS in computer science and engineering from the Ohio State University in 2016 and a PhD in computer science from Cornell University in 2022.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

September 17, 2024

Abstract: Despite our tremendous progress in AI, current AI systems—including large language models—still cannot adequately understand humans and flexibly interact with humans in real-world settings. One of the key missing ingredients is Theory of Mind, which is the ability to understand humans’ mental states from their behaviors. In this talk, Tianmin Shu will discuss how we can engineer human-level machine Theory of Mind. He will first show how we can leverage insights from cognitive science studies to develop model-based approaches for physically grounded, multimodal Theory of Mind. He will then discuss how we can improve multimodal embodied AI assistance based on Theory of Mind reasoning. Finally, he will briefly talk about exciting future work toward building open-ended Theory of Mind models for real-world AI assistants.

Speaker Biography: Tianmin Shu is an assistant professor of computer science at the Johns Hopkins University, with a secondary appointment in the university’s Department of Cognitive Science. His research goal is to advance human-centered AI by engineering human-level machine social intelligence to build socially intelligent systems that can understand, reason about, and interact with humans in real-world settings. Shu’s work has received multiple awards, including an Outstanding Paper Award at the 2024 Annual Meeting of the Association for Computational Linguistics and the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action. His research has also been covered by multiple media outlets, such as New Scientist, Science News, and VentureBeat. Shu received his PhD from the University of California, Los Angeles in 2019. Before joining Johns Hopkins, he was a research scientist at the Massachusetts Institute of Technology.