Details:
WHERE: B-17 Hackerman Hall, unless otherwise noted
WHEN: Refreshments available at 10:30 a.m.; seminar runs from 10:45 a.m. to 12 p.m., unless otherwise noted
Recordings will be available online after each seminar.
Schedule of Speakers
Please note this seminar will take place in 228 Malone Hall.
Computer Science Seminar Series
“Learning, Reasoning, and Planning with Neuro-Symbolic Concepts”
Abstract: Jiayuan Mao aims to build complete intelligent agents that can continually learn, reason, and plan—that is, answer queries, infer human intentions, and make long-horizon plans spanning hours to days. In this talk, Mao will describe a general learning and reasoning framework based on neuro-symbolic concepts. Drawing inspiration from theories and studies in cognitive science, neuro-symbolic concepts serve as compositional abstractions of the physical world, representing object properties, relations, and actions. These concepts can be combinatorially reused in flexible and novel ways. Technically, each neuro-symbolic concept is represented as a combination of symbolic programs, which define how concepts can be structurally combined (similar to the ways that words form sentences in human language), and modular neural networks, which ground concept names in sensory inputs and agent actions. Mao will show that systems that leverage neuro-symbolic concepts demonstrate superior data efficiency, enable agents to reason and plan more quickly, and achieve strong generalization in novel situations and for novel goals. This is illustrated in visual reasoning in 2D, 3D, motion, and video data, as well as in diverse decision-making tasks spanning virtual agents and real-world robotic manipulation.
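To make the framework concrete, here is a minimal, hypothetical sketch of the idea (stand-in random weights, not Mao's implementation): small neural modules score concepts over object features, and a symbolic program composes those scores with soft logic so the whole query stays differentiable.

```python
# Hypothetical sketch of a neuro-symbolic concept pipeline (illustrative only):
# modular neural networks score concepts over object features, and a symbolic
# program composes those scores to answer a structured query.
import numpy as np

rng = np.random.default_rng(0)

# Perceived scene: 4 objects, each a 16-dim feature vector from some encoder.
objects = rng.normal(size=(4, 16))

def make_concept(dim=16):
    """A tiny neural module: linear layer + sigmoid, trained in practice."""
    w, b = rng.normal(size=dim), 0.0
    return lambda feats: 1.0 / (1.0 + np.exp(-(feats @ w + b)))

red = make_concept()    # grounds the word "red" in object features
cube = make_concept()   # grounds the word "cube"

def left_of(feats):
    """A relational module: probability that object i is left of object j."""
    w = rng.normal(size=(16, 16))
    n = len(feats)
    return 1.0 / (1.0 + np.exp(-np.array(
        [[feats[i] @ w @ feats[j] for j in range(n)] for i in range(n)])))

# Symbolic program for "is there a red object left of a cube?":
#   exists(and(red(x), exists(and(cube(y), left_of(x, y)))))
# Soft logic: AND = product, EXISTS = max, so the program stays differentiable
# and the neural modules can be trained end to end from question answers.
p_red, p_cube, p_left = red(objects), cube(objects), left_of(objects)
answer = np.max(p_red * np.max(p_cube[None, :] * p_left, axis=1))
print(f"P(red object left of a cube) = {answer:.3f}")
```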
Speaker Biography: Jiayuan Mao is a PhD student at the Massachusetts Institute of Technology, where she is advised by Professors Josh Tenenbaum and Leslie Kaelbling. Mao’s research agenda is to build machines that can continually learn concepts (e.g., properties, relations, rules, and skills) from their experiences and apply them for reasoning and planning in the physical world. Her research topics include visual reasoning, robotic manipulation, scene and activity understanding, and language acquisition. She was named a 2024 Rising Star in Electrical Engineering and Computer Science and in Generative AI. Her research has received Best Paper Awards at the 2024 Meeting of the Cognitive Science Society, the 2024 Southern California Natural Language Processing Symposium, and the 2024 Workshop on Language and Robot Learning at the Conference on Robot Learning, as well as a best paper nomination at the 2019 Meeting of the Association for Computational Linguistics.
Computer Science and Center for Language and Speech Processing Seminar Series
“Reasoning with Language Models”
Abstract: Language models are primarily trained via imitation on massive amounts of human data; as a result, they’re capable of performing a wide range of tasks, but often lack the deep reasoning capabilities of classic AI systems like Deep Blue and AlphaGo. In this talk, Nicholas Tomlin will first present core technical challenges related to “reasoning with language,” using his work on computer crossword solvers as a running example. Then, he will show how methods for “interactive reasoning” can enable human-AI teams to solve complex problems jointly. Finally, he will discuss his work on “explainable reasoning,” where the goal is to explain the decisions made by expert AI systems like AlphaGo in human-interpretable terms. Tomlin will conclude by sharing his views on the future of language model reasoning, agents, and interactive systems.
Speaker Biography: Nicholas Tomlin is a final-year PhD student in the Berkeley NLP Group at the University of California, Berkeley, where he is advised by Dan Klein. Tomlin’s work focuses primarily on reasoning and multi-agent interaction with language models. He has co-created systems such as the Berkeley Crossword Solver, the first superhuman computer crossword solver, as well as Ghostbuster, a state-of-the-art method for large language model detection. His work has been supported by grants from the NSF and FAR.AI and has received media coverage from outlets such as Discover, WIRED, and the BBC.
Please note this seminar will take place in 228 Malone Hall.
Computer Science Seminar Series
“Algorithmic Stability for Trustworthy Machine Learning and Statistics”
Abstract: Data-driven systems hold immense potential to positively impact society, but their reliability remains a challenge. Their outputs are often too brittle to changes in their training data, leaving them vulnerable to data poisoning attacks, prone to leaking sensitive information, or susceptible to overfitting. Establishing fundamental principles for designing algorithms that are both stable—to mitigate these risks—and efficient in their use of resources is essential for enabling trustworthy data-driven systems. In this talk, Lydia Zakynthinou will focus on statistical estimation under differential privacy—a rigorous framework that ensures data-driven system outputs do not reveal sensitive information about individuals in their input. Building on robustness against data poisoning attacks, she will present algorithmic techniques that take advantage of beneficial structure in the data to achieve optimal error for several multivariate tasks without requiring any prior information about the data. Lastly, Zakynthinou will highlight the deeper connection between differential privacy and robustness that underpins these results.
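For readers new to the area, the sketch below shows the textbook Gaussian mechanism for private mean estimation. It is illustrative only, and its clip_norm parameter is exactly the kind of prior knowledge about the data that the talk's techniques aim to remove.

```python
# Illustrative sketch of differentially private mean estimation via the
# standard Gaussian mechanism -- a textbook baseline, not Zakynthinou's method.
import numpy as np

def dp_mean(data, clip_norm, epsilon, delta, rng):
    """(epsilon, delta)-DP estimate of the mean of the rows of `data`."""
    n, d = data.shape
    # Clip each point to norm <= clip_norm so one individual's influence
    # (the L2 sensitivity of the mean) is bounded by 2 * clip_norm / n.
    norms = np.linalg.norm(data, axis=1, keepdims=True)
    clipped = data * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip_norm / n
    # Gaussian mechanism: noise scale calibrated to (epsilon, delta).
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped.mean(axis=0) + rng.normal(scale=sigma, size=d)

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=(10_000, 5))
print(dp_mean(data, clip_norm=10.0, epsilon=1.0, delta=1e-6, rng=rng))
```

Note how the noise scale depends on clip_norm: set it too small and the estimate is biased, too large and it is noisy, which is why removing the need for such prior information is valuable.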
Speaker Biography: Lydia Zakynthinou is a Foundations of Data Science Institute postdoctoral research fellow in the Simons Institute for the Theory of Computing at the University of California, Berkeley, hosted by Michael I. Jordan. Zakynthinou earned her PhD in computer science from Northeastern University under the supervision of Jonathan Ullman and Huy Nguyen. Her research lies in trustworthy machine learning and statistics, with a focus on data privacy and generalization, and has been recognized with a Meta Research PhD Fellowship and a Khoury College PhD Research Award. Zakynthinou holds a diploma in electrical and computer engineering from the National Technical University of Athens and an MSc in logic, algorithms, and theory of computation from the National and Kapodistrian University of Athens in Greece.
Please note this seminar will take place in 228 Malone Hall.
Computer Science Seminar Series
“Building Haystacks to Find Needles”
Abstract: The internet is a big place, comprising billions of users and tens of billions of network devices. Discovering and remediating vulnerabilities in these devices is an imperative for a more secure internet. Unfortunately, vulnerabilities that affect millions of hosts represent only a small fraction of the overall internet. Finding these “needles” at internet scale requires collecting an exponentially larger “haystack.” In this talk, Erik Rye will describe two novel techniques he developed to collect unprecedentedly large network datasets. He will describe how he used these datasets to enable the discovery of new network security and privacy problems at internet scale. These include stark, real-world security and privacy vulnerabilities, such as revealing troop positions in Ukraine and exposing previously unreachable Internet of Things devices like smart light bulbs in users’ homes. Rye’s findings have prompted design changes in systems run by Apple, SpaceX, and router manufacturers, and improved the security and privacy of millions of affected individuals.
Speaker Biography: Erik Rye is a final-year PhD candidate at the University of Maryland, where he focuses on solving large-scale network security and privacy problems. He regularly publishes in venues like the ACM Special Interest Group on Data Communications Conference and IEEE Security & Privacy, and he has shared his work at industry conventions like Black Hat USA and in popular media like KrebsOnSecurity.com. Rye contributes to the network security and measurement communities by running the IPv6 Observatory, which publishes weekly insights into the state of the internet. He holds master’s degrees in computer science and applied mathematics from the Naval Postgraduate School, and also likes dogs.
Please note this seminar will take place at 12 p.m.
Computer Science and Center for Language and Speech Processing Seminar Series
“Beyond Scaling: Frontiers of Retrieval-Augmented Language Models”
Abstract: Large language models have achieved remarkable progress by scaling training data and model sizes. However, they continue to face critical limitations, including hallucinations and outdated knowledge, which hinder their reliability—especially in expert domains such as scientific research and software development. In this talk, Akari Asai will argue that addressing these challenges requires moving beyond monolithic LMs and toward augmented LMs—a new AI paradigm that designs, trains, and deploys LMs alongside complementary modules to enhance reliability and efficiency. Focusing on her research on retrieval-augmented LMs, one of the most impactful and widely adopted forms of augmented LMs today, Asai will begin by presenting systematic analyses of current LM shortcomings and demonstrating how retrieval augmentation offers a more scalable and effective path forward. She will then discuss her work on establishing new foundations for these systems, including novel training approaches and retrieval mechanisms that enable LMs to dynamically adapt to diverse inputs. Finally, she will showcase the real-world impact of such models through OpenScholar, her group’s fully open retrieval-augmented LM for assisting scientists in synthesizing literature, now used by over 30,000 researchers and practitioners worldwide. Asai will conclude by outlining her vision for the future of augmented LMs, emphasizing advances in handling heterogeneous modalities, more efficient and flexible integration with diverse components, and rigorous evaluation through interdisciplinary collaboration.
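As a rough illustration of the retrieval-augmented pattern (a generic sketch, not OpenScholar): embed the query, fetch the closest passages, and condition the generator on them so its answer can cite sources. The embed function and corpus here are stand-ins.

```python
# Minimal retrieval-augmented generation loop (illustrative sketch only):
# embed the query, fetch the closest passages, and build a prompt that lets
# the language model ground its answer in citable sources.
import numpy as np

def embed(text, dim=64):
    """Stand-in embedding; a real system would use a trained encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

corpus = [
    "Retrieval augmentation reduces hallucination by grounding answers.",
    "Scaling laws relate model size, data, and loss.",
    "Citations let readers verify model-generated claims.",
]
doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query, k=2):
    scores = doc_vecs @ embed(query)        # cosine similarity (unit vectors)
    top = np.argsort(-scores)[:k]
    return [(corpus[i], float(scores[i])) for i in top]

query = "Why cite sources in LM answers?"
passages = retrieve(query)
prompt = "Answer with citations.\n" + "".join(
    f"[{i+1}] {p}\n" for i, (p, _) in enumerate(passages)) + f"Q: {query}\nA:"
print(prompt)   # this prompt would then be passed to the generator LM
```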
Speaker Biography: Akari Asai is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on overcoming the limitations of large language models by developing advanced systems such as retrieval-augmented LMs and applying them to real-world challenges, including scientific research and underrepresented languages. Her contributions have been widely recognized, earning multiple paper awards at top natural language processing and machine learning conferences, an IBM PhD Fellowship Award, and industry grants. Asai was also named a 2022 Electrical Engineering and Computer Science Rising Star and one of MIT Technology Review’s Innovators Under 35 in Japan. Her work has been featured in outlets such as Forbes and MIT Technology Review. Beyond her research, Asai actively contributes to the NLP and ML communities as a co-organizer of high-impact tutorials and workshops, including the first tutorial on retrieval-augmented LMs at the 2023 Meeting of the Association for Computational Linguistics (ACL), as well as workshops on multilingual information access (2022 Conference of the Nations of the Americas Chapter of the ACL) and knowledge-augmented NLP (NAACL 2025).
Additional information to come.
Computer Science Seminar Series
“Pareto-Efficient AI Systems: Expanding the Quality and Efficiency Frontier of AI”
Abstract: We have made exciting progress in AI with massive models and massive amounts of data center compute. However, the demands for AI are rapidly expanding. Simran Arora identifies how to maximize performance under any compute constraint, expanding the Pareto frontier of AI capabilities. This talk builds up to an efficient language model architecture that expands the Pareto frontier between quality and throughput efficiency. As motivation: the transformer, AI’s current workhorse architecture, is memory-hungry, severely limiting its throughput, or the amount of text it can process per second. This has led to a Cambrian explosion of alternate efficient architecture candidates proposed across prior work. Prior work has painted an exciting picture: There exist architectures that are asymptotically faster than transformers, while also matching quality. However, Arora asks, if we use asymptotically faster building blocks, are we giving something up in quality? In part one of this talk, we build understanding. Indeed, there’s no free lunch! Arora presents her work on identifying and explaining the fundamental quality and efficiency tradeoffs between different classes of architectures. Methods she developed for this analysis are now ubiquitous in the development of language models. In part two, we measure how AI architecture candidates fare on the tradeoff space. A major hurdle, however, is that we lack implementations of the architectures that run at peak efficiency on modern hardware. Further, many proposed architectures are asymptotically fast, but not wall-clock fast. Arora presents ThunderKittens, a new programming library she built to help AI researchers write simple, hardware-efficient algorithms across hardware platforms. In part three, we expand the Pareto frontier of the tradeoff space. Arora presents the BASED architecture, which is built from simple, hardware-efficient components. She has released state-of-the-art 8B-405B transformer-free language models (per standard evaluations), all on an academic budget. Given the massive investment in language models, this work has had significant impact and adoption in research, open-source communities, and industry.
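For context on what “asymptotically faster building blocks” means, the sketch below shows causal linear attention, which replaces the softmax-attention computation (quadratic in sequence length) with running sums that cost linear time. It uses a generic positive feature map for illustration; BASED's actual components differ.

```python
# Sketch of an asymptotically faster attention block: causal linear attention
# computes attention with running sums, O(n) in sequence length n instead of
# the O(n^2) of softmax attention. Illustrative only, not the BASED layer.
import numpy as np

def causal_linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """O(n) causal attention using feature map phi in place of softmax."""
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d)                 # running sum of phi(k_t), for normalization
    out = np.empty_like(V)
    for t in range(n):              # constant state per step, unlike softmax
        S += np.outer(phi(K[t]), V[t])
        z += phi(K[t])
        out[t] = (phi(Q[t]) @ S) / (phi(Q[t]) @ z)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(128, 16)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)   # (128, 16)
```

The fixed-size state (S, z) is also why such blocks need far less memory at inference time, which is the throughput bottleneck the abstract describes.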
Speaker Biography: Simran Arora is a PhD student at Stanford University advised by Chris Ré. Her research blends machine learning and systems towards expanding the Pareto frontier between AI quality and efficiency. Her machine learning research has appeared as oral and spotlight presentations at the Conference and Workshop on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML), and the International Conference on Learning Representations, and has won an Outstanding Paper Award at NeurIPS and a Best Paper Award at the ICML Efficient Systems for Foundation Models workshop. Her systems work has appeared at the International Conference on Very Large Data Bases, the ACM SIGMOD/PODS International Conference on Management of Data, the Conference on Innovative Data Systems Research, and the ACM Conference on Human Factors in Computing Systems, and her systems artifacts are widely used in the open-source communities and industry. In 2023, Arora created and taught the Systems for Machine Learning course at Stanford. She has also been supported by a Stanford Graduate Fellowship.
Past Speakers
Recording to come.
Computer Science and Center for Language and Speech Processing Seminar Series
March 24, 2025
Abstract: Controlling language models is key to unlocking their full potential and making them useful for downstream tasks. Successfully deploying these models often requires both task-specific customization and rigorous auditing of their behavior. In this talk, Xiang “Lisa” Li will begin by introducing a customization method called Prefix-Tuning, which adapts language models by updating only 0.1% of their parameters. Next, she will address the need for robust auditing by presenting a Frank-Wolfe-inspired algorithm for red-teaming language models, which provides a principled framework for discovering diverse failure modes. Finally, Li will rethink the root cause of these control challenges and propose a new generative model for text, called Diffusion-LM, which is controllable by design.
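A schematic of the Prefix-Tuning idea (shapes and weights below are illustrative stand-ins): the pretrained model is frozen, and a handful of trainable key/value vectors are prepended to each attention layer, so only a tiny fraction of parameters is ever updated.

```python
# Conceptual sketch of Prefix-Tuning: freeze the model and prepend a few
# trainable key/value vectors to attention. Only the prefix (a tiny fraction
# of all parameters) would be trained. Shapes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 64, 10, 5              # hidden size, sequence length, prefix length

# Frozen pretrained projections (stand-ins for the real model's weights).
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

# The ONLY trainable parameters: per-layer prefix keys and values.
prefix_k = rng.normal(size=(p, d)) * 0.01
prefix_v = rng.normal(size=(p, d)) * 0.01

def attention_with_prefix(x):
    q = x @ Wq
    k = np.concatenate([prefix_k, x @ Wk])   # (p + n, d): prefix steers attention
    v = np.concatenate([prefix_v, x @ Wv])
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

x = rng.normal(size=(n, d))
print(attention_with_prefix(x).shape)  # (10, 64); gradients flow to prefix only
```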
Speaker Biography: Xiang “Lisa” Li is a PhD candidate at Stanford University, where she is advised by Percy Liang and Tatsunori Hashimoto. Her research focuses on developing methods to make language models more capable and controllable. Li is supported by a Two Sigma PhD Fellowship and a Stanford Graduate Fellowship, and is the recipient of a Conference on Empirical Methods in Natural Language Processing Best Paper Award.
Computer Science and Biomedical Engineering Seminar Series
March 20, 2025
Abstract: Artificial intelligence techniques for scientific discovery have gained increasing interest across the machine learning, physics, chemistry, materials, and biology communities. A central challenge in AI-driven scientific discovery is molecular learning and design, as molecules serve as fundamental building blocks and can be naturally represented through various modalities, including chemical formulas, molecular graphs, geometric conformations, knowledge graphs, and textual literature. Shengchao Liu’s research focuses on leveraging multimodal information to develop a physics-inspired foundation model. To assess its effectiveness, he outlines and applies two key paradigms. The first involves employing physics-inspired AI models to accelerate established scientific discovery processes such as molecular dynamics simulations and molecule crystallization. The second paradigm integrates the reasoning and planning capabilities of generative AI models to explore novel approaches, including text-guided lead optimization, protein engineering, and material design. By bridging AI and physics in chemistry, biology, and materials science, Liu’s work offers a unique perspective on advancing AI-driven scientific discovery, ultimately supporting both science and scientists.
Speaker Biography: Shengchao Liu is a postdoctoral researcher at the University of California, Berkeley, working with Christian Borgs and Jennifer Chayes. Liu’s research focuses on representation learning, self-supervised pre-training, deep generative modeling, and physics-inspired machine learning with applications in scientific discovery. He has published in top venues such as the International Conference on Machine Learning (ICML), the International Conference on Learning Representations, the Conference and Workshop on Neural Information Processing Systems (NeurIPS), the International Conference on Artificial Intelligence and Statistics, Transactions on Machine Learning Research, the AAAI Conference on Artificial Intelligence, Nature Machine Intelligence, and the Journal of the American Chemical Society, and his work on protein engineering was a finalist for the ACM Gordon Bell Prize in 2024. Liu has co-organized AI for Science workshops at NeurIPS 2021, ICML 2022, and NeurIPS 2023 and led lecture tutorials on physics-inspired and scientific foundation models at AAAI 2025.
Computer Science Seminar Series
March 20, 2025
Abstract: AI agents will soon be as commonplace as smartphones. These agents will make sequences of interconnected decisions that impact human lives—from serving as decision support in health care to shaping educational paths for millions of students. A defining challenge for the future of AI is how to build agents that can effectively operate in and adapt to these human environments. In this talk, Stephanie Milani shows how human-centered reinforcement learning offers a promising framework for addressing this challenge. First, Milani focuses on the issue of interpretability, presenting novel algorithms for learning transparent decision-making policies. Then, she shows how human-centered design can be used to define the objectives for AI agents, exemplified through a grounded use case in mental health. Finally, recognizing that complex human domains often defy precise specification, Milani presents her benchmark for AI agents to learn from human feedback for complex tasks. Together, this work illustrates how human-centered reinforcement learning is a valuable approach for developing AI agents that can learn from and for the people whose lives they impact.
Speaker Biography: Stephanie Milani is a final-year PhD candidate in the Machine Learning Department at Carnegie Mellon University. Her research focuses on building reinforcement learning agents to address human-centered and use-case-inspired challenges. Her research has been published at top machine learning and human-computer interaction venues, including the International Conference on Learning Representations, the Conference and Workshop on Neural Information Processing Systems (NeurIPS), and the ACM Conference on Human Factors in Computing Systems, and has received Best Paper Awards at the Multimodal Foundation Model Meets Embodied AI workshop at the International Conference on Machine Learning and the GenAI for Health workshop at NeurIPS. Milani is a 2024 Michigan Institute for Data & AI in Society Future Leader in Responsible Data Science and AI and a Rising Star in Data Science. She has received a CMU machine learning teaching assistant award, co-organized the MineRL international competition series at NeurIPS, and received a Newman Civic Fellowship for her service to computer science education.
Computer Science and Laboratory for Computational Sensing and Robotics Seminar Series
March 19, 2025
Abstract: Generative visual models like Stable Diffusion and Sora generate photorealistic images and videos that are nearly indistinguishable from real ones to a naive observer. However, their grasp of the physical world remains an open question: Do they understand 3D geometry, light, and object interactions, or are they mere “pixel parrots” of their training data? Through systematic probing, Anand Bhattad will demonstrate that these models surprisingly learn fundamental scene properties—intrinsic images such as surface normals, depth, albedo, and shading (à la Barrow & Tenenbaum, 1978)—without explicit supervision, which enables applications like image relighting. But Bhattad will also show that this knowledge is insufficient. Careful analysis reveals unexpected failures: inconsistent shadows, multiple vanishing points, and scenes that defy basic physics. All these findings suggest these models excel at local texture synthesis but struggle with global reasoning—a crucial gap between imitation and true understanding. Bhattad will conclude by outlining a path toward generative world models that emulate global and counterfactual reasoning, causality, and physics.
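Probing of this kind has a simple canonical form, sketched below with stand-in random data: if a property such as surface normals is linearly decodable from a model's internal features, a small readout trained on those features will predict it well.

```python
# Illustrative probing setup: if a generative model has learned an intrinsic
# property like surface normals, a small readout trained on its internal
# features should predict that property. Random stand-in features here; a
# real probe would use activations from, e.g., a diffusion model's layers.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, feat_dim = 5_000, 128

features = rng.normal(size=(n_pixels, feat_dim))    # stand-in activations
true_normals = rng.normal(size=(n_pixels, 3))
true_normals /= np.linalg.norm(true_normals, axis=1, keepdims=True)

# Fit a linear probe (least squares) from features to normals.
W, *_ = np.linalg.lstsq(features, true_normals, rcond=None)
pred = features @ W
pred /= np.linalg.norm(pred, axis=1, keepdims=True)

# Mean angular error: a low error would indicate the property is linearly
# decodable, i.e., the model encodes it without explicit supervision.
cos = np.clip(np.sum(pred * true_normals, axis=1), -1, 1)
print(f"mean angular error: {np.degrees(np.arccos(cos)).mean():.1f} degrees")
```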
Speaker Biography: Anand Bhattad is a research assistant professor at the Toyota Technological Institute at Chicago. He earned his PhD from the University of Illinois Urbana-Champaign in 2024 under the mentorship of David Forsyth. His research interests lie at the intersection of computer vision and computer graphics, with a current focus on understanding the knowledge encoded in generative models. Bhattad has received Outstanding Reviewer honors at the 2023 International Conference on Computer Vision and the 2021 Conference on Computer Vision and Pattern Recognition (CVPR), and his CVPR 2022 paper was nominated for a Best Paper Award. He actively contributes to the research community by leading workshops at CVPR and the European Conference on Computer Vision (ECCV), including “Scholars and Big Models: How Can Academics Adapt?” at CVPR 2023, “CV 20/20: A Retrospective Vision” at CVPR 2024, “Knowledge in Generative Models” at ECCV 2024, and “How to Stand Out in the Crowd?” at CVPR 2025. For more details, visit https://anandbhattad.github.io/
Computer Science Seminar Series
March 18, 2025
Abstract: Current artificial intelligence systems can synthesize images, solve math problems, and write code. Despite these advances, they still struggle with basic tasks that humans and animals perform effortlessly. One potential explanation is that humans and animals possess a predictive world model that integrates perception, reasoning, and planning capabilities. Can we build such a model in a bottom-up fashion from sensorimotor data and primarily visual observations? In this talk, Amir Bar will propose a path toward building such a world model. He will introduce Visual Prompting, a new paradigm that unifies many computer vision tasks and can readily adapt pre-trained models to novel tasks without fine-tuning. Building on this, Bar will present an extension to planning using generative world models—showing that action-conditioned video models can act as simulators of the environment that support real-world decision-making, with a case study in visual navigation. Finally, Bar will discuss future directions for improving the capabilities of world models and the challenges we face to enable their real-world deployment.
Speaker Biography: Amir Bar is a postdoctoral researcher at Meta AI, working on self-supervised learning with Yann LeCun. Previously, he completed his PhD at Tel Aviv University and was a visiting PhD student at the University of California, Berkeley’s Artificial Intelligence Research Lab, where he was advised by Amir Globerson and Trevor Darrell. Bar began his PhD following the acquisition of the startup Zebra Medical Vision, where he led the AI team and developed multiple FDA-approved algorithms currently in clinical use worldwide. His work on video models won the Ego4D PNR Temporal Localization Challenge at the 2022 Conference on Computer Vision and Pattern Recognition.
Computer Science and Biomedical Engineering Seminar Series
March 13, 2025
Abstract: Biological systems are organized across a hierarchy of scales, from the spatial organization of cells in tissues to networks of interactions between genes and proteins. However, the systematic study of spatial and network processes is challenged by high levels of heterogeneity, sparsity, and other forms of noise in modern sequencing data. In this talk, Uthsav Chitra presents new statistical and machine learning methodologies for spatial and network biology. First, he introduces “gene expression topography,” a fundamentally new paradigm for modeling spatial gradients and tissue geometry from sparse spatial data. He derives algorithms for learning “topographic maps” of 2D tissue slices using tools from complex analysis and interpretable deep learning. These maps reveal the spatial and molecular organization of tissues from the brain, skin, and tumor microenvironment. Second, Chitra introduces a statistical framework for anomaly detection in biological interaction networks, or the problem of identifying anomalous subnetworks of interacting disease genes and proteins. He proves that many widely used algorithms are statistically biased—resolving a 20-year-old open question on why these methods identify large and unrealistic subnetworks—and derives asymptotically unbiased and efficient algorithms for network anomaly detection. Taken together, Chitra’s research underscores the need for specialized, principled, and interpretable ML approaches for advancing biomedical discovery.
Speaker Biography: Uthsav Chitra is a postdoctoral fellow at the Eric and Wendy Schmidt Center at the Eli and Edythe L. Broad Institute of MIT and Harvard. He recently received his PhD in computer science from Princeton University. Chitra’s research broadly develops rigorous statistical and machine learning methodologies to address fundamental questions in biology, with a focus on problems involving spatial structure or graphs. His research has appeared at top computer science conferences (e.g., the International Conference on Machine Learning, Research in Computational Molecular Biology, the International Conference on Intelligent Systems for Molecular Biology, the ACM International Conference on Web Search and Data Mining) and biological journals (e.g., Nature Methods, Cell Systems, Nature Communications) and has been recognized with the Rising Stars in Data Science award, a Best Paper Award at the RECOMB Satellite Workshop on Computational Cancer Biology, a Siebel Scholars award, and an NSF Graduate Research Fellowship. Chitra previously received an ScB in mathematics, an AB in computer science, and an AB in applied mathematics from Brown University, and spent a year in industry.
Computer Science Seminar Series
March 13, 2025
Abstract: The design of online platforms plays a critical role in shaping public discourse—yet these same platforms can unintentionally contribute to misinformation, polarization, and social harm. Tiziano Piccardi’s research investigates how to reorient these platforms toward positive societal outcomes by re-imagining the algorithmic and design principles at their core. In this talk, Piccardi will present AI-driven designs that (1) reduce the harmful societal effects of social media and (2) enhance the reliability of open knowledge platforms like Wikipedia. He will introduce a framework for running large-scale algorithmic reranking field experiments without platform collaboration and share results from an experiment on X (formerly Twitter), where a large language model-based system reranked feeds in real time to mitigate political polarization. Piccardi will then discuss tools and studies that have shaped Wikipedia’s design, reinforcing its role as the world’s largest online encyclopedia and a trusted global information source accessed by millions daily. By translating research into deployable tools and real-world applications, Piccardi’s work demonstrates how embedding societal values into platform design can foster healthier online information environments.
Speaker Biography: Tiziano Piccardi is a postdoctoral scholar in the Computer Science Department at Stanford University, working in its human-computer interaction group. He earned his PhD in data science from the École Polytechnique Fédérale de Lausanne. Piccardi’s research focuses on social computing and web research, aiming to improve the design of online platforms, including social media and open knowledge ecosystems. He is a long-term formal collaborator with Wikimedia Research, contributing to the enhancement of Wikipedia. He is also a fellow of the Swiss National Science Foundation and Stanford Impact Labs.
Computer Science and Laboratory for Computational Sensing and Robotics Seminar Series
March 12, 2025
Abstract: Building and deploying broadly capable robots requires systems that can efficiently learn from and work with people. To achieve this, robots must balance capability—the fundamental tools necessary to enable real-world deployment—and sustainability—the ability to grow and adapt through human feedback. In this talk, Siddharth Karamcheti will motivate language-driven learning to tackle these axes, providing robots with better abstractions for perception, action, and human-robot interaction. Towards capability, he will present Voltron, his approach for using language to learn visual representations that can be efficiently adapted for diverse robotics tasks. Building on these ideas, he will discuss Prismatic, his experimental framework for developing visually conditioned language models and vision-language-action policies at scale. Towards sustainability, Karamcheti will next introduce Vocal Sandbox, a new framework that integrates these models to develop collaborative robots that can work alongside human partners, using language to express uncertainty and learn new behaviors from real-time interactions. Finally, he will conclude with open challenges for enhancing both the capability and sustainability of modern robots, with directions for future work.
Speaker Biography: Siddharth Karamcheti is a final-year PhD student at Stanford University advised by Dorsa Sadigh and Percy Liang, and a robotics intern at the Toyota Research Institute. His research focuses on robot learning, natural language processing, and human-robot interaction, with the goal of developing scalable approaches for human-robot collaboration. Prior to his PhD, Karamcheti earned his bachelor’s degree in computer science and literary arts at Brown University, where he worked with Eugene Charniak and Stefanie Tellex. He is a recipient of a 2018 Open Philanthropy AI Fellowship and was named a 2024 Robotics: Science and Systems (RSS) Pioneer; his research has won paper awards at conferences such as RSS, the Conference on Robot Learning, the IEEE International Conference on Robotics and Automation, and the Meeting of the Association for Computational Linguistics.
Computer Science Seminar Series
March 11, 2025
Abstract: As robots become increasingly integrated into our daily lives, ranging from household assistants to autonomous vehicles, ensuring that they make safe, reliable, and ethically sound decisions is more urgent than ever. In this talk, Yiwei Lyu will present her research on achieving provable safety for robots operating in uncertain, unstructured environments. She will begin by introducing a control-theoretic framework that provides a mathematical foundation for guaranteeing safety in probabilistic settings while allowing a robot to accomplish its tasks without being overly conservative. Building on this, Lyu will explore the challenge of collective safety in multi-agent systems, where robots must collaborate with heterogeneous teammates in a decentralized manner and make context-aware decisions. She will show how a novel responsibility reasoning approach, combined with rigorous control-theoretic methods, can lead to group intelligence with formal safety assurances. Next, she will discuss how to learn under-defined safety specifications by blending data-driven learning with control-theoretic techniques, addressing the question, “How safe is safe?” and enabling robots to adapt and refine safety constraints in real time. Lyu will conclude by outlining a vision for responsible robotics—systems that not only guarantee physical safety, but also align their behavior with human values, comply with ethical and legal standards, and seamlessly integrate into sectors such as health care, transportation, and domestic environments.
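As background, the sketch below shows a standard safety-filter pattern from the control-barrier-function literature (a generic textbook construction, not Lyu's framework): minimally modify a nominal controller so a barrier condition is never violated.

```python
# Minimal safety-filter sketch (a standard control-barrier-function pattern,
# not Lyu's method): minimally modify a nominal control so the barrier
# h(x) >= 0 is never violated, for a 1-D single integrator x_dot = u.
import numpy as np

x_limit = 1.0                        # safe set: x <= 1, so h(x) = 1 - x
alpha = 2.0                          # class-K gain: how fast h may decay

def safety_filter(x, u_nom):
    """Closed-form solution of min (u - u_nom)^2  s.t.  h_dot >= -alpha * h.
    Here h(x) = 1 - x and h_dot = -u, so the constraint is u <= alpha * (1 - x).
    """
    u_max = alpha * (x_limit - x)
    return min(u_nom, u_max)         # intervene only when the task is unsafe

# Simulate a task controller that greedily drives toward x = 2 (unsafe).
x, dt = 0.0, 0.01
for _ in range(1000):
    u_nom = 2.0 * (2.0 - x)          # nominal controller, ignores safety
    x += safety_filter(x, u_nom) * dt
print(f"final state: {x:.3f} (stays within the safe set x <= 1)")
```

The "minimally modify" structure is what keeps such filters from being overly conservative: the nominal controller is untouched whenever it is already safe.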
Speaker Biography: Yiwei Lyu is a final-year PhD candidate in the Department of Electrical and Computer Engineering at Carnegie Mellon University, advised by Dr. John Dolan. Lyu’s research in robotics sits at the intersection of control theory, machine learning, and social science, with broad interests in safe control and verification, behavior planning, and human-robot interaction. Specifically, she develops principled methods to enable safe robot autonomy in multi-agent systems, supporting effective collaboration among robots and between robots and humans in uncertain, dynamic environments. Lyu is a recipient of the Qualcomm Innovation Fellowship and received an honorable mention for the Jane Street Graduate Research Fellowship. She was also recognized as an IEEE/ACM Human-Robot Interaction Pioneer and named a 2024 Massachusetts Institute of Technology Electrical Engineering and Computer Science Rising Star. Her work has earned best paper awards and nominations at multiple conferences, including the International Conference on Autonomous Agents and Multiagent Systems, the AAAI Conference on Artificial Intelligence, and the IFAC Workshop on Cyber-Physical Human Systems, as well as workshops at the IEEE International Conference on Robotics and Automation, the IEEE/RSJ International Conference on Intelligent Robots and Systems, and the International Joint Conference on Artificial Intelligence. Prior to joining Carnegie Mellon in 2019, Lyu earned her bachelor’s degree in electronic and information engineering from the Chinese University of Hong Kong, Shenzhen.
Computer Science and Center for Language and Speech Processing Seminar Series
March 10, 2025
Abstract: Pre-trained language models have brought revolutionary progress to information-seeking in the English world. While the advance is exciting, how to transfer such progress into non-English—and especially lower-resource—languages presents new challenges that require developing new resources and methodologies. In this talk, Xinyu “Crystina” Zhang will present her research on building effective information-seeking systems for non-English speakers. She will begin by introducing the benchmarks and datasets developed to support the evaluation and training of multilingual search systems. These resources have since become widely adopted within the community and have enabled the development of effective multilingual embedding models. The next part of her talk will share the best training practices found in such model development, including strategies for enhancing backbone models and surprising transfer effects across languages. Building on these foundations, Zhang’s work expanded to understand how language models process multilingual text and facilitate knowledge transfer across languages. Her talk will conclude with a vision for the future of multilingual language model development, with the goal of adapting these models to unseen languages with minimal data and resource requirements and thus bridging the gap for underrepresented linguistic communities.
Speaker Biography: Xinyu “Crystina” Zhang is a PhD candidate at the University of Waterloo, where she is advised by Professor Jimmy Lin. Zhang’s research focuses on enhancing search systems in multilingual scenarios, with works featured at top natural language processing and information retrieval conferences and journals such as Transactions of the Association for Computational Linguistics, ACM Transactions on Information Systems, the Meeting of the Association for Computational Linguistics, and the International ACM SIGIR Conference on Research and Development in Information Retrieval. Zhang has hosted competitions on multilingual retrieval in the 2022 ACM International Conference on Web Search and Data Mining Cup and the 2023 ACM Forum for Information Retrieval Evaluation, and received Outstanding Paper Awards at the 2024 Conference on Empirical Methods in Natural Language Processing and a Best Paper Award nomination at SIGIR 2024. She has interned at Google DeepMind, Cohere, the Max Planck Institute for Informatics, and NAVER. Prior to graduate school, she received her bachelor’s degree in computer science from the Hong Kong University of Science and Technology in 2020.
Computer Science Seminar Series
March 6, 2025
Abstract: Deep neural networks are becoming incredibly sophisticated; they can generate realistic images, engage in complex dialogues, analyze intricate data, and execute tasks that appear almost humanlike. But how do such models achieve these abilities? In this talk, Tamar Rott Shaham will present a line of work that aims to explain behaviors of deep neural networks. This includes a new approach for evaluating cross-domain knowledge encoded in generative models, tools for uncovering core mechanisms in large language models, and their behavior under fine-tuning. She will show how to automate and scale the scientific process of interpreting neural networks with the Automated Interpretability Agent, a system that autonomously designs experiments on models’ internal representations to explain their behaviors. Shaham will demonstrate how such understanding enables mitigating biases and enhancing models’ performance. The talk will conclude with a discussion of future directions, including developing universal interpretability tools and extending interpretability methods to automate scientific discovery.
Speaker Biography: Tamar Rott Shaham is a postdoctoral researcher at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory working with Antonio Torralba. Shaham earned her PhD from the Technion Faculty of Electrical and Computer Engineering, supervised by Tomer Michaeli. She has received several awards, including the 2019 International Conference on Computer Vision Marr Prize, a Google Women Techmakers Scholarship, an Adobe Research Fellowship, a Rothschild Fellowship, the VATAT Zuckerman Israeli Postdoctoral Scholarship, and the Schmidt Sciences Israeli Women’s Postdoctoral Award.
Computer Science Seminar Series
March 6, 2025
Abstract: Large language models have broadly revolutionized programming and software development. In this talk, Yangruibo “Robin” Ding will discuss his research on enabling LLMs to meet the real-world demands of software engineering. First, he will describe how we can improve LLMs’ code reasoning capabilities by training them with comprehensive program semantics, enhancing their effectiveness in code generation, runtime analysis, and self-debugging. Second, Ding will discuss how we can adapt LLMs for realistic programming practice, enabling these models to retrieve additional context, interact with symbolic tools to collect feedback, and iteratively refine their solutions. Third, he will introduce his efforts to develop code-embedding language models that represent program functionalities with vectors to support non-generative tasks, such as code search, clone retrieval, and vulnerability detection. Finally, Ding will envision the future of AI systems for software engineering, which will achieve the next level of automation in a more reliable, intelligent, and cost-efficient way.
Speaker Biography: Yangruibo “Robin” Ding is a PhD candidate in the Department of Computer Science at Columbia University. His research is at the intersection of software engineering and machine learning, focusing on developing large language models for code. Ding trains LLMs to generate, analyze, and refine software programs and constructs benchmarks to systematically evaluate LLMs in solving software engineering tasks. He also studies how to improve LLMs’ reasoning capabilities to tackle complex programming tasks such as debugging and patching. His interdisciplinary research has been published in top-tier conferences of software engineering, programming languages, natural language processing, and machine learning. His work has won an ACM SIGSOFT Distinguished Paper Award and was an IEEE Transactions on Software Engineering Best Paper Runner-Up; he has also received an IBM PhD Fellowship Award.
Computer Science Seminar Series
March 4, 2025
Abstract: Modern machine learning has transformed problem-solving across diverse fields, yet it faces challenges that require structure, logic, and planning—domains where traditional symbolic reasoning excels. A promising solution to this challenge lies in the neurosymbolic paradigm, which bridges the gap between perception and reasoning. In this talk, Jiani Huang will demonstrate the potential of this approach by applying it to build LASER, a state-of-the-art foundation model for video understanding. LASER leverages the vast availability of video captions as a valuable source of weak supervisory signals for learning fine-grained video semantics. The key insight underlying LASER is a symbolic module for aligning video-caption pairs wherein captions are formulated in a domain-specific language based on finite linear temporal logic and video is structured as a spatiotemporal scene graph. This alignment process is end-to-end differentiable, enabled by a symbolic checker implemented in Scallop, a programming language tailored for the neurosymbolic paradigm. The resulting approach enables us to efficiently train a video understanding model without the need for fine-grained video annotations. Huang will conclude by discussing the broader potential of the neurosymbolic paradigm in advancing safety-critical, verifiable, and real-world applications.
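The alignment idea can be sketched with soft semantics for temporal operators, as below. This is illustrative only: LASER's actual checker is implemented in Scallop over spatiotemporal scene graphs, with a richer domain-specific language.

```python
# Illustrative soft temporal-logic check in the spirit of LASER's alignment
# (details differ from the actual Scallop implementation). Per-frame concept
# probabilities come from a neural module; the caption compiles to a temporal
# formula whose soft score is differentiable, enabling end-to-end training.
import numpy as np

# p[t] = predicted probability that a predicate holds at frame t.
def soft_eventually(p):            # F phi: max over frames
    return np.max(p)

def soft_always(p):                # G phi: min over frames
    return np.min(p)

def soft_then(p, q):               # "phi then psi": phi at t, psi at some t' > t
    best = 0.0
    for t in range(len(p) - 1):
        best = max(best, min(p[t], np.max(q[t + 1:])))
    return best

# Caption "the person picks up the cup, then drinks" over a 6-frame clip:
p_pickup = np.array([0.1, 0.8, 0.9, 0.2, 0.1, 0.1])
p_drink  = np.array([0.0, 0.1, 0.2, 0.7, 0.9, 0.3])
score = soft_then(p_pickup, p_drink)
print(f"alignment score: {score:.2f}")  # high score = caption matches video
```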
Speaker Biography: Jiani Huang is a PhD candidate in computer and information science at the University of Pennsylvania. Her research focuses on neurosymbolic approaches, specifically: (1) the design and implementation of Scallop, a neurosymbolic programming language; and (2) its applications across diverse fields, including natural language processing, computer vision, and medicine. Through the neurosymbolic paradigm, Huang aims to develop AI solutions that are accurate, explainable, and efficient, addressing both theoretical challenges and real-world needs. Huang was recognized as a 2023 Rising Star in Electrical Engineering and Computer Science and was a visiting scholar at Meta AI from 2022 to 2023. Her work has been published in top conferences, including the ACM SIGPLAN Conference on Programming Language Design and Implementation, the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, the International Conference on Learning Representations, the AAAI Conference on Artificial Intelligence, and the Meeting of the Association for Computational Linguistics. Additionally, she coauthored a book on Scallop, which was published in the Foundations and Trends in Programming Languages series in 2024.
Computer Science and Center for Language and Speech Processing Seminar Series
March 3, 2025
Abstract: The paradigm of training large-scale foundation models has driven significant advancements in multimodal AI. However, pursuing further performance gains solely through model scaling is becoming impractical due to rising computational costs and resource limitations. Moreover, the reasoning and generation processes of these models remain mostly uninterpretable and uncontrollable, often leading to unfaithful outputs. In this talk, Jaemin Cho will discuss his efforts to make multimodal generative models more controllable and trustworthy without increasing their size. First, he will introduce faithful reasoning frameworks, in which the multimodal generation process mirrors how humans reason about and create content such as images and videos. Concretely, in these frameworks, models create a detailed plan that decomposes a complex generation task into simpler steps, as well as retrieve relevant information from multimodal knowledge bases before generating the final outputs. Next, Cho will describe fine-grained evaluation methods that assess model capabilities across multiple dimensions, such as object counting and spatial relation understanding, thereby providing a detailed understanding of the models’ strengths and weaknesses. In turn, these evaluations enable targeted model improvements that address identified weaknesses through test-time guidance or by updating training environments. Together, these directions offer a pathway toward more intelligent, reliable, and efficient multimodal AI models.
Speaker Biography: Jaemin Cho is a PhD candidate in the Department of Computer Science at the University of North Carolina at Chapel Hill. His research focuses on improving the reasoning capabilities in multimodal generation. His work has been featured at top conferences in computer vision (e.g., the Conference on Computer Vision and Pattern Recognition, the International Conference on Computer Vision, the European Conference on Computer Vision), natural language processing (e.g., Empirical Methods in Natural Language Processing, the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, the Conference on Language Modeling), and machine learning (e.g., the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, the International Conference on Learning Representations, the AAAI Conference on Artificial Intelligence). His work has been recognized through multiple oral/spotlight presentations and a Best Reviewer Award at NeurIPS, a Bloomberg Data Science PhD Fellowship, and media coverage (MIT Technology Review, IEEE Spectrum, and WIRED). He also has co-organized the T4V: Transformers for Vision workshop at CVPR 2023 and 2024.
Computer Science Seminar Series
February 27, 2025
Abstract: As AI technologies are increasingly transforming how we live, work, and communicate, AI evaluation must take a human-centered approach to realistically reflect real-world performance and impact. In this talk, Sunnie S. Y. Kim will discuss how to advance human-centered evaluation—and subsequently, responsible development of AI—by integrating knowledge and methods from AI and human-computer interaction. First, using explainable AI as an example, she will highlight the challenges and necessity of human (as opposed to automatic) evaluation. Second, she will illustrate the importance of contextualized evaluation with real users, revisiting key assumptions in explainable AI research. Finally, Kim will present empirical insights into human-AI interaction, demonstrating how users perceive and act upon common AI behaviors (e.g., large language models providing explanations and sources). She will conclude by discussing the implications of these findings and future directions for responsible AI development.
Speaker Biography: Sunnie S. Y. Kim is a PhD candidate in computer science at Princeton University advised by Olga Russakovsky. She works on responsible AI and human-AI interaction—specifically, on improving the explainability and fairness of AI systems and helping people have appropriate understanding of and trust in them. Her research has been published in both AI and human-computer interaction venues (e.g., the Conference on Computer Vision and Pattern Recognition; the European Conference on Computer Vision; the ACM Conference on Human Factors in Computing Systems; the ACM Conference on Fairness, Accountability, and Transparency), and she has organized multiple workshops connecting the two communities. She has been recognized by the NSF Graduate Research Fellowship Program, the Siebel Scholars program, and Rising Stars in Electrical Engineering and Computer Science, and has interned at Microsoft Research with the Fairness, Accountability, Transparency, and Ethics in AI group. Prior to graduate school, she received a BSc in statistics and data science from Yale University and spent a year at Toyota Technological Institute at Chicago.
Computer Science and Laboratory for Computational Sensing and Robotics Seminar Series
February 26, 2025
Abstract: Automating medical interventions such as surgery through robotics holds immense potential to revolutionize health care delivery by alleviating physician workload and extending critical treatments to underserved populations. Successful automation of medical interventions demands robots with three essential abilities: environment understanding with high precision (sensing), reliable manipulation in medical environments that guarantees patient safety and minimizes failures (planning), and continuous medical knowledge accumulation to operate in diverse clinical scenarios (adaptability). However, the complexity of medical environments, restricted sensor feedback, and stringent safety constraints pose challenges in developing autonomous robotic systems that match human professionals’ expertise. In this talk, Zih-Yun “Sarah” Chiu will introduce her research in robot sensing, planning, and adaptability that achieves precise localization, safe manipulation, and flexible learning in autonomous medical interventions. First, she will discuss her approach to surgical tool localization that leverages robot kinematics and object geometry to handle uncertainty while ensuring feasibility constraints. Second, she will present how her uncertainty-aware trajectory optimization framework generates reliable robot movements for surgical manipulation, even in noisy, unpredictable environments. Next, she will highlight her efforts in robot learning that enhance knowledge accumulation across multiple surgical and general manipulation tasks, improving learning efficiency and adaptability. Finally, Chiu will demonstrate the real-world impact of her research by showcasing two autonomous medical applications: suturing, a fundamental surgical procedure, and human repositioning for medical evacuation. This talk will conclude with promising future directions in autonomous medical interventions.
Speaker Biography: Zih-Yun “Sarah” Chiu is a PhD candidate in electrical and computer engineering at the University of California, San Diego, where she works with Associate Professor Michael Yip in the Advanced Robotics and Controls Lab. Her research interests lie in high-precision robot autonomy for medical applications, including surgery and human evacuation in search-and-rescue scenarios. Chiu has developed localization, planning, and robot learning techniques that enable robots to achieve precise perception, safe manipulation, and efficient learning in complex medical environments. Her work has been recognized with a Best Paper Award in Medical Robotics at the 2023 IEEE International Conference on Robotics and Automation (ICRA) and a Best Poster Award at the 2023 ICRA Workshop, New Evolutions in Surgical Robotics. In 2024, Chiu was honored as a Rising Star in Electrical Engineering and Computer Science.
Computer Science Seminar Series
February 25, 2025
Abstract: Disasters like wildfires and wars are increasing in frequency and severity, creating environments where chaos reigns. In these moments, AI holds the potential to revolutionize disaster response—helping first responders stay safe, saving lives, and guiding critical decision-making. Yet current AI systems often fail when faced with the realities of such environments; they assume clean data from reliable sensors, predictable conditions, and well-defined tasks—assumptions that collapse in the face of noisy inputs, shifting contexts, and incomplete information. In this talk, Ritwik Gupta will present a vision for building AI systems that thrive in these complex, high-stakes scenarios. He will explore the core challenges: working with gigapixel images that defy traditional compute paradigms, understanding data from non-visible modalities like synthetic aperture radar, integrating multimodal information from disparate sensors, and making sense of rapidly changing conditions. Tackling these challenges requires fundamentally rethinking AI architectures to account for scalability, adaptability, and robustness—whether by introducing physics-aware models, sensor-in-the-loop designs, or multimodal systems capable of reasoning over fragmented and noisy inputs. Beyond technical challenges, Gupta will discuss how AI policy must evolve to bridge the gap between civilian and military applications. By addressing regulatory bottlenecks, dual-use technologies can be deployed responsibly and equitably in both disaster response and defense scenarios. This dual approach—spanning foundational AI research and policy innovation—will help unlock the potential of AI in the world’s most chaotic environments.
Speaker Biography: Ritwik Gupta is a PhD candidate at the University of California, Berkeley; the technical director for autonomy at the Defense Innovation Unit; and an advisor to the FBI on AI and AI policy. His research focuses on computer vision in complex and chaotic environments, as well as the policy implications of integrating dual-use AI into both civilian and military contexts. Gupta’s work has found widespread use in tasks such as assessing building damage after the 2023 Turkey-Syria earthquake and detecting and interdicting criminals engaged in illegal activities on the ocean. His research has been widely covered in press outlets such as TIME, the Wall Street Journal, and CNN. Gupta is a graduate fellow with the Berkeley Risk and Security Lab, a research fellow at the Berkeley Human Rights Center, and an AI policy fellow at the Center for Security in Politics. He previously led a research lab focused on AI for humanitarian assistance and disaster response at Carnegie Mellon University and investigated real-time machine learning for the Apple Vision Pro.
Computer Science and Center for Language and Speech Processing Seminar Series
February 24, 2025
Abstract: Language models are highly effective at understanding and generating text, holding immense potential as intuitive, personalized interfaces for accessing information. Expanding their ability to gather and synthesize large volumes of information will further unlock transformative applications, ranging from generative search engines to AI literature assistants. In this talk, Tianyu Gao will present his research on advancing LMs for information processing at scale. First, he will present his evaluation framework for LM-based information-seeking systems, emphasizing the importance of providing citations for verifying model-generated answers. His evaluation highlights shortcomings in LMs’ abilities to reliably process long-form texts (e.g., dozens of webpages), which he addresses by developing state-of-the-art long-context LMs that outperform leading industry efforts while using a small fraction of the computational budget. Gao will then introduce his foundational work on using contrastive learning to produce performant text embeddings, which form the cornerstone of effective and scalable search. In addition to building systems that can process large-scale information, he will discuss his contributions to creating efficient pre-training and adaptation methods for LMs, which enable scalable deployment of LM-powered applications across diverse settings. Finally, Gao will share his vision for the next generation of autonomous information processing systems and outline the foundational challenges that must be addressed to realize this vision.
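The contrastive objective behind such embedding models is compact enough to sketch; below is the standard InfoNCE loss with in-batch negatives (the specifics of Gao's models differ, and the data here is synthetic).

```python
# Sketch of the contrastive objective used to train text embedders (standard
# InfoNCE with in-batch negatives; illustrative, not Gao's exact recipe).
# Each query is pulled toward its positive passage and pushed away from the
# other passages in the batch.
import numpy as np

def info_nce(q, p, temperature=0.05):
    """q, p: (batch, dim) L2-normalized query/passage embeddings, row-aligned."""
    sims = q @ p.T / temperature           # (batch, batch) cosine similarities
    sims -= sims.max(axis=1, keepdims=True)
    log_softmax = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))  # correct pair sits on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 32)); q /= np.linalg.norm(q, axis=1, keepdims=True)
p = q + 0.1 * rng.normal(size=(8, 32))    # positives: noisy copies of queries
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(f"loss: {info_nce(q, p):.3f}")      # low when matched pairs are closest
```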
Speaker Biography: Tianyu Gao is a fifth-year PhD student in the Department of Computer Science at Princeton University, advised by Danqi Chen. His research focuses on developing principled methods for training and adapting language models, many of which have been widely adopted across academia and industry. Driven by transformative applications such as using language models as information-seeking tools, his work also advances robust evaluation and fosters a deeper understanding to guide the future development of language models. Gao led the first workshop on long-context foundation models at the 2024 International Conference on Machine Learning, won an outstanding paper award at the 2022 Annual Meeting of the Association for Computational Linguistics, and received an IBM PhD Fellowship Award in 2023. He received his BEng from Tsinghua University in 2020.
Computer Science, Electrical and Computer Engineering, and Laboratory for Computational Sensing and Robotics Seminar Series
February 19, 2025
Abstract: From autonomous vehicles navigating busy intersections to quadrupeds deployed in household environments, robots must operate safely and efficiently around people in uncertain and unstructured situations. However, today’s robots still struggle to robustly handle low-probability events without becoming overly conservative. In this talk, Haimin Hu will discuss how planning in the joint space of physical and information states (e.g., beliefs) enables robots to make safe, adaptive decisions in human-centered scenarios. He will begin by introducing a unified safety filter framework that combines robust safety analysis with probabilistic reasoning to enable trustworthy human–robot interaction. He will discuss how robots can reduce conservativeness without compromising safety by closing the interaction–learning loop. Next, Hu will show how game-theoretic reinforcement learning tractably synthesizes a safety filter for high-dimensional systems, guarantees training convergence, and reduces the policy’s exploitability. Finally, he will present an algorithmic approach to scaling up game-theoretic planning for resolving conflicts and optimizing social welfare for strategic interactions involving many agents. Hu will conclude with a vision for next-generation human-centered robotic systems that actively align with their human peers and enjoy verifiable safety assurances.
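To make the safety-filter idea above concrete, here is a minimal sketch of least-restrictive filtering: the robot executes its task policy unless a safety value function flags the proposed action, in which case a fallback controller takes over. All functions and the belief argument are hypothetical placeholders, not Hu’s published framework.

```python
def safety_filter(state, belief, task_policy, safe_policy, safety_value,
                  threshold: float = 0.0):
    """Return an action given the physical state and a belief over the human.

    Least-restrictive filtering: intervene only when the task action would
    drive the certified safety value below the threshold.
    """
    u_task = task_policy(state, belief)
    if safety_value(state, belief, u_task) < threshold:
        return safe_policy(state, belief)  # minimally invasive override
    return u_task
```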
Speaker Biography: Haimin Hu is a final-year PhD candidate in electrical and computer engineering at Princeton University. His research integrates dynamic game theory with control systems safety and reinforcement learning to enable trustworthy human–robot interaction. Prior to his doctoral studies, Hu received his MSE degree in electrical engineering from the University of Pennsylvania in 2020 and a BE degree in electronic and information engineering from ShanghaiTech University in 2018. From 2017 to 2018, he was a visiting student in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Hu has worked at the Toyota Research Institute, the Honda Research Institute, and the National Institute for Nuclear Physics in Padova, Italy, and he currently serves as an associate editor for IEEE Robotics and Automation Letters. In 2024, Hu was named a Human–Robot Interaction Pioneer by the Institute of Electrical and Electronics Engineers and the ACM.
Recording to come.
Computer Science Seminar Series
February 18, 2025
Abstract: Modern deep learning has achieved remarkable results, but the design of training methodologies largely relies on guess-and-check approaches. Thorough empirical studies of recent massive language models are prohibitively expensive, underscoring the need for theoretical insights, but classical machine learning theory struggles to describe modern training paradigms. In this talk, Sadhika Malladi will present a novel approach to developing prescriptive theoretical results that can directly translate to improved training methodologies for LMs. Her research has yielded actionable improvements in model training across the LM development pipeline; for example, her theory motivates the design of MeZO, a fine-tuning algorithm that reduces memory usage by up to 12x and halves the number of GPU hours required. Throughout the talk, to underscore the prescriptiveness of her theoretical insights, Malladi will demonstrate the success of these theory-motivated algorithms in novel empirical settings published after the theory.
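The abstract names MeZO; as background, the sketch below illustrates the two-forward-pass, SPSA-style zeroth-order update that such memory-efficient fine-tuning builds on, where reusing a random seed avoids ever storing the perturbation vector. This is a simplified illustration, not the published implementation.

```python
import torch

def spsa_step(params, loss_fn, lr=1e-6, eps=1e-3, seed=0):
    """One zeroth-order update: perturb in place, evaluate twice, step.

    No backward pass is run, so memory stays close to inference cost;
    the perturbation z is regenerated from `seed` instead of stored.
    """
    params = list(params)
    torch.manual_seed(seed)
    for p in params:                       # theta + eps * z
        p.data.add_(torch.randn_like(p), alpha=eps)
    loss_plus = float(loss_fn())
    torch.manual_seed(seed)
    for p in params:                       # theta - eps * z
        p.data.add_(torch.randn_like(p), alpha=-2 * eps)
    loss_minus = float(loss_fn())
    grad_scale = (loss_plus - loss_minus) / (2 * eps)  # projected gradient
    torch.manual_seed(seed)
    for p in params:                       # restore theta, then step along z
        z = torch.randn_like(p)
        p.data.add_(z, alpha=eps)
        p.data.add_(z, alpha=-lr * grad_scale)
```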
Speaker Biography: Sadhika Malladi is a final-year PhD student in computer science at Princeton University, advised by Sanjeev Arora. Her research advances deep learning theory to capture modern-day training settings, yielding practical training improvements and meaningful insights into model behavior. She has co-organized multiple workshops, including Mathematical and Empirical Understanding of Foundation Models at the 2024 International Conference on Learning Representations and Mathematics for Modern Machine Learning at the 2024 Conference on Neural Information Processing Systems. Malladi was recently named a 2025 Siebel Scholar.
Computer Science Seminar Series
February 13, 2025
Abstract: Neurosymbolic programming combines the otherwise complementary worlds of deep learning and symbolic reasoning, enabling AI solutions that are more accurate, interpretable, and domain-aware. In this talk, Ziyang Li will present Scallop, a programming language and compiler toolchain designed for building neurosymbolic applications. Scallop allows developers to specify a suitable decomposition of learning and reasoning modules. Learning modules integrate seamlessly with modern machine learning frameworks, leveraging everything from custom neural networks to large foundation models for language, vision, and multimodal data. Reasoning modules are specified declaratively, supporting expressive logical patterns, probabilistic inference, and differentiable programming. Li will demonstrate how Scallop simplifies the development of neurosymbolic applications across diverse domains, including image and video analysis, natural language processing, cybersecurity, and bioinformatics. He will conclude with future research directions to advance neurosymbolic programming, addressing the increasing demands of safety-critical, complex, and real-world AI challenges.
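To illustrate the learning/reasoning decomposition described above, here is a minimal sketch of the classic two-digit-sum example often used for neurosymbolic systems: neural classifiers output distributions over digits, and an exact probabilistic-reasoning step implements the symbolic rule. The sketch is plain Python for readability; actual Scallop programs state the rule declaratively, in Datalog style, with differentiable semantics.

```python
import torch

def digit_sum_distribution(probs_a: torch.Tensor,
                           probs_b: torch.Tensor) -> torch.Tensor:
    """probs_a, probs_b: (10,) digit distributions from neural classifiers.

    Implements the rule  sum(c) :- digit_a(a), digit_b(b), c == a + b
    by exact enumeration, returning a (19,) distribution over sums 0..18.
    """
    out = [probs_a.new_zeros(()) for _ in range(19)]
    for a in range(10):
        for b in range(10):
            out[a + b] = out[a + b] + probs_a[a] * probs_b[b]
    return torch.stack(out)  # differentiable, so supervision on the sum
                             # can train both digit classifiers end to end
```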
Speaker Biography: Ziyang Li is a PhD candidate in computer science at the University of Pennsylvania. His research focuses on neurosymbolic programming, an emerging paradigm that aims to combine the benefits of deep learning and logical reasoning. During his PhD, he developed Scallop, a neurosymbolic programming language and compiler toolchain. Scallop has been used to develop diverse applications in the domains of computer vision, cybersecurity, natural language processing, clinical decision-making, and bioinformatics. Li was awarded an Amazon Web Services Fellowship in 2023 for his research on trustworthy AI, and his work has been recognized at leading conferences such as the ACM SIGPLAN Conference on Programming Language Design and Implementation, the Conference and Workshop on Neural Information Processing Systems, the International Conference on Learning Representations, the International Conference on Machine Learning, USENIX Security, and the IEEE Symposium on Security and Privacy. He authored a book on Scallop, which was published in the Foundations and Trends in Programming Languages series in 2024.
Computer Science Seminar Series
January 23, 2025
Abstract: Realizing precision and equity in health care requires developing robust molecular and clinical models that generalize across studies. This talk will present breakthroughs in robust data analysis that enabled molecular and clinical discoveries, including novel regulatory-like naïve T cells linked to aging, as well as variation in type 2 inflammation linked to asthma drug response.
Speaker Biography: Elior Rahmani is an assistant adjunct professor of computational medicine at the University of California, Los Angeles. He earned his PhD in computer science from UCLA in 2020 and completed his postdoctoral training at UC Berkeley from 2020 to 2022. His research focuses on developing and applying novel machine learning and statistical methods to better understand the molecular and clinical heterogeneity of complex diseases. The ultimate goal of his work is to systematically identify clinically actionable patient subgroups, enabling tailored treatments and advancing precision and equity in medicine. He is the principal investigator of a National Human Genome Research Institute-funded R21 award and a pending National Institute of General Medical Sciences R35 award.
Archive
From the calendar years 1997–2024.
- Fall 2024
- Spring 2024
- Fall 2023
- Spring 2023
- Fall 2022
- Summer 2022
- Spring 2022
- Fall 2021
- Summer 2021
- Spring 2021
- Fall 2020
- Spring 2020
- Fall 2019
- Summer 2019
- Spring 2019
- Fall 2018
- Summer 2018
- Spring 2018
- Fall 2017
- Summer 2017
- Spring 2017
- Fall 2016
- Summer 2016
- Spring 2016
- Fall 2015
- Spring 2015
- Fall 2014
- Spring 2014
- Fall 2013
- Spring 2013
- Fall 2012
- Spring 2012
- Fall 2011
- Spring 2011
- Fall 2010
- Spring 2010
- Fall 2009
- Spring 2009
- Fall 2008
- Spring 2008
- Fall 2007
- Spring 2007
- Fall 2006
- Spring 2006
- Fall 2005
- Spring 2005
- Fall 2004
- Spring 2004
- Fall 2003
- Spring 2003
- Fall 2002
- Spring 2002
- Spring 2001
- Fall 2000
- Spring 2000
- Fall 1999
- Spring 1999
- Fall 1998
- Spring 1998
- Fall 1997
- Summer 1997
- Spring 1997