
Abstract
Large language models have achieved remarkable progress by scaling training data and model sizes. However, they continue to face critical limitations, including hallucinations and outdated knowledge, which hinder their reliability—especially in expert domains such as scientific research and software development. In this talk, Akari Asai will argue that addressing these challenges requires moving beyond monolithic LMs toward augmented LMs—a new AI paradigm that designs, trains, and deploys LMs alongside complementary modules to enhance reliability and efficiency. Focusing on her research on retrieval-augmented LMs, one of the most impactful and widely adopted forms of augmented LMs today, Asai will begin by presenting systematic analyses of current LM shortcomings and demonstrating how retrieval augmentation offers a more scalable and effective path forward. She will then discuss her work on establishing new foundations for these systems, including novel training approaches and retrieval mechanisms that enable LMs to dynamically adapt to diverse inputs. Finally, she will showcase the real-world impact of such models through OpenScholar, her group's fully open retrieval-augmented LM for assisting scientists in synthesizing literature, now used by over 30,000 researchers and practitioners worldwide. Asai will conclude by outlining her vision for the future of augmented LMs, emphasizing advances in handling heterogeneous modalities, more efficient and flexible integration with diverse components, and rigorous evaluation through interdisciplinary collaboration.
Speaker Biography
Akari Asai is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on overcoming the limitations of large language models by developing advanced systems such as retrieval-augmented LMs and applying them to real-world challenges, including scientific research and underrepresented languages. Her contributions have been widely recognized, earning multiple paper awards at top natural language processing and machine learning conferences, an IBM PhD Fellowship Award, and industry grants. Asai was also named a 2022 Electrical Engineering and Computer Science Rising Star and one of MIT Technology Review's Innovators Under 35 in Japan. Her work has been featured in outlets such as Forbes and MIT Technology Review. Beyond her research, Asai actively contributes to the NLP and ML communities as a co-organizer of high-impact tutorials and workshops, including the first tutorial on retrieval-augmented LMs at the 2023 Meeting of the Association for Computational Linguistics (ACL), as well as workshops on multilingual information access (2022 Conference of the North American Chapter of the ACL) and knowledge-augmented NLP (NAACL 2025).