
Ziang Xiao, an assistant professor of computer science and a member of the Johns Hopkins Data Science and AI Institute, will tackle critical safety issues in AI as part of Schmidt Sciences’ new AI Safety Science program. Xiao will lead one of 27 projects selected for the $10 million program’s inaugural cohort seeking to develop well-founded, concrete, and implementable technical methods for testing and evaluating large language models (LLMs) so that they are less likely to cause harm, make errors, or be misused.  

By fostering a collaborative global research community and offering computational support from the Center for AI Safety and API access from OpenAI, the AI Safety Science program aims to deepen our understanding of the safety of systems built with LLMs and make this science an integral part of AI innovation. The program is also designed to develop robust tools to measure and evaluate risks, provide funding to support long-term research, and elevate underutilized academic expertise in AI.

Xiao’s research goal is to study and design human-AI interaction to understand humans at scale. His current research focuses on three topics: AI for social science, information seeking, and human-centered model evaluation. In the last of these areas, Xiao aims to advance the science of AI evaluation and develop human-centered, robust, and contextualized evaluation methods for language technologies.

As part of the AI Safety Science program, Xiao will build a measurement-theory-grounded framework and tools to identify gaps in current AI benchmarking research and to scaffold the creation of novel AI benchmarks. He will collaborate with Susu Zhang, an assistant professor of psychology and statistics at the University of Illinois Urbana-Champaign.

The program’s awardees will convene in California later this year to share their work with one another and outside organizations interested in AI safety.

Learn more about the AI Safety Science program here.