Video is becoming a core medium for communicating a wide range of content, including educational lectures, vlogs, and how-to tutorials. While videos are engaging and informative, they lack the familiar and useful affordances of text for browsing, skimming, and flexibly transforming information. This severely limits who can interact with video content and how, makes editing a laborious process, and means that much of the information in videos is not accessible to everyone.
But what future systems will make videos useful for all users?
In this talk, I’ll share my work creating interactive Human-AI systems that leverage multiple mediums of communication (e.g., text, video, and audio) across two main research areas: 1) helping domain experts surface content of interest through interactive video abstractions, and 2) making videos non-visually accessible through new interactions for video accessibility. First, I will share core challenges of seeking information in videos, drawn from interviews with domain experts. Then, I will present new interactive systems that leverage AI, along with evaluations that demonstrate their efficacy. I will conclude with how hybrid HCI-AI breakthroughs will make digital communication more effective and accessible in the future, and how new interactions can help us realize the full potential of recent AI/ML advances.
Speaker Biography
Amy Pavel is a Postdoctoral Fellow at the CMU HCII and a Research Scientist in AI/ML at Apple. Her research explores how interactive tools, augmented with machine learning techniques, can make digital communication more effective and accessible. She has published her work at UIST, CHI, ASSETS, and other ACM/IEEE venues. She previously received her Ph.D. in Computer Science from UC Berkeley, where her work was supported by an NDSEG fellowship.