Refreshments are available starting at 10:30 a.m. The seminar will begin at 10:45 a.m.
Abstract
The ever-increasing scale of foundation models, such as ChatGPT and AlphaFold, has revolutionized AI and science more generally. However, increasing scale also steadily raises computational barriers, blocking almost everyone from studying, adapting, or otherwise using these models for anything beyond static API queries. In this talk, Tim Dettmers will present research that significantly lowers these barriers across a wide range of use cases: inference algorithms used to make predictions after training, fine-tuning approaches that adapt a trained model to new data, and, finally, full training of foundation models from scratch. For inference, he will describe the LLM.int8() algorithm, which showed how to enable high-precision 8-bit matrix multiplication that is both fast and memory efficient. LLM.int8() is based on the discovery and characterization of sparse outlier sub-networks that emerge only at large model scales but are crucial for effective Int8 quantization. For fine-tuning, he will introduce the QLoRA algorithm, which pushes such quantization much further to unlock fine-tuning of very large models on a single GPU by updating only a small set of parameters while keeping most of the network in a new information-theoretically optimal 4-bit representation. For full training, he will present SWARM parallelism, which allows collaborative training of foundation models across continents on standard internet infrastructure while remaining 80% as effective as the prohibitively expensive supercomputers that are currently used. Finally, he will close by outlining his plans to make foundation models 100x more accessible, which will be needed to maintain truly open, AI-based scientific innovation as models continue to scale.
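
For readers who want to try the inference side ahead of the talk, the sketch below shows how 8-bit (LLM.int8()-style) weight loading is typically exposed through the bitsandbytes integration in Hugging Face transformers. The checkpoint name and prompt are placeholders, and exact arguments vary by library version; this is an illustration, not material from the talk itself.

```python
# Minimal sketch: 8-bit (LLM.int8()-style) inference via bitsandbytes + transformers.
# The checkpoint name and prompt below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/opt-6.7b"  # any causal LM checkpoint could stand in here
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Weights are stored in Int8; the rare outlier feature dimensions are handled in
# 16-bit internally, which is what keeps the quantized predictions accurate.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("Quantization lets us", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```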
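
Similarly, a rough sketch of a QLoRA-style setup as exposed by the transformers/peft/bitsandbytes stack: the base weights stay frozen in 4-bit NF4 while only small low-rank adapter matrices are trained. The checkpoint, LoRA rank, and target modules are illustrative assumptions, not values from the talk.

```python
# Rough sketch of a QLoRA-style setup: frozen 4-bit (NF4) base weights plus
# trainable low-rank adapters. Names and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # information-theoretically motivated 4-bit format
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",                  # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which projections to adapt is a design choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the small adapter weights are trainable
```

With this arrangement, the memory footprint is dominated by the frozen 4-bit base weights, which is what makes single-GPU fine-tuning of very large models feasible.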
Speaker Biography
Tim Dettmers’ research focuses on making foundation models, such as ChatGPT, accessible to researchers and practitioners by reducing their resource requirements. This involves developing novel compression and networking algorithms and building systems that allow for memory-efficient, fast, and cheap deep learning. These methods enable many more people to use, adapt, or train foundation models without affecting the quality of AI predictions or generations. Dettmers is a PhD candidate at the University of Washington and has won oral, spotlight, and best paper awards at conferences such as the International Conference on Learning Representations (ICLR) and the Conference and Workshop on Neural Information Processing Systems (NeurIPS). He created the bitsandbytes library for efficient deep learning, which is growing at a rate of 1.4 million installations per month, and has received Google Open Source and PyTorch Foundation awards.