Abstract
Automated decision-making systems are increasingly deployed in areas with personal and societal impact, creating a growing interest in, and need for, AI and ML systems that are robust, explainable, and fair. It is important to note that such guarantees only hold with respect to a particular model of the world, which carries inherent uncertainties. In this talk, I will present how probabilistic modeling and reasoning, by explicitly incorporating probability distributions, offer a principled way to handle different kinds of uncertainty when learning and deploying trustworthy AI systems. For example, when learning classifiers, the labels in the training data may be biased; I will show that probabilistic circuits, a family of tractable probabilistic models, can effectively enforce and audit fairness properties by explicitly modeling a latent, unbiased label. I will also discuss recent breakthroughs in tractable inference for more complex queries, including information-theoretic quantities, which demonstrate the full potential of probabilistic reasoning. Finally, I will conclude with my future work towards a framework for more flexibly reasoning about and enforcing trustworthy behavior in AI/ML systems.
Speaker Biography
YooJung Choi is a Ph.D. candidate in Computer Science at the University of California, Los Angeles, advised by Guy Van den Broeck. Her research lies broadly in the areas of artificial intelligence and machine learning, with a focus on probabilistic modeling and reasoning for automated decision-making. In particular, she is interested in the theory and algorithms of tractable probabilistic inference, and in applying these results to address fairness, robustness, explainability, and, more generally, to work towards trustworthy AI/ML. YooJung is a recipient of a UCLA fellowship for 2021-2022 and was selected for the Rising Stars in EECS workshop in 2020.