Faculty Research Advisor: Benjamin Van Durme
Abstract: Our common-sense knowledge about objects includes their typical visual attributes; for example, we know that bananas are typically yellow or green, not purple. Text and image corpora, being subject to reporting bias, represent this world knowledge with varying degrees of faithfulness. In this paper, we investigate to what degree unimodal (language-only) and multimodal (image and language) models capture a broad range of visually salient attributes. To this end, we create the Visual Commonsense Tests (ViComTe) dataset, covering five property types (color, shape, material, size, and visual co-occurrence) for over 5,000 subjects. We validate this dataset by showing that our grounded color data correlates much better with crowdsourced color judgments provided by Paik et al. (2021) than ungrounded text-only data does. We then use our dataset to evaluate pretrained unimodal and multimodal models. Our results indicate that multimodal models better reconstruct attribute distributions, but are still subject to reporting bias. Moreover, increasing model size does not enhance performance, suggesting that the key to visual common sense lies in the data themselves.
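The validation step above compares two distributions over attribute values (e.g., how often a subject is described as yellow vs. green vs. purple) and asks how well they correlate. A minimal sketch of one common way to score this, Spearman rank correlation, is shown below; the color lists and probability numbers are hypothetical illustrations, not values from ViComTe or Paik et al. (2021):

```python
# Sketch: Spearman rank correlation between a model's predicted color
# distribution and a crowdsourced one (all numbers are hypothetical).

def rank(values):
    """Return 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Pearson correlation of the rank vectors of xs and ys."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical distributions over colors for the subject "banana":
colors = ["yellow", "green", "brown", "purple"]
model_dist = [0.55, 0.30, 0.10, 0.05]  # model-predicted probabilities
human_dist = [0.60, 0.25, 0.12, 0.03]  # crowdsourced judgments
print(round(spearman(model_dist, human_dist), 3))  # → 1.0 (same ranking)
```

Rank correlation is a natural fit here because it rewards getting the ordering of attributes right (yellow above purple) without requiring the exact probabilities to match.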