If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality, an instance of cross-modal transfer of knowledge. How is this accomplished? The Multisensory Hypothesis states that people extract the intrinsic, modality-independent properties of objects and events and represent these properties in multisensory representations. These representations mediate the transfer of knowledge across modality-specific representations. In this talk, I’ll present three studies of the Multisensory Hypothesis, using experimental and computational methodologies. The first study examines visual-haptic transfer of object shape knowledge, the second examines visual-auditory transfer of sequence category knowledge, and the final study examines a novel latent variable model of multisensory perception.
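To make the idea of a shared, modality-independent representation concrete, the sketch below illustrates one way a latent variable formulation can capture cross-modal transfer: a single latent vector generates both a visual and a haptic observation through modality-specific mappings, so an estimate of the latent variable obtained from one modality constrains predictions about the other. This is an illustrative linear-Gaussian toy, not the speaker's model; the dimensions, mappings, and noise level are arbitrary assumptions.

```python
# Minimal sketch of a shared-latent-variable view of cross-modal transfer.
# Illustrative only: a linear-Gaussian toy model, not the model presented in the talk.
import numpy as np

rng = np.random.default_rng(0)

d_z, d_vis, d_hap = 3, 8, 5            # latent and modality-specific dimensions (arbitrary)
W_vis = rng.normal(size=(d_vis, d_z))  # visual generative mapping (assumed known here)
W_hap = rng.normal(size=(d_hap, d_z))  # haptic generative mapping (assumed known here)
noise_var = 0.1

# Generative model: z ~ N(0, I); x_m = W_m z + Gaussian noise, for each modality m.
z_true = rng.normal(size=d_z)
x_vis = W_vis @ z_true + rng.normal(scale=np.sqrt(noise_var), size=d_vis)

# Cross-modal inference: posterior mean of z given only the visual observation.
# For this linear-Gaussian model, E[z | x_vis] = (W'W / s^2 + I)^{-1} W' x_vis / s^2.
precision = W_vis.T @ W_vis / noise_var + np.eye(d_z)
z_hat = np.linalg.solve(precision, W_vis.T @ x_vis / noise_var)

# Transfer: predict the haptic observation from the latent estimate alone.
x_hap_pred = W_hap @ z_hat
x_hap_true = W_hap @ z_true
print("correlation between predicted and noiseless haptic features:",
      np.corrcoef(x_hap_pred, x_hap_true)[0, 1])
```

Because both modalities are explained by the same latent variable, knowledge acquired through vision (an estimate of the latent) immediately supports expectations about touch; the studies in the talk ask whether human cross-modal transfer behaves in this manner.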
Speaker Biography
For my undergraduate studies, I attended the University of Pennsylvania, where I majored in Psychology. I spent the next two years working as a Research Assistant in a biomedical research laboratory at Rockefeller University. For graduate school, I attended the University of Massachusetts at Amherst, where I earned a Ph.D. in Computer and Information Science (graduate advisor: Andrew Barto). I then held two postdoctoral positions, one in the Department of Brain & Cognitive Sciences at the Massachusetts Institute of Technology (postdoctoral advisor: Michael Jordan) and the other in the Department of Psychology at Harvard University (postdoctoral advisor: Stephen Kosslyn). I’m currently a faculty member at the University of Rochester, where I am Professor of Brain & Cognitive Sciences, of Computer Science, and of the Center for Visual Science. I am also a member of the Center for Computation and the Brain.