Technical Program

Paper Detail

Paper: PS-2B.41
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Selective enhancement of object representations through multisensory integration
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: David Tovar, Vanderbilt University, United States; Micah Murray, University of Lausanne, Switzerland; Mark Wallace, Vanderbilt University, United States
Abstract: Objects are the fundamental building blocks of how we represent the external world. These objects come in a variety of forms, with one major distinction being between those that are animate versus inanimate. Many objects are specified in a multisensory manner, yet how multisensory objects are represented by the brain, particularly those that are animate versus inanimate, remains poorly understood. Using representational similarity analysis of human EEG signals, we show that the often-found advantages for the processing of animate objects are no longer evident when objects are presented in a multisensory context. Neural decoding was enhanced asymmetrically for inanimate objects, which were more weakly decoded under unisensory conditions. A distance-to-bound analysis provided critical links between neural decoding and behavior. Improved neural decoding for visual and audiovisual objects was associated with faster behavior, and decoding differences between visual and audiovisual objects predicted reaction time differences between them. Collectively, these findings show that the neural representational space and the encoding of objects are malleable and distinct under unisensory and more real-world multisensory contexts.
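The core step of representational similarity analysis mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the condition count, channel count, and random data are assumptions, and the dissimilarity measure (1 minus Pearson correlation) is one common choice among several.

```python
import numpy as np

# Hypothetical RSA sketch: build a representational dissimilarity
# matrix (RDM) from simulated EEG response patterns at one time point.
rng = np.random.default_rng(0)
n_conditions = 4   # e.g., animate/inanimate x visual/audiovisual (assumed)
n_channels = 64    # simulated EEG channels (assumed)

# One pattern (vector of channel amplitudes) per stimulus condition
patterns = rng.standard_normal((n_conditions, n_channels))

# RDM entry (i, j) = 1 - Pearson correlation between patterns i and j;
# similar neural patterns yield values near 0, dissimilar ones near 2.
rdm = 1.0 - np.corrcoef(patterns)
```

In a full analysis, such RDMs are typically computed at each time point and compared across conditions or against model RDMs; here only the single-time-point construction is shown.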