Selective Enhancement of Object Representations through Multisensory Integration.

  • Tovar, David A1, 2
  • Murray, Micah M3, 4, 5, 6
  • Wallace, Mark T7, 2, 6, 8, 9, 10
  • 1 School of Medicine, Vanderbilt University, Nashville, Tennessee 37240 [email protected]
  • 2 Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240.
  • 3 The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), 1011 Lausanne, Switzerland.
  • 4 Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, 1015 Lausanne, Switzerland.
  • 5 Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, 1002 Lausanne, Switzerland.
  • 6 Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240.
  • 7 School of Medicine, Vanderbilt University, Nashville, Tennessee 37240.
  • 8 Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240.
  • 9 Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240.
  • 10 Department of Pharmacology, Vanderbilt University, Nashville, Tennessee 37240.
Published Article
Journal of Neuroscience
Society for Neuroscience
Publication Date
Jul 15, 2020
DOI: 10.1523/JNEUROSCI.2139-19.2020
PMID: 32499378


Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate versus those that are inanimate. In addition, many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects, which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively.
Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

SIGNIFICANCE STATEMENT Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses. Copyright © 2020 the authors.
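The core method named in the abstract, representational similarity analysis (RSA), compares the geometry of neural responses across conditions: for each condition, pairwise dissimilarities between object exemplars are collected into a representational dissimilarity matrix (RDM), and RDMs from different conditions are then correlated. The sketch below is a minimal, self-contained illustration of that idea on simulated data, not the authors' pipeline; the pattern matrices, noise level, and the use of correlation distance are all assumptions for demonstration.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each pair of exemplars."""
    return 1.0 - np.corrcoef(patterns)

def upper_tri(m):
    """Off-diagonal upper-triangle entries: the vector usually used
    when correlating two RDMs."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

rng = np.random.default_rng(0)
n_exemplars, n_channels = 6, 32

# Hypothetical unisensory response patterns (exemplars x EEG channels).
visual = rng.standard_normal((n_exemplars, n_channels))

# Hypothetical audiovisual patterns: same underlying structure plus a
# little noise, standing in (loosely) for the sharper multisensory encoding
# the study reports.
audiovisual = visual + 0.1 * rng.standard_normal((n_exemplars, n_channels))

rdm_v = rdm(visual)
rdm_av = rdm(audiovisual)

# Similarity of the two representational geometries
# (Pearson r between the RDMs' upper-triangle vectors).
r = np.corrcoef(upper_tri(rdm_v), upper_tri(rdm_av))[0, 1]
print(round(r, 3))
```

In the actual study this comparison is done in a time-resolved way on EEG epochs, so that RDM agreement (and decoding accuracy) can be tracked across the 100-200 ms and 350-500 ms windows mentioned above; the simulation here only shows the single-snapshot computation.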
