Technical Program

Paper Detail

Paper: PS-1B.32
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Unfolding of multisensory inference in the brain and behavior
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Yinan Cao, University of Oxford, United Kingdom; Hame Park, University of Bielefeld, Germany; Bruno L. Giordano, Centre National de la Recherche Scientifique and Aix-Marseille Université, France; Christoph Kayser, University of Bielefeld, Germany; Charles Spence, Christopher Summerfield, University of Oxford, United Kingdom
Abstract: Human multisensory inference has recently been characterized as involving fusion, segregation, or a flexible arbitration between fusion and segregation by virtue of sensory causal inference (CI; see Rohe & Noppeney, 2015). Theoretical work suggests that this inference could be a monolithic process implemented in reciprocally coupled neuronal assemblies (Zhang et al., 2019). An alternative view, however, is that the computations are structured in time, so that different processes dominate at different post-stimulus latencies. There is emerging neural evidence for this view (Aller & Noppeney, 2018; Cao et al., 2019). Furthermore, behavioral studies have also suggested that fusion may be a rather automatic process: for example, crossmodal biases tend to be stronger when participants respond faster or after acquiring only limited sensory evidence (Noppeney et al., 2010). By contrast, CI requires additional processing time, as it capitalizes on evaluating the degree of sensory discrepancy, maintaining beliefs over latent causes, and possibly exploring distinct decision strategies. Here, across three studies combining psychophysics, computational modelling, and representational similarity analysis (RSA) applied to source-resolved human magnetoencephalographic data, we show that multisensory inference unfolds in time: a fused sensory estimate is derived rapidly for computational expediency, and later, if required, irrelevant signals are filtered out based on the inferred sensory cause(s).
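The fusion-versus-segregation arbitration the abstract refers to can be illustrated with a minimal sketch of the standard Gaussian causal-inference model with model averaging (in the spirit of the CI framework cited above). All parameter values and names here are illustrative assumptions, not the authors' fitted model: two noisy cues are either fused into a reliability-weighted estimate (common cause, C = 1) or kept segregated (independent causes, C = 2), and the final estimate weights the two by the posterior probability of a common cause.

```python
import math

def causal_inference(x_a, x_v, sigma_a=1.0, sigma_v=1.0,
                     sigma_p=10.0, mu_p=0.0, p_common=0.5):
    """Bayesian causal inference for two Gaussian cues (illustrative sketch).

    x_a, x_v   : noisy auditory and visual measurements
    sigma_a/v  : sensory noise standard deviations
    sigma_p    : width of the Gaussian spatial prior centered at mu_p
    p_common   : prior probability that both cues share one cause
    Returns (posterior P(C=1 | x_a, x_v), model-averaged auditory estimate).
    """
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

    # Marginal likelihood under a single common source (C = 1),
    # integrating the shared source location out analytically.
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp
                               + (x_a - mu_p) ** 2 * vv
                               + (x_v - mu_p) ** 2 * va) / denom1) \
              / (2 * math.pi * math.sqrt(denom1))

    # Marginal likelihood under independent sources (C = 2):
    # each cue is explained by its own source drawn from the prior.
    like_c2 = (math.exp(-0.5 * (x_a - mu_p) ** 2 / (va + vp))
               / math.sqrt(2 * math.pi * (va + vp))) \
            * (math.exp(-0.5 * (x_v - mu_p) ** 2 / (vv + vp))
               / math.sqrt(2 * math.pi * (vv + vp)))

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common
                                    + like_c2 * (1 - p_common))

    # Fused (reliability-weighted) estimate if the cause is shared ...
    s_fused = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    # ... versus the segregated auditory-only estimate.
    s_seg = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Model averaging: mix the two estimates by the causal posterior.
    return post_c1, post_c1 * s_fused + (1 - post_c1) * s_seg
```

With nearly coincident cues the posterior favors a common cause and the estimate is pulled toward fusion; with highly discrepant cues the posterior favors segregation, capturing the arbitration between fusion and segregation described in the abstract.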