Paper: PS-2A.45
Session: Poster Session 2A
Location: H Lichthof
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Linking apparent position to population receptive field estimates using a visual field projection model
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1185-0
Authors: Marian Schneider, Ingo Marquardt, Shubarti Sengupta, Federico De Martino, Rainer Goebel, Maastricht University, Netherlands
Abstract: In illusions called motion-induced position shifts (MIPS), a coherent motion signal shifts the apparent location of a stimulus in the direction of motion. MIPS make it possible to study the perceptual mechanisms underlying object localisation because they dissociate the physical from the perceived position of a stimulus. Here, we propose a bottom-up approach to modelling position perception, motivated by empirical data, that links apparent position to population receptive field (pRF) estimates. We recorded psychophysical and functional magnetic resonance imaging (fMRI) data while systematically varying two factors: the motion direction of the stimulus carrier pattern (inward, outward and flicker motion) and the contrast of the mapping stimulus (low and high stimulus contrast). We observed that, while physical positions were identical across all conditions, the presence of motion in low-contrast stimuli, but not in high-contrast stimuli, shifted perceived stimulus position in the direction of motion. Correspondingly, we found that pRF estimates in early visual cortex were shifted against the direction of motion for low-contrast stimuli, but not for high-contrast stimuli. We propose a model built on the assumption that activation of pRF units can be linked to apparent position via visual field projections. Our model replicates the observed perceptual position shifts.
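The core idea of such a visual field projection readout can be sketched as follows: each pRF unit is modelled as a 2D Gaussian over visual field coordinates, the units' profiles are summed weighted by their activation, and the apparent position is read out as the centre of mass of the resulting projection. This is a minimal illustrative sketch, not the authors' actual implementation; all function names, parameters, and the centre-of-mass readout are assumptions for illustration.

```python
import numpy as np

def gaussian_prf(x, y, x0, y0, sigma):
    """2D isotropic Gaussian pRF profile centred at (x0, y0)."""
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def apparent_position(prf_centers, prf_sigmas, activations,
                      grid_extent=10.0, n=201):
    """Project pRF activations into the visual field and read out
    apparent position as the centre of mass of the projection.

    prf_centers : list of (x0, y0) pRF centre estimates in degrees
    prf_sigmas  : list of pRF sizes (Gaussian sigma) in degrees
    activations : list of unit activation levels (e.g. fMRI responses)
    """
    coords = np.linspace(-grid_extent, grid_extent, n)
    x, y = np.meshgrid(coords, coords)
    projection = np.zeros_like(x)
    # Weighted sum of Gaussian pRF profiles over the visual field grid.
    for (x0, y0), sigma, a in zip(prf_centers, prf_sigmas, activations):
        projection += a * gaussian_prf(x, y, x0, y0, sigma)
    total = projection.sum()
    # Centre of mass of the projected activation = apparent position.
    return (x * projection).sum() / total, (y * projection).sum() / total
```

Under this readout, a shift of pRF centre estimates relative to the physical stimulus position translates directly into a shift of the projected centre of mass, which is the sense in which such a model can reproduce a perceptual position shift from measured pRF parameters.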