Technical Program

Paper Detail

Paper: PS-1B.55
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Explaining Scene-selective Visual Areas Using Task-specific Deep Neural Network Representations
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Kshitij Dwivedi, Singapore University of Technology and Design, Singapore; Michael Bonner, Johns Hopkins University, Baltimore, MD, United States; Gemma Roig, Singapore University of Technology and Design, Singapore
Abstract: Deep neural networks (DNNs) are currently the models that account for the most variance in responses from the human visual cortex. In this work, we explore the power of DNNs as a tool to gain insights into the functions of visual brain areas, focusing in particular on scene-selective visual areas. We use a set of DNNs trained to perform different visual tasks, covering 2D, 3D, and semantic aspects of scene perception, to explain fMRI responses in early visual cortex (EVC) and scene-selective visual areas (OPA, PPA). We find that the EVC representation is more similar to the early layers of all DNNs and to the deeper layers of 2D-task DNNs. The OPA representation is more similar to the deeper layers of 3D-task DNNs, whereas the PPA representation is more similar to the deeper layers of semantic-task DNNs. We extend our study by performing a searchlight analysis with these task-specific DNN representations to generate task-specificity maps of the visual cortex, and we visualize their overlap with existing ROI parcels. Our findings suggest that DNNs trained on a diverse set of visual tasks can be used to gain insights into the functions of the visual cortex. Our approach has the potential to be applied beyond visual areas.
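The kind of layer-wise comparison between DNN representations and fMRI responses described in the abstract is commonly carried out with representational similarity analysis (RSA). Below is a minimal illustrative sketch of that general approach, assuming representational dissimilarity matrices (RDMs) built from correlation distance and compared with Spearman's rank correlation; the function names and the toy data are assumptions for illustration, not taken from the paper itself:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_features) matrix, one row per scene image.
    # Returns the condensed RDM: pairwise 1 - Pearson correlation between rows.
    return pdist(activations, metric="correlation")

def rsa_score(dnn_layer_acts, fmri_responses):
    # Spearman rank correlation between a DNN-layer RDM and an fMRI-ROI RDM,
    # both computed over the same set of stimuli.
    rho, _ = spearmanr(rdm(dnn_layer_acts), rdm(fmri_responses))
    return rho

# Toy example with random data (stand-ins for real activations/voxel patterns):
rng = np.random.default_rng(0)
dnn_layer = rng.standard_normal((20, 128))  # 20 scenes x 128 layer units
roi_voxels = rng.standard_normal((20, 50))  # same 20 scenes x 50 ROI voxels
print(rsa_score(dnn_layer, roi_voxels))
```

In the style of analysis the abstract describes, this score would be computed per DNN layer and per ROI (EVC, OPA, PPA), and the task family whose deeper layers score highest for an ROI is taken as indicative of that region's function.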