Paper Detail

Paper: GS-4.1
Session: Contributed Talks 7-8
Location: H0104
Session Time: Sunday, September 15, 11:50 - 12:30
Presentation Time: Sunday, September 15, 11:50 - 12:10
Presentation: Oral
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Self-supervised Neural Network Models of Higher Visual Cortex Development
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1393-0
Authors: Chengxu Zhuang, Stanford University, United States; Siming Yan, Peking University, China; Aran Nayebi, Daniel Yamins, Stanford University, United States
Abstract: Deep convolutional neural networks (DCNNs) optimized for visual object categorization have achieved success in modeling neural responses in the ventral visual pathway of adult primates. However, training DCNNs has long required large-scale labelled datasets, in stark contrast to the actual process of primate visual development. Here we present a network training curriculum, based on recent state-of-the-art self-supervised training algorithms, that achieves high levels of task performance without requiring an unrealistically large number of labels. We then compare the DCNN as it evolves during training both to neural data recorded from macaque visual cortex and to detailed metrics of visual behavior patterns. We find that the self-supervised DCNN curriculum not only serves as a candidate hypothesis for the trajectory of visual development, but also produces a final network that accurately models neural and behavioral responses.
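The abstract does not specify which self-supervised objective the curriculum uses. As one illustrative example of the family of recent self-supervised methods it refers to, a contrastive (instance-discrimination-style) loss can be sketched in plain NumPy; the function name and all parameters below are hypothetical, not taken from the paper:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss over two batches of embeddings, where row i of z1
    and row i of z2 are two views of the same image (a positive pair) and
    all other rows serve as negatives."""
    # L2-normalize embeddings so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # (N, N) similarity matrix, scaled by the temperature
    logits = z1 @ z2.T / temperature
    # Numerically stable log-softmax over each row; positives sit on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the diagonal (identity) targets
    return -np.mean(np.diag(log_probs))

# Aligned views of the same embeddings should score much better than
# unrelated random embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Minimizing a loss of this shape pulls embeddings of two views of the same image together and pushes different images apart, which is how such methods learn visual representations without category labels.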