Paper Detail

Paper: PS-1B.10
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Orientation representations in convolutional neural networks are more discriminable around the cardinal axes
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1122-0
Authors: Margaret Henderson, John Serences, University of California, San Diego, United States
Abstract: Convolutional neural networks (CNNs) share some representational structure with the primate ventral visual stream; however, less is known about whether low-level visual features are represented in the same way by CNNs and the brain. Here, we focus on orientation perception, a well-understood aspect of the primate visual system. We asked whether CNNs trained to perform object recognition on a natural image database would exhibit an “oblique effect,” in which cardinal (vertical and horizontal) orientations are represented with higher precision than oblique (diagonal) orientations, as has been measured in the primate brain. We obtained activation patterns from two networks (NASNet and Inception-V3) presented with oriented grating stimuli, and used a Euclidean distance metric to measure the discriminability between patterns corresponding to different pairs of orientations. In agreement with human perception, we find that the discriminability of representations generally peaks around the cardinal axes. This finding suggests that cardinality effects in human visual perception are not dependent on a hard-wired anatomical bias, but can instead emerge through experience with the statistics of natural images.
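
The analysis pipeline described in the abstract (present oriented gratings, record activation patterns, compute Euclidean distances between the patterns evoked by nearby orientations) can be sketched in Python as follows. This is a minimal illustration rather than the authors' code: the stimulus parameters, the 1-degree orientation step, and the get_activations stand-in (flattened pixels in place of actual CNN layer activations from NASNet or Inception-V3) are all assumptions made so the sketch runs end to end.

    import numpy as np

    def make_grating(size=128, orientation_deg=0.0, cycles=8.0, phase=0.0):
        """Sinusoidal grating; orientation_deg is the angle of the wave vector.
        (Image size, spatial frequency, and phase are placeholder values,
        not the paper's stimulus parameters.)"""
        coords = np.linspace(-0.5, 0.5, size)
        x, y = np.meshgrid(coords, coords)
        theta = np.deg2rad(orientation_deg)
        ramp = x * np.cos(theta) + y * np.sin(theta)
        return np.sin(2.0 * np.pi * cycles * ramp + phase)

    def get_activations(image):
        """Stand-in feature extractor. In the actual analysis this would be a
        vector of unit activations from a CNN layer (e.g., Inception-V3);
        flattened pixels are used here only so the sketch is self-contained."""
        return image.ravel()

    # Sample the half-circle of orientations (the 1-degree step is an assumption).
    step = 1.0
    orientations = np.arange(0.0, 180.0, step)
    acts = np.stack([get_activations(make_grating(orientation_deg=o))
                     for o in orientations])

    # Discriminability at each orientation: Euclidean distance between the
    # activation patterns evoked by neighboring orientations (wrapping at 180).
    neighbors = np.roll(acts, -1, axis=0)
    discriminability = np.linalg.norm(neighbors - acts, axis=1)

    # An "oblique effect" would appear as discriminability peaks near the
    # cardinal axes (0 and 90 degrees) relative to the oblique axes.
    for angle in (0.0, 45.0, 90.0, 135.0):
        idx = int(angle / step)
        print(f"d({angle:5.1f} deg) = {discriminability[idx]:.3f}")

With a real CNN substituted for get_activations, the discriminability curve over orientation is what would be inspected for peaks at the cardinals; with the pixel stand-in used here, no oblique effect is expected.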