Paper Detail

Paper: PS-2B.61
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Conjunctive Coding of Color and Shape in Convolutional Neural Networks
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1378-0
Authors: JohnMark Taylor, Harvard University, United States; Yaoda Xu, Yale University, United States
Abstract: Understanding how the visual system conjunctively codes color and shape has long fascinated cognitive psychologists, cognitive neuroscientists, and neurophysiologists. Recent developments in convolutional neural networks (CNNs) provide an excellent opportunity to examine how color and shape conjunctions may be coded in an artificial feedforward system trained only to perform object recognition. To determine whether CNNs encode color and shape independently or in an interactive manner, we used representational similarity analysis to characterize the responses of AlexNet to a collection of 540 different objects, each presented in 36 different colors. We found that whereas the lower layers of AlexNet encode colors in a similar manner across different objects, in the higher layers the color spaces associated with different objects become more distinct. Interestingly, the similarity between the color spaces of different objects was only weakly (though significantly) associated with the objects' shape similarity. These results demonstrate that, rather than being encoded orthogonally, color and shape processing becomes increasingly interactive in higher layers of a CNN, suggesting that feedforward networks optimized for object recognition will naturally develop conjunctive coding of color and shape.
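
The sketch below is a minimal illustration, not the authors' code, of the kind of analysis the abstract describes: capturing AlexNet activations layer by layer and comparing representational dissimilarity matrices (RDMs) across stimuli. The stimulus tensor (540 objects in 36 colors) is assumed rather than provided, and the distance measures shown (correlation distance within a layer, Spearman correlation between RDMs) are standard RSA choices, not necessarily the exact ones used in the paper.

```python
# Minimal RSA sketch for AlexNet (assumptions noted in comments).
import torch
import torchvision.models as models
import numpy as np
from scipy.stats import spearmanr

model = models.alexnet(pretrained=True).eval()  # newer torchvision: weights="DEFAULT"

# Capture the activations of each convolutional block via forward hooks.
activations = {}

def make_hook(name):
    def hook(module, inp, out):
        # Flatten (N, C, H, W) feature maps to (N, features) per stimulus.
        activations[name] = out.flatten(start_dim=1).detach()
    return hook

for i, layer in enumerate(model.features):
    layer.register_forward_hook(make_hook(f"features.{i}"))

def rdm(acts):
    """RDM as 1 - Pearson correlation between every pair of stimulus patterns."""
    return 1.0 - np.corrcoef(acts.numpy())

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    a common way to ask whether two stimuli (e.g., two objects) evoke
    similar color spaces at a given layer."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

# Hypothetical usage: `images` would be an (n_stimuli, 3, 224, 224) tensor
# of the colored-object images with standard ImageNet preprocessing.
# with torch.no_grad():
#     model(images)
# layer_rdms = {name: rdm(a) for name, a in activations.items()}
```

Under this setup, one 36 x 36 color RDM per object at each layer, compared across objects with compare_rdms, would quantify the abstract's central measure: how object-specific the representation of color becomes from lower to higher layers.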