Paper: PS-1A.41
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Incorporating Feedback in Convolutional Neural Networks
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Christian Jarvers, Heiko Neumann, Ulm University, Germany
Abstract: Convolutional neural networks (CNNs) are a state-of-the-art machine learning method, partially inspired by the hierarchical structure of cortex. They typically process information from input to output in a feedforward manner. It has been shown that incorporating feedback pathways can improve their performance and robustness. However, little is known about why feedback helps and how feedforward and feedback signals are best combined. Here, we compare feedforward and feedback networks on a multi-digit classification task, quantifying performance as well as robustness against image noise. We show that the advantage of feedback networks that add the feedback to the feedforward signal is largely due to the increased receptive field size of their neurons. In addition, we show that networks that use modulatory or subtractive feedback (inspired by theories of feedback processing in cortex) outperform additive architectures and have increased robustness against noise. These results provide a first step towards using feedback in convolutional neural networks more effectively.
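
The three combination schemes named in the abstract (additive, modulatory, subtractive) can be sketched as follows. This is a hypothetical illustration in NumPy, not the authors' implementation: the function name `combine`, the `1 + feedback` gain form for modulation, and the ReLU nonlinearity are assumptions for the sake of the example.

```python
import numpy as np

def relu(x):
    # standard rectified-linear nonlinearity
    return np.maximum(x, 0.0)

def combine(feedforward, feedback, mode="additive"):
    """Combine a feedforward activation with a feedback signal.

    Illustrative only; the paper's exact formulations may differ.
    """
    if mode == "additive":
        # feedback is summed with the feedforward drive
        return relu(feedforward + feedback)
    elif mode == "modulatory":
        # feedback multiplicatively gates the feedforward drive
        return relu(feedforward * (1.0 + feedback))
    elif mode == "subtractive":
        # feedback suppresses the feedforward drive
        return relu(feedforward - feedback)
    raise ValueError(f"unknown mode: {mode}")

f = np.array([1.0, 2.0, -0.5])   # example feedforward activations
b = np.array([0.5, -1.0, 0.25])  # example feedback signal

out_add = combine(f, b, "additive")     # [1.5, 1.0, 0.0]
out_mod = combine(f, b, "modulatory")   # [1.5, 0.0, 0.0]
out_sub = combine(f, b, "subtractive")  # [0.5, 3.0, 0.0]
```

In a CNN, `feedback` would itself be the output of a (possibly upsampled) higher-layer feature map rather than a fixed vector; only the elementwise combination rule differs between the three architectures.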