| Paper: | PS-2B.60 |
| --- | --- |
| Session: | Poster Session 2B |
| Location: | H Fläche 1.OG |
| Session Time: | Sunday, September 15, 17:15 - 20:15 |
| Presentation Time: | Sunday, September 15, 17:15 - 20:15 |
| Presentation: | Poster |
| Publication: | 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany |
| Paper Title: | Task-Dependent Attention Allocation through Uncertainty Minimization in Deep Recurrent Generative Models |
| License: | This work is licensed under a Creative Commons Attribution 3.0 Unported License. |
| DOI: | https://doi.org/10.32470/CCN.2019.1202-0 |
| Authors: | Kai Standvoss, Silvan Quax, Marcel van Gerven, Radboud University, Netherlands |
| Abstract: | Allocating visual attention through saccadic eye movements is a key ability of intelligent agents. Attention is influenced both by bottom-up stimulus properties and by top-down task demands. The interaction of these two attention mechanisms is not yet fully understood. A parsimonious reconciliation posits that both processes serve the minimization of predictive uncertainty. We propose a recurrent generative neural network model that predicts a visual scene based on foveated glimpses. The model shifts its attention in order to minimize the uncertainty in its predictions. We show that the proposed model produces naturalistic eye movements focusing on salient stimulus regions. Introducing the additional task of classifying the stimulus modulates the saccade patterns and enables effective image classification. Given otherwise equal conditions, we show that different task requirements cause the model to focus on distinct, task-relevant regions. The results provide evidence that uncertainty minimization could be a fundamental mechanism for the allocation of visual attention. |
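
The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a recurrent network receives foveated glimpses, predicts the full scene with per-pixel uncertainty, and saccades to the location where its predictive variance is highest (a common heuristic stand-in for uncertainty minimization). The image size, glimpse size, network widths, and the argmax-variance fixation rule are all illustrative assumptions.

```python
# Hedged sketch (not the authors' code): uncertainty-driven glimpse selection
# with a recurrent generative model. Sizes and architecture are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG, GLIMPSE, HID = 28, 8, 256  # assumed MNIST-like image and glimpse size


def foveate(image, center):
    """Crop a GLIMPSE x GLIMPSE patch around `center` (row, col)."""
    r = GLIMPSE // 2
    padded = F.pad(image, (r, r, r, r))               # (B, 1, IMG+2r, IMG+2r)
    y, x = center
    return padded[:, :, y:y + GLIMPSE, x:x + GLIMPSE]


class UncertainGlimpseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(GLIMPSE * GLIMPSE + 2, HID)
        self.rnn = nn.GRUCell(HID, HID)
        # Predict per-pixel mean and log-variance of the full image.
        self.decode = nn.Linear(HID, 2 * IMG * IMG)

    def forward(self, glimpse, center, h):
        # Encode the glimpse together with its normalized fixation location.
        loc = torch.tensor([center], dtype=torch.float32) / IMG
        loc = loc.expand(glimpse.size(0), 2)
        z = torch.relu(self.encode(torch.cat([glimpse.flatten(1), loc], dim=1)))
        h = self.rnn(z, h)
        mu, logvar = self.decode(h).chunk(2, dim=1)
        return mu.view(-1, 1, IMG, IMG), logvar.view(-1, 1, IMG, IMG), h


def next_fixation(logvar):
    """Saccade to the pixel with the largest predictive variance."""
    flat = logvar.mean(0).flatten()                    # average over the batch
    idx = int(torch.argmax(flat))
    return idx // IMG, idx % IMG


# Usage: roll out a few uncertainty-driven saccades on a random image.
model = UncertainGlimpseNet()
image = torch.rand(1, 1, IMG, IMG)
h = torch.zeros(1, HID)
center = (IMG // 2, IMG // 2)                          # start at the image center
for step in range(5):
    mu, logvar, h = model(foveate(image, center), center, h)
    center = next_fixation(logvar)
    print(f"step {step}: fixate {center}")
```

A task-dependent variant, as the abstract describes, would add a classification head on the recurrent state and let the task loss modulate which regions the model treats as uncertain, shifting the resulting saccade patterns toward task-relevant regions.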