Paper Detail

Paper: PS-1A.71
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: RNNs develop history biases in an expectation-guided two-alternative forced choice task
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1272-0
Authors: Manuel Molano-Mazon, IDIBAPS, Spain; Guangyu Robert Yang, Columbia University, United States; Ainhoa Hermoso-Mendizabal, Jaime de la Rocha, IDIBAPS, Spain
Abstract: Understanding how expectations bias perceptual decisions is a necessary step toward deciphering how we make decisions. Here, we trained Recurrent Neural Networks (RNNs) on a novel two-alternative forced-choice (2AFC) task in which both the current sensory evidence and the recent trial history provide information about the identity of the correct choice. We found that RNNs learned both to integrate the stimuli and to capitalize on the serial correlations of the trial sequence by developing history biases. Interestingly, during early stages of training, all networks reset their biases after an error response, which is consistent with data from rats performing the same task. At later stages of training, approximately half of the networks moved away from this initial, sub-optimal strategy and developed after-error biases. A more detailed characterization of these behaviors revealed that the percentage of networks showing the after-error reset could be increased by limiting the resources of the networks, such as reducing their size, the information they receive, or the training time. Together, these results suggest that rats develop a sub-optimal but easier-to-reach strategy to solve the task due to some limiting factor, such as a lack of computational capacity or time constraints.
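To make the task structure concrete, here is a minimal sketch of a 2AFC trial generator with serial correlations, where the correct side tends to repeat across trials and each trial delivers a noisy evidence stream. The repetition probability, coherence, and noise parameters are hypothetical illustrations; the paper's exact trial-sequence statistics and stimulus model may differ.

```python
import numpy as np

def generate_trial_sequence(n_trials, rep_prob=0.8, seed=0):
    """Generate a serially correlated 2AFC sequence of correct sides (0 or 1).

    The correct side repeats from one trial to the next with probability
    `rep_prob` (hypothetical value), so the recent trial history is
    informative about the upcoming correct choice.
    """
    rng = np.random.default_rng(seed)
    sides = np.empty(n_trials, dtype=int)
    sides[0] = rng.integers(2)
    for t in range(1, n_trials):
        if rng.random() < rep_prob:
            sides[t] = sides[t - 1]        # repeat previous correct side
        else:
            sides[t] = 1 - sides[t - 1]    # alternate
    return sides

def make_stimulus(side, coherence=0.5, n_steps=20, noise=1.0, rng=None):
    """Noisy evidence stream for one trial.

    The mean evidence favors `side` (+coherence for side 1, -coherence
    for side 0); Gaussian noise makes single time steps unreliable, so
    an RNN must integrate the stream and can also exploit history biases.
    """
    rng = rng or np.random.default_rng()
    drift = coherence if side == 1 else -coherence
    return drift + noise * rng.standard_normal(n_steps)
```

In a sequence generated this way, a decision-maker that biases its choice toward the previous correct side outperforms one that ignores history, which is the incentive under which the networks in the paper develop their history biases.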