Paper: PS-2A.33
Session: Poster Session 2A
Location: H Lichthof
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Rational Arbitration of Hippocampal Replay
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1385-0
Authors: Mayank Agrawal, Marcelo Mattar, Nathaniel Daw, Jonathan Cohen, Princeton University, United States
Abstract: It has recently been proposed that hippocampal replay can be explained, in a reinforcement learning setting, as an instantiation of the Dyna framework. This formulation lends itself to a dual-process model: at every time step, an agent can choose either to act or to replay. Here, we extend the proposed model by adding a controller that arbitrates between replaying and acting so as to maximize reward rate. That is, rather than granting the agent a fixed budget of replays at the starting and final states, we allow it to decide dynamically how much to replay in every state. First, we find that in a Gridworld task this algorithm converges to the optimal policy faster. Second, by tracking the amount of replay selected per trial, we observe that replay is beneficial only within a narrow range of trials. We propose this model both as a more efficient use of the Dyna framework and as a normative account of how rational animal and human agents should replay.
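To make the act-vs-replay arbitration described in the abstract concrete, below is a minimal sketch of a Dyna-style agent on a Gridworld. It is not the authors' implementation: the environment, the learning rates, and the arbitration rule (replay the stored experience with the largest expected Q-value change whenever that gain exceeds a fixed opportunity cost, a stand-in for the paper's reward-rate criterion) are all illustrative assumptions.

```python
import random

random.seed(0)

# Assumed toy setup: 5x5 deterministic gridworld, start (0,0), reward 1 at goal (4,4).
N = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
ALPHA, GAMMA = 0.5, 0.95
REPLAY_COST = 0.01          # assumed opportunity cost of one replay step

def step(s, a):
    x, y = s
    nx, ny = min(max(x + a[0], 0), N - 1), min(max(y + a[1], 0), N - 1)
    ns = (nx, ny)
    done = ns == (N - 1, N - 1)
    return ns, (1.0 if done else 0.0), done

Q = {}                      # tabular action values
model = {}                  # Dyna model: (s, a) -> (next_state, reward)

def q(s, a):
    return Q.get((s, a), 0.0)

def replay_gain(s, a):
    """Magnitude of the Q-update one replay of (s, a) would produce."""
    ns, r = model[(s, a)]
    target = r + GAMMA * max(q(ns, b) for b in ACTIONS)
    return abs(target - q(s, a))

def update(s, a, ns, r):
    target = r + GAMMA * max(q(ns, b) for b in ACTIONS)
    Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))

for episode in range(50):
    s, done = (0, 0), False
    while not done:
        # Arbitration: keep replaying the highest-gain stored experience
        # as long as its expected gain exceeds the cost of not acting.
        while model:
            (rs, ra), gain = max(((k, replay_gain(*k)) for k in model),
                                 key=lambda kv: kv[1])
            if gain <= REPLAY_COST:
                break
            ns, r = model[(rs, ra)]
            update(rs, ra, ns, r)
        # Act: epsilon-greedy step in the real environment.
        a = (random.choice(ACTIONS) if random.random() < 0.1
             else max(ACTIONS, key=lambda b: q(s, b)))
        ns, r, done = step(s, a)
        model[(s, a)] = (ns, r)
        update(s, a, ns, r)
        s = ns

print(round(max(q((0, 0), a) for a in ACTIONS), 3))
```

Because replay here is prioritized by expected value change and gated by a cost, the agent replays heavily early (when the model is new and gains are large) and almost not at all once values have converged, qualitatively matching the narrow window of beneficial replay the abstract reports.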