Paper: PS-2B.11
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Temporal Difference Learning for Recurrent Neural Networks
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1392-0
Authors: Risheek Garrepalli, Oregon State University, United States
Abstract: Truncated back-propagation through time (TBPTT) is one of the most common methods for training artificial recurrent neural networks (RNNs) for temporal credit assignment (TCA). Various theories have been proposed for how neural circuits in the brain might approximate the back-propagation algorithm to solve the credit assignment problem in feedforward networks, but it remains unclear how an equivalent of TBPTT could be implemented in the brain. Temporal difference (TD) learning with eligibility traces is a key, widely used approach in reinforcement learning for multi-step value prediction, itself a temporal credit assignment problem. In this work, we apply TD learning with eligibility traces to train RNNs. This approach is biologically plausible: it encodes errors locally, requires no separate backward computation pass, and therefore avoids the weight-symmetry problem.
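The abstract does not spell out the update rule, but the mechanism it builds on is standard TD(λ) with eligibility traces: a local TD error is combined with a decaying trace of past activity, so credit propagates backward in time without a separate backward pass. Below is a minimal, generic sketch of that mechanism for linear value prediction on a toy random-walk task (the task, features, and hyperparameters are illustrative assumptions, not taken from the paper):

```python
import numpy as np

np.random.seed(0)

# Hypothetical toy setup: a 5-state random walk, terminating off either end,
# with reward 1 on the right exit. Linear value function v(s) = w . phi(s),
# using one-hot state features.
n_states = 5
phi = np.eye(n_states)

alpha, gamma, lam = 0.1, 1.0, 0.8   # step size, discount, trace decay
w = np.zeros(n_states)

def run_episode(w):
    e = np.zeros(n_states)          # eligibility trace, one entry per weight
    s = n_states // 2               # start in the middle state
    while True:
        s_next = s + np.random.choice([-1, 1])
        terminal = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0
        v_next = 0.0 if terminal else w @ phi[s_next]
        delta = r + gamma * v_next - w @ phi[s]   # local TD error
        e = gamma * lam * e + phi[s]              # decay trace, mark current state
        w = w + alpha * delta * e                 # update every traced weight
        if terminal:
            return w
        s = s_next

for _ in range(500):
    w = run_episode(w)

print(np.round(w, 2))  # should approach the true values 1/6 .. 5/6
```

The key property the abstract appeals to is visible here: the update for every weight uses only the scalar TD error `delta` and that weight's own trace `e`, so no backward sweep through time (and hence no transposed-weight computation) is needed.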