Paper: PS-1A.46
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Toolbox for the Reinforcement Learning Drift Diffusion Model
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1380-0
Authors: Mads Pedersen, University of Oslo, Norway; Michael Frank, Brown University, United States
Abstract: The continuous development of computational models drives understanding of cognitive mechanisms and their neurobiological underpinnings. Here we extend HDDM, an open-source Python toolbox for Bayesian hierarchical parameter estimation of the drift diffusion model (DDM), to also support reinforcement learning (RL). Moreover, our extension affords the ability to model instrumental learning paradigms in which the choice rule is replaced with the DDM (RLDDM), thus accounting for the evolution of both choices and RT distributions with learning. The RLDDM simultaneously estimates parameters of learning and dynamic decision processes by assuming decisions are made by accumulating evidence of the difference in expected rewards between choice options until a decision threshold is reached. Here we validate the model with a parameter recovery test and illustrate the usability of the toolbox, with posterior predictive checks, by fitting pre-collected data from an instrumental learning task.
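
As a rough illustration of the workflow the abstract describes (not taken from the manuscript itself), the sketch below assumes the HDDMrl model class that this extension adds to HDDM and a hypothetical CSV file of trial-level learning data; the file name and exact column set are assumptions based on the RLDDM's description, not verbatim from this paper.

    import hddm

    # Load trial-level data from an instrumental learning task.
    # Assumed columns: rt (in seconds), response (0/1 for lower/upper
    # option), feedback (reward received), split_by (condition id),
    # q_init (initial Q-value). "learning_data.csv" is a placeholder.
    data = hddm.load_csv('learning_data.csv')

    # The RLDDM couples a delta-rule learner with the DDM choice rule:
    # Q-values are updated as Q <- Q + alpha * (feedback - Q), and the
    # trial-wise drift rate scales with the difference in expected
    # rewards, v * (Q_upper - Q_lower), so evidence accumulates toward
    # a decision threshold as in the standard DDM.
    m = hddm.HDDMrl(data)

    # Sample from the joint posterior over group- and subject-level
    # parameters (threshold a, non-decision time t, drift scaling v,
    # learning rate alpha), then summarize the estimates.
    m.sample(2000, burn=500, dbname='traces.db', db='pickle')
    m.print_stats()

The fitted model can then be used for posterior predictive checks of the kind reported in the paper, comparing simulated choice and RT distributions against the observed data as learning unfolds.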