Technical Program

Paper Detail

Paper: PS-1B.53
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Test-retest reliability of canonical reinforcement learning models
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Laura Weidinger, Max-Planck-Institute for Human Development, Germany; Andrea Gradassi, Lucas Molleman, Wouter van den Bos, University of Amsterdam, Netherlands
Abstract: Reinforcement learning (RL) paradigms are commonly used in Cognitive Science research on human learning. These paradigms are often combined with computational models to estimate individual differences in learning parameters. Recently, it has been proposed that such parameter estimates can be used to better understand psychiatric conditions (Montague, Dolan, Friston, & Dayan, 2012). However, for the estimates to be used this way, it is essential that the test-retest reliability of these paradigms and computational models is established. The present study seeks to close this gap by investigating the test-retest reliability of standard RL models in the context of two canonical paradigms: a probabilistic RL task with gain and loss feedback and a reversal learning task (Cools, Clark, Owen, & Robbins, 2002; Frank, Seeberger, & O'Reilly, 2004). The study obtained test results from n = 150 participants for each task via the online testing platform Amazon Mechanical Turk, with a between-test interval of five weeks. Several standard versions of Rescorla-Wagner models were fitted to the choice data in R to study the test-retest reliability of the resulting parameter estimates. Test-retest reliability is examined for both behavioral measures and model parameters.
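As context for the Rescorla-Wagner models mentioned in the abstract, a minimal sketch of the standard delta-rule value update paired with a softmax choice rule is shown below (the abstract does not specify the exact model variants fitted; the parameter names `alpha` for learning rate and `beta` for inverse temperature follow common convention and are assumptions, not taken from the paper):

```python
import math

def rescorla_wagner_update(value, reward, alpha):
    # Delta-rule update: move the current value estimate toward the
    # received reward by a fraction alpha (the learning rate).
    return value + alpha * (reward - value)

def softmax_choice_prob(values, beta):
    # Softmax choice rule: convert option values into choice
    # probabilities, with beta as the inverse temperature.
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-armed bandit example: update the chosen option
# after a rewarded trial, then compute choice probabilities.
values = [0.0, 0.0]
alpha, beta = 0.3, 2.0
choice, reward = 0, 1.0
values[choice] = rescorla_wagner_update(values[choice], reward, alpha)
probs = softmax_choice_prob(values, beta)
```

In reliability studies of this kind, it is the per-participant estimates of parameters such as `alpha` and `beta` (recovered by fitting models like this to choice data at both test sessions) whose stability is being assessed.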