Technical Program

Paper Detail

Paper: PS-1A.43
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: ABC-NN: Approximate Bayesian Computation with Neural Networks to learn likelihood functions for efficient parameter estimation
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Alexander Fengler, Michael Frank, Brown University, United States
Abstract: In cognitive neuroscience, computational modeling offers a principled interpretation of the functional demands of cognitive systems. Bayesian parameter estimation provides information about the full posterior distribution over likely parameters. Importantly, the set of models with known likelihoods is dramatically smaller than the set of plausible generative models. Approximate Bayesian Computation (ABC) methods facilitate sampling from the posterior over parameters for models specified only up to a data-generating process, overcoming this limitation to afford Bayesian estimation of complex stochastic models (Wood, 2010; Beaumont, 2010; Askert, 2015; Turner, 2014). Because these methods rely on model simulations to generate synthetic likelihoods, however, they incur substantial computational cost at inference time, where simulations are typically conducted at each step of an MCMC algorithm. We propose a method that learns an approximate likelihood over the parameter space of interest, using multilayered perceptrons (MLPs). This incurs a single upfront cost, but the resulting network constitutes a reusable likelihood function that can be plugged into standard inference algorithms. We test this approach in the context of drift diffusion models, a class of cognitive process models commonly used in the cognitive sciences to jointly account for choice and reaction time data in a variety of experimental settings (Ratcliff, 2016).
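The amortization idea in the abstract — pay a one-time simulation cost up front, then reuse the learned likelihood freely at inference — can be illustrated with a toy sketch. This is not the authors' implementation: a histogram lookup table stands in for the paper's MLP likelihood, and all concrete values (drift grid, boundary a = 1, step size dt = 0.01, true drift 1.5) are illustrative assumptions, not taken from the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rts(v, n, a=1.0, dt=0.01, sigma=1.0, max_steps=500):
    """Vectorized random-walk simulation of first-passage times for a
    drift diffusion process with symmetric boundaries at +/- a."""
    x = np.zeros(n)                       # accumulator states
    rt = np.full(n, max_steps * dt)       # default: censored at max time
    alive = np.ones(n, dtype=bool)        # trials still diffusing
    for step in range(1, max_steps + 1):
        x[alive] += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        hit = alive & (np.abs(x) >= a)    # trials that crossed a boundary
        rt[hit] = step * dt
        alive &= ~hit
        if not alive.any():
            break
    return rt

# One-time upfront cost: simulate at each grid point and store an empirical
# RT density. The histogram table plays the role of the learned MLP.
v_grid = np.linspace(0.5, 2.5, 21)        # assumed drift-rate grid
bins = np.linspace(0.0, 5.0, 51)          # 50 RT bins on [0, 5] s
tables = []
for v in v_grid:
    rts = simulate_rts(v, 5000)
    hist, _ = np.histogram(rts, bins=bins, density=True)
    tables.append(hist + 1e-6)            # floor avoids log(0)
tables = np.array(tables)                 # shape (21 drifts, 50 bins)

# Inference is now simulation-free: "observed" data (true drift 1.5,
# a hypothetical ground truth) are scored against the stored likelihood.
obs = simulate_rts(1.5, 300)
idx = np.clip(np.digitize(obs, bins) - 1, 0, len(bins) - 2)
loglik = np.log(tables[:, idx]).sum(axis=1)        # log-likelihood per drift
post = np.exp(loglik - loglik.max())
post /= post.sum()                                  # posterior on flat prior
v_map = v_grid[np.argmax(loglik)]
print(f"MAP drift estimate: {v_map:.2f}")
```

In classic ABC the simulation loop would re-run at every MCMC step; here, once the table (network) is built, repeated posterior evaluations touch no simulator at all, which is the cost structure the abstract describes.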