Paper Detail

Paper: PS-1A.7
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Approximate Inference through Active Sampling of Likelihoods Accounts for Hick's Law and Decision Confidence
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Xiang Li, New York University, United States; Luigi Acerbi, University of Geneva, Switzerland; Wei Ji Ma, New York University, United States
Abstract: In $N$-alternative Bayesian categorization, computing exact likelihoods and posteriors might be hard for the brain. We propose an approximate inference framework with active sampling inspired by Bayesian optimization. While it is common in Bayesian models to assume that the agent makes noisy measurements of a state of the world, here we use a more general (and more abstract) starting point. We assume that the true (ideal-observer) likelihoods and posteriors of the categories are unknown to the agent. The agent sequentially makes noisy measurements of those likelihoods, one category at a time, thus refining its belief over the true likelihoods and, in turn, over the true posterior probabilities. To decide whether to make another measurement, the agent simulates the consequences of doing so for its belief over the posteriors. This framework accounts for two types of empirical findings. First, we find that the average number of measurements grows approximately logarithmically with $N$, reminiscent of Hick's law. Second, we account for a puzzling recent finding that decision confidence follows the difference between the two highest posteriors, rather than the highest posterior itself. Our framework provides a novel approach to explain human categorization by combining approximate inference with active sampling.
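The flavor of the framework can be conveyed with a minimal simulation. The sketch below is hypothetical and not the authors' model: the stopping rule (a fixed margin between the two highest posterior estimates), the "front-runner" sampling heuristic, the noise level, and all parameter names are illustrative assumptions standing in for the simulated-consequences criterion described in the abstract.

```python
import math
import random

def categorize(n_categories, noise=0.5, margin=0.6, max_measurements=500, seed=0):
    """Hypothetical sketch: sequentially measure noisy per-category
    log-likelihoods, and stop once the gap between the two highest
    estimated posteriors exceeds `margin` (echoing the finding that
    confidence tracks the top-two posterior difference)."""
    rng = random.Random(seed)
    # The "true" log-likelihoods, unknown to the agent and only measurable with noise.
    true_loglik = [rng.gauss(0.0, 1.0) for _ in range(n_categories)]
    est = [0.0] * n_categories   # running mean of measurements per category
    counts = [0] * n_categories

    for t in range(1, max_measurements + 1):
        # Crude active-sampling heuristic (an assumption, not the paper's rule):
        # measure each category once, then keep probing the current front-runner.
        if 0 in counts:
            i = counts.index(0)
        else:
            i = max(range(n_categories), key=lambda k: est[k])
        sample = true_loglik[i] + rng.gauss(0.0, noise)
        counts[i] += 1
        est[i] += (sample - est[i]) / counts[i]   # incremental mean update

        # Softmax converts estimated log-likelihoods into posterior estimates.
        m = max(est)
        w = [math.exp(e - m) for e in est]
        z = sum(w)
        post = sorted((wi / z for wi in w), reverse=True)
        confidence = post[0] - post[1]            # top-two posterior difference
        if confidence > margin:
            break
    return t, confidence

samples, conf = categorize(8)
```

Under this stopping rule, trials with more categories tend to require more measurements before the top-two gap opens up, which is the qualitative pattern the paper relates to Hick's law; the precise scaling in the paper comes from its own sampling policy, not this toy heuristic.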