Paper Detail

Paper: GS-6.1
Session: Contributed Talks 11-12
Location: H0104
Session Time: Monday, September 16, 11:50 - 12:30
Presentation Time: Monday, September 16, 11:50 - 12:10
Presentation: Oral
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Using Inverse Reinforcement Learning to Predict Goal-directed Shifts of Attention
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1086-0
Authors: Gregory Zelinsky, Stony Brook University, United States
Abstract: Understanding how goal states control behavior is a question intersecting attention, action, and recognition, and one that is ripe for interrogation by new methods from machine learning. This study uses inverse reinforcement learning (IRL) to learn the reward function and policy underlying the simplest of goal-directed actions, shifts of gaze, in the service of the simplest of goals: finding a desired target category. Training this IRL model of categorical search required creating a large-scale dataset of 4,366 images labeled with the fixations of people searching for one of two target-category goals (microwaves or clocks). The IRL model is evaluated against a test dataset consisting of the fixations of 60 people searching for either a microwave (n=30) or a clock (n=30) in the same images. The IRL model successfully predicted behavioral search efficiency and fixation density maps under multiple metrics. Moreover, the reward maps and action maps recovered by the IRL model revealed target-specific patterns reflecting not only attention guidance to target features but also guidance by scene context (e.g., clocks are often on walls). Using methods from machine learning, it is now possible to learn reward functions that more broadly capture this target-object context.
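
To make the core idea concrete, below is a minimal, hypothetical sketch of recovering a reward map from fixation data via maximum-entropy feature matching, a simplified relative of the IRL approach the abstract describes. This is not the authors' model: the grid size, patch features, and "expert" scanpaths are stand-in placeholders, and each fixation is treated as an independent draw from a softmax over the reward map rather than as a step in a full sequential policy.

import numpy as np

# Hypothetical setup: an 8x8 grid of image patches, 3 features per patch
# (e.g., similarity to the target category). All values are placeholders.
G, F = 8, 3
S = G * G
rng = np.random.default_rng(0)
phi = rng.random((S, F))        # stand-in per-patch feature vectors

# Stand-in "expert" scanpaths: indices of fixated patches, 6 per trial.
expert = [rng.integers(0, S, size=6) for _ in range(20)]

# Empirical feature expectation of the patches people fixated.
mu_expert = np.mean([phi[t].mean(axis=0) for t in expert], axis=0)

# In this simplified maximum-entropy setting, IRL reduces to gradient
# ascent that matches model feature expectations to the expert's.
w = np.zeros(F)
for _ in range(500):
    r = phi @ w                            # current reward per patch
    p = np.exp(r - r.max())                # softmax fixation distribution
    p /= p.sum()
    w += 0.1 * (mu_expert - p @ phi)       # expert minus model features

reward_map = (phi @ w).reshape(G, G)       # recovered reward over the grid
print("most rewarding patch:", np.unravel_index(reward_map.argmax(), (G, G)))

A reward map fit this way can be read like the fixation density maps mentioned in the abstract: high-reward patches are those the model predicts searchers are drawn to, whether because they resemble the target or, in the paper's richer sequential formulation, because scene context makes the target likely there.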