Paper: PS-2B.3
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Accelerated Texforms: Alternative Methods for Generating Unrecognizable Object Images with Preserved Mid-Level Features
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1412-0
Authors: Arturo Deza, Yi-Chia Chen, Harvard University, United States; Bria Long, Stanford University, United States; Talia Konkle, Harvard University, United States
Abstract: Texforms are images that preserve the coarse shape and texture information of objects while rendering them unrecognizable at the basic level (Long, Konkle, Cohen, & Alvarez, 2016). These stimuli have been valuable for testing whether cognitive and neural processes depend on explicit recognition of the objects. However, the current implementation and computational complexity of the model require approximately 4-6 hours per object, preventing data-hungry experiments that may require generating thousands of texforms. Our contribution in this work is the introduction of two new texform generation methods that accelerate rendering from hours to minutes or seconds, respectively. The first, which we call Fast-FS-Texform, accelerates the rendering of the Freeman and Simoncelli (2011) model and increases the output resolution by placing a simulated point of fixation outside of the visual field. The second, which we call NeuroFovea-Texform, is an adaptation of the newly proposed metamer model of Deza, Jonnalagadda, and Eckstein (2019), which leverages a VGGNet and foveated style transfer. We show qualitative and quantitative results for both new methods, opening the door to data-intensive texform experiments.
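As a rough illustration of the fixation trick mentioned for Fast-FS-Texform, the sketch below (not the authors' implementation, which uses the full Freeman and Simoncelli (2011) texture-synthesis machinery) computes eccentricity with respect to a simulated fixation point placed outside the image, so that every pooling window is large, and then scrambles content within each window. The helper names (`eccentricity_px`, `phase_scramble`, `toy_texform`) and parameters such as `scaling` are hypothetical, and Fourier phase scrambling stands in for the actual texture-statistic matching.

```python
# Conceptual sketch only (not the authors' code): with the fixation placed
# outside the image, every pixel has a large eccentricity, so every
# eccentricity-scaled pooling window is large. Phase scrambling replaces
# full texture-statistic synthesis purely for illustration.
import numpy as np

def eccentricity_px(shape, fixation):
    """Distance (in pixels) of each pixel from the fixation point.

    The fixation may lie far outside the image bounds, in which case even
    the nearest pixel is treated as peripheral.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    return np.hypot(ys - fy, xs - fx)

def phase_scramble(patch, rng):
    """Randomize the Fourier phase of a patch while keeping its amplitude.

    A cheap stand-in for matching richer texture statistics within a window.
    """
    amplitude = np.abs(np.fft.fft2(patch))
    random_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, patch.shape))
    return np.fft.ifft2(amplitude * random_phase).real

def toy_texform(image, fixation=(-600, -600), scaling=0.25, seed=0):
    """Scramble a grayscale image in windows whose size grows with eccentricity."""
    rng = np.random.default_rng(seed)
    out = image.astype(float).copy()
    h, w = image.shape
    ecc = eccentricity_px((h, w), fixation)
    y = 0
    while y < h:
        # Window size ~ scaling * eccentricity at the row's left edge
        # (a simplification; real pooling windows vary smoothly in 2D).
        win = max(8, int(scaling * ecc[y, 0]))
        for x in range(0, w, win):
            ys, xs = slice(y, min(y + win, h)), slice(x, min(x + win, w))
            out[ys, xs] = phase_scramble(out[ys, xs], rng)
        y += win
    return out

# Example: scramble a random 512x512 "image" with an off-image fixation.
demo = toy_texform(np.random.default_rng(1).random((512, 512)))
```

Because the fixation lies far outside the image, the eccentricity of every pixel, and hence every window, is large; one plausible reading of the abstract is that this trades many small, fine-grained syntheses for fewer, larger pooling regions, which is where the reported speed-up would come from.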