Paper: | PS-2A.67 | ||
Session: | Poster Session 2A | ||
Location: | H Lichthof | ||
Session Time: | Sunday, September 15, 17:15 - 20:15 | ||
Presentation Time: | Sunday, September 15, 17:15 - 20:15 | ||
Presentation: | Poster | ||
Publication: | 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany | ||
Paper Title: | A Computational Model for Combinatorial Generalization in Physical Perception from Sound | ||
License: | This work is licensed under a Creative Commons Attribution 3.0 Unported License. | ||
DOI: | https://doi.org/10.32470/CCN.2019.1276-0 | ||
Authors: | Yunyun Wang, Massachusetts Institute of Technology, Tsinghua University, China; Chuang Gan, MIT-IBM Watson AI Lab, United States; Max Siegel, Zhoutong Zhang, Jiajun Wu, Joshua Tenenbaum, Massachusetts Institute of Technology, United States | ||
Abstract: | Humans possess the unique ability of combinatorial generalization in auditory perception: given novel auditory stimuli, humans perform auditory scene analysis and infer causal physical interactions based on prior knowledge. Could we build a computational model that achieves human-like combinatorial generalization? In this paper, we present a case study on box-shaking: having heard only the sound of a single ball moving in a box, we seek to interpret the sound of two or three balls of different materials. To solve this task, we propose a hybrid model with two components: a neural network for perception and a physical audio engine for simulation. We use the network's output as an initial guess and refine it by MCMC sampling with the audio engine. By combining neural networks with a physical audio engine, our hybrid model achieves efficient and accurate combinatorial generalization in auditory scene perception. | ||
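The abstract describes an analysis-by-synthesis loop: a perception network proposes an initial scene hypothesis (how many balls, and of which materials), and MCMC sampling with a physical audio engine refines that hypothesis against the observed sound. The manuscript itself is not reproduced here, so the sketch below is only a minimal illustration of that loop under assumptions of ours: `network_initial_guess`, `simulate_audio_features`, the material set, and the Gaussian feature-distance likelihood are hypothetical stand-ins, not the authors' implementation.

```python
import math
import random

MATERIALS = ["wood", "metal", "plastic"]  # hypothetical material vocabulary

def network_initial_guess(observed):
    """Stand-in for the perception network: returns a scene hypothesis
    (a list of ball materials). A real model would predict this from sound."""
    return ["wood", "wood"]

def simulate_audio_features(scene):
    """Stand-in for the physical audio engine: maps a scene hypothesis to a
    feature vector. Deterministic within a run; a real engine would
    synthesize the box-shaking sound and extract audio features."""
    return [hash((m, i)) % 100 / 100.0 for i, m in enumerate(scene)]

def log_likelihood(scene, observed, sigma=0.1):
    """Gaussian log-likelihood on the distance between simulated and observed
    features (an assumption; the paper's actual metric may differ)."""
    sim = simulate_audio_features(scene)
    n = min(len(sim), len(observed))
    dist2 = sum((sim[i] - observed[i]) ** 2 for i in range(n))
    dist2 += abs(len(sim) - len(observed))  # penalize a ball-count mismatch
    return -dist2 / (2 * sigma ** 2)

def propose(scene):
    """Random-walk proposal: resample one ball's material, or add/remove a
    ball (treated as roughly symmetric for this sketch)."""
    new = list(scene)
    move = random.random()
    if move < 0.6 and new:                 # resample one material
        i = random.randrange(len(new))
        new[i] = random.choice(MATERIALS)
    elif move < 0.8 and len(new) < 3:      # add a ball (task uses up to three)
        new.append(random.choice(MATERIALS))
    elif len(new) > 1:                     # remove a ball
        new.pop(random.randrange(len(new)))
    return new

def mcmc_refine(observed, steps=2000):
    """Metropolis-Hastings refinement, started from the network's guess."""
    scene = network_initial_guess(observed)
    ll = log_likelihood(scene, observed)
    best, best_ll = scene, ll
    for _ in range(steps):
        cand = propose(scene)
        cand_ll = log_likelihood(cand, observed)
        # Standard Metropolis acceptance for a (near-)symmetric proposal.
        if random.random() < math.exp(min(0.0, cand_ll - ll)):
            scene, ll = cand, cand_ll
            if ll > best_ll:
                best, best_ll = scene, ll
    return best

if __name__ == "__main__":
    # Toy "recording": features of a known two-ball scene.
    observed = simulate_audio_features(["metal", "wood"])
    print(mcmc_refine(observed))
```

The point this sketch illustrates is the one the abstract makes: seeding the sampler with the network's guess starts the chain near a plausible hypothesis, so the simulation engine is queried far less than it would be from a random initialization, which is what makes the hybrid model efficient as well as accurate.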