| Paper: | PS-1A.31 |
| Session: | Poster Session 1A |
| Location: | H Lichthof |
| Session Time: | Saturday, September 14, 16:30 - 19:30 |
| Presentation Time: | Saturday, September 14, 16:30 - 19:30 |
| Presentation: | Poster |
| Publication: | 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany |
| Paper Title: | Modeling echo-target acquisition in blind humans |
| License: | This work is licensed under a Creative Commons Attribution 3.0 Unported License. |
| DOI: | https://doi.org/10.32470/CCN.2019.1429-0 |
| Authors: | Santani Teng, Giovanni Fusco, Smith-Kettlewell Eye Research Institute, United States |
| Abstract: | Echolocating organisms ensonify their surroundings, then extract object and spatial information from the echoes. This behavior has been observed in some blind humans, but the computations underlying the strategy remain poorly understood. Here we monitored the movements and echo emissions of an expert blind echolocator performing a target detection and localization task. We found that the precision of responses depended significantly on the size of the target and the availability of active echo cues. Characterizing human echolocation in this way would place it in the context of other active sensing behaviors, constrain the types of perceptual mechanisms mediating the behavior, and, at a practical level, could serve as a basis for optimizing therapeutic training interventions. |