Paper: PS-1B.9
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Temporal Segmentation for Faster and Better Learning
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1133-0
Authors: Brad Wyble, Pennsylvania State University, United States; Howard Bowman, University of Birmingham, United Kingdom
Abstract: The human visual system faces an extraordinary challenge in building memories from a continuous stream of visual data without the opportunity to store it and process it offline. This suggests a crucial role for visual attention in selecting specific moments in time. This project outlines key data and theories related to human temporal attention. The focus of this submission is on bridging the divide between the human visual system and artificial models by explicating segmentation mechanisms that accelerate the ability to learn the structure of the world through continuous vision. A computational model that simulates the temporal dynamics of visual attention in humans indicates a role for attention in on-demand temporal segmentation of incoming information at the sub-second scale. We predict that in humans this segmentation plays a key role in 1) simplifying visual information for learning about object kinds through compression, 2) segmenting information from neighboring fixations, and 3) encoding the temporal sequence of events. Such segmentation is likely to play a key role in allowing artificial systems to learn from visual input in an online fashion as well, even though their specific temporal constraints are not shared with biology.
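
The abstract describes attention as an on-demand mechanism that cuts a continuous visual stream into discrete episodes at the sub-second scale. The following is a minimal toy sketch of that idea, not the authors' model: a transient attention gate opens when input salience crosses a threshold, binds the next few frames into one episode, and decays so the episode self-terminates. The function name `segment_stream` and the parameter values (`trigger`, `gate_decay`, `close_below`) are illustrative assumptions, not quantities taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Episode:
    start: int                                          # frame index that triggered attention
    frames: List[float] = field(default_factory=list)   # salience values bound into this episode

def segment_stream(salience, trigger=0.6, gate_decay=0.7, close_below=0.2):
    """Segment a continuous salience stream into short, self-terminating episodes.

    Attention is deployed when salience exceeds `trigger`; an internal gate value
    then decays multiplicatively by `gate_decay` each frame, and the episode is
    sealed off once the gate falls below `close_below`.
    """
    episodes, gate, current = [], 0.0, None
    for t, s in enumerate(salience):
        if current is None and s >= trigger:
            # Salient input deploys transient attention and opens a new episode.
            gate, current = 1.0, Episode(start=t)
        if current is not None:
            current.frames.append(s)
            gate *= gate_decay            # transient attention decays on its own
            if gate < close_below:
                episodes.append(current)  # gate closed: episode is complete
                current = None
    if current is not None:
        episodes.append(current)          # stream ended mid-episode
    return episodes

if __name__ == "__main__":
    # Toy stream: mostly low salience with two brief salient events.
    stream = [0.1, 0.1, 0.9, 0.5, 0.3, 0.1, 0.1, 0.1, 0.8, 0.4, 0.2, 0.1]
    for ep in segment_stream(stream):
        print(f"episode starting at frame {ep.start}: {ep.frames}")
```

Under these assumed parameters, the toy stream above yields two short episodes, one per salient event, illustrating how a decaying attention gate can impose discrete structure on otherwise continuous input.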