Technical Program

Paper Detail

Paper: GS-3.2
Session: Contributed Talks 5-6
Location: H0104
Session Time: Sunday, September 15, 09:50 - 10:30
Presentation Time: Sunday, September 15, 10:10 - 10:30
Presentation: Oral
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Automatically inferring task context for continual learning
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1417-0
Authors: Jasmine Collins, Kelvin Xu, Bruno Olshausen, Brian Cheung, University of California, Berkeley, United States
Abstract: While neural network research typically focuses on models that learn to perform well within the context of a single task, models that operate in the real world are often required to learn multiple tasks, or tasks that change under different contexts. Furthermore, in the real world the learning signal for each of these tasks usually arrives in sequence, rather than simultaneously in a batch as in the standard deep learning setting. We propose a method to infer when the task context has changed while learning from a continual data stream, and to adjust the model's learning accordingly to prevent interference between learned tasks. We also show how to automatically infer the context of a previously learned task for use in the future (e.g., during model evaluation). These preliminary results show that learning autonomously in a continually changing environment is possible in neural network models, and that this style of learning is better suited to how data naturally arrives in a real-world environment.
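The abstract describes the general recipe (detect a context change from the incoming stream, then route learning so that stored task knowledge is not overwritten) without giving implementation details here. The following is a hypothetical toy sketch of that recipe, not the authors' actual method: it keeps one linear model per inferred context and uses a prediction-error spike as the change signal, either re-attaching to the stored context that best explains the new data or allocating a fresh one. The class name, the warm-up heuristic, and all parameter values are illustrative assumptions.

```python
import numpy as np


class ContextInferringLearner:
    """Toy sketch of surprise-driven context inference (illustrative only).

    One weight vector per inferred context; a spike in the active model's
    squared prediction error triggers a context switch or allocation.
    """

    def __init__(self, dim, lr=0.1, threshold=0.5, warmup=50):
        self.lr = lr
        self.threshold = threshold
        self.warmup = warmup                 # steps before re-checking context
        self.contexts = [np.zeros(dim)]      # one weight vector per context
        self.active = 0                      # index of the current context
        self.steps_since_switch = 0

    @staticmethod
    def _sq_error(w, x, y):
        return float((w @ x - y) ** 2)

    def observe(self, x, y):
        # Surprise-based context inference, only after a warm-up period so a
        # freshly allocated context has time to fit its task before being judged.
        if (self.steps_since_switch >= self.warmup
                and self._sq_error(self.contexts[self.active], x, y) > self.threshold):
            errs = [self._sq_error(w, x, y) for w in self.contexts]
            best = int(np.argmin(errs))
            if errs[best] > self.threshold:
                # No stored context explains the data: allocate a new one.
                self.contexts.append(np.zeros_like(self.contexts[0]))
                best = len(self.contexts) - 1
            self.active = best
            self.steps_since_switch = 0
        # SGD step on the active context only; other contexts' weights are
        # untouched, so previously learned tasks suffer no interference.
        w = self.contexts[self.active]
        w += self.lr * (y - w @ x) * x
        self.steps_since_switch += 1
        return self.active
```

On a stream that presents task A, then task B, then task A again, this sketch allocates a second context at the A-to-B boundary and re-identifies the original context when A returns, which mirrors the abstract's claim of inferring a previously learned task's context for later reuse.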