| Paper: | PS-1B.30 |
| Session: | Poster Session 1B |
| Location: | H Fläche 1.OG |
| Session Time: | Saturday, September 14, 16:30 - 19:30 |
| Presentation Time: | Saturday, September 14, 16:30 - 19:30 |
| Presentation: | Poster |
| Publication: | 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany |
| Paper Title: | Human uncertainty improves object classification |
| License: | This work is licensed under a Creative Commons Attribution 3.0 Unported License. |
| DOI: | https://doi.org/10.32470/CCN.2019.1054-0 |
| Authors: | Joshua Peterson, Ruairidh Battleday, Thomas Griffiths, Princeton University, United States |
Abstract: | Despite the continued improvement in deep network classifiers, humans remain the enduring gold standard for strong generalization and robustness. In this work, we show that incorporating more human-like perceptual uncertainty into classification models can help narrow this gap. In particular, we show that training state-of-the-art convolutional neural networks with human-derived distributions over labels, as opposed to ground-truth labels, improves their generalization to out-of-sample datasets and robustness to adversarial attacks. These findings suggest that more accurately capturing uncertainty over image labels is critical to forming a robust visual model of the world. To facilitate further advancements of this kind, we propose our human-derived "soft" label distributions for the CIFAR10 test set, which we call CIFAR10H, as a new benchmark. |
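The core idea of the abstract — replacing one-hot ground-truth labels with human-derived "soft" label distributions — amounts to computing cross-entropy against a full probability distribution rather than a single class. The following is a minimal sketch of that loss computation in NumPy; the specific probability values are hypothetical and stand in for a classifier's softmax output and an aggregated human label distribution (the paper's actual training used CNNs on CIFAR10):

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy between a target label distribution and predicted
    class probabilities. With a one-hot target this reduces to the
    standard negative log-likelihood of the true class."""
    return -np.sum(target * np.log(pred + eps))

# Hypothetical softmax output of a classifier over three classes.
pred = np.array([0.7, 0.2, 0.1])

# Standard "hard" ground-truth label: all mass on class 0.
hard_label = np.array([1.0, 0.0, 0.0])

# Hypothetical human-derived "soft" label: most annotators chose
# class 0, but some chose class 1, reflecting perceptual uncertainty.
soft_label = np.array([0.8, 0.15, 0.05])

loss_hard = cross_entropy(pred, hard_label)  # equals -log(0.7)
loss_soft = cross_entropy(pred, soft_label)
```

Training against `soft_label` penalizes the network for being confidently wrong about classes that humans also found plausible, which is the mechanism the abstract credits for improved generalization and adversarial robustness.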