Paper: PS-1A.8
Session: Poster Session 1A
Location: H Lichthof
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Humans cannot decipher adversarial images: Revisiting Zhou and Firestone (2019)
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1298-0
Authors: Marin Dujmović, Gaurav Malhotra, Jeffrey Bowers, University of Bristol, United Kingdom
Abstract: In recent years, deep convolutional neural networks (DCNNs) have shown extraordinary success in object recognition tasks. However, they can also be fooled by adversarial images (stimuli designed to fool networks) that do not appear to fool humans. This has been taken as evidence that these models work quite differently from the human visual system. However, Zhou and Firestone (2019) carried out a study in which they presented to humans adversarial images that fool DCNNs and found that, in many cases, humans chose the same label for these images as the DCNNs. They take these findings to support the claim that human and machine vision are more similar than commonly assumed. Here we report two experiments showing that the level of agreement between human and DCNN classification is driven by how the experimenter chooses the adversarial images and the labels offered to humans for classification. Depending on how one chooses these variables, humans can show a range of agreement levels with DCNNs, from well below to well above the level expected by chance. Overall, our results do not support a view of large systematic overlap between human and computer vision.
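For readers unfamiliar with how adversarial images are typically produced, the sketch below illustrates one common gradient-based approach (the fast gradient sign method) in PyTorch. It is an illustrative example only, using a stand-in classifier and a placeholder image; it is not the procedure used to generate the stimuli in this paper.

```python
# Illustrative FGSM sketch: perturb an input so a classifier's output
# changes while the image looks essentially unchanged to a human.
# The classifier, image, and label below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier; in practice this would be a pretrained DCNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # placeholder input image in [0, 1]
label = torch.tensor([3])          # class the network currently assigns
epsilon = 0.03                     # perturbation budget (kept small)

image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge each pixel in the direction that increases the loss for the
# assigned label, then clip back to the valid pixel range.
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```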