Paper: | PS-1B.17 | ||
Session: | Poster Session 1B | ||
Location: | H Fläche 1.OG | ||
Session Time: | Saturday, September 14, 16:30 - 19:30 | ||
Presentation Time: | Saturday, September 14, 16:30 - 19:30 | ||
Presentation: | Poster | ||
Publication: | 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany | ||
Paper Title: | Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning? | ||
License: | This work is licensed under a Creative Commons Attribution 3.0 Unported License. | ||
DOI: | https://doi.org/10.32470/CCN.2019.1078-0 | ||
Authors: | Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen, CerCo - CNRS, France | ||
Abstract: | In recent years, artificial neural networks have achieved performance close to or better than that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, in order to deepen our understanding of human cognition. Here, we investigate which neural network architecture (feed-forward vs. recurrent) matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies showed that artificial grammars can be learnt by human subjects after little exposure and often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in neural networks. Our results show that both architectures can “learn” the grammars (via error back-propagation) after the same number of training sequences as humans, but recurrent networks perform closer to humans than feed-forward ones, irrespective of the grammar complexity level. Moreover, our results suggest that explicit learning is best modeled by recurrent architectures, whereas feed-forward networks better capture the dynamics involved in implicit learning. An extended version of this work is available at: https://arxiv.org/abs/1902.04861 |
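
For illustration only, a minimal sketch of the kind of setup the abstract describes: a small Elman-style recurrent network trained with back-propagation to predict the next symbol in strings generated by a finite-state (Reber-like) artificial grammar. The grammar, network sizes, and hyperparameters below are assumptions chosen for brevity, not the authors' exact grammars or training regime (see the arXiv version for those details).

```python
# Hypothetical sketch: next-symbol prediction on a Reber-like artificial grammar
# with a one-layer recurrent network trained by back-propagation.
# Grammar, sizes, and hyperparameters are illustrative, not the paper's setup.
import random
import torch
import torch.nn as nn

# Toy finite-state grammar: state -> list of (emitted symbol, next state); None ends.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [("E", None)],
}
SYMBOLS = ["B", "T", "P", "S", "X", "V", "E"]
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def sample_string():
    """Walk the grammar graph from state 0, collecting emitted symbols."""
    seq, state = ["B"], 0
    while state is not None:
        sym, state = random.choice(GRAMMAR[state])
        seq.append(sym)
    return seq

class RecurrentPredictor(nn.Module):
    """Elman-style RNN that outputs next-symbol logits at every time step."""
    def __init__(self, n_symbols=len(SYMBOLS), hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_symbols, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_symbols)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

def one_hot(seq):
    x = torch.zeros(len(seq), len(SYMBOLS))
    x[range(len(seq)), [IDX[s] for s in seq]] = 1.0
    return x

model = RecurrentPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train on a modest number of grammatical sequences; the count here is
# arbitrary, whereas the paper matches the exposure given to human subjects.
for _ in range(500):
    seq = sample_string()
    x = one_hot(seq[:-1]).unsqueeze(0)           # inputs: all symbols but the last
    y = torch.tensor([IDX[s] for s in seq[1:]])  # targets: the following symbols
    loss = loss_fn(model(x).squeeze(0), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A feed-forward comparison of the sort contrasted in the abstract could replace the RNN with a fixed-window multilayer perceptron over the preceding symbols, keeping the same training data and loss.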