Paper Detail

Paper: PS-2B.39
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: How the brain encodes meaning: Comparing word embedding and computer vision models to predict fMRI data during visual word recognition
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1088-0
Authors: Ning Mei, Usman Sheikh, Basque Center on Cognition, Brain and Language, Spain; Roberto Santana, University of the Basque Country, Spain; David Soto, Basque Center on Cognition, Brain and Language, Spain
Abstract: The representational spaces of conceptual knowledge in the brain remain unclear. We addressed this question in a functional MRI study in which 27 participants were required to either read visual words or think about the concepts the words represented. To examine the properties of semantic representations in the brain, we tested encoding models based on word embedding models (FastText, Bojanowski, Grave, Joulin, & Mikolov, 2017; GloVe, Pennington, Socher, & Manning, 2014; word2vec, Mikolov, Sutskever, Chen, Corrado, & Dean, 2013) and on computer vision models (VGG19, Simonyan & Zisserman, 2014; MobileNetV2, Sandler, Howard, Zhu, Zhmoginov, & Chen, 2018; DenseNet121, Huang, Liu, Van Der Maaten, & Weinberger, 2017) fitted with the image referents of the words. We used the feature representations extracted from these models to fit and predict BOLD responses in putative substrates of the semantic network. Our results showed that the computer vision models outperformed the word embedding models in explaining brain responses during semantic processing. Intriguingly, this pattern held independently of the task demand (reading vs. thinking about the words). The results indicate that the abstract representations in the embedding layer of computer vision models provide a better semantic model of how the brain encodes word meaning.
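
As a rough illustration of the encoding-model approach the abstract describes (not the authors' code), the following Python sketch fits a ridge regression from model features to voxel responses and scores it out-of-sample. The feature matrix X, the response matrix Y, their shapes, and the ridge penalty are all placeholder assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    # Placeholder data standing in for the real inputs: X holds one
    # feature vector per word (e.g., embedding-layer activations of a
    # computer vision model for the word's image referent); Y holds the
    # corresponding BOLD response pattern in a region of interest.
    rng = np.random.default_rng(0)
    n_words, n_features, n_voxels = 96, 300, 500
    X = rng.standard_normal((n_words, n_features))
    Y = rng.standard_normal((n_words, n_voxels))

    # Ridge encoding model: map features to voxel responses, with
    # predictions obtained out-of-sample via cross-validation.
    Y_hat = cross_val_predict(Ridge(alpha=1.0), X, Y, cv=5)

    # Evaluate each voxel as the correlation between observed and
    # predicted responses, then average over voxels.
    r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1]
                  for v in range(n_voxels)])
    print(f"mean voxelwise correlation: {r.mean():.3f}")

Repeating this with feature spaces drawn from different models (word embeddings vs. computer vision features) and comparing the resulting scores is the kind of model comparison the abstract reports.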