Paper Detail

Paper: PS-2B.2
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Quantitatively comparing predictive models with the Partial Information Decomposition
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1142-0
Authors: Christoph Daube, University of Glasgow, United Kingdom; Bruno Giordano, Centre National de la Recherche Scientifique, France; Philippe Schyns, Robin Ince, University of Glasgow, United Kingdom
Abstract: There is increasing focus in cognitive and computational neuroscience on the use of encoding and decoding models to gain insight into cognitive processing. Frequently, encoding models are fit to a number of different feature sets, and the out-of-sample predictive performance of the resulting models is compared. However, to gain the maximum benefit from this modelling, we need to go beyond simply ranking model performance in terms of absolute predictive power. We also need to directly compare and relate the predictions between models, to gain insight into which models are predicting common versus unique aspects of the neural response. The Partial Information Decomposition (PID) provides a principled theoretical framework to address this question, as it decomposes the total predictive performance of two models into redundant (overlapping), unique, and synergistic parts. We show that, like classical information-theoretic quantities, variance decomposition approaches conflate synergy and redundancy and so could provide a misleading view of the unique predictive power of a model. We also suggest how the use of encoding models and PID can help interpret decoding models.
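To make the contrast concrete, here is a minimal sketch of the variance-decomposition baseline the abstract argues against (classical commonality analysis over two feature sets), not the paper's PID method. All data, variable names (X1, X2, y), and model choices (in-sample ordinary least squares) are illustrative assumptions; the paper itself works with out-of-sample predictive performance and an information-theoretic decomposition.

```python
import numpy as np

# Synthetic data: a shared driver seen by both feature sets, plus a
# component unique to the second set (purely illustrative).
rng = np.random.default_rng(0)
n = 500
driver = rng.normal(size=n)
X1 = np.column_stack([driver + 0.5 * rng.normal(size=n)])
X2 = np.column_stack([driver + 0.5 * rng.normal(size=n),
                      rng.normal(size=n)])
y = driver + 0.3 * X2[:, 1] + 0.2 * rng.normal(size=n)

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r1 = r_squared(X1, y)                          # model 1 alone
r2 = r_squared(X2, y)                          # model 2 alone
r12 = r_squared(np.column_stack([X1, X2]), y)  # both feature sets jointly

unique1 = r12 - r2       # variance attributed only to model 1
unique2 = r12 - r1       # variance attributed only to model 2
common = r1 + r2 - r12   # "shared" term: a single number that mixes
                         # redundancy and synergy, per the abstract
print(f"unique1={unique1:.3f}, unique2={unique2:.3f}, common={common:.3f}")
```

Note that the three terms sum to the joint R^2 by construction, and the "common" term can even go negative when the feature sets are synergistic; PID's appeal is precisely that it separates the redundant and synergistic contributions that this single term collapses together.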