Technical Program

Paper Detail

Paper: PS-2B.46
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Neural Topic Modelling
License: Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Pamela Hathway, Dan F.M. Goodman, Imperial College London, United Kingdom
Abstract: We introduce neural topic modelling - an unsupervised, scalable and interpretable neural data analysis tool which can be applied across different spatial and temporal scales. The aim is an approach that can handle the ever-increasing number of neurons recorded by high channel count multi-electrode arrays. Neural topic modelling is based on latent Dirichlet allocation, a method routinely used in text mining to find latent topics in texts. The spike trains are converted into "neural words" - the presence or absence of discrete events (e.g. neuron 1 has a higher firing rate than usual). Neural topic modelling results in a number of topics (probability distributions over words) which best explain the given co-occurrences of neural words over time. Applied to an electrophysiological dataset of mouse visual cortex, hippocampus and thalamus neurons, neural topic modelling groups neural words into topics which exhibit common attributes such as overlapping receptive fields or proximity on the recording electrode. It recovers these relationships despite receiving no knowledge about the cortex topography or about the spatial structure of the stimuli. Choosing neural activity patterns as neural words that are relevant to the brain makes the topics interpretable by both the brain and researchers, setting neural topic modelling apart from other machine learning approaches.
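The pipeline described in the abstract can be sketched in a few lines: convert spike counts into binary "neural words" (a discrete event per neuron, e.g. firing above its usual rate), treat each time window as a document, and fit latent Dirichlet allocation to recover topics. This is a minimal illustration with synthetic data, not the authors' implementation; the group structure, event definition (above-median firing), and use of scikit-learn's `LatentDirichletAllocation` are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical synthetic data: Poisson spike counts for 20 neurons over
# 200 time windows. Two groups of neurons tend to be co-active, standing
# in for shared structure such as overlapping receptive fields.
n_neurons, n_windows = 20, 200
rates = np.full((n_windows, n_neurons), 2.0)
group_a, group_b = np.arange(0, 10), np.arange(10, 20)
active_a = rng.random(n_windows) < 0.5
rates[np.ix_(active_a, group_a)] = 8.0          # group A active in these windows
rates[np.ix_(~active_a, group_b)] = 8.0         # group B active in the others
counts = rng.poisson(rates)

# "Neural words": presence/absence of a discrete event per neuron -- here,
# firing above that neuron's median rate. Each time window is a "document"
# whose word counts form the document-term matrix fed to LDA.
words = (counts > np.median(counts, axis=0)).astype(int)

# Fit LDA; each topic is a probability distribution over neural words.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(words)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Inspect which neurons' events dominate each topic.
for k, topic in enumerate(topic_word):
    print(f"topic {k}: top neurons {np.argsort(topic)[-5:][::-1]}")
```

In this toy setting the two topics should weight the two co-active neuron groups differently, mirroring how, in the paper, topics group neural words sharing attributes like receptive-field overlap without any spatial information being supplied.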