Paper Detail

Paper: PS-1B.14
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Synaptic plasticity with correlated feedback: knowing how much to learn.
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Authors: Alexander Antrobus, Peter Latham, University College London, United Kingdom
Abstract: Learning synaptic weights is difficult. Connections receive global error signals, which are low-dimensional and noisy, and local signals, which lack information about each connection's relative contribution to the error. In this setting, it makes sense for connections to learn not just a ‘best guess’ of their weight, but also how confident they should be in that guess: i.e. to infer a distribution over their target weight. This idea was developed in (Aitchison & Latham, 2015) and (Aitchison, Pouget, & Latham, 2017); similar concepts appear in (Hiratani & Fukai, 2018). In those works the update equations are discrete in time and the likelihoods used in inference are Markov. Biology is not like this: signals are continuous in time and temporally correlated. Here we consider a non-Markov setting, deriving coupled ODEs that describe how the parameters of the posterior evolve as more data are observed. We use a local temporal-smoothing method to handle the continuous feedback and discrete presynaptic spike events, and we show that the smoothing window can be chosen in a principled way: by maximising the per-spike decrease in uncertainty. We find that our algorithm works better than the leaky delta rule with an optimised learning rate. More importantly, for the simple model described, the method accurately predicts the posterior variance.
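The core idea of the abstract — that a synapse should track a full posterior (mean and variance) over its target weight rather than a point estimate — can be illustrated with a toy sketch. The setup below is a hypothetical stand-in, not the paper's model: a single synapse with binary presynaptic activity, a static target weight, Gaussian feedback noise, and a discrete-time scalar Kalman-filter update (the continuous-time, correlated-feedback case is the paper's contribution). All names and parameter values are assumptions for illustration.

```python
import random

def bayesian_synapse(w_true=0.7, r=0.25, steps=500, seed=0):
    """Toy scalar Kalman-filter update for one synaptic weight.

    Hypothetical setup (NOT the paper's continuous-time model): on each
    step the synapse sees presynaptic activity x in {0, 1}, receives
    noisy feedback y = x * w_true + noise (noise variance r), and
    updates a Gaussian posterior (mu, s2) over its target weight.
    """
    rng = random.Random(seed)
    mu, s2 = 0.0, 1.0            # prior mean and variance over the weight
    for _ in range(steps):
        x = rng.choice([0.0, 1.0])          # presynaptic spike / no spike
        y = x * w_true + rng.gauss(0.0, r ** 0.5)
        # Kalman gain: large when the posterior is uncertain (s2 big),
        # small when feedback is noisy (r big). With x = 0 the feedback
        # carries no information about this synapse, so k = 0 and the
        # posterior is unchanged -- uncertainty sets the learning rate.
        k = s2 * x / (x * x * s2 + r)
        mu += k * (y - x * mu)
        s2 *= (1.0 - k * x)
    return mu, s2

mu, s2 = bayesian_synapse()
print(mu, s2)
```

The contrast with a delta rule is that here the effective learning rate `k` is not a fixed hyperparameter: it shrinks automatically as the posterior variance `s2` falls, which is the sense in which the synapse "knows how much to learn".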