Reconstructing Neural Networks from Their Spike Trains with Transfer Entropy
Felix Goetze1,2*, Pik-Yin Lai1
1Department of Physics, National Central University, Chung-Li, Taiwan
2Taiwan International Graduate Program for Molecular Science and Technology, Institute for Atomic and Molecular Sciences, Academia Sinica, Taipei, Taiwan
* Presenting author: Felix Goetze, email: afgoetze@gmail.com
The effective connectivity of spike train recordings from large populations of neurons is analyzed by estimating the information transfer in these networks with transfer entropy[1]. This model-free method quantifies directed interactions between neurons, even when the interactions are non-linear. High information transfer between two spike trains is evidence for an underlying excitatory synapse between the neurons, but inhibitory synapses also show significant information transfer. We extend the effective connectivity analysis by revealing whether the information transfer originates from an excitatory or an inhibitory synapse. To distinguish these types of interactions we analyze the local transfer entropies[2] of each interaction and define the sorted local transfer entropy as the discriminating quantity. We improve the network reconstructions by applying state-conditioning[3] to the entropy estimates, which removes network effects during highly synchronized events and leads to better estimates of synaptic delays and interaction strengths. We further show that neurons can be correctly classified as excitatory or inhibitory from their estimated outgoing connection types. We test these techniques on simulated spike trains from random networks of Izhikevich neurons with random synaptic delays and connection weights evolved by spike-timing-dependent plasticity, as in a previous study[4], showing that we can distinguish interaction types, improve the reconstruction, and classify the neurons.
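
As a rough illustration of the quantities involved, the following minimal Python sketch estimates the pairwise transfer entropy and its local values from binned binary spike trains, assuming a history length of one bin and a single candidate delay. The function name, the toy coupling, and the plug-in (count-based) estimator are illustrative assumptions, not the exact estimator used in the study, which additionally applies state-conditioning and evaluates several delays.

import numpy as np
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """Average and local transfer entropy (in bits) from spike train y to x.

    x, y: equal-length binary arrays (1 = spike in a time bin).
    delay: candidate synaptic delay in bins; target history length is one bin.
    """
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    t = np.arange(max(delay, 1), len(x))          # valid time indices
    x_now, x_past, y_past = x[t], x[t - 1], y[t - delay]

    # empirical counts of the joint states needed for the conditionals
    c_xxy = Counter(zip(x_now, x_past, y_past))
    c_xy = Counter(zip(x_past, y_past))
    c_xx = Counter(zip(x_now, x_past))
    c_x = Counter(x_past)

    # local transfer entropy per time step [2]:
    # log2[ p(x_t | x_{t-1}, y_{t-delay}) / p(x_t | x_{t-1}) ]
    local = np.empty(len(t))
    for i, s in enumerate(zip(x_now, x_past, y_past)):
        p_full = c_xxy[s] / c_xy[s[1:]]
        p_self = c_xx[s[:2]] / c_x[s[1]]
        local[i] = np.log2(p_full / p_self)
    return local.mean(), local

# Toy check: y drives x with a one-bin delay and 50% transmission probability.
rng = np.random.default_rng(0)
n = 20000
y = (rng.random(n) < 0.05).astype(int)
x = np.zeros(n, dtype=int)
x[1:] = ((y[:-1] == 1) & (rng.random(n - 1) < 0.5)) | (rng.random(n - 1) < 0.02)
te, local_te = transfer_entropy(x, y, delay=1)
print(f"TE(y -> x) = {te:.4f} bits")  # should clearly exceed the value for shuffled y

The array of local transfer entropy values returned here is the raw material from which the sorted local transfer entropy described above would be computed; its exact definition belongs to the full method and is not reproduced in this sketch.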

[1] Nigam, S., Shimono, M., Ito, S., Yeh, F.-C., Timme, N., Myroshnychenko, M., Lapish, C. C., Tosi, Z., Hottowy, P., Smith, W. C. et al. (2016). The Journal of Neuroscience. 36, 670–684.
[2] Lizier, J. T., Prokopenko, M. & Zomaya, A. Y. (2008). Phys. Rev. E. 77, 026110.
[3] Stetter, O., Battaglia, D., Soriano, J., Geisel, T. & Beggs, J. (2012). PLoS Computational Biology. 8, e1002653.
[4] Ito, S., Hansen, M. E., Heiland, R., Lumsdaine, A., Litke, A. M., Beggs, J. M. & Zochowski, M. (2011). PLoS ONE. 6, e27431.


Keywords: transfer entropy, neural networks, inverse problem