CNS*2017 ITW
19–20 July 2017, Antwerp, Belgium
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.
The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
For the program of the past IT workshops see the Previous workshops section.
Workshop dinner: Planned for Wed 19/7, 19:30 at Café-Restaurant Bourla, Graanmarkt 7. Please join us there; Viola will count numbers at the session this afternoon, but you should be able to just turn up anyway.
The workshop will be held as a part of the wider CNS*2017 meeting, in Antwerp, Belgium. Please see the CNS*2017 website for registration to the workshops (this is required to attend).
Our program is as shown in the following table.
You can also download a PDF copy of the program.
Wednesday, July 19

Session: Information dynamics I (Chair: Joseph Lizier)
09:00–09:45  Raul Vicente (University of Tartu): "Estimating and applying partial information decomposition to complex systems"
09:45–10:30  Jil Meier (Delft University of Technology): "The epidemic spreading model and the direction of information flow in brain networks"
10:30–11:00  Break

Session: Learning and inference (Chair: Justin Dauwels)
11:00–11:45  Karl Friston (University College London): "Active inference and artificial curiosity"
11:45–12:30  Taro Toyoizumi (RIKEN Brain Science Institute): "A Local Learning Rule for Independent Component Analysis"
12:30–14:00  Lunch

Session: Contributions (Chair: Alexander G. Dimitrov)
14:00–14:45  Fleur Zeldenrust (Radboud Universiteit): "Estimating the information extracted by a single spiking neuron from a continuous input time series"
14:45–15:30  Mehrdad Salmasi (Ludwig-Maximilians-University Munich): "Information rate of a synapse during short-term depression"
15:30–16:00  Break

Session: Coding, spikes and energy (Chair: Lubomir Kostal)
16:00–16:45  Robin Ince (University of Glasgow): "Quantifying representational interactions in neuroimaging and electrophysiological data using information theory"
16:45–17:30  Renaud Jolivet (University of Geneva): "Energy-efficient information transfer at synapses"
17:30–18:00  Rodrigo Cofre (University of Geneva and Universidad de Valparaíso): "Information Entropy Production and Large deviations of Maximum Entropy Processes from Spike Trains"

Thursday, July 20

Session: Information transfer and continuous time (Chair: Taro Toyoizumi)
09:00–09:45  Lionel Barnett (University of Sussex): "Information Transfer in Continuous and Discrete Time"
09:45–10:30  Pedro Martinez Mediano (Imperial College London): "Integrated Information Theory Without the Hot Air"
10:30–11:00  Break

Session: Information dynamics II (Chair: Raul Vicente)
11:00–11:45  Viola Priesemann (Max Planck Institute for Dynamics and Self-Organization, Goettingen): "Quantifying Information Storage and Modification in Neural Recordings"
11:45–12:30  Demian Battaglia (Aix-Marseille University): "Discrete information processing states in anesthetized rat recordings"
12:30–14:00  Lunch

Session: Directed information (Chair: Demian Battaglia)
14:00–14:45  Selin Aviyente (Michigan State University): "Directed Information: Application to EEG during cognitive control"
14:45–15:30  Adrià Tauste (Universitat Pompeu Fabra): "Directed information flow within the thalamocortical network"
15:30–16:00  Break

Session: Information decompositions (Chair: Viola Priesemann)
16:00–16:45  Daniele Marinazzo (University of Ghent): "Synergy and redundancy in dynamical systems: towards a practical and operative definition"
16:45–17:30  Lubomir Kostal (Academy of Sciences of the Czech Republic): "Reference frame independence as a constraint on the mutual information decomposition"
17:30–18:00  Joseph T. Lizier (The University of Sydney): "An estimator for transfer entropy between spike trains"
Selin Aviyente - "Directed Information: Application to EEG during cognitive control"
Effective connectivity refers to the influence one neural system exerts on another and corresponds to the parameter of a model that tries to explain the observed dependencies. In this talk, we will present a recently proposed information-theoretic measure, directed information (DI), for capturing the causality relationships in the brain. Compared to traditional causality detection methods based on linear models, directed information is a model-free measure and can detect both linear and nonlinear causality relationships. However, the effectiveness of using DI for capturing the causality in different models and neurophysiological data is tempered by the challenges of estimating it from limited data. In this talk, we address these issues by evaluating the performance of directed information on both simulated data sets and electroencephalogram (EEG) data to illustrate its effectiveness for quantifying the effective connectivity in the brain. We further illustrate an application of this measure in the detection of directed community structures from EEG data during cognitive control.
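As a toy illustration of the estimation problem the abstract mentions, the sketch below (an illustrative assumption of this page, not code from the talk) computes a plug-in one-step directed-information term between discrete sequences. Unlike transfer entropy, this term also conditions on the instantaneous source symbol.

```python
import numpy as np
from collections import Counter

def cond_mutual_info(x, y, z):
    """Plug-in estimate of I(X; Y | Z) in bits for discrete sequences."""
    n = len(z)
    nxyz = Counter(zip(x, y, z))
    nxz = Counter(zip(x, z))
    nyz = Counter(zip(y, z))
    nz = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in nxyz.items():
        # p(x,y,z) p(z) / (p(x,z) p(y,z)) reduces to the count ratio below
        cmi += (c / n) * np.log2(c * nz[zi] / (nxz[(xi, zi)] * nyz[(yi, zi)]))
    return cmi

def di_step(x, y):
    """One-step directed-information term I((X_{t-1}, X_t); Y_t | Y_{t-1})."""
    xp = list(zip(x[:-1], x[1:]))  # joint (past, present) source symbol
    return cond_mutual_info(xp, y[1:], y[:-1])
```

With a source that simply drives a one-step-delayed copy, the forward term approaches 1 bit while the reverse term stays near zero (up to plug-in bias).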
Lionel Barnett - "Information Transfer in Continuous and Discrete Time"
Inference of information transfer in complex neural systems from neurophysiological recordings is an increasingly prevalent approach to directed neural functional analysis. Neurophysiological recordings (such as EEG, MEG, fMRI, ECoG, etc.) are generally obtained by subsampling, at discrete time intervals, a continuous-time analogue signal associated with some underlying biophysiological process. Popular measures for quantifying information transfer such as transfer entropy and Granger causality, however, have been propounded almost exclusively in discrete time. How, then, do estimates of information transfer thus derived relate to "ground truth" information transfer at the continuous-time biophysical level? Our recent study [1] analyses information transfer in continuous time, revealing a complex interaction between neural time scales, prediction horizon and sampling rate, which mediates the ability to detect and reliably infer information transfer at the neural level from discretely sampled analogue signals. These results have cogent implications for statistical inference of directed functional connectivity from neurophysiological recordings.
[1] L. Barnett and A. K. Seth (2016): Detectability of Granger causality for subsampled continuoustime neurophysiological processes. J. Neurosci. Methods 275. (Preprint)
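The detectability issue can be played with numerically. The sketch below (purely illustrative, with toy VAR parameters assumed by this page, not the models of [1]) fits lag-1 linear Granger causality to a simulated coupled process at the native rate and again after subsampling by a factor of two; the estimated causality shrinks markedly under subsampling.

```python
import numpy as np

def gc_lag1(src, dst):
    """Lag-1 linear Granger causality ln(var_reduced / var_full)."""
    y, ylag, xlag = dst[1:], dst[:-1], src[:-1]
    Xr = np.column_stack([ylag, np.ones_like(ylag)])            # own past only
    Xf = np.column_stack([ylag, xlag, np.ones_like(ylag)])      # plus source past
    rr = y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]
    rf = y - Xf @ np.linalg.lstsq(Xf, y, rcond=None)[0]
    return np.log(rr.var() / rf.var())

# simulate a weakly self-coupled VAR(1) with x -> y coupling
rng = np.random.default_rng(0)
n = 20000
x, y = np.zeros(n), np.zeros(n)
ex, ey = rng.normal(size=n), rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.2 * x[t - 1] + ex[t]
    y[t] = 0.2 * y[t - 1] + 0.5 * x[t - 1] + ey[t]

gc_native = gc_lag1(x, y)          # causality at the recorded rate
gc_sub = gc_lag1(x[::2], y[::2])   # same measure after subsampling by 2
```

For these parameters the subsampled estimate is several times smaller than the native-rate one, a crude version of the detectability loss the study analyses.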
Demian Battaglia - "Discrete information processing states in anesthetized rat recordings"
How does information flow through cortical networks: via fixed and discrete internal pathways, or via routes that are continuously and dynamically reconfigured? And if functional networks of interaction are "liquid", is there some syntax underlying the way in which they reconfigure across time? Here we show, through information-theoretical analysis of LFP and single-unit recordings in anaesthetized rats, that functional connectivity maps between entorhinal cortex and CA1 neurons display dynamic reconfiguration between theta and slow oscillations in vivo. At the level of field potentials, directed functional connectivity within and between cortical layers depends on the activated oscillatory states, but it is relatively stable within each of these oscillatory states. In contrast, at the single-cell level, information is shared through much more volatile networks, which nevertheless continue to be sampled from state-specific ensembles.
Beyond the dependency on the activated oscillatory modes, we identify switching between a larger number of clearly distinct functional states associated with alternative information processing properties. In particular, we extract discrete information storage and information sharing states, in which different state-specific sets of neurons perform different primitive information processing operations. Switching between these information processing states is not necessarily aligned with transitions between theta or slow-wave oscillatory epochs but displays much richer dynamics.
Using statistical approaches linked to the minimum description length framework, we study the computational complexity associated with the spontaneous state-switching dynamics observed during the anaesthetized recordings. By estimating "grammars" able to generate the observed state sequences, modelled as suitable symbolic sequences, we show that the syntactic complexity of state sequences is higher during theta than during slow-wave epochs, and higher for the functional dynamics of entorhinal cortex than for CA1.
Rodrigo Cofre - "Information Entropy Production and Large deviations of Maximum Entropy Processes from Spike Trains"
Experimental recordings of the collective activity of interacting spiking neurons exhibit random behavior and memory effects. Therefore, the stochastic process modeling this activity is required to show some degree of irreversibility. We use a generalization of classical information theory, called the thermodynamic formalism, to build a framework, in the context of spike train statistics, to quantify the degree of irreversibility of any parametric maximum entropy measure under arbitrary constraints. We provide an explicit formula for the information entropy production of the inferred Markov maximum entropy process. We provide examples to illustrate our results and discuss the importance of irreversibility for modeling spike train statistics.
Additionally, we review large deviations techniques useful to accurately describe statistical properties in terms of sampling size and maximum entropy parameters. In particular, we focus on the fluctuations of average values of observables, irreversibility, and the identifiability problem of maximum entropy Markov chains. We illustrate these applications using simple examples of relevance in this field.
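For a finite-state stationary Markov chain, entropy production has a well-known closed form, e_p = sum_ij pi_i P_ij ln(pi_i P_ij / (pi_j P_ji)), which vanishes exactly when detailed balance holds. A minimal sketch (an illustration of that formula only, not the talk's maximum-entropy machinery; it assumes P_ij > 0 implies P_ji > 0):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of transition matrix P (rows sum to 1)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def entropy_production(P):
    """Entropy production rate (nats/step) of a stationary Markov chain."""
    pi = stationary(P)
    ep = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[0]):
            if P[i, j] > 0 and P[j, i] > 0:
                ep += pi[i] * P[i, j] * np.log(pi[i] * P[i, j] / (pi[j] * P[j, i]))
    return ep
```

A biased three-state cycle gives a strictly positive rate, while any detailed-balance (e.g. symmetric) chain gives zero.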
Karl Friston - "Active inference and artificial curiosity"
This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world; thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
Robin Ince - "Quantifying representational interactions in neuroimaging and electrophysiological data using information theory"
There is growing recognition of the importance of considering the information content of experimentally recorded neural signals, rather than studying only differences in activation levels between conditions. This requires experimental designs with systematic stimulus sampling as well as analysis techniques that can go beyond standard pairwise measures of dependence to relate representations between different neural signals, as well as behavioural responses. I will present Gaussian-Copula Mutual Information (GCMI) [1], a mutual information estimator that has a number of advantages for practical data analysis. I will demonstrate how this estimator can be used to quantify representational interactions through co-information and the Partial Information Decomposition [2], both approaches that can quantify redundancy and synergy between neural representations or between stimulus features. As an analysis tool, these information theoretic techniques can address, via redundancy, the same conceptual questions as Representational Similarity Analysis or cross-decoding methods, but are more broadly applicable to a wide variety of experimental designs. Crucially, they can also quantify synergistic interactions, which no existing approaches address. I will illustrate with several examples: temporal interactions in event-related EEG, multisensory audiovisual stimulus interactions in MEG, behaviour-brain interactions revealing task-relevant feature coding with MEG, and multimodal data fusion of simultaneously recorded EEG and fMRI. [1] Ince et al. (2017) Human Brain Mapping doi:10.1002/hbm.23471; [2] Ince (2016) arXiv 1602.05063.
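A minimal version of the copula idea for two 1-D variables can be written in a few lines (a sketch of the general approach in [1], not the released GCMI toolbox): rank-transform each variable to standard-normal marginals, then use the Gaussian mutual information of the transformed pair as a lower bound.

```python
import numpy as np
from statistics import NormalDist

def copula_norm(x):
    """Rank-transform a 1-D sample to standard-normal marginals."""
    ranks = np.argsort(np.argsort(x))          # 0 .. n-1
    u = (ranks + 1) / (len(x) + 1)             # empirical CDF, in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])

def gcmi_1d(x, y):
    """Gaussian-copula mutual information lower bound (bits)."""
    gx, gy = copula_norm(x), copula_norm(y)
    r = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)
```

For jointly Gaussian data with correlation 1/sqrt(2) the true MI is 0.5 bit, which the estimator recovers closely; for independent data it stays near zero.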
Renaud Jolivet - "Energy-efficient information transfer at synapses"
The nervous system consumes a disproportionate fraction of the resting body's energy production. In humans, the brain represents 2% of the body's mass, yet it accounts for ~20% of the total oxygen consumption. Expansion in the size of the brain relative to the body and an increase in the number of connections between neurons during evolution underpin our cognitive powers and are responsible for our brains' high metabolic rate. Despite the significance of energy consumption in the nervous system, how energy constrains and shapes brain function is often underappreciated. I will illustrate the importance of brain energetics and metabolism, and discuss how the brain trades information for energy savings in the visual pathway. Indeed, a significant fraction of the information those neurons could transmit in theory is not passed on to the next step in the visual processing hierarchy. I will discuss how this can be explained by considerations of energetic optimality.
Lubomir Kostal - "Reference frame independence as a constraint on the mutual information decomposition"
The value of Shannon's mutual information is commonly used to describe the total amount of information that the neural code transfers between the ensemble of stimuli and the ensemble of neural responses. In addition, it is often desirable to know which stimulus features or which response features are most informative. The literature offers several different decompositions of the mutual information into its stimulus- or response-specific components, such as the specific surprise or the uncertainty reduction, but the number of mutually distinct measures is in fact infinite. We attempt to resolve this ambiguity by requiring the specific information measures to be invariant under invertible coordinate transformations of the stimulus and the response ensembles. We also discuss the impact of the reference frame change on the ultimate decoding accuracy.
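The two decompositions the abstract names (specific surprise and uncertainty reduction) are easy to compute from a joint probability table; both average to the same mutual information while assigning different stimulus-specific values, which is the ambiguity at issue. A sketch:

```python
import numpy as np

def specific_infos(p_sr):
    """p_sr[s, r] is a joint distribution over stimulus and response.
    Returns (specific surprise, uncertainty reduction) per stimulus;
    both decompositions average (over p(s)) to I(S; R)."""
    p_s = p_sr.sum(axis=1)
    p_r = p_sr.sum(axis=0)
    p_r_given_s = p_sr / p_s[:, None]
    with np.errstate(divide='ignore', invalid='ignore'):
        # specific surprise: KL( p(r|s) || p(r) )
        i_ss = np.nansum(p_r_given_s * np.log2(p_r_given_s / p_r), axis=1)
        # uncertainty reduction: H(R) - H(R | S = s)
        h_r = -np.sum(p_r * np.log2(p_r))
        h_r_given_s = -np.nansum(p_r_given_s * np.log2(p_r_given_s), axis=1)
    return i_ss, h_r - h_r_given_s
```

On an asymmetric binary table the two per-stimulus vectors differ, yet their stimulus-weighted sums agree exactly.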
Joseph Lizier - "An estimator for transfer entropy between spike trains"
The nature of a directed relationship (or lack thereof) between neural entities is a fundamental topic of inquiry in computational neuroscience. Information theory provides the primary tool, transfer entropy (TE), for analysis of such directed relationships in a nonlinear, model-free fashion, by measuring the predictive gain about state transitions in a target time series from observing some source time series. While TE has been used extensively to analyse recordings from fMRI, MEG and EEG, fewer applications have been made to spiking time series. Temporal binning before computing TE on the resulting binary discrete-time series is the default approach here, leaving open the questions of whether one can achieve estimates that avoid temporal binning and work directly on the (continuous-valued) timestamps of spikes, and whether such estimates would be more accurate. Recent theoretical developments have suggested a path forward here, and we build on these to propose an estimator for a point-process formulation of TE, remaining in the continuous-time regime by harnessing a nearest-neighbours approach to matching (rather than binning) inter-spike interval (ISI) histories and future spike times. By retaining as much information about ISIs as possible, this estimator is expected to improve on properties of TE estimation such as robustness to noise and undersampling, bias removal, and sensitivity to the strength of the relationship.
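For contrast, the conventional binned, discrete-time baseline that the proposed estimator aims to improve on can be written in a few lines (history length 1, a simplification; this is the standard plug-in approach, not the talk's continuous-time estimator):

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, dst):
    """Binned transfer entropy (bits) with history length 1:
    TE(src -> dst) = I(src_{t-1}; dst_t | dst_{t-1})."""
    x, yt, yp = src[:-1], dst[1:], dst[:-1]
    n = len(x)
    nxyz = Counter(zip(x, yt, yp))
    nxz = Counter(zip(x, yp))
    nyz = Counter(zip(yt, yp))
    nz = Counter(yp)
    te = 0.0
    for (a, b, c), k in nxyz.items():
        te += (k / n) * np.log2(k * nz[c] / (nxz[(a, c)] * nyz[(b, c)]))
    return te
```

On binned spike trains where the target copies the source with a one-bin delay (spike probability 0.2 per bin), the forward TE approaches the source entropy of roughly 0.72 bit while the reverse TE stays near zero.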
Daniele Marinazzo - "Synergy and redundancy in dynamical systems: towards a practical and operative definition"
The presence of redundancy and/or synergy in multivariate time series data hinders the estimation of directed dynamical connectivity from each driver variable to a given target. We show that an unnormalized definition of Granger causality can point to redundant multiplets of variables influencing the target, by maximizing the total Granger causality to a given target over all possible partitions of the set of driving variables. Consequently, we introduce a pairwise index of synergy which, unlike previous definitions of synergy, is zero when two independent sources additively influence the future state of the system. The pairwise synergy index introduced here thus maps the informational character of the system at hand onto a weighted complex network; the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits. This decomposition is then extended to detect connectivity across different time scales. I will describe the theoretical framework and the application of the proposed approach to neuroimaging data (EEG and fMRI).
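Why the unnormalized definition matters can be seen in a small simulation (a toy sketch under assumed dynamics, not the paper's partition-optimization algorithm): when Granger causality is measured as a reduction in residual variance rather than a log-ratio, two independent additive drivers contribute exactly additively, so the joint causality minus the sum of the individual ones is near zero.

```python
import numpy as np

def resid_var(y, preds):
    """Residual variance of an OLS regression of y on the given predictors."""
    X = np.column_stack(preds + [np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta).var()

def unnorm_gc(y, own, drivers):
    """Unnormalized Granger causality: reduction in residual variance
    when the drivers' pasts are added to the target's own past."""
    return resid_var(y, [own]) - resid_var(y, [own] + drivers)

rng = np.random.default_rng(3)
n = 20000
x1, x2, e = rng.normal(size=(3, n))
y = np.zeros(n)
y[1:] = 0.5 * x1[:-1] + 0.5 * x2[:-1] + e[1:]   # two independent additive drivers

yt, yl = y[1:], y[:-1]
g1 = unnorm_gc(yt, yl, [x1[:-1]])
g2 = unnorm_gc(yt, yl, [x2[:-1]])
gj = unnorm_gc(yt, yl, [x1[:-1], x2[:-1]])
synergy = gj - g1 - g2   # ~0 for independent additive drivers
```

With the usual log-ratio definition the same quantity would not vanish, which is the motivation for the unnormalized index.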
Pedro Martinez Mediano - "Integrated Information Theory Without the Hot Air"
Integrated Information Theory (IIT) is a branch of Information Theory that seeks to quantify the extent to which a system of interacting components acts as a single unified entity, processing information in a way that none of its subsystems could alone. It was originally proposed by Balduzzi and Tononi in 2008, and it has seen a strong increase in momentum in recent years. The main drive behind the development of IIT is also its greatest criticism: it was conceived as a fundamental theory of consciousness, and its strong claims have sharply separated the research community into "pro" and "against" camps. In this talk I will describe the basics of IIT and dissociate its purely information-theoretical aspects from its contentious claims about the nature of consciousness. Then, devoid of this heavy philosophical burden, IIT seems more appealing as an information-theoretic framework to study the complexity of biological or artificial systems. Finally, I will review recent research trends that share this view and work towards a practical measure of information integration in large nonlinear systems, regardless of others' metaphysical interpretations of it.
Jil Meier - "The epidemic spreading model and the direction of information flow in brain networks"
The interplay between structural connections and emerging information flow in the human brain remains an open research problem. A recent study observed global patterns of directional information flow in empirical data using the measure of transfer entropy. For higher frequency bands, the overall direction of information flow was from posterior to anterior regions, whereas an anterior-to-posterior pattern was observed in lower frequency bands. In this study, we applied a simple Susceptible-Infected-Susceptible (SIS) epidemic spreading model on the human connectome with the aim of revealing the topological properties of the structural network that give rise to these global patterns. We found that direct structural connections induced higher transfer entropy between two brain regions and that transfer entropy decreased with increasing distance between nodes (in terms of hops in the structural network). Applying the SIS model, we were able to confirm the empirically observed opposite information flow patterns; posterior hubs in the structural network seem to play a dominant role in the network dynamics. For small time scales, when these hubs acted as strong receivers of information, the global pattern of information flow was in the posterior-to-anterior direction, and in the opposite direction when they were strong senders. Our analysis suggests that these global patterns of directional information flow are the result of an unequal spatial distribution of the structural degree between posterior and anterior regions, and their directions seem to be linked to different time scales of the spreading process.
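A single discrete-time SIS update on an adjacency matrix, of the generic kind used in such studies, fits in a few lines (an illustrative sketch with assumed parameter names; the study's actual connectome and parameters are not reproduced here):

```python
import numpy as np

def sis_step(state, A, beta, mu, rng):
    """One discrete-time SIS step on adjacency matrix A.
    Infected nodes (state 1) recover with probability mu; susceptible
    nodes are infected with probability 1 - (1-beta)^(# infected neighbours)."""
    n_inf_neigh = A @ state
    p_inf = 1.0 - (1.0 - beta) ** n_inf_neigh
    new_inf = (state == 0) & (rng.random(len(state)) < p_inf)
    recover = (state == 1) & (rng.random(len(state)) < mu)
    out = state.copy()
    out[new_inf] = 1
    out[recover] = 0
    return out
```

In the deterministic limits (beta = 1, mu = 0) infection spreads exactly to the neighbours of infected nodes, and (beta = 0, mu = 1) clears all infections, which makes the update easy to sanity-check on a small ring graph.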
Viola Priesemann - "Quantifying Information Storage and Modification in Neural Recordings"
Information theory offers a versatile approach to study computational properties in neural networks. It is particularly useful when characterizing processing in higher brain areas or in vitro, where the semantic information content is not known. Moreover, it allows one to quantify the core components of information processing, i.e. storage, transfer and modification, and to compare them across different systems. Neural data are inherently high-dimensional, which makes state space reconstruction a challenge. To approach this challenge, we propose a novel embedding scheme and apply it to quantify active storage in vitro and in vivo. For both systems, we disentangle for the first time linear and nonlinear contributions, and find 1/3 to be due to nonlinear mechanisms. This stored information is highly redundant among neurons in vitro, but not in vivo. In a second study, we investigate the developmental trajectory of information modification in vitro, using partial information decomposition. Modification rises with maturation, but ultimately collapses when redundant information among neurons takes over. This indicates that in vitro systems initially develop processing capabilities but, being systems isolated from the outside world, their increasing recurrency may lead to redundant processing.
Mehrdad Salmasi - "Information rate of a synapse during short-term depression"
Short-term synaptic depression is a ubiquitous feature of neuronal activity. A central functional role of depression is hypothesized to be the modulation of the synaptic information rate. To study synaptic information efficacy, we model a release site by a binary asymmetric channel (BAC) and distinguish between the two release mechanisms of the synapse, i.e., spike-evoked release and spontaneous release. Short-term depression is incorporated into the model by assuming that the state of the BAC channel is determined by the release history profile of the release site. We derive the mutual information rate of the synapse analytically. In addition, we take into account the energy cost of synaptic release and calculate the energy-normalized information rate of the synapse during short-term depression. We prove that short-term depression can enhance both the mutual information rate and the energy-normalized information rate of the synapse, provided that spontaneous release is depressed more than spike-evoked release. We then consider synapses with multiple release sites and calculate the number of release sites that a synapse needs to optimally transfer information. We show how the input spike rate, spontaneous release, and reliability of the synapse affect the optimal number of release sites.
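The mutual information of a memoryless binary asymmetric channel is closed-form, which makes the depression effect easy to explore numerically (a static sketch with parameter names assumed by this page; the talk's model additionally makes the channel state history-dependent):

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bac_mutual_info(p_x, p_spont, p_evoked):
    """I(X; Y) of a binary asymmetric channel: X = presynaptic spike,
    Y = vesicle release, with P(Y=1|X=0) = p_spont, P(Y=1|X=1) = p_evoked."""
    p_y = (1 - p_x) * p_spont + p_x * p_evoked
    return h2(p_y) - (1 - p_x) * h2(p_spont) - p_x * h2(p_evoked)
```

Consistent with the abstract's intuition, suppressing spontaneous release while keeping evoked release fixed raises the information the release site conveys.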
Adrià Tauste - "Directed information flow within the thalamocortical network"
Information flow within the thalamocortical network is known to be central for sensory information processing. In particular, the sensory thalamus is connected to the cortex by neuronal connections that allow for feedforward (thalamus to cortex) and feedback (cortex to thalamus) functional interactions. The common understanding is that during perception the sensory thalamus relays sensory information from the peripheral nervous system in a feedforward manner. However, this perspective has been challenged by several studies showing the importance of feedback interactions in perceptual tasks involving attentional and sensory coding processes. Therefore, to assess the proper balance of information flow during sensory processing, we analyzed the directed information between simultaneously recorded neurons in the Ventral Posterior Lateral Nucleus in the somatosensory thalamus (VPL) and the Somatosensory area 1 (S1) while a trained monkey judged the presence or absence of a vibrotactile stimulus of variable amplitude. Specifically, we applied a nonparametric method to test directed information estimates over 83 thalamocortical neuron pairs in more than 6000 correct trials. Our results were able to disentangle feedforward from feedback thalamocortical interactions and characterize the different modulatory effect of the stimulus amplitude on each information flow.
Tatjana Tchumatchenko - "Information coding of mean and variance modulating signals in cortical neurons"
Arriving sensory stimuli can modulate the average somatic current as well as its variability in neurons. However, it has so far been difficult to quantify how efficient mean and variance coding are with respect to the information bandwidth. Here, we calculate the mutual information for both modulation schemes and quantify the difference in information transmission for mean modulating and variance modulating signals. We show that, for mean modulating signals, information content can grow as temporal noise correlations increase, whereas increased correlations have the opposite effect for variance modulating signals. The finite spike initiation time reported in cortical neurons results in a cutoff for the information-carrying frequencies in the presence of a colored noise background.
Taro Toyoizumi - "A Local Learning Rule for Independent Component Analysis"
Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering.
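The talk's local rule itself is not reproduced here; as a baseline for what ICA is asked to do, the sketch below runs a compact (and deliberately non-local) symmetric FastICA with the cube nonlinearity on a two-source mixture (a generic textbook algorithm, assumed for illustration only).

```python
import numpy as np

def whiten(X):
    """Center and whiten a (variables x samples) data matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    return (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with g(u) = u^3 (kurtosis-based contrast)."""
    Z = whiten(X)
    d, n = Z.shape
    W = np.random.default_rng(seed).normal(size=(d, d))
    for _ in range(n_iter):
        WZ = W @ Z
        # fixed-point update: E[z g(w.z)] - E[g'(w.z)] w, with E[g'] = 3
        W_new = (WZ ** 3) @ Z.T / n - 3 * W
        # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z

rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(2, 5000))            # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # mixing matrix
Y = fastica(A @ S)                                # recovered components
```

Up to permutation, sign and scale, each recovered row correlates almost perfectly with one of the true sources.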
Raul Vicente - "Estimating and applying partial information decomposition to complex systems"
Partial information decompositions have been proposed to resolve the distribution of information in computing systems into unique, shared and synergistic contributions. In this talk we will address two topics in this framework: 1) some theoretical and practical aspects of a numerical estimator of the partial information decomposition (as defined by Bertschinger et al. in 2014), and 2) the application of this estimator to characterize complex systems, including the Ising model and elementary cellular automata.
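What such a decomposition computes can be illustrated with the original Williams-Beer redundancy I_min on tiny binary examples (a different, simpler PID definition than Bertschinger et al.'s, used here only because it admits a short closed-form computation):

```python
import numpy as np

def _mi(p2):
    """Mutual information (bits) of a 2-D joint distribution table."""
    pa, pb = p2.sum(1), p2.sum(0)
    return sum(p2[a, b] * np.log2(p2[a, b] / (pa[a] * pb[b]))
               for a in range(p2.shape[0]) for b in range(p2.shape[1])
               if p2[a, b] > 0)

def _i_spec(p_sx):
    """Specific information I(S=s; X) for each s, from joint p_sx[s, x]."""
    p_s, p_x = p_sx.sum(1), p_sx.sum(0)
    out = np.zeros(p_sx.shape[0])
    for s in range(p_sx.shape[0]):
        for x in range(p_sx.shape[1]):
            if p_sx[s, x] > 0:
                out[s] += (p_sx[s, x] / p_s[s]) * np.log2(p_sx[s, x] / (p_x[x] * p_s[s]))
    return out

def pid_imin(p):
    """Williams-Beer PID of joint p[x1, x2, s] -> (redundancy, synergy)."""
    p_s = p.sum(axis=(0, 1))
    spec1 = _i_spec(p.sum(axis=1).T)   # joint of (s, x1)
    spec2 = _i_spec(p.sum(axis=0).T)   # joint of (s, x2)
    red = float(np.sum(p_s * np.minimum(spec1, spec2)))
    mi_joint = _mi(p.reshape(-1, p.shape[2]))   # I(S; (X1, X2))
    syn = mi_joint - _mi(p.sum(axis=1)) - _mi(p.sum(axis=0)) + red
    return red, syn
```

The classic sanity checks come out as expected: XOR is purely synergistic (1 bit of synergy, no redundancy), while AND carries about 0.311 bit of redundancy and 0.5 bit of synergy.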
Fleur Zeldenrust - "Estimating the information extracted by a single spiking neuron from a continuous input time series"
Understanding the relation between (sensory) stimuli and the activity of neurons (i.e. 'the neural code') lies at the heart of understanding the computational properties of the brain. However, quantifying the information between a stimulus and a spike train has proven to be challenging. We propose a new (in vitro) method to measure how much information a single neuron transfers from the input it receives to its output spike train. The input is generated by an artificial neural network that responds to a randomly appearing and disappearing 'sensory stimulus': the hidden state. The sum of this network activity is injected as current input into the neuron under investigation. The mutual information between the hidden state on the one hand and spike trains of the artificial network or the recorded spike train on the other hand can easily be estimated due to the binary shape of the hidden state. The characteristics of the input current, such as the time constant resulting from the (dis)appearance rate of the hidden state or the amplitude of the input current (the firing frequency of the neurons in the artificial network), can be varied independently. As an example, we apply this method to pyramidal neurons in the CA1 of mouse hippocampi and compare the recorded spike trains to the optimal response of the 'Bayesian neuron' (BN). We conclude that, as in the BN, information transfer in hippocampal pyramidal cells is nonlinear and amplifying: the information loss between the artificial input and the output spike train is high if the input to the neuron (the firing of the artificial network) is not very informative about the hidden state. If the input to the neuron does contain a lot of information about the hidden state, the information loss is low. Moreover, neurons increase their firing rates when the (dis)appearance rate is high, so that the (relative) amount of transferred information stays constant.
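Because the hidden state is binary, the plug-in mutual information estimate mentioned above is indeed straightforward; a sketch of that calculation (variable names are assumptions of this page, not the study's code):

```python
import numpy as np
from collections import Counter

def mutual_info_discrete(h, s):
    """Plug-in MI (bits) between two discrete sequences, e.g. a binary
    hidden state h and binned spike observations s."""
    n = len(h)
    pj, ph, ps = Counter(zip(h, s)), Counter(h), Counter(s)
    # p(a,b) / (p(a) p(b)) reduces to the count ratio c * n / (n_a * n_b)
    return sum((c / n) * np.log2(c * n / (ph[a] * ps[b]))
               for (a, b), c in pj.items())
```

For a hidden state read out through 10% symmetric flip noise, the estimate lands near the true value of 1 - H2(0.1), about 0.53 bit.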
This workshop has been run at CNS for over a decade now; links to the websites for the previous workshops in this series are below:
Image modified from an original credited to dow_at_uoregon.edu (distributed without restrictions); modified image available under CC-BY 3.0.