Home > Conferences > CNS*2022-ITW

CNS*2022 Workshop on Methods of Information Theory in Computational Neuroscience

Information in the brain. Modified from an original credited to dow_at_uoregon.edu (distributed without restrictions)

19-20 July, 2022

Melbourne, Australia


Aims and topics

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.

A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.

The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

For the program of the past IT workshops see the Previous workshops section.

Location and Registration

The workshop will be held as part of the wider CNS*2022 meeting, held in-person in Melbourne, Australia. Please see the CNS*2022 website for registration to the workshops (registration for at least the workshops component of CNS*2022 is required to attend).

Best presentation award sponsor: Entropy


We would like to thank the Entropy journal for sponsoring our Best Presentation Award for ECRs, which we have awarded to:

Demi Gao (The University of Melbourne)

with Highly Commended citations to:

Jason Pina (York University)
Leonardo Novelli (Monash University, Melbourne)

Congratulations to our winners! Please see photos of the presentation below.

Organising committee

The workshop is organised by Joseph Lizier (workshop chair) and Abdullah Makkeh.

The confirmed speakers are listed in the program below.

Call for contributed talks

Now closed!


Our program is as follows. All times are in AEST (UTC+10).

Tuesday, July 19

09:00-12:15  Main conference sessions
12:15-13:40  Lunch break
13:40-13:50  Welcome to the workshop
13:50-14:35  Demian Battaglia (Aix-Marseille University), "Decomposing neural circuit function into information processing primitives"
14:35-15:20  Giovanni Rabuffo (Aix-Marseille Université), "Nonlocal 'edge' interactions reconfigure the gradient of cortical timescales"
15:20-16:00  Afternoon tea break
16:00-16:30  Tatiana Kameneva (Swinburne University), "Neuroprostheses: method to evaluate the information content of stimulation strategies"
16:30-17:00  Naotsugu Tsuchiya (Monash University, Melbourne), "Are we experiencing colours in the world in the same way? An optimal transport of qualia structures between people"
17:00-17:30  Samy Castro Novoa (University of Strasbourg), "The canonic cortical circuit may be tailored to maximise high-order inter-population synergies"

Wednesday, July 20

09:00-09:45  Demi Gao (The University of Melbourne), "Towards personalised cochlear implants: quantifying hearing performance using information theory"
09:45-10:30  Tomáš Bárta (Academy of Sciences of the Czech Republic), "Maximally informative coupling in a balanced excitatory-inhibitory neuronal network"
10:30-11:00  Morning tea break
11:00-11:45  Joseph Lizier (The University of Sydney), "Analytic relationship of information processing and synchronizability to network structure and motifs"
11:45-12:15  Aria Nguyen (The University of Sydney), "A feature-based information theoretic approach to detect large-scale interactions in neural systems"
12:15-13:50  Lunch break
13:50-14:35  Arata Shirakami / Masanori Shimono (Kyoto University), "Whole brain comparison of E/I categorized informatic microconnectome and the application"
14:35-15:20  Leonardo Novelli (Monash University, Melbourne), "Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approaches"
15:20-16:00  Afternoon tea break
16:00-16:45  Jason Pina (York University), "Cutting through the noise: A method for improving regression and correlation coefficient estimates in the presence of measurement error"
16:45-17:15  Panel discussion (topic TBA)
17:15-17:30  Wrap-up and ECR Best Presentation award


Tomáš Bárta - "Maximally informative coupling in a balanced excitatory-inhibitory neuronal network"
The balance of excitation and inhibition (E-I balance) greatly affects neural input integration. For example, the balanced state leads to highly irregular spike trains of individual neurons, as observed in vivo. From the rate-coding perspective, representing information in irregular spike trains is inefficient, so it remains unclear why the brain represents information in this manner. We seek to elucidate this paradox by looking at the regularity of the population activity instead of the single-neuron activity.
We studied a randomly connected network of integrate-and-fire neurons with excitatory and inhibitory subpopulations. We varied the coupling strength between the two subpopulations and found that while increasing the coupling strength has little effect on the Fano factor of individual neurons, it considerably decreases the Fano factor of the pooled response of many neurons, likely due to desynchronizing effects of the inhibitory population on the network. Therefore, from the rate-coding perspective, the inhibitory population increases the signal-to-noise ratio of the neural response.
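The pooled-Fano-factor effect can be illustrated with a toy simulation (a sketch, not the authors' spiking network): independent Poisson spike counts keep both individual and pooled Fano factors near 1, while a shared trial-to-trial gain fluctuation, standing in for the correlated variability that stronger E-I coupling suppresses, inflates the pooled Fano factor far more than the individual ones.

```python
import numpy as np

def fano(counts):
    """Fano factor: variance over mean of spike counts across trials."""
    return np.var(counts) / np.mean(counts)

rng = np.random.default_rng(0)
n_trials, n_neurons, rate = 1000, 100, 5.0

# Independent Poisson counts: every neuron, and their sum, is Poisson-like
counts = rng.poisson(rate, size=(n_trials, n_neurons))
individual = np.mean([fano(counts[:, i]) for i in range(n_neurons)])
pooled = fano(counts.sum(axis=1))          # still close to 1

# Shared trial-to-trial gain (mean 1) mimics correlated network variability:
# individual Fano factors rise only modestly, but the pooled one explodes
gain = rng.gamma(shape=20.0, scale=1 / 20.0, size=(n_trials, 1))
corr_counts = rng.poisson(rate * gain, size=(n_trials, n_neurons))
pooled_corr = fano(corr_counts.sum(axis=1))
```

Decorrelating the population (as the inhibitory coupling does in the model) is, in this picture, what keeps the pooled Fano factor, and hence the population-rate noise, low.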
To quantify the effect of the increased signal-to-noise ratio on information transmission, we used the neuronal firing rates and synaptic currents to calculate the metabolic cost of the activity and the information-metabolic efficiency of the network as an information channel. This procedure can then be used to estimate the optimal coupling strength between the excitatory and inhibitory subpopulations based on efficient coding principles.

Demian Battaglia - "Decomposing neural circuit function into information processing primitives"
Cognitive functions must arise from the coordinated activity of neural populations distributed over large-scale brain networks. However, it is challenging to understand (and measure) how specific aspects of neural dynamics translate into operations of information processing, and, ultimately, cognitive function. An obstacle is that simple circuit mechanisms -- such as self-sustained or propagating activity and nonlinear summation of inputs -- do not directly give rise to high-level functions, even if they do, nevertheless, already implement simple transformations of the information carried by neural activity.
Here we show that distinct neural circuit functions, such as stimulus representation, working memory or selective attention stem from different combinations and types of low-level manipulations of information, or information processing primitives. To prove this hypothesis, we combine approaches from information theory with computational simulations of canonical neural circuits involving one or more interacting brain regions and emulating well-defined cognitive functions. More specifically we track the dynamics of information emergent from dynamic patterns of neural activity, using suitable quantitative metrics to detect where and when information is actively buffered ("active information storage"), transferred ("information transfer") or non-linearly merged ("information modification"), as different possible modes of low-level processing. We thus find that neuronal subsets maintaining representations in working memory or performing attention-related gain modulation are signaled by their boosted involvement in operations of, respectively, active information storage or information modification.
Thus, information dynamics metrics, beyond detecting which network units participate in cognitive processing, also promise to specify how they do it, i.e. through which type of primitive computation, a capability that could be exploited for the parsing of actual experimental recordings.

Samy Castro Novoa - "The canonic cortical circuit may be tailored to maximise high-order inter-population synergies"
Inter-regional oscillatory coherence mediates flexible cortico-cortical interactions, and bottom-up and top-down influences along the cortical hierarchy rely, respectively, on faster and slower frequency bands. However, besides the experimental observation that directed inter-regional functional connectivity (FC) does eventually exploit multiple frequencies, it is not completely clear why this should be the case. Simple explanations for the frequency-specificity of directed FC are expressed in terms of the layered organisation of the cortex. Indeed, anatomical connections ascending or descending the cortical hierarchy have different source and target cortical layers, and different layers have heterogeneous fractions of interneurons with faster or slower resonance frequencies. Our computational modelling furthermore suggests that interneuronal diversity is not necessary for frequency-specific FC, as, in certain dynamical regimes, inter-layer interactions are sufficient to cause deeper layers to oscillate at a slower frequency even when all included interneurons resonate at a fast frequency.
Remarkably, frequency-specific FC as found in experiments emerges as a "free lunch" when cortical layers are precisely wired according to the empirically observed cortical canonical circuit, but not for arbitrary connectomes. The phenomenon of frequency-specific FC is thus exceptional rather than ordinary, unlikely to arise by chance and possibly emerging from the evolutionary and developmental selection of specific, non-random circuit wirings. But what could be the cost function whose optimisation drives this selection process and thus shapes the canonic circuit? The existence of frequency-specific directed FC could be a desirable goal by itself (e.g., for the advantages it confers in terms of predictive coding). However, it could also be a "spandrel", i.e. an unlooked-for and incidental but unavoidable condition for the maximisation of some other goal. Here we hypothesise that the canonic circuit wiring is in reality optimised to achieve strong informational complexity and higher-order synergies between neuronal populations in different cortical layers and regions. To support our hypothesis, we construct and explore the dynamical regimes of hundreds of thousands of semi-randomized canonical cortical circuits. We first find that fewer than 0.2% of the tested connectomes are "good", i.e. include a phase with empiric-like frequency-specific FC. Computing O-entropy and S-entropy (Rosas et al., PRE 2019) for different dynamical working points and connectomes, we reveal that such "good" connectomes are also associated with maximal S-entropy, proportional to "complexity" in the sense of Tononi, Edelman & Sporns (TICS 1998), and more negative O-entropy, denoting a dominance of synergistic over redundant higher-order interactions and a coexistence of functional segregation and integration. Dynamical regimes with extrema of S- and O-entropy furthermore co-localize with regimes also giving rise to frequency-specific directed FC.
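The O-entropy referenced here (introduced by Rosas et al. as the O-information) has a closed form for Gaussian systems. A minimal sketch under a Gaussian assumption, using the standard form Ω(X) = (n−2)H(X) + Σ_i [H(X_i) − H(X_{−i})], where negative values indicate synergy-dominance and positive values redundancy-dominance; the two example systems are illustrative, not from the talk:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def o_information(cov):
    """O-information: (n-2)*H(X) + sum_i [H(X_i) - H(X without i)].
    Negative => synergy-dominated; positive => redundancy-dominated."""
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = np.delete(np.delete(cov, i, axis=0), i, axis=1)
        omega += gaussian_entropy(cov[i, i]) - gaussian_entropy(rest)
    return omega

rng = np.random.default_rng(1)
z = rng.normal(size=10000)

# Redundancy: three noisy copies of one shared source
redundant = np.stack([z + 0.3 * rng.normal(size=10000) for _ in range(3)])
o_red = o_information(np.cov(redundant))      # positive

# Synergy: the third variable is (nearly) the sum of two independent ones
a, b = rng.normal(size=10000), rng.normal(size=10000)
synergistic = np.stack([a, b, a + b + 0.1 * rng.normal(size=10000)])
o_syn = o_information(np.cov(synergistic))    # negative
```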
The fact that frequency-specificity of FC emerges could thus be not an aim per se but a trait of non-trivial dynamical regimes occurring only in canonic connectomes selected because they boost high-order and multi-scale complex interactions.

Demi Gao - "Towards personalised cochlear implants: quantifying hearing performance using information theory"
Despite the development and success of cochlear implants over several decades, wide inter-subject variability in speech perception is reported. This suggests that user-dependent factors limit speech perception at the individual level. Clinical studies have demonstrated the importance of the number, placement, and insertion depths of electrodes for speech recognition abilities. However, these do not account for all inter-subject variability, and the extent to which these factors affect speech recognition abilities has not been studied. We unified information-theoretic methods and machine learning techniques to quantitatively study the extent to which key factors limit hearing performance with cochlear implants. The approach provides insights into personalised strategies for improving speech recognition outcomes.

Tatiana Kameneva - "Neuroprostheses: method to evaluate the information content of stimulation strategies"
We propose a framework to evaluate the information content of different stimulation strategies used in neuroprosthetic implants. We analyze the responses of retinal ganglion cells to electrical stimulation using an information theory framework. This methodology allows us to calculate the information content by looking at the consistency of neural responses generated across multiple repetitions of the same stimulation protocol.
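The consistency-across-repetitions idea can be illustrated with a plug-in mutual-information estimate between stimulus condition and a discretised response (an illustrative sketch, not the implementation from [1]): a stimulation strategy that evokes reproducible responses carries close to the full stimulus entropy, while an unreliable one carries almost none.

```python
import numpy as np
from collections import Counter

def mutual_information_bits(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete observations."""
    n = len(stimuli)
    p_s, p_r = Counter(stimuli), Counter(responses)
    p_sr = Counter(zip(stimuli, responses))
    return sum((c / n) * np.log2(c * n / (p_s[s] * p_r[r]))
               for (s, r), c in p_sr.items())

rng = np.random.default_rng(2)
stims = rng.integers(0, 4, size=5000)   # 4 stimulus conditions, many repetitions

# Reliable strategy: responses track the stimulus on ~95% of trials
reliable = (stims + (rng.random(5000) < 0.05)) % 4
# Unreliable strategy: responses are unrelated to the stimulus
unreliable = rng.integers(0, 4, size=5000)

mi_rel = mutual_information_bits(stims, reliable)      # near log2(4) = 2 bits
mi_unrel = mutual_information_bits(stims, unreliable)  # near 0 bits
```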
[1] K. Mengl, H. Meffin, R.M. Ibbotson, T. Kameneva, "Neuroprostheses: method to evaluate the information content of stimulation strategies", 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, United States, 18-21 July 2018, paper no. 8513122; doi:10.1109/EMBC.2018.8513122

Joseph Lizier - "Analytic relationship of information processing and synchronizability to network structure and motifs"
The relation between network structure and function is of central importance to network neuroscience, and to network science more generally. In this talk, we explore the use of one particular mathematical framework to relate three measures of dynamics on networks -- information storage, information transfer, and synchronisation -- to the underlying structure of that network. In particular, we focus on quantifying how the process motif structures that a node participates in dictate these dynamics on it. When considering information transfer, for example, we reveal mathematically how in-degrees and clustered structure can influence the measured transfer entropy between source and target nodes, which has impacts on inferred network structure.
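As a concrete instance of the transfer entropy measure discussed here, a minimal linear-Gaussian estimator with history length 1 (a sketch; the toy unidirectionally coupled process and all names are illustrative; under the linear-Gaussian model, TE is half the Granger causality):

```python
import numpy as np

def gaussian_te(source, target, lag=1):
    """Linear-Gaussian transfer entropy (nats), history length 1: how much
    source[t-lag] reduces uncertainty about target[t] beyond target's own past."""
    y, y_past, x_past = target[lag:], target[:-lag], source[:-lag]
    # Residuals predicting y from its own past only
    res_self = y - np.polyval(np.polyfit(y_past, y, 1), y_past)
    # Residuals predicting y from its own past plus the source's past
    A = np.column_stack([y_past, x_past, np.ones_like(y)])
    res_full = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 0.5 * np.log(np.var(res_self) / np.var(res_full))

rng = np.random.default_rng(3)
n = 20000
x = rng.normal(size=n)                    # source: white noise
y = np.zeros(n)
for t in range(1, n):                     # target driven by source, delay 1
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

te_xy = gaussian_te(x, y)   # substantial: x drives y (theory: 0.5*ln(1.25))
te_yx = gaussian_te(y, x)   # near zero: no feedback
```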

Aria Nguyen - "A feature-based information theoretic approach to detect large-scale interactions in neural systems"
Quantifying relationships between elements of complex systems is critical to understanding their distributed dynamics. Many methods to infer dependencies between pairs of time series exist, such as Pearson correlation and transfer entropy where the measure of dependency is calculated directly from time-series data. But in many systems the elements interact in complex ways on different timescales, making it challenging to learn and interpret statistical relationships directly from the time-series. A promising alternative involves transforming local segments of a time series into interpretable dynamical summary statistics, or 'features'. In this work, we introduce a feature-based adaptation of conventional pairwise dependency methods, which allows us to efficiently detect and interpret dependencies in a complex dynamical system when the interactions are mediated by properties of the dynamics.
In our simulation studies, we generated interactions between processes driven by stochastic, autoregressive, and nonstationary oscillations, with time-series features of the 'source' dynamics influencing the 'target' dynamics. We use mutual information and transfer entropy to measure dependencies between raw source and target data, and between features of source and target data, while applying a statistical testing framework to control for false positives across the multiple feature comparisons. We find that the feature-based measurements can detect the interactions between source and target at much shorter time-series lengths and over longer interaction timescales, where measurements in the raw space struggle to do so. We anticipate this method being useful for many applications involving the characterization of dynamic interactions underlying neural systems.
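A toy illustration of the feature-based approach (a sketch, with window standard deviation as the hypothetical feature): the source's within-window variance drives the target's amplitude, so a raw pairwise correlation sees nothing while the window-feature correlation is strong.

```python
import numpy as np

rng = np.random.default_rng(4)
n_windows, win = 400, 50

# Source: noise whose variance switches randomly from window to window
scales = rng.choice([0.5, 2.0], size=n_windows)
source = np.concatenate([s * rng.normal(size=win) for s in scales])

# Target: independent noise whose amplitude is set by the SOURCE's variance,
# i.e. an interaction mediated by a property of the dynamics, not raw values
target = np.concatenate([s * rng.normal(size=win) for s in scales])

# Raw pairwise dependency: near zero, despite a genuine interaction
raw_corr = np.corrcoef(source, target)[0, 1]

# Feature-based: summarise each window by a feature (here, its std)
src_feat = source.reshape(n_windows, win).std(axis=1)
tgt_feat = target.reshape(n_windows, win).std(axis=1)
feat_corr = np.corrcoef(src_feat, tgt_feat)[0, 1]
```

Mutual information or transfer entropy can replace correlation here; the point is only that moving to the feature space exposes the dependency.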

Leonardo Novelli - "Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approaches"
Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
[1] L. Novelli, J.T. Lizier, "Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approaches", Network Neuroscience, 5(2):373-404, 2021; doi:10.1162/netn_a_00178

Jason Pina - "Cutting through the noise: A method for improving regression and correlation coefficient estimates in the presence of measurement error"
A key challenge in neuroscience is estimating relationships between biological, behavioral, or cognitive variables in the presence of noise. Such noise, or measurement error, arises from uncertainties due to either recording device limitations or intrinsic biological variability, both of which are ever-present in neuroscience experiments. This noise can greatly reduce estimated linear regression and correlation coefficients, as well as the fraction of explained variance (or $R^2$ value), compared to their true values. In many neuroscience experiments, data that can be leveraged to eliminate this bias is already collected, as the relevant variables are often averages of multiple observations. We present a simple, easy-to-implement method that utilizes these multiple measurements to estimate the noise variance and allow for the regression dilution effect to be removed. Using simulated data, we show that the confidence intervals from our unbiased estimator indeed consistently capture the underlying regression and correlation coefficients, in sharp contrast with those from the uncorrected estimates. We then apply our method to neuronal responses in 2-photon calcium imaging data from recent experimental work. Our estimator leads to appreciably larger, statistically significant regression estimates, providing additional perspective on how brains respond to and learn from novel, unexpected stimuli.
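The repeated-measurements idea can be sketched with the classical attenuation correction (a simplified textbook-style estimator, not necessarily the authors' exact method; all quantities below are simulated): the within-unit scatter of the repeats estimates the error variance of the averaged predictor, which is then removed from the regression denominator.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 500, 4                         # n units, m repeated measurements each
true_x = rng.normal(size=n)
beta = 1.0                            # true regression slope
y = beta * true_x + 0.3 * rng.normal(size=n)

# Each usable observation of x is the average of m noisy repeats
repeats = true_x[:, None] + 1.5 * rng.normal(size=(n, m))
x_obs = repeats.mean(axis=1)

# Naive slope is attenuated toward zero by the measurement error in x_obs
naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# The repeats give a direct estimate of the error variance of the average,
# which is subtracted from the denominator (the attenuation correction)
err_var = repeats.var(axis=1, ddof=1).mean() / m
corrected = np.cov(x_obs, y)[0, 1] / (np.var(x_obs, ddof=1) - err_var)
```

The same reliability factor corrects correlation coefficients and $R^2$ analogously.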

Giovanni Rabuffo - "Nonlocal 'edge' interactions reconfigure the gradient of cortical timescales"
A hierarchy of timescales with a back-to-front cortical gradient has been proposed to underlie information processing in the brain, with posterior areas retaining information for short durations and associative areas displaying slower information decay. Such a back-to-front gradient of timescales considers local (nodal) information processing only. However, acquisition, integration and interpretation of inputs are distributed and dynamical processes, relying on the interactions occurring between regions (functional edges). Hence, we hypothesize that the corresponding timescales are inherent to the coordinated activity between regions, and not to local processing alone. Using edgewise connectivity on MEG signals, we demonstrate a reversed front-to-back gradient when non-local interactions are prominent.

Arata Shirakami / Masanori Shimono - "Whole brain comparison of E/I categorized informatic microconnectome and the application"
Measures of information content and flow can be seen as a rewriting of thermodynamic entropy, begun by Claude E. Shannon. Entropy is the measure that describes the "arrow of time" in the physical world, and its representation as a flow of information is key not only to understanding the function of our brains in an integrated manner, but also to how we, as living organisms, live against the laws of the physical world. We have previously reported at this workshop the interaction of neurons in the somatomotor cortex as an information flow that is reasonably consistent with physiological synaptic connections, and evaluated the topology of the interaction network among these neurons. In this presentation, we extend that work to (1) classify both excitatory and inhibitory connections and evaluate the topology of their interaction networks, (2) apply that analysis to data from across cortical regions to reveal differences in the topology of local circuits among cortical regions, (3) apply a technique that automatically compresses the information contained in that topology, and (4) report the results of applying it to the analysis of diseased animals that have experienced social stress. In general, the uniqueness of the frontal cortex was revealed in a way that was naturally extracted from the data.

Naotsugu Tsuchiya - "Are we experiencing colours in the world in the same way? An optimal transport of qualia structures between people"
Conscious experience has been suggested to be linked with some types of information. In particular, qualia or quality of conscious contents have been speculated to be inextricably related to some types of information structure (e.g., integrated information theory of consciousness). In this context, "is my 'red' your 'green'?" is an example of the philosophical problem of inverted qualia. If a given quality of an experience can be completely characterised through its potential relationships with other qualities, this may provide a path to an answer. This relational idea of qualia is inspired by a mathematical formulation: the Yoneda lemma in category theory. This relational scheme implies a way in which qualia inversion could be ruled out: if two individuals possess the same similarity relationships between their colour experiences, then those individuals experience the same colours. This is especially clear if the underlying structures are inhomogeneous. To test whether this constraint exists empirically, we collected similarity ratings for a sample of 93 colours across 487 online participants. Instead of providing judgments for all possible pairs, individual participants reported on a subset of the combinations, which we randomly aggregated to generate two independent similarity matrices. As speculated, when sufficient colours are examined to reveal complexity in the similarity matrices, we were able to 'align' the two using an unsupervised optimal transport algorithm with near-perfect performance. Our results imply that inverted qualia could only hold for simplistic, low-dimensional qualia structures, which may not find any real-world correspondence. We discuss potential implications of our results for the structure of information and the possibility for information theory to contribute to the science of consciousness.
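A toy version of the alignment logic (illustrative only; the study itself used an unsupervised optimal-transport algorithm, for which this simple profile matching is merely a stand-in): when the similarity structure is sufficiently inhomogeneous, each item's sorted similarity profile is invariant to relabelling, so matching profiles recovers the correspondence between two observers' matrices without supervision.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(6)
k = 30
points = rng.random((k, 3))            # hypothetical colour coordinates
S1 = -np.linalg.norm(points[:, None] - points[None, :], axis=2)

# A second observer reports the same structure under an unknown relabelling
perm = rng.permutation(k)
S2 = S1[np.ix_(perm, perm)] + 0.01 * rng.normal(size=(k, k))

# Sorted similarity profiles are invariant to the relabelling...
prof1, prof2 = np.sort(S1, axis=1), np.sort(S2, axis=1)
# ...so an optimal assignment on profile distances recovers the mapping
cost = np.linalg.norm(prof1[:, None, :] - prof2[None, :, :], axis=2)
_, col = linear_sum_assignment(cost)
accuracy = np.mean(perm[col] == np.arange(k))   # fraction correctly aligned
```

With a homogeneous (e.g. perfectly symmetric) structure all profiles coincide and no unsupervised alignment can succeed, which is exactly the low-dimensional case where inverted qualia would remain undetectable.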


Workshop chair Joe Lizier introduces the workshop
MDPI Entropy Best ECR Presentation winner Demi Gao (middle) with organisers Abdullah Makkeh (left) and Joseph Lizier (right)
MDPI Entropy Best ECR Presentation winner Demi Gao (2nd right) with Highly Commended speakers Jason Pina (left) and Leonardo Novelli (middle), and organisers Abdullah Makkeh (2nd left) and Joseph Lizier (right)

Previous workshops

This workshop has been run at CNS for over a decade now -- links to the websites for the previous workshops in this series are below:

  1. CNS*2021 Workshop, July 6-7, 2021, Online!
  2. CNS*2020 Workshop, July 21-22, 2020, Online!
  3. CNS*2019 Workshop, July 16-17, 2019, Barcelona, Spain.
  4. CNS*2018 Workshop, July 17-18, 2018, Seattle, USA.
  5. CNS*2017 Workshop, July 19-20, 2017, Antwerp, Belgium.
  6. CNS*2016 Workshop, July 6-7, 2016, Jeju, South Korea.
  7. CNS*2015 Workshop, July 22-23, 2015, Prague, Czech Republic.
  8. CNS*2014 Workshop, July 30-31, 2014, Québec City, Canada.
  9. CNS*2013 Workshop, July 17-18, 2013, Paris, France.
  10. CNS*2012 Workshop, July 25-26, 2012, Atlanta/Decatur, GA, USA.
  11. CNS*2011 Workshop, July 27-28, 2011, Stockholm, Sweden.
  12. CNS*2010 Workshop, July 29-30, 2010, San Antonio, TX, USA.
  13. CNS*2009 Workshop, July 22-23, 2009, Berlin, Germany.
  14. CNS*2008 Workshop, July 23-24, 2008, Portland, OR, USA.
  15. CNS*2007 Workshop, July 11-12, 2007, Toronto, Canada.
  16. CNS*2006 Workshop, June 19-20, 2006, Edinburgh, U.K.

Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC-BY-3.0