21-22 July, 2020
Watch recordings of the talks on YouTube!
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.
The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
For the program of the past IT workshops see the Previous workshops section.
The workshop will be held as part of the wider CNS*2020 meeting, online due to COVID-19. Please see the CNS*2020 website for FREE registration and online access to the workshops (registration is required to attend).
The workshop will be held in 3 sessions, 21 July 15:00-18:30 CEST (UTC+2, Berlin summer time), then 22 July 09:00-12:30 CEST and 15:00-18:30 CEST.
We would like to thank the Entropy journal for sponsoring our Best Presentation Award for ECRs, which we have awarded jointly to:
The confirmed speakers and our program are as follows.
Program (sessions on Tuesday, July 21 and Wednesday, July 22; times in CEST, UTC+2):
- Naotsugu Tsuchiya: "Structure of information to understand the physical basis of consciousness"
- Oliver Cliff (The University of Sydney): "Exact Inference of Linear Dependence Between Multiple Autocorrelated Time Series"
- Marco Celotto (Istituto Italiano di Tecnologia (IIT)): "Developing and testing the concept of intersection information using PID"
- Xenia Kobeleva (University Hospital Bonn): "Exploring relevant spatiotemporal scales for analyses of brain dynamics"
- Jesus Malo (Universitat de Valencia): "Information flow under visual cortical magnification: Gaussianization estimates and theoretical results"
- Thomas Parr (University College London): "Inferring what to do"
- Daniele Marinazzo (University of Ghent): "Synergistic information in a dynamical model implemented on the human structural connectome reveals spatially distinct associations with age"
- Abdullah Makkeh (University of Goettingen): "A differentiable measure of pointwise shared information"
- Pedro Mediano (University of Cambridge): "Multi-target information decomposition and applications to integrated information theory"
- Sarah Marzen: "Using lossy representations to find the neural code?"
- Daniel Polani (University of Hertfordshire): "Information Theory for Cognitive Modelling: Speculations and Directions"
- Alice Schwarze (University of Washington): "Motifs for processes on networks"
- Maryam Shanechi (University of Southern California): "Dynamical modeling, decoding, and control of multiscale brain networks"
- Fernando da Silva Borges (Federal University of ABC): "Inference of topology and the nature of synapses in neuronal networks"
- Wrap-up and ECR Best Presentation award
We have created a YouTube playlist containing all of the talks; you can also watch individual talks below.
Joseph Lizier (chair) - Opening remarks
Thomas Parr - "Inferring what to do"
In recent years, the "planning as inference" paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about "how I am going to act". In this talk, I will overview some of the factors that contribute to this prior, through the lens of active inference. These are rooted in optimal experimental design, information theory, and statistical decision making. The first part of the talk summarises the principles that underwrite active inference and motivates the question of how we formulate prior beliefs about how to act. The second part unpacks this in terms of exploitative and explorative behaviours. Finally, we consider the implementation of behavioural policies in terms of movement, the neuronal message passing that underwrites this, and the computational pathologies that result from aberrant priors.
Abdullah Makkeh - "A differentiable measure of pointwise shared information"
Partial information decomposition (PID) of the multivariate mutual information describes the distinct ways in which a set of source variables contains information about a target variable. The groundbreaking work of Williams and Beer has shown that this decomposition cannot be determined from classic information theory without making additional assumptions, and several candidate measures have been proposed, often drawing on principles from related fields such as decision theory. None of these measures is differentiable with respect to the underlying probability mass function. We here present a novel measure that draws only on the principle linking the local mutual information to exclusion of probability mass. This principle is foundational to the original definition of the mutual information by Fano. We reuse this principle to define a measure of shared information based on the shared exclusion of probability mass by the realizations of source variables. Our measure is differentiable and well-defined for individual realizations of the random variables. Thus, it lends itself, for example, to local learning in artificial neural networks. We show that the measure can be interpreted as local mutual information with the help of an auxiliary variable. We also show that it has a meaningful Moebius inversion on a redundancy lattice and obeys a target chain rule. We give an operational interpretation of the measure based on the decisions that an agent should take if given only the shared information.
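As a toy illustration of the starting point of this construction (the local mutual information, not the proposed shared-information measure itself), the pointwise value for a single joint realization (s, t) is i(s;t) = log2[p(s,t) / (p(s)p(t))]. The function name and example distribution below are our own:

```python
import numpy as np

def local_mutual_information(p_joint, s, t):
    """Pointwise (local) MI of one realization (s, t), in bits,
    given a joint pmf stored as a 2-D array p_joint[s, t]."""
    p_s = p_joint.sum(axis=1)[s]   # marginal P(S = s)
    p_t = p_joint.sum(axis=0)[t]   # marginal P(T = t)
    return np.log2(p_joint[s, t] / (p_s * p_t))

# Two perfectly correlated bits: observing S = 0 excludes half of the
# probability mass for T, so the local MI of (0, 0) is log2(2) = 1 bit.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(local_mutual_information(p, 0, 0))  # prints 1.0
```

Averaging this local quantity over all realizations recovers the classical mutual information I(S;T).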
Sarah Marzen - "Using lossy representations to find the neural code?"
One of the major questions in neuroscience centers around definition and extraction of the neural code. I will talk about this problem in the abstract, drawing on rate-distortion theory (a less-used branch of information theory) to define the neural code, and will then describe a new method that may allow for practical extraction of the neural code from data.
Alice Schwarze - "Motifs for processes on networks" - slides
(Alice Schwarze, Mason A. Porter)
The study of motifs in networks can help researchers uncover links between structure and function of networks in biology, ecology, neuroscience, and many other fields. To connect the study of motifs in networks (which is common, e.g., in biology and the social sciences) with the study of motifs in dynamical processes (which is common in neuroscience), we propose to distinguish between "structure motifs" (i.e., graphlets) in networks and "process motifs" (i.e., structured sets of walks) on networks. Using as examples the covariances and correlations in a multivariate Ornstein--Uhlenbeck process on a network, we demonstrate that the distinction between structure motifs and process motifs makes it possible to gain new, quantitative insights into mechanisms that contribute to important functions of dynamical systems on networks.
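The covariances of a multivariate Ornstein-Uhlenbeck process on a network, the running example above, can be computed by solving a Lyapunov equation. This sketch is our own illustration (the drift B = -(I - eps*A), unit noise, and the 3-node chain are assumptions, not the authors' setup):

```python
import numpy as np

def ou_stationary_covariance(A, eps):
    """Stationary covariance Sigma of dx = B x dt + dW with
    B = -(I - eps*A), solving B Sigma + Sigma B^T + I = 0
    via Kronecker products (row-major vec convention)."""
    n = A.shape[0]
    B = -(np.eye(n) - eps * A)
    I = np.eye(n)
    # vec(B Sigma) = (B kron I) vec(Sigma); vec(Sigma B^T) = (I kron B) vec(Sigma)
    M = np.kron(B, I) + np.kron(I, B)
    return np.linalg.solve(M, -I.flatten()).reshape(n, n)

# A 3-node chain 0-1-2: process motifs (walks) make the covariance between
# adjacent nodes larger than between the non-adjacent end nodes.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S = ou_stationary_covariance(A, 0.3)
print(S[0, 1] > S[0, 2] > 0)  # True
```

The ordering of covariances reflects the walk expansion of (I - eps*A)^(-1): shorter walks between nodes contribute larger terms, which is the kind of structure-to-process link the talk formalizes.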
Fernando da Silva Borges - "Inference of topology and the nature of synapses in neuronal networks"
The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, using bivariate time series that are easy to handle and measure. The success of our approach relies on a surprising property found in neuronal networks, by which non-adjacent neurons do "understand" each other (positive mutual information) even though this exchange of information is not capable of causing an effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer the directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity when only mean fields can be measured.
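A minimal sketch of the two quantities contrasted above, using plug-in estimators on binary time series (the talk's causal-mutual-information methodology is more involved; the history length of 1, the estimators, and the toy driven system are our assumptions):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information I(X;Y) in bits for discrete series."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1})."""
    trip = list(zip(y[1:], x[:-1], y[:-1]))   # (y_t, x_{t-1}, y_{t-1})
    n = len(trip)
    p3 = Counter(trip)
    p_xy = Counter((a, b) for _, a, b in trip)   # (x_{t-1}, y_{t-1})
    p_yy = Counter((c, b) for c, _, b in trip)   # (y_t, y_{t-1})
    p_y = Counter(b for _, _, b in trip)         # y_{t-1}
    return sum(k / n * np.log2(k * p_y[b] / (p_yy[(c, b)] * p_xy[(a, b)]))
               for (c, a, b), k in p3.items())

# X drives Y with a one-step lag (plus 10% bit flips): the transfer entropy
# is strongly asymmetric, TE(X -> Y) >> TE(Y -> X).
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

Mutual information alone is symmetric and would report dependence in both directions here; the asymmetry of transfer entropy is what carries the directional information exploited in the talk.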
Naotsugu Tsuchiya - "Structure of information to understand the physical basis of consciousness" - slides
One of the biggest mysteries in science is the origin of subjective conscious experience. In modern investigations of consciousness, researchers distinguish the level and the contents of consciousness. The former is about the global state of conscious creatures, ranging from very low in coma, vegetative states, deep dreamless sleep, and deep general anesthesia to high in the fully wakeful state. The latter is about the contents that one experiences at a given moment at a high level of consciousness, sometimes called qualia, covering all sensory and any other experiences.
In both senses, consciousness has been difficult to relate to the electrochemical physical interactions in the brain. Meanwhile, informational structure, which is derived from neural activity and connectivity, is more promising as a possible candidate that is isomorphic to consciousness.
In this talk, I will explain three approaches that try to characterize 1) structures of information, 2) structures of consciousness, and 3) relationship between these two structures, primarily drawing on the approach with Integrated Information Theory [Tononi 2004 BMC, Tononi 2016 Nat Rev Neuro, Oizumi 2016 PNAS, Haun 2018, Leung 2020 bioRxiv] and Category Theory [Spivak 2011, Tsuchiya 2016 Neurosci Res, Tsuchiya 2020 OSF].
Oliver Cliff - "Exact Inference of Linear Dependence Between Multiple Autocorrelated Time Series"
Inferring linear dependence between time series is central to the study of dynamics, and has significant consequences for our understanding of natural and artificial systems. Unfortunately, traditional hypothesis tests often yield spurious associations (type I errors) or omit causal relationships (type II errors) when used to infer directed or multivariate dependencies in time-series data. Here we show that this problem is due to autocorrelation in the analysed time series -- a property that is ubiquitous across a diverse range of applications, from brain dynamics to climate change, and can be exacerbated by digital filtering. This insight enabled us to derive the first exact hypothesis tests for a large family of multivariate linear-dependence measures, including Granger causality and mutual information. Using numerical simulations and fMRI brain recordings, we show that our tests maintain the expected false-positive rate with minimally-sufficient samples, while demonstrating that asymptotic likelihood-ratio tests can induce unbounded statistical errors. Our findings suggest that many time-series dependencies in the scientific literature may have been, and may continue to be, spuriously reported or missed if our testing procedure is not widely adopted. (Cliff et al., arXiv:2003.03887)
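The core problem the talk addresses can be reproduced in a few lines (this simulation is our illustration, not the paper's exact test): two completely independent AR(1) series are flagged as correlated far more often than the nominal 5% rate when the classical Pearson test ignores autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, phi, trials = 200, 0.95, 500
t_crit = 1.96                                 # large-sample 5% threshold
r_crit = t_crit / np.sqrt(n - 2 + t_crit**2)  # naive critical |r| for n samples

false_positives = 0
for _ in range(trials):
    x, y = ar1(n, phi, rng), ar1(n, phi, rng)  # independent by construction
    false_positives += abs(np.corrcoef(x, y)[0, 1]) > r_crit

print(false_positives / trials)  # far above the nominal 0.05
```

Strong autocorrelation shrinks the effective sample size well below n, so the naive threshold is far too permissive; correcting for this is what the exact tests in the talk provide.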
Marco Celotto - "Developing and testing the concept of intersection information using PID"
(Marco Celotto, Stefano Panzeri)
To crack the neural code used during a perceptual decision-making process, it is fundamental to determine not only how information about sensory stimuli is encoded in neural activity (the encoding stage), but also how this information is read out to inform the behavioral decision.
In previous work, our group used the concept of redundancy, as defined in the mathematical framework of Partial Information Decomposition (PID), to develop an information-theoretic measure capable of quantifying the part of the information that lies at the intersection between the mutual information of the stimulus S and the neural response R, and the mutual information of R and the consequent behavioral choice C. We called this measure "information-theoretic intersection information", or II(S;R;C).
In this talk, we present our latest progress on how to use II(S;R;C) to study neural coding. We examine in detail its conceptual properties, and we show the results it provides both on simulated and on real neural data (the latter to test the role of spike timing in perceptual decision making). Furthermore, we discuss how to test the significance of the measure through a proper statistical null hypothesis.
References: Panzeri et al. 2017 Neuron; Pica et al. 2017 NIPS
Note: YouTube video of talk not available
Xenia Kobeleva - "Exploring relevant spatiotemporal scales for analyses of brain dynamics"
Introduction: The brain switches between cognitive states at high speed by rearranging interactions between distant brain regions. Using analyses of brain dynamics, neuroimaging researchers have been able to further describe this dynamic brain-behavior relationship. However, the diversity of methodological choices for brain dynamics analyses impedes comparisons between studies of brain dynamics, reducing their reproducibility and generalizability. A key choice is deciding on the spatiotemporal scale of the analysis, which includes both the number of regions (spatial scale) and the sampling rate (temporal scale). Choosing a suboptimal scale might either lead to loss of information or to inefficient analyses with increased noise. Therefore, the aim of this study was to assess the effect of different spatiotemporal scales on analyses of brain dynamics and to determine which spatiotemporal scale retrieves the most relevant information on dynamic spatiotemporal patterns of brain regions.
Methods: We compared the effect of different spatiotemporal scales on the information content of the evolution of spatiotemporal patterns using empirical as well as simulated timeseries. Empirical timeseries were extracted from the Human Connectome Project [Van Essen et al., 2013]. We then created a whole-brain mean-field model of neural activity [Deco et al., 2013] resembling the key properties of the empirical data by fitting the global synchronization level and measures of dynamical functional connectivity. This resulted in different spatiotemporal scales, with spatial scales from 100 to 900 regions and temporal scales varying from milliseconds to seconds. With a variation of an eigenvalue analysis [Deco et al., 2019], we estimated the number of spatiotemporal patterns over time and then extracted these patterns with an independent component analysis. The evolution of these patterns was then compared between scales with regard to the richness of switching activity (corrected for the total number of patterns) using the measure of entropy. Given the probability of the occurrence of a pattern over time, we defined the entropy as a function of the probability of patterns.
Results: Using the entropy measure, we were able to specify both optimal spatial and temporal scales for the evolution of spatiotemporal patterns (fig. 1). The entropy followed an inverted U-shaped function of spatial scale, with the highest value at an intermediate parcellation of n = 300. The entropy was highest at a temporal scale of around 200 ms.
Conclusions and discussion: We investigated which spatiotemporal scale contains the highest information content for brain dynamics analyses. By combining whole-brain computational modelling with an estimation of the number of resulting patterns, we were able to analyze whole-brain dynamics at different spatial and temporal scales. From a probabilistic perspective, we explored the entropy of the probability of resulting brain patterns, which was highest at a parcellation of n = 300. Our results indicate that although more spatiotemporal patterns with increased heterogeneity are found with higher parcellations, the most relevant information on brain dynamics is captured when using a spatial scale of n = 300 and a temporal scale of 200 ms. Our results therefore provide guidance for researchers on choosing the optimal spatiotemporal scale in studies of brain dynamics.
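The entropy measure described in the methods can be sketched as the Shannon entropy of the empirical pattern-occurrence distribution (the function and the toy label sequences below are our illustrative assumptions, not the study's code):

```python
import numpy as np

def pattern_entropy(pattern_labels):
    """Shannon entropy (bits) of the occurrence probabilities of the
    spatiotemporal patterns observed over time."""
    _, counts = np.unique(pattern_labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Rich switching among 4 patterns vs. one dominant pattern:
rich = [0, 1, 2, 3] * 25          # patterns visited equally often
poor = [0] * 97 + [1, 2, 3]       # one pattern dominates
print(pattern_entropy(rich))  # 2.0 bits
print(pattern_entropy(poor))  # much lower
```

Higher entropy indicates richer switching between patterns, which is the criterion used above to compare spatiotemporal scales.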
Jesus Malo - "Information flow under visual cortical magnification: Gaussianization estimates and theoretical results"
Computations done by individual neural layers along the visual pathway (e.g. opponency at chromatic channels and their saturation, spatial filtering and the nonlinearities of the texture sensors at visual cortex) have been suggested to be organized for optimal information transmission. However, the efficiency of these layers has not been measured when they operate together on colorimetrically calibrated natural images and using multivariate information-theoretic units over the joint array of spatio-chromatic responses.
In this work we present a statistical tool to address this question in an appropriate (multivariate) way. Specifically, we propose an empirical estimate of the information transmitted through a network based on a recent Gaussianization technique. Our Gaussianization reduces the challenging multivariate density estimation problem to a set of simpler univariate estimations. Here we extend our previous results [Gomez et al. J.Neurophysiol.2020, arxiv:1907.13046] and [J.Malo arxiv:1910.01559] to address the problem posed by cortical magnification. Cortical magnification implies an expansion of the dimensionality of the signal, and here the proposed total correlation estimator is compared to theoretical predictions that work in scenarios that do not preserve the dimensionality.
In psychophysically tuned networks with Poisson noise, and assuming sensors of equivalent signal/noise quality at different neural layers, results on transmitted information show that: (1) progressively deeper representations are better in terms of the amount of information captured about the input, (2) the transmitted information up to the cortical representation follows the PDF of natural scenes over the chromatic and achromatic dimensions of the stimulus space, (3) the contribution of spatial transforms to capture visual information is substantially bigger than the contribution of chromatic transforms, and (4) nonlinearities of the responses contribute substantially to the transmitted information but less than the linear transforms.
A. Gomez-Villa, M. Bertalmio and J. Malo (2020). Visual information flow in Wilson–Cowan networks. J. Neurophysiology.
J. Malo (2020). Spatio-Chromatic Information available from different Neural Layers via Gaussianization.
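As a toy illustration of the quantity being estimated (total correlation; the Gaussianization estimator in the talk handles non-Gaussian data, which this closed form does not), for a multivariate Gaussian the total correlation reduces to T = 0.5 * log(prod_i cov_ii / det(cov)):

```python
import numpy as np

def gaussian_total_correlation(cov):
    """Total correlation T = sum_i h(x_i) - h(x), in nats, for a
    multivariate Gaussian with covariance matrix cov."""
    return 0.5 * np.log(np.prod(np.diag(cov)) / np.linalg.det(cov))

# Independent variables carry zero total correlation; dependence makes it positive.
print(gaussian_total_correlation(np.eye(3)))        # 0.0
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
print(gaussian_total_correlation(cov))              # ~0.51 nats
```

Gaussianization-based estimators recover this kind of multivariate redundancy for arbitrary densities by reducing the problem to univariate estimates, which is what makes the spatio-chromatic analyses above tractable.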
Daniele Marinazzo - "Synergistic information in a dynamical model implemented on the human structural connectome reveals spatially distinct associations with age" - slides
(Davide Nuzzi, Mario Pellicoro, Leonardo Angelini, Daniele Marinazzo, and Sebastiano Stramaglia)
In a previous study implementing the Ising model on a 2D lattice, we showed that the joint synergistic information that two variables share about a target one peaks before the transition to an ordered state (the critical point). Here we implemented the same model on individual structural connectomes to answer these questions:
Pedro Mediano - "Multi-target information decomposition and applications to integrated information theory"
The Partial Information Decomposition (PID) framework allows us to decompose the information that multiple source variables have about a single target variable. In its 10 years of existence, PID has spawned numerous theoretical and practical tools to help us understand and analyse information processing in complex systems. However, the asymmetric role of sources and target in PID hinders its application in certain contexts, like studying information sharing in multiple processes evolving jointly over time. In this talk we present a novel extension of the PID framework to the multi-target setting, which lends itself more naturally to the analysis of multivariate dynamical systems. This new decomposition is tightly linked with Integrated Information Theory, and gives us new analysis tools as well as a richer understanding of information processing in multivariate dynamical systems.
Daniel Polani - "Information Theory for Cognitive Modelling: Speculations and Directions"
The talk will offer a mixed bag of ideas and considerations about how information theory needs to be further developed to be useful, and also about how it is, already now, able to open routes for hypotheses about how cognition may be organized, and may need to be organized, in the brain.
Note: YouTube video of talk not available
Maryam Shanechi - "Dynamical modeling, decoding, and control of multiscale brain networks"
In this talk, I first discuss our recent work on modeling, decoding, and controlling multisite human brain dynamics underlying mood states. I present a multiscale dynamical modeling framework that allows us to decode mood variations for the first time and identify brain sites that are most predictive of mood. I then develop a system identification approach that can predict multiregional brain network dynamics (output) in response to electrical stimulation (input) toward enabling closed-loop control of brain network activity. Further, I demonstrate that our framework can uncover multiscale behaviorally relevant neural dynamics from hybrid spike-field recordings in monkeys performing naturalistic movements. Finally, the framework can combine information from multiple scales of activity and model their different time-scales and statistics. These dynamical models, decoders, and controllers can advance our understanding of neural mechanisms and facilitate future closed-loop therapies for neurological and neuropsychiatric disorders.
Joseph Lizier (chair) - Closing remarks and Best ECR Presentation award
This workshop has been run at CNS for over a decade now -- links to the websites for the previous workshops in this series are below:
Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC-BY-3.0