CNS*2016 ITW
6-7 July 2016, Jeju, South Korea
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.
The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
For the programs of past IT workshops, see the Bionet page at Columbia University.
The workshop will be held as a part of the wider CNS*2016 meeting, on Jeju Island, South Korea. Please see the CNS*2016 website for registration to the workshops (this is required to attend).
Our program is shown in the following table.
You can also download a PDF copy of the program.
Wednesday, July 6

Session: Characterising information processing (Chair: Joseph Lizier)
09:00-09:40  Eli Shlizerman, University of Washington: "Probabilistic graphical modeling for neuronal networks"
09:40-10:20  Masafumi Oizumi, RIKEN Brain Science Institute / Monash University: "A unified framework for quantifying information integration based on information geometry"
10:20-11:00  Break

Session: Spike coding (1) - modelling (Chair: Taro Toyoizumi)
11:00-11:40  Tatyana Sharpee, Salk Institute for Biological Studies: "Sensory coding in the natural environment"
11:40-12:20  Si Wu, Beijing Normal University: "Dynamical information encoding in neural adaptation"
12:20-13:40  Lunch

Session: Signal processing and design (Chair: Taro Toyoizumi)
13:40-14:20  Mark McDonnell, University of South Australia: "Quantifying information transmission in neuroprostheses: mutual information or trained neural network classifiers?"
14:20-15:00  Sakyasingha Dasgupta, RIKEN Brain Science Institute / IBM Research - Tokyo: "Understanding computation with noise in spiking networks: Deterministic and stochastic models"
15:00-15:40  Break

Session: Spike coding (2) - efficient coding (Chair: Justin Dauwels)
15:40-16:20  Braden Brinkman, University of Washington: "How do efficient encoding strategies depend on origins of noise in neural circuits?"
16:20-17:00  Rama Ratnam, University of Illinois at Urbana-Champaign (USA), and Advanced Digital Sciences Center, Illinois at Singapore (Singapore): "Optimal energy-efficient coding in sensory neurons"

Thursday, July 7

Session: Information dynamics and computation (1) (Chair: Joseph Lizier)
09:00-09:40  Anna Levina, Institute of Science and Technology Austria: "Increase in information processing capacity with approach to criticality in developing neural networks"
09:40-10:20  Masanori Shimono, Osaka University: "Architectures in the informatic microconnectome"
10:20-11:00  Break

Session: Testing coding hypotheses (Chair: TBA)
11:00-11:40  Michael Wibral, Goethe University, Frankfurt: "Predictive coding without the storytelling - an information theoretic approach to test a popular theory"
11:40-12:20  Shigeru Shinomoto, Kyoto University: "Difference in neuronal coding schemes in the brain"
12:20-13:40  Lunch

Session: Information dynamics and computation (2) (Chair: Michael Wibral)
13:40-14:20  Joseph T. Lizier, The University of Sydney: "Estimating information transfer between spike trains"
14:20-14:50  Felix Goetze, National Central University, Chung-Li, Taiwan: "Sorted local transfer entropy between spike trains distinguishes inhibitory from excitatory interactions"
14:50-15:30  Break

Session: Late breaking talks (Chair: Justin Dauwels)
15:30-16:00  Dennis Goldschmidt, Champalimaud Center for the Unknown: "A neural model of information processing for insect-like navigation"
16:00-16:30  Haiping Huang, RIKEN Brain Science Institute: "A first-order phase transition reveals geometrical structure of neural codewords"
16:30-17:00  Leonardo Gollo, Queensland Institute of Medical Research (QIMR), Brisbane: "Optimal performance with diversity: Combining critical sensitivity with subcritical reliability"
Braden Brinkman - "How do efficient encoding strategies depend on origins of noise in neural circuits?"
Our sensory nervous system receives vast quantities of external information that it must reliably encode and transmit to deeper regions of the brain. These signals can become corrupted by noise at various stages of transmission, and yet our brain is able to reliably decode this sensory information and perform computations with it. For example, we are able to see over a wide range of light levels from daylight to starlight, despite the drastic change in relative noise levels as photon rates decrease. Sensory neurons are able to adjust how they respond to stimuli to account for changes in the environment. How then should neural encoding strategies be adjusted so as to be robust to different sources of noise throughout the circuit? We develop a simple neural circuit model to solve for these optimal encoding strategies, focusing on neurons arranged in parallel channels. We find that noise sources entering the circuit at different processing stages compete to determine the optimal encoding strategy, including whether pathways should encode common stimuli independently or redundantly, and whether pairs of neurons have the same or opposite sensitivity to stimuli.
Sakyasingha Dasgupta - "Understanding computation with noise in spiking networks: Deterministic and stochastic models"
In the first part of my talk I will address the question of how the source of cortical variability may influence computation or signal processing. We address this by studying two types of balanced random networks of quadratic IF neurons with irregular spontaneous activity: (a) a deterministic network with strong connections generating noise through chaotic dynamics, and (b) a stochastic network with weak connections receiving noisy input. Both are analytically tractable in the limit of large network size and channel time constant. Despite the different sources of noise, the spontaneous activity of these networks is identical unless a majority of neurons are simultaneously recorded. However, the two networks show remarkably different sensitivity to external stimuli: in the former, input reverberates internally and can be read out over a long time, whereas in the latter, inputs rapidly decay. This difference is further enhanced by activity-dependent plasticity at input synapses, producing a marked difference in decoding inputs from neural activity. We show that this leads to distinct performance of the two networks in integrating temporally separate signals from multiple sources, with the activity of the deterministic chaotic network serving as a reservoir for Monte Carlo sampling to perform near-optimal Bayesian integration. In the second part of my talk, I will briefly focus on the popular deep learning model of stochastic Boltzmann machines. We present a novel deep architecture called the dynamic Boltzmann machine, in which learning occurs based on the timing of spikes with LTP and LTD components. Here, we interpret STDP with homeostasis as a means of maximising the log-likelihood of any given temporal pattern and derive an exact learning rule for the parameters of the network. This can be applied as a stochastic generative model of high-dimensional temporal patterns.
Felix Goetze - "Sorted local transfer entropy between spike trains distinguishes inhibitory from excitatory interactions"
Recent studies have used transfer entropy to measure the effective connectivity among large populations of neurons. Analyzing these networks has given novel insight into information transfer in neural networks (Nigam et al., The Journal of Neuroscience, 2016). The information transfer quantified by estimating transfer entropy detects directed nonlinear interactions between neurons in a model-free manner. High information transfer between two spike trains is evidence of an underlying excitatory synapse between the neurons. However, even inhibitory synapses show significant information transfer when sufficient spiking activity is observed. We aim to extend effective connectivity analysis by revealing whether the information transfer comes from an excitatory or an inhibitory synapse. To distinguish these types of interactions, we analyze the local transfer entropies (Lizier et al., Physical Review E, 2008), which are oppositely signed for each interaction type, allowing us to define the sorted local transfer entropy as the discriminating quantity. We further explore the use of dynamic state selection for estimating the entropies (Stetter et al., PLOS Computational Biology, 2012) in order to remove network effects during highly synchronized bursting events of neural populations, which are not indicative of a direct synaptic interaction. Applying these techniques to spike trains from simulated networks of Izhikevich neurons with random synaptic delays and connection weights evolved by spike-timing-dependent plasticity, as in a previous study (Ito et al., PLOS One, 2011), we show that inhibitory and excitatory synapses can be inferred and the network reconstruction improved.
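The sign-based idea at the heart of this approach can be illustrated with a minimal plug-in estimator of local (pointwise) transfer entropy for binary spike trains. This is a toy sketch with history length k = 1, not the authors' implementation (which additionally uses dynamic state selection and sorting of the pointwise values):

```python
import numpy as np
from collections import Counter

def local_transfer_entropy(source, target):
    """Plug-in estimate of local transfer entropy, in bits, from a binary
    source spike train to a binary target spike train, history length k = 1.
    The average of the local values is the usual plug-in TE estimate;
    individual values can be negative, which is what allows them to
    discriminate interaction types."""
    n = len(target) - 1
    joint, cond, marg, past = Counter(), Counter(), Counter(), Counter()
    for t in range(n):
        joint[(target[t + 1], target[t], source[t])] += 1
        cond[(target[t], source[t])] += 1
        marg[(target[t + 1], target[t])] += 1
        past[target[t]] += 1
    local = np.empty(n)
    for t in range(n):
        p_with_src = joint[(target[t + 1], target[t], source[t])] / cond[(target[t], source[t])]
        p_without = marg[(target[t + 1], target[t])] / past[target[t]]
        local[t] = np.log2(p_with_src / p_without)
    return local

# Toy excitatory pair: the target is more likely to fire just after a
# source spike, so the average local TE comes out positive.
rng = np.random.default_rng(0)
src = (rng.random(5000) < 0.3).astype(int)
tgt = np.zeros(5000, dtype=int)
for t in range(4999):
    tgt[t + 1] = int(rng.random() < (0.8 if src[t] else 0.1))
lte = local_transfer_entropy(src, tgt)
```

All probabilities are simple empirical frequencies, so this only behaves sensibly for long recordings; the cited work is needed for the sorted statistic and for bursting corrections.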
Dennis Goldschmidt - "A neural model of information processing for insect-like navigation"
Social insects, like ants and bees, prove that miniature brains are able to generate navigation in complex environments. Some of the observed navigational capabilities require spatial representations and memory, which poses the question of how insect brains can support such computations. In my talk, I will present an insect-inspired model for representing and learning population-encoded vectors in navigating agents. It consists of a path integration mechanism, reward-modulated learning of global vectors, random search, and action selection. The path integration mechanism computes a vectorial representation of the agent's current location. The vector is encoded in the activity pattern of circular neural networks, where the angle is population-coded and the distance is rate-coded. Our results show that the model enables robust path integration and homing, even in the presence of external sensory noise. Furthermore, the proposed learning rule produces goal-directed navigation under realistic conditions. Our model aims to bridge behavioral observations with possible underlying neural substrates, to show how complex information processing and computations in insect navigation can arise from simple neural mechanisms.
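The encoding scheme described here (angle population-coded on a ring, distance rate-coded, path integration as accumulation) can be sketched in a few lines. This is a deliberately minimal illustration with hypothetical parameters, not the authors' model:

```python
import numpy as np

N = 18                                                   # units in the ring
prefs = np.linspace(0.0, 2 * np.pi, N, endpoint=False)   # preferred headings

def encode_step(heading, distance):
    """One movement step: heading population-coded by rectified cosine
    tuning, distance rate-coded as an overall gain on the activity."""
    return distance * np.maximum(np.cos(prefs - heading), 0.0)

def integrate(steps):
    """Path integration: linearly accumulate the step activity patterns."""
    acc = np.zeros(N)
    for heading, distance in steps:
        acc += encode_step(heading, distance)
    return acc

def decode(acc):
    """Population-vector readout of the accumulated outbound direction;
    the home vector points in the opposite direction."""
    vec = np.sum(acc * np.exp(1j * prefs))
    return np.angle(vec)

# Two unit-length steps at 0 and 60 degrees -> net direction 30 degrees.
steps = [(0.0, 1.0), (np.pi / 3, 1.0)]
theta = decode(integrate(steps))
```

Because encoding is linear in the step vectors, the population-vector readout recovers the direction of the net displacement, which is the essence of homing in this representation.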
Leonardo Gollo - "Optimal performance with diversity: Combining critical sensitivity with subcritical reliability"
As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities.
Haiping Huang - "A first-order phase transition reveals geometrical structure of neural codewords"
A neuronal population uses collective spiking patterns as neural codewords to represent external information and communicate with downstream brain regions. How these codewords are organized is of fundamental importance in systems neuroscience, yet remains largely unknown. Here we develop an entropy-based analysis to investigate the structure of codewords in populations of retinal ganglion cells. We establish the fundamental relationship between well-known associative memory (Hopfield) models and real biological (retinal) networks, in terms of their common geometrical structure of codewords. This structure is revealed by a first-order phase transition. We show that the neural codeword space of the retinal network is divided into multiple distinct clusters. This well-designed structure may be functionally advantageous for the neural population not only to discriminate different neural activity patterns, but also to carry out error correction. We also reveal a special nature of the all-silent codeword, which is surrounded by the densest cluster of codewords and located within a reachable distance from most codewords. This study marks an important step toward understanding the neural codewords that shape information representation in a biological network.
Anna Levina - "Increase in information processing capacity with approach to criticality in developing neural networks"
Human brains possess sophisticated information processing capabilities, which rely on the coordinated interplay of several billions of neurons. Despite recent advances in characterizing functional brain circuitry, however, it remains a major challenge to understand the principles of how functional neural networks develop and maintain these processing capabilities. Using multielectrode spike recordings in mouse hippocampal and cortical neurons over the first four weeks in vitro, we demonstrate that developing neuronal networks increase their information processing capacities, as quantified by transfer entropy and active information storage. The increase in processing capacity is tightly linked with the approach to criticality (correlation r = 0.68, p < 1e-9; r = 0.55, p < 1e-6 for transfer and storage, respectively). This increase of processing capacity with the approach to a critical state has been predicted by modelling studies, and our results are the first to confirm this prediction experimentally. We therefore suggest that neural networks approach a critical state during maturation to increase their processing capabilities.
Joseph Lizier - "Estimating information transfer between spike trains"
The nature of a directed relationship (or lack thereof) between neural entities is a fundamental topic of inquiry in computational neuroscience. Information theory provides the primary tool, transfer entropy (TE), for analysis of such directed relationships in a nonlinear, model-free fashion, by measuring the predictive gain about state transitions in a target time series from observing some source time series. While TE has been used extensively to analyse recordings from fMRI, MEG and EEG, fewer applications have been made to spiking time series. Temporal binning, followed by computing TE on the resulting binary discrete-time series, is the default approach here, leaving open the questions of whether one can obtain estimates that avoid temporal binning and work directly on the (continuous-valued) timestamps of spikes, and whether such estimates would be more accurate. Recent theoretical developments have suggested a path forward here, and we build on these to propose an estimator for a point-process formulation of TE, remaining in the continuous-time regime by harnessing a nearest-neighbours approach to matching (rather than binning) inter-spike interval (ISI) histories and future spike times. By retaining as much information about ISIs as possible, this estimator is expected to improve on properties of TE estimation such as robustness to noise and undersampling, bias removal, and sensitivity to the strength of the relationship.
Mark McDonnell - "Quantifying information transmission in neuroprostheses: mutual information or trained neural network classifiers?"
A fundamentally important design problem in any neuroprosthesis is how to ensure perceptually important information is converted effectively from electrical current into spiking activity in neural populations, in a manner that can be usefully interpreted by the brain. Since 2010, we have developed a model of cochlear implant electrical stimulation and applied information theoretic methods to the analysis of the model to predict design aspects such as the ideal current level, how many electrodes are optimal, and where electrodes should be located along the cochlea, e.g. [McDonnell et al., IEEE Trans. on IT, 2010; Gao et al., Phys. Rev. E, 2014; Gao et al., IEEE EMBS NER, 2015; Gao et al., ISIT, 2014].
A fundamental challenge for such an approach is whether increased mutual information in reality translates into enhanced human function. We have therefore developed an alternative approach in which we train standard artificial neural network classifiers, using simulation data, to decide which electrode was stimulated in simulations of the model [Gao et al., IEEE EMBS NER, 2015]. This enables quantification of an upper bound on electrode discriminability, and results in a measure with several advantages over mutual information.
In this talk, I will first discuss these models and our results. Next, drawing on these examples, I will discuss the relationship between two commonly used objective functions for training state-of-the-art deep neural networks, minimum mean square error and minimum cross-entropy, and speculate on whether other information theoretic approaches might be beneficially applied to enhance the training of deep neural networks, or the evaluation of neuroprostheses.
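The classifier-based alternative can be illustrated with a deliberately simple toy: surrogate Gaussian "response" data and a nearest-centroid classifier stand in for the cochlear-implant simulations and trained neural-network classifiers of the cited work, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate data: responses to stimulation of one of 4 electrodes, each
# electrode evoking a different mean activity pattern plus Gaussian noise.
n_electrodes, n_trials, n_features = 4, 200, 16
means = rng.normal(size=(n_electrodes, n_features))
X = np.concatenate([m + 0.5 * rng.normal(size=(n_trials, n_features)) for m in means])
y = np.repeat(np.arange(n_electrodes), n_trials)

# Train/test split, then fit a nearest-centroid classifier; test accuracy
# serves as an empirical proxy for electrode discriminability.
idx = rng.permutation(len(y))
tr, te = idx[:600], idx[600:]
centroids = np.stack([X[tr][y[tr] == c].mean(axis=0) for c in range(n_electrodes)])
pred = np.argmin(((X[te][:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y[te]).mean()
```

Unlike a mutual information estimate, the accuracy of a trained classifier is directly interpretable as task performance, which is the advantage the abstract alludes to.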
Masafumi Oizumi - "A unified framework for quantifying information integration based on information geometry"
There have been many attempts to identify neural correlates of consciousness (NCC). The NCC is defined as the minimum neural mechanisms jointly sufficient for conscious experience. One promising candidate for the NCC is information integration of cortical activity. This idea is theorized by Integrated Information Theory (IIT), which states that the brain (or any physical system) has to integrate information to generate consciousness. This hypothesis has been supported by neurophysiological experiments that show the breakdown of cortical connectivity when consciousness is lost. Such accumulating evidence suggests that measuring information integration from neural data could play an essential role in understanding consciousness. In this talk, we propose a unified theoretical framework for quantifying information integration based on information geometry. We derive a novel measure of "integrated information", which quantifies how much information is integrated in a system. The original measure of integrated information proposed in IIT was derived under restricted conditions, and thus the application of the measure to experimental data has been severely limited. Our measure is validly derived under general conditions and thus broadens the applicability to experimental data. In the proposed framework, integrated information is quantified by the minimized Kullback-Leibler divergence between the actual probability distribution of the system and an approximated probability distribution in which the system is statistically split into independent parts. Within the framework, integrated information is interpreted as information loss when causal influences between the parts are disrupted.
This framework also provides novel unified interpretations of various information theoretic measures of interactions, such as mutual information (predictive information), transfer entropy, and stochastic interaction, each of which is characterized by information loss when interactions among elements in the system are disconnected in a particular way. Our framework therefore provides an intuitive understanding of the relationships among the various measures and will be utilized for quantifying integrated information in neural data from a consistent perspective.
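Schematically (in my notation, not necessarily that of the talk), with $p$ the actual joint distribution over past and present system states and $M$ the manifold of "disconnected" models in which causal influences between the parts are severed, the construction described above takes the form

```latex
\Phi \;=\; \min_{q \in M} \; D_{\mathrm{KL}}\!\left(\, p(X_t, X_{t+\tau}) \;\|\; q(X_t, X_{t+\tau}) \,\right)
```

Different choices of the constraint manifold $M$ then recover mutual information, transfer entropy, or stochastic interaction, which is the unification the abstract describes.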
Rama Ratnam - "Optimal energy-efficient coding in sensory neurons"
The use of a spike-based code in the sensory nervous system must satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible; 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We will show that the optimization process leads to a dynamic (adaptive) threshold that functions as an internal decoder (reconstruction filter) and shapes the spike-firing threshold so that spikes are timed optimally. Spikes are emitted only when the coding error reaches a threshold. Thus a neuron is an encoder with an in-built decoder. It can keep track of the coding error dynamically and regulate it within the bounds dictated by the energy constraint. This is analogous to lossy source-coding. A stochastic extension is obtained by adding colored noise to the spiking threshold. We predict that a source-coding neuron can: i) reproduce experimentally observed spike times in response to a stimulus, and ii) reproduce the serial correlations in the observed sequence of interspike intervals. We validate these predictions using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is, in the limit of high firing rates, an instantaneous rate code and accurately predicts the peristimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.
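The encoder-with-in-built-decoder idea can be sketched in a few lines. This is a toy with assumed parameters and an assumed first-order (leaky) reconstruction filter, whereas the talk derives the threshold dynamics from a constrained optimization: a spike is emitted whenever the coding error reaches the threshold, and each spike increments the internal reconstruction.

```python
import numpy as np

def encode(stimulus, dt=0.001, theta=0.2, tau=0.05):
    """Spike whenever the coding error (stimulus minus the internal
    reconstruction) reaches the threshold theta; the in-built decoder adds
    theta to the reconstruction at each spike and lets it decay with time
    constant tau between spikes, keeping the error within a fixed bound."""
    decay = np.exp(-dt / tau)
    recon, spikes, trace = 0.0, [], []
    for t, s in enumerate(stimulus):
        recon *= decay                 # reconstruction decays between spikes
        if s - recon >= theta:         # coding error hits threshold -> spike
            spikes.append(t * dt)
            recon += theta             # decoder update; error drops by theta
        trace.append(recon)
    return np.array(spikes), np.array(trace)

# Constant stimulus: after a brief onset transient, the coding error stays
# bounded near theta, so the reconstruction tracks the stimulus while the
# spike rate is set by the leak, i.e. by the energy budget.
stim = np.ones(1000)
spikes, recon = encode(stim)
```

Lowering theta trades more spikes (energy) for lower distortion, which is the energy-fidelity trade-off in its simplest form.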
Tatyana Sharpee - "Sensory coding in the natural environment"
Natural stimuli hold the key to understanding how advanced forms of signal processing - those that make it possible for us to recognize specific people and events - occur in the brain. Understanding how the detailed signal representation provided by the sensory periphery can be transformed to mediate invariant forms of selectivity to complex input features is an important open problem. Towards this goal, I will describe a set of statistical methods that can be used in conjunction with natural stimuli to probe and characterize feature selectivity and invariance of neurons deep inside the sensory circuits. Using these methods, we are steadily building increasingly accurate reconstruction models of high-level sensory neurons.
Masanori Shimono - "Architectures in the informatic microconnectome"
The nervous system is designed to transmit and process the information necessary for surviving in the world. As one important step toward understanding information processing, we need to record electrical activities from as many neurons as possible, and to reconstruct comprehensive information flows (the informatics connectome). This presentation will introduce a series of studies characterizing information processing among more than 500 neurons recorded from barrel cortex using our multi-electrode array system. The information network showed clearly specific features: the strengths of information flow were log-normally distributed and showed a long tail. Based on this long-tailed property, the network organization shows hubs not only with respect to the number of connections (as in non-weighted networks) but also with respect to the amount of information flowing through the connections (as in weighted networks). These findings are similar to properties of the synaptic connections which underlie the electrical signals. Furthermore, the hubs were surrounded by hierarchical or multiscale organizations, including clusters and communities. High out-degree hubs often received inputs from neurons with high information flows, and hubs produced a rich-club organization by directly connecting to each other. These architectures reflect mechanisms by which the microconnectome can process information effectively in our brain.
Shigeru Shinomoto - "Difference in neuronal coding schemes in the brain"
Information in the brain is represented as neuronal spike trains. It has been revealed that neuronal spiking patterns differ across cortical areas, such that spikes are regular in motor areas, nearly random in visual and prefrontal cortical areas, and bursting in the hippocampus (Mochizuki et al., J. Neurosci., 2016, in press), suggesting that the spiking pattern plays a key role in information processing in the brain. Nevertheless, the exact manner in which information is coded in the brain is unknown, and several coding hypotheses have been suggested. Recently, we have suggested alternative coding schemes, in which information may be represented in either digital or analog form, and developed a method for determining the more likely coding scheme given a single spike train (Mochizuki and Shinomoto, Phys. Rev. E, 2014). Here I shall discuss the possibility that different functional regions rely on different coding schemes.
Eli Shlizerman - "Probabilistic graphical modeling for neuronal networks"
Inference of the dominant neural pathways which control sensorimotor responses in neuronal networks is challenging, even when the mapping of the static connectome is available. This difficulty stems from the fact that neurons are dynamical objects and interactions within the network are also dynamic. In our study, we introduce an approach to construct a Probabilistic Graphical Model (PGM) for dynamic neuronal networks. In particular, we apply our methodology to Caenorhabditis elegans (C. elegans), which comprises 302 neurons and for which the electrophysical connectivity map is resolved, and construct a PGM that represents the 'effective connectivity' between the neurons (correlations) and takes the dynamics into account.
We find that the functional connectome is significantly different from the static connectome, as it reflects recurrent interactions and nonlinear responses within the network. Bayesian posterior inference methods applied to the constructed PGM allow us to extract neural pathways in the connectome responsible for experimentally well-characterized movements of the worm, such as forward and backward locomotion. In addition, we show that the framework allows for inference of pathways that correspond to movements that were not fully characterized in experiments, and for 'reverse-engineering' studies in which a typical setup is imposed on the motor neuron layer and the dominant pathways that propagate to the sensory layer through the interneuron layer are identified.
Michael Wibral - "Predictive coding without the storytelling - an information theoretic approach to test a popular theory"
Predictive coding has become a dominant candidate theory for cortical function. Yet, current efforts to validate or refute this theory largely depend on an experimenter's opinion of what a neural structure should predict in the first place - leading to a circular approach. We introduce an information theoretic framework that can identify predictions and the computation of matches or prediction errors based on neural data alone, i.e. without the need for an experimenter's opinion. This is important, as it extends our efforts to validate predictive coding theories and their universal claim about brain function to the 99% of experiments that were not planned as tests of the theory, and to species where our intuitions about the things their brains predict are weak. We will demonstrate the use of this framework with human MEG data recorded in a priming task, and with paired single-cell recordings from the retina and the lateral geniculate nucleus.
Si Wu - "Dynamical information encoding in neural adaptation"
Adaptation refers to the general phenomenon that a neural system dynamically adjusts its response properties according to the statistics of external inputs. In response to a prolonged constant stimulation, neuronal firing rates first increase dramatically at the onset of the stimulation; afterwards, they decrease rapidly to a low level close to background activity. This attenuation of neural activity seems to contradict our experience that we can still sense the stimulus after the neural system has adapted. This prompts a question: where is the stimulus information encoded during adaptation? Here, we argue that the neural system employs a dynamical encoding strategy during adaptation: at the early stage of adaptation, the stimulus information is mainly encoded in the strong independent firings; as time goes on, the stimulus information shifts into the weak but concerted responses of neurons. We demonstrate that short-term plasticity can provide a mechanism to implement this.
Here are links to some photos of our workshop from the main CNS*2016 photo repository:
Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC BY 3.0.