
CNS*2018 Workshop on Methods of Information Theory in Computational Neuroscience

[Image: Information in the brain. Modified from an original credited to dow_at_uoregon.edu (distributed without restrictions).]

17-18 July, 2018

Seattle, USA


Aims and topics

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.

A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.

The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

For the programs of past workshops in this series, see the Previous workshops section below.

Location and Registration

The workshop will be held as part of the wider CNS*2018 meeting in Seattle, USA. Please see the CNS*2018 website to register for the workshops (registration is required to attend).

Best presentation award sponsor: Entropy

Awards

We would like to thank the Entropy journal for sponsoring our Best Presentation Award for ECRs, which we have awarded jointly to:

  Siwei Wang (Hebrew University of Jerusalem)
  Rainer Engelken (Columbia University)

Congratulations to our winners! Please see photos of the presentation below.

Organising committee

Speakers

The invited and contributing speakers for the workshop are listed in the program below.

Program

The program for each day of the workshop is shown below.

Tuesday, July 17

Session: Sensory Processing (Chair: Joseph Lizier)
09:00-09:45  Justin Gardner (Stanford University), "Optimality and heuristics for human perceptual inference"
09:45-10:30  Alexander Dimitrov (Washington State University Vancouver), "Modeling of perceptual invariances in biological sensory processing"
10:30-11:00  Break

Session: Coding and information structure (Chair: Tatyana Sharpee)
11:00-11:45  Eva Dyer (Georgia Tech), "Finding low-dimensional structure in large-scale neural recordings"
11:45-12:30  Braden Brinkman (Stony Brook University), "Signal-to-noise ratio competes with neural bandwidth to shape efficient coding strategies"
12:30-14:00  Lunch

Session: Constraints and design (Chair: Eva Dyer)
14:00-14:45  Tatyana Sharpee (Salk Institute for Biological Studies), "Information-theoretic constraints on cortical evolution"
14:45-15:30  Siwei Wang (Hebrew University of Jerusalem), "Closing the gap from structure to function with information theoretic design principles"
15:30-16:00  Break

Session: Contributions -- Dynamics and information I (Chair: Siwei Wang)
16:00-16:45  Rainer Engelken (Columbia University), "How input spike trains and recurrent dynamics shape the entropy of cortical circuits"
16:45-17:30  Ramón Martinez-Cancino (University of California San Diego), "Estimating transient phase-amplitude coupling in electrophysiological signals using local mutual information"

Wednesday, July 18

Session: Transfer entropy and connectivity (Chair: Taro Toyoizumi)
09:00-09:45  Nicholas M. Timme (Indiana University - Purdue University Indianapolis), "From neural cultures to rodent models of disease: examples of information theory analyses of effective connectivity, computation, and encoding"
09:45-10:30  Leonardo Novelli (The University of Sydney), "Validation and performance of effective network inference using multivariate transfer entropy with IDTxl"
10:30-11:00  Break

Session: Information decomposition (Chair: Nicholas Timme)
11:00-11:45  Jim Kay (University of Glasgow), "Partial Information Decompositions based on Dependency Constraints"
11:45-12:30  Joseph T. Lizier (The University of Sydney), "Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices"
12:30-14:00  Lunch

Session: Dynamics and information II (Chair: Justin Gardner)
14:00-14:45  Benjamin Cramer (University of Heidelberg), "Information theory reveals a diverse range of states induced by spike timing based learning in neural networks"
14:45-15:30  Taro Toyoizumi (RIKEN Brain Science Institute), "Emergence of Levy Walks from Second-Order Stochastic Optimization"
15:30-16:00  Break

Session: Transfer (Chair: Alexander Dimitrov)
16:00-16:30  Mireille Conrad (University of Geneva), "Mutual information vs. transfer entropy in spike-based neuroscience"
16:30-17:00  Demi Gao (The University of Sydney), "Information theoretic modeling framework for cochlear implant stimulation"
17:00-17:30  Artur Luczak (University of Lethbridge), "Neuronal packets as basic units of neuronal information processing"
17:30-17:45  Wrap-up and ECR Best Presentation award

Abstracts

Braden Brinkman - "Signal-to-noise ratio competes with neural bandwidth to shape efficient coding strategies"
Laughlin's celebrated histogram equalization result showed that, in the absence of noise, a neuron with a fixed number of responses should use every response with equal frequency, a prediction supported by experiments on blowfly large monopolar cells. When the stimulus is corrupted by noise, this prediction can change dramatically depending not only on the signal-to-noise ratio (SNR) but also on the number of distinct responses N available to the neuron. We analytically calculate the optimal coding strategies in limits where both SNR and N are large, finding very different solutions depending on the magnitude of SNR/N, reflecting a competition between the maximum amount of information the neuron can encode and the actual information available in the noisy signal. We apply our result to the stimulus data from Laughlin's original paper and find that our corresponding prediction for SNR/N ~ 1 is in excellent agreement with Laughlin's empirically measured blowfly response distribution.
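As a concrete reference point for the noiseless limit, the sketch below (my illustration, not the authors' calculation; the stimulus distribution and response count are hypothetical) shows the histogram-equalisation solution: response boundaries placed at the quantiles of the stimulus distribution so that all N response levels are used equally often.

```python
# Minimal sketch of Laughlin's noiseless histogram-equalisation argument:
# with N discrete responses and no noise, the optimal encoder puts response
# boundaries at the N-quantiles of the stimulus distribution, so every
# response level is used with equal frequency. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # hypothetical contrast distribution
N = 10                                                       # number of distinct responses

# Response boundaries at the stimulus quantiles -> equalised response usage
boundaries = np.quantile(stimuli, np.linspace(0, 1, N + 1)[1:-1])
responses = np.digitize(stimuli, boundaries)

usage = np.bincount(responses, minlength=N) / stimuli.size
print(usage)            # each response level used ~1/N of the time
print(np.log2(N))       # the corresponding (maximal) response entropy in bits
```

With noise, as the abstract explains, the optimum departs from this equalised solution in a way governed by SNR/N.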

Mireille Conrad - "Mutual information vs. transfer entropy in spike-based neuroscience"
Measuring the amount of information transferred between stimuli and neural responses is essential for investigating computation by neural systems. Information theory offers a range of tools to calculate information flow in neural networks. Choosing the appropriate method is particularly important in experimental contexts where technical limitations can complicate the use of information theory. In this talk, I will discuss the comparative advantages of two different metrics: mutual information and transfer entropy. I will compare their performance on biologically plausible spike trains, and discuss their accuracy depending on various parameters and on the amount of available data, a critical limiting factor in all practical applications of information theory to experimental electrophysiological data. I will first demonstrate these metrics' performance using synthetic random spike trains before moving on to more realistic spike-generating models. I will conclude by discussing how these metrics can be used to study brain function and performance.
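To make the comparison concrete, here is a hedged sketch (not the speaker's code) of how the two metrics are computed from joint histograms of binary spike trains; bias correction, which matters for real data, is omitted, and the spike trains are synthetic.

```python
# Plug-in estimates of mutual information I(S;R) and one-step transfer entropy
# TE(S -> R) for binary spike trains (hypothetical data, no bias correction).
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(x, y):
    joint = np.histogram2d(x, y, bins=[2, 2])[0]
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

def transfer_entropy(src, tgt):
    # TE(src -> tgt) = I(tgt_t ; src_{t-1} | tgt_{t-1}) with one-sample histories
    t_now, t_past, s_past = tgt[1:], tgt[:-1], src[:-1]
    def H(*vars_):
        keys = np.ravel_multi_index(np.vstack(vars_).astype(int), (2,) * len(vars_))
        return entropy(np.bincount(keys, minlength=2 ** len(vars_)).astype(float))
    return H(t_now, t_past) + H(t_past, s_past) - H(t_past) - H(t_now, t_past, s_past)

rng = np.random.default_rng(1)
stim = (rng.random(20_000) < 0.2).astype(int)          # hypothetical stimulus spike train
resp = np.roll(stim, 1) & (rng.random(20_000) < 0.8)   # response follows the stimulus with a one-step delay

# Zero-lag MI misses the delayed dependence that TE (with history) captures
print(mutual_information(stim, resp), transfer_entropy(stim, resp))
```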

Benjamin Cramer - "Information theory reveals a diverse range of states induced by spike timing based learning in neural networks"
We study the dynamics of spiking neural networks subject to synaptic plasticity driven by causality, emulated on accelerated, analog neuromorphic hardware. By tuning the coupling strength to the stochastic external input and the degree of recurrence, the action of synaptic plasticity tunes the network to different dynamical regimes. For highly recurrent networks, long-tailed avalanche distributions emerge, indicating critical-like dynamics. In addition, computational properties generally improve: the active information storage, the mutual information, and the transfer entropy all increase within the network. Moreover, partial information decomposition, which is used to quantify information modification, also increases for higher degrees of recurrence. The performance of the network in a reservoir computing task is tested using an auditory setup. By adjusting the coupling to the external input, network features can be selected and adjusted for a desired task.
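For readers unfamiliar with the avalanche diagnostic mentioned above, the following hedged sketch (my illustration, not the authors' analysis) measures avalanche-size statistics in a simple branching process; only near a branching ratio of 1.0 does the long-tailed, critical-like size distribution appear.

```python
# Avalanche-size distribution of a toy branching process (illustrative only).
# A branching ratio of 1.0 is critical; smaller values give short-tailed sizes.
import numpy as np

rng = np.random.default_rng(2)

def avalanche_sizes(branching_ratio, n_avalanches=10_000, max_size=10_000):
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 0
        while active > 0 and size < max_size:
            size += active
            # each active unit triggers a Poisson number of descendants
            active = rng.poisson(branching_ratio * active)
        sizes.append(size)
    return np.array(sizes)

for m in (0.8, 1.0):
    s = avalanche_sizes(m)
    # Complementary CDF at a few sizes: a slow, power-law-like decay over
    # decades (the long tail) appears only near the critical ratio m = 1.
    print(m, [(thr, round((s >= thr).mean(), 4)) for thr in (10, 100, 1000)])
```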

Alexander Dimitrov - "Modeling of perceptual invariances in biological sensory processing"
Much of the application of information theory in neuroscience has been concerned with quantifying and identifying the information that a sensory system transmits about an external stimulus. However, biological sensory systems do not represent external stimuli exactly. In fact, one could argue that the task of a sensory system is to selectively discard information. In this presentation, we explore particular pathways of information loss - those for identity-preserving stimulus transformations - which allow us to separately address questions on information about stimulus identity vs (independent) information about stimulus parameters (e.g. position, orientation, size in vision).
A problem faced by many perceptual systems is the natural variability of sensory stimuli associated with the same object. This is a common problem in sensory perception: interpreting varied optical signals as originating from the same object requires a large degree of tolerance. Understanding speech requires identifying phonemes, such as the consonant /g/, that constitute spoken words. A /g/ is perceived as a /g/ despite tremendous variability in acoustic structure that depends on the surrounding vowels and consonants. Similarly, in vision, a major goal of object recognition is to identify individual objects while remaining invariant to changes stemming from multiple stimulus transformations.
In an ongoing project, we are testing the hypothesis that broad perceptual invariance is achieved through specific combinations of what we term locally invariant elements. The main questions we address here are: 1. What are the characteristics of locally-invariant units in sensory pathways? 2. How are biological locally-invariant units combined to achieve broadly invariant percepts? 3. What are the effects of invariant signal processing on information-theoretic measures?

Eva Dyer - "Finding low-dimensional structure in large-scale neural recordings"
Improvements in neural recording technologies have rapidly increased the number of neurons that can be recorded simultaneously. Along with these improvements, analyses of neural information processing are moving from the single-neuron to the population level. One promising approach for understanding information processing across large populations of neurons is dimensionality reduction; such approaches aim to find low-dimensional structure in the joint activity of many neurons over time. In this talk, I will describe my lab's efforts to learn low-dimensional structure present in large-scale neural recordings, both from electrophysiology recordings in motor cortex and from two-photon calcium movies in primary visual cortex. Our findings suggest that dimensionality reduction techniques can be used to pull out structure from neural activity to solve a range of decoding and classification problems.
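As a generic illustration of the dimensionality-reduction step described above (a hedged sketch with synthetic data, not the lab's pipeline), PCA on a time-by-neurons activity matrix recovers a low-dimensional subspace capturing most of the population variance:

```python
# PCA on a (time x neurons) activity matrix built from a few hidden latent
# dynamics plus noise; the cumulative explained variance reveals the latent
# dimensionality. Data and dimensions are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_time, n_neurons, n_latent = 2000, 200, 5
latents = rng.standard_normal((n_time, n_latent))            # hidden low-dimensional dynamics
mixing = rng.standard_normal((n_latent, n_neurons))
activity = latents @ mixing + 0.5 * rng.standard_normal((n_time, n_neurons))

pca = PCA(n_components=20).fit(activity)
cumvar = np.cumsum(pca.explained_variance_ratio_)
print(np.searchsorted(cumvar, 0.9) + 1)    # components needed for 90% of the variance
low_dim = pca.transform(activity)[:, :n_latent]   # population trajectories in the low-dimensional space
```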

Rainer Engelken - "How input spike trains and recurrent dynamics shape the entropy of cortical circuits"
Information in the cortex is processed by a deeply layered system of recurrent neural circuits. How well streams of spikes from one circuit can control spiking dynamics in a subsequent circuit constrains its ability to encode and process information. In particular, noise entropy arising from sensitivity to initial conditions limits the amount of information conveyed about the stimulus. Directly measuring entropy in a high-dimensional system, however, is computationally intractable even in models. Ergodic theory has been proposed as a tractable approach to measuring the dynamical entropy rate of large recurrent spiking networks [Monteforte 2010, Lajoie 2013, 2014]. Earlier studies measured dynamical entropy rates with constant external input [Monteforte 2010] or white noise [Lajoie 2013, 2014]; however, how spiking input controls the recurrent dynamics and the entropy rate has not yet been analyzed. To address this challenge, we developed a novel algorithm for spiking networks driven by input streams of spike trains and calculated their full Lyapunov spectra, yielding the dynamical entropy rate and attractor dimensionality in numerically exact event-based simulations. Our new algorithm reduces the computational cost from N to log(N) operations per network spike for a fixed number of synapses per neuron and Lyapunov exponents. We demonstrate that streams of input spike trains suppress dynamical entropy in the dynamics of balanced circuits of neurons with an adjustable spike mechanism. For sufficiently strong input, we find a transition to complete network control, where the network state is independent of initial conditions. Fast spike onset of single neurons in the target network facilitates both control by external input and suppression of entropy. Our work opens a novel avenue to investigate the role of sensory streams of spike trains in shaping the entropy and dynamics of large neural networks. These results could also be useful to understand and optimize emerging optogenetic approaches to achieve network state control.
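For orientation, the ergodic-theory quantities referred to here can be read off a Lyapunov spectrum: the sum of positive exponents bounds the dynamical (Kolmogorov-Sinai) entropy rate, and the Kaplan-Yorke formula gives an attractor dimensionality. A hedged sketch with a made-up spectrum:

```python
# Entropy rate and Kaplan-Yorke dimension from a (sorted) Lyapunov spectrum.
# The spectrum below is hypothetical, purely for illustration.
import numpy as np

lyapunov = np.array([0.8, 0.3, 0.05, -0.1, -0.5, -1.2, -2.0])  # sorted descending

# Pesin/Ruelle: the KS entropy rate is bounded by the sum of positive exponents
entropy_rate = lyapunov[lyapunov > 0].sum()

# Kaplan-Yorke dimension: largest k with a non-negative partial sum, plus a fraction
cumsum = np.cumsum(lyapunov)
k = np.max(np.nonzero(cumsum >= 0)[0])
dimension = (k + 1) + cumsum[k] / abs(lyapunov[k + 1])

print(entropy_rate, dimension)
```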

Demi Gao - "Information theoretic modeling framework for cochlear implant stimulation"
Cochlear implants, also called bionic ears, are implanted neural prostheses that can restore lost human hearing function by direct electrical stimulation of auditory nerve fibers. The performance of cochlear implants is limited by the number of electrodes, as too few electrodes cannot represent fine spectral detail. However, simply increasing the number of electrodes does not improve hearing perception, because current spread results in stimulation of overlapping populations of auditory nerve fibers. How many electrodes achieve optimal performance therefore remains an open question in cochlear implant design. We propose an information-theoretic framework for numerically estimating the optimal number of electrodes in cochlear implants. This approach relies on a model of stochastic action potential generation and a discrete memoryless channel model of the interface between the array of electrodes and the auditory nerve fibers. Using these models, the stochastic information transfer from cochlear implant electrodes to auditory nerve fibers is estimated from the mutual information between channel inputs (the locations of electrodes) and channel outputs (the set of electrode-activated nerve fibers). The optimal number of electrodes then corresponds to the maximum mutual information. This modeling framework provides theoretical insights into several important clinically relevant problems that will inform future designs of cochlear implant electrode arrays and stimulation strategies.
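The channel picture can be illustrated with a deliberately simplified sketch (my assumptions, not the authors' model): treat the electrode index as the channel input and the activated fibre group as the output, let current spread broaden the conditional distribution as electrodes are packed closer, and compute the mutual information of the resulting discrete memoryless channel.

```python
# Mutual information of a toy electrode-to-fibre discrete memoryless channel.
# The Gaussian current-spread model and all numbers are illustrative assumptions.
import numpy as np

def channel_mutual_information(p_y_given_x, p_x):
    p_xy = p_x[:, None] * p_y_given_x
    p_y = p_xy.sum(0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / (p_x[:, None] * p_y[None, :]))
    return np.nansum(terms)

def implant_information(n_electrodes, n_fibre_groups=64, spread=6.0):
    centres = np.linspace(0, n_fibre_groups - 1, n_electrodes)
    fibres = np.arange(n_fibre_groups)
    # Gaussian current spread: nearby electrodes activate overlapping fibre groups
    p = np.exp(-0.5 * ((fibres[None, :] - centres[:, None]) / spread) ** 2)
    p /= p.sum(1, keepdims=True)
    return channel_mutual_information(p, np.full(n_electrodes, 1.0 / n_electrodes))

for n in (4, 8, 16, 22, 32, 64):
    print(n, round(implant_information(n), 3))   # MI grows with diminishing returns as fields overlap
```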

Justin Gardner - "Optimality and heuristics for human perceptual inference"
Optimality considerations have been a core driver of developments in sensory neuroscience. Information theoretic approaches highlight optimal ways in which sensory information can be encoded and transmitted through the nervous system. In signal detection theory, ideal observer models set the upper limits of performance on detection tasks. Statistical decision theory prescribes optimal solutions to sensory inference problems in which sensory and prior information are both uncertain. At the same time, human behavior has often been shown to take short-cuts to optimality in the form of heuristic behaviors which approximate optimal models using incomplete information and/or simpler computations. We have examined a human perceptual inference task in which subjects are asked to estimate the direction of motion of a random-dot array, where we varied stimulus uncertainty through the coherence of the random-dot motion and prior uncertainty through the distribution of directions subjects estimate within a block of trials. While summary statistics (means and standard deviations of estimates) obeyed the optimality goals of Bayesian inference, we found that this was achieved by a heuristic strategy in which subjects switched between prior and likelihood rather than multiplicatively integrating the two. Our data highlight the ability of human observers to use heuristic solutions to achieve nearly optimal behavior.
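The distinction between multiplicative integration and switching can be made concrete with a hedged toy model (mine, not the authors' experimental model): for a Gaussian prior and Gaussian likelihood, a Bayesian observer takes a reliability-weighted average, while a switching observer reports either the prior mean or the sensory measurement on each trial.

```python
# Bayesian (multiplicative) integration vs. a switching heuristic for Gaussian
# prior and likelihood over motion direction. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
prior_mean, prior_sd = 0.0, 20.0        # degrees; block-wise prior over directions
like_sd = 10.0                           # sensory noise set by motion coherence
true_dir = 15.0
n_trials = 100_000

measurement = true_dir + like_sd * rng.standard_normal(n_trials)

# Bayesian integration: reliability-weighted average of prior mean and measurement
w = (1 / like_sd**2) / (1 / like_sd**2 + 1 / prior_sd**2)
bayes_estimate = w * measurement + (1 - w) * prior_mean

# Switching heuristic: report the measurement with probability w, else the prior mean
use_measurement = rng.random(n_trials) < w
switch_estimate = np.where(use_measurement, measurement, prior_mean)

for name, est in [("Bayes", bayes_estimate), ("Switch", switch_estimate)]:
    print(name, round(est.mean(), 2), round(est.std(), 2))
# The two strategies produce the same mean estimate, but the switching observer's
# estimates form a mixture distribution; this is the kind of signature used to
# tease heuristic switching apart from full multiplicative integration.
```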

Jim Kay - "Partial Information Decompositions based on Dependency Constraints"
Since the seminal work of Williams and Beer [1], which introduced the Partial Information Decomposition (PID), several methods have been proposed for computing PIDs and thereby quantifying several distinct components of the information shared between predictor and target variables. A new method (Idep) was recently announced [2] which is based on a lattice of probability models defined in terms of dependency constraints; this method was originally applied with discrete variables. Here, application of the Idep method to Gaussian systems will be described. The resulting PIDs are available in closed form when the predictors and target are univariate Gaussian and also when they are multivariate Gaussian, thus making exact PIDs available for rapid computation. Previous work on Gaussian systems [3] produced a PID that is a minimum mutual information (MMI) PID. We compare the Idep PIDs to the MMI PIDs and show that, generally, the MMI method gives larger estimates of redundancy and synergy than does the Idep method, and we also identify conditions under which both methods produce the same PID. More recent work on the derivation of the Idep PID for mixed discrete-continuous systems will also be described. Here the target is multinomial and the predictors are multivariate Gaussian. For these systems the Idep PID can be computed using Monte Carlo approximation. The PIDs will be illustrated using real and simulated neuroscience data.
[1] Williams, P.L. and Beer, R.D., arXiv:1004.2515
[2] James, R.G., Emenheiser, J., Crutchfield, J.P., arXiv:1709.06653
[3] Barrett, A.B. Phys Rev E, doi:10.1103/PhysRevE.91.052802
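For reference, the minimum-mutual-information (MMI) PID of Barrett [3], used above as the comparison baseline, has a simple closed form for a univariate Gaussian target with two univariate Gaussian predictors. The sketch below (with a made-up covariance matrix) computes it; the Idep PID itself [2] is not reproduced here.

```python
# MMI PID for a Gaussian system with predictors (X1, X2) and target Y,
# following Barrett [3]: redundancy = min of the single-predictor MIs.
# The covariance matrix is a hypothetical example.
import numpy as np

def gaussian_mi(cov, x_idx, y_idx):
    """I(X;Y) in bits for jointly Gaussian variables given their covariance."""
    sub = lambda idx: cov[np.ix_(idx, idx)]
    return 0.5 * np.log2(np.linalg.det(sub(x_idx)) * np.linalg.det(sub(y_idx))
                         / np.linalg.det(sub(x_idx + y_idx)))

def mmi_pid(cov):
    """cov ordered as (X1, X2, Y); returns (redundancy, unique1, unique2, synergy)."""
    i1 = gaussian_mi(cov, [0], [2])
    i2 = gaussian_mi(cov, [1], [2])
    i12 = gaussian_mi(cov, [0, 1], [2])
    red = min(i1, i2)
    return red, i1 - red, i2 - red, i12 - i1 - i2 + red

cov = np.array([[1.0, 0.3, 0.6],
                [0.3, 1.0, 0.5],
                [0.6, 0.5, 1.0]])
print([round(v, 3) for v in mmi_pid(cov)])
```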

Joseph Lizier - "Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices"
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
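To fix ideas, a hedged illustration of the quantities named above (my toy example, not the paper's full decomposition): for a single realisation (s, t), the pointwise mutual information i(s;t) = h(s) - h(s|t) splits into the unsigned specificity h(s) = -log2 p(s) and ambiguity h(s|t) = -log2 p(s|t). The joint distribution below is a two-bit-copy-style example.

```python
# Specificity and ambiguity of a single realisation in a two-bit-copy system:
# the target t is an exact copy of the two source bits (s1, s2).
import numpy as np

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
p_joint = {(s1, s2, (s1, s2)): 0.25 for s1, s2 in states}   # four equiprobable outcomes

def pointwise_terms(s1, s2, t):
    p_s1 = sum(p for (a, _, _), p in p_joint.items() if a == s1)
    p_s1_t = (sum(p for (a, _, c), p in p_joint.items() if a == s1 and c == t)
              / sum(p for (_, _, c), p in p_joint.items() if c == t))
    specificity = -np.log2(p_s1)     # how surprising the source value is
    ambiguity = -np.log2(p_s1_t)     # how much of that surprise the target leaves unresolved
    return specificity, ambiguity, specificity - ambiguity   # last term is the pointwise MI i(s1;t)

print(pointwise_terms(0, 0, (0, 0)))   # 1 bit of specificity, no ambiguity, so i = 1 bit
```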

Artur Luczak - "Neuronal packets as basic units of neuronal information processing"
Neurons are active in a coordinated fashion; for example, an onset response to sensory stimuli usually evokes a 50-100 ms long burst of population activity. Recently it has been shown that such 'packets' of neuronal activity are composed of stereotypical sequential spiking patterns. The exact timing and number of spikes within packets convey information about the stimuli. Such structured packets also occur spontaneously, in the absence of external stimuli. Here we present evidence that packets are a good candidate for the basic building blocks, or 'words', of neuronal coding, can explain the mechanisms underlying multiple recent observations about neuronal coding such as multiplexing and LFP phase coding, and provide a possible connection between memory preplay and replay. This talk will summarize and expand on the paper Luczak et al. (Nature Rev. Neurosci., 2015).

Ramón Martinez-Cancino - "Estimating transient phase-amplitude coupling in electrophysiological signals using local mutual information"
Here we demonstrate the suitability of a local mutual information measure for estimating the temporal dynamics of cross-frequency coupling (CFC) in brain electrophysiological signals. In CFC, concurrent activity streams in different frequency ranges interact and transiently couple in some manner. A particular form of CFC, phase-amplitude coupling (PAC), has raised interest given the growing amount of evidence of its possible role in healthy and pathological brain information processing. Although several methods have been proposed for PAC estimation, only a few have addressed the estimation of the temporal evolution of PAC, and these typically require a large number of experimental trials to return a reliable estimate. Here we explore the use of mutual information to estimate a PAC measure (MIPAC) in both continuous and event-related multi-trial data. To validate these two applications of the proposed method, we first apply it to a set of simulated phase-amplitude modulated signals and show that MIPAC can successfully recover the temporal dynamics of the simulated coupling in either continuous or multi-trial data. Finally, to explore the use of MIPAC to analyze data from human event-related paradigms, we apply it to an actual event-related human electrocorticographic (ECoG) data set that exhibits strong and physiologically plausible PAC, demonstrating that the MIPAC estimator can be used to successfully characterize higher-order dynamics of electrophysiological data.
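A deliberately simplified stand-in for the MIPAC idea (not the authors' estimator, which uses more sophisticated local MI estimates): extract the phase of a low-frequency band and the amplitude envelope of a high-frequency band via the Hilbert transform, then evaluate the local (pointwise) mutual information of each (phase, amplitude) sample from a binned joint histogram. Filter bands and data below are made up.

```python
# Binned local mutual information between low-frequency phase and high-frequency
# amplitude, as a toy version of a PAC time course. Synthetic signal only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
theta = np.sin(2 * np.pi * 6 * t)
gamma_amp = 1 + 0.8 * np.sin(2 * np.pi * 6 * t)          # gamma amplitude locked to the theta rhythm
signal = theta + gamma_amp * np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8)))
amp = np.abs(hilbert(bandpass(signal, 50, 70)))

# Local (pointwise) MI for every sample from an 18x18 joint histogram
ph_bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 19)[1:-1])
am_bins = np.digitize(amp, np.quantile(amp, np.linspace(0, 1, 19)[1:-1]))
joint = np.zeros((18, 18))
np.add.at(joint, (ph_bins, am_bins), 1)
joint /= joint.sum()
p_ph, p_am = joint.sum(1), joint.sum(0)
local_mi = np.log2(joint[ph_bins, am_bins] / (p_ph[ph_bins] * p_am[am_bins]))
print(local_mi.mean())   # averages to the (binned) MI between phase and amplitude
```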

Leonardo Novelli - "Validation and performance of effective network inference using multivariate transfer entropy with IDTxl"
IDTxl is a new open source toolbox for effective network inference from multivariate time series using information theory, available on GitHub. The primary application area for IDTxl is the analysis of brain imaging data (import tools for common neuroscience formats, e.g. FieldTrip, are included); however, the toolkit is generic and can analyse multivariate time-series data from any discipline and complex system. For each target node in a network, IDTxl employs a greedy iterative algorithm to find the set of parent nodes and delays which maximise the multivariate transfer entropy. Rigorous statistical controls (based on comparison to null distributions from time-series surrogates) are used to gate parent selection and to provide automatic stopping conditions for the inference. We validated the IDTxl Python toolkit on different effective network inference tasks, using synthetic datasets where the underlying connectivity and the dynamics are known. We tested random networks of increasing size (10 to 100 nodes) and an increasing number of time-series observations (100 to 10000 samples). We evaluated the effective network inference against the underlying structural networks in terms of precision, recall, and specificity in the classification of links. In the absence of hidden nodes, we expected the effective network to reflect the structural network. Given the generality of the toolkit, we chose two dynamical models of broad applicability: a vector autoregressive (VAR) process and a coupled logistic maps (CLM) process; both are widely used in computational neuroscience, macroeconomics, population dynamics, and chaotic systems research. We used a linear Gaussian estimator (i.e. Granger causality) for transfer entropy measurements in the VAR process and a nonlinear model-free estimator (Kraskov-Stoegbauer-Grassberger) for the CLM process. Our results showed that, for both types of dynamics, the performance of the inference increased with the number of samples and decreased with the size of the network, as expected. For a smaller number of samples, the recall was the most affected performance measure, while the precision and specificity were always close to maximal. For our choice of parameters, 10000 samples were enough to achieve nearly perfect network inference (>95% according to all performance measures) in both the VAR and CLM processes, regardless of the size of the network. Decreasing the threshold for statistical significance in accepting a link led to higher precision and lower recall, as expected. Since we imposed a single coupling delay between each pair of processes (chosen at random between 1 and 5 discrete time steps), we further validated the performance of the algorithm in identifying the correct delays. Once again, 10000 samples were enough to achieve nearly optimal performance, regardless of the size of the network. We emphasise the significant improvement in network size and number of samples analysed in this study, with 100 nodes / 10000 samples being an order of magnitude larger than what has been previously demonstrated, bringing larger neural experiments into scope. Nonetheless, analysing large networks with 10000 samples using the model-free estimator is computationally demanding; therefore, we exploited the compatibility of IDTxl with parallel and GPU computing on high-performance clusters.
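A minimal usage sketch of the IDTxl workflow described above, adapted from the toolkit's documented examples (the toy data, coupling, and permutation counts are my choices; consult the IDTxl GitHub wiki for the current API and required dependencies such as Java for the JIDT-based estimators):

```python
# Multivariate transfer entropy network inference with IDTxl on toy data.
import numpy as np
from idtxl.data import Data
from idtxl.multivariate_te import MultivariateTE

# Toy multivariate time series: 5 processes, 1000 samples (dim_order: process, sample)
rng = np.random.default_rng(6)
ts = rng.standard_normal((5, 1000))
ts[1, 1:] += 0.5 * ts[0, :-1]              # impose one known coupling with delay 1
data = Data(ts, dim_order='ps')

settings = {
    'cmi_estimator': 'JidtGaussianCMI',    # linear Gaussian estimator (Granger-causality-like)
    'min_lag_sources': 1,
    'max_lag_sources': 5,
    'n_perm_max_stat': 200,                # surrogate counts for the statistical controls
    'n_perm_min_stat': 200,
    'n_perm_omnibus': 200,
}
results = MultivariateTE().analyse_network(settings=settings, data=data)
print(results.get_adjacency_matrix(weights='binary', fdr=False))
```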

Tatyana Sharpee - "Information-theoretic constraints on cortical evolution"
TBA

Nicholas Timme - "From neural cultures to rodent models of disease: examples of information theory analyses of effective connectivity, computation, and encoding"
Given the size and complexity of the data sets generated in modern neuroscience, it is imperative to utilize analysis tools that are capable of detecting and quantifying interactions in neural signals recorded in a wide range of scenarios. Information theory has proven to be just such a tool. In this presentation, I will discuss the relatively straightforward methods we have used in analyzing spontaneous and evoked neural data, as well as methods for analyzing ensembles of information sources. Next, I will discuss the results of applying these information theory analysis techniques to several neural systems. First, I will describe an analysis of timescale dependent effective connectivity in organotypic cultures using transfer entropy. These studies indicated that highly connected neurons (so called "hubs") were localized to certain timescales. Using the networks derived in that study, we also examined neural computation (quantified using synergy) and found that neurons that sent out many connections tended to contribute to larger amounts of computation. Second, I will present an analysis of auditory stimulus encoding using mutual information in a rodent model of schizophrenia. We found that encoding in depth EEG recordings was reduced in the rodent model of schizophrenia compared to a control strain. Finally, I will discuss an analysis of the encoding of signals related to the decision to consume alcohol in a rodent model of alcoholism (alcohol preferring "P rats"). We found that individual neurons in medial prefrontal cortex (a brain region heavily involved in decision-making) in P rats showed decreased alcohol cue and drinking intent encoding compared to a control strain using mutual information. Given the importance of the mPFC in decision-making, these results provide evidence that the neural processes underlying decision-making are fundamentally altered in this rodent model of alcoholism. Taken together, these example analyses demonstrate the value of information theory analyses to elucidate important phenomena in a wide variety of neural systems.

Taro Toyoizumi - "Emergence of Levy Walks from Second-Order Stochastic Optimization"
In natural foraging, many organisms seem to perform two different types of motile search: directed search (taxis) and random search. The former is observed when the environment provides cues to guide motion towards a target. The latter involves no apparent memory or information processing and can be mathematically modeled by random walks. We show that both types of search can be generated by a common mechanism in which Levy flights or Levy walks emerge from a second-order gradient-based search with noisy observations. No explicit switching mechanism is required -- instead, continuous transitions between the directed and random motions emerge depending on the Hessian matrix of the cost function. For a wide range of scenarios, the Levy tail index is a=1, consistent with previous observations in foraging organisms. These results suggest that adopting a second-order optimization method can be a useful strategy to combine efficient features of directed and random search.
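A hedged numerical illustration of this mechanism (my one-dimensional toy, not the authors' model): a Newton-like update moves by (noisy gradient) / (noisy curvature). In a flat, uninformative region both observations are noise-dominated, and the ratio of two zero-mean Gaussians is Cauchy distributed, i.e. a heavy tail with index a = 1, giving Levy-like step lengths without any explicit switching.

```python
# Step-length statistics of a noisy second-order (Newton) update in 1-D.
# All cost-landscape parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

def noisy_newton_steps(slope, curvature, obs_noise, n=200_000):
    grad = slope + obs_noise * rng.standard_normal(n)        # noisy gradient observation
    hess = curvature + obs_noise * rng.standard_normal(n)    # noisy curvature observation
    return -grad / hess                                      # second-order step

# Informative gradient and curvature -> tight, directed steps (taxis-like)
directed = noisy_newton_steps(slope=2.0, curvature=5.0, obs_noise=0.2)
# Flat region, noise-dominated -> heavy-tailed, Cauchy-like steps (random search)
random_search = noisy_newton_steps(slope=0.0, curvature=0.0, obs_noise=0.2)

for name, steps in [("directed", directed), ("random", random_search)]:
    # Tail index from the survival function P(|step| > x) ~ x^(-a) at large x
    q1, q2 = np.quantile(np.abs(steps), [0.99, 0.999])
    a = np.log(0.01 / 0.001) / np.log(q2 / q1)
    print(name, "tail index ~", round(a, 2))   # ~1 only in the noise-dominated regime
```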

Siwei Wang - "Closing the gap from structure to function with information theoretic design principles"
Information theory has proven to be an important tool for understanding the coding content and capacity of both single-cell and population activity in neuronal networks. However, it remains relatively unexplored how efficient information transmission can be linked to network wiring, thus closing the gap from structure to function [Fairhall, Shea-Brown, Barreiro 2012]. Here we focus on a particular design principle, the efficient coding of predictive information about a sensory stimulus, to show that such a general design principle has profound ramifications for the wiring of neuronal circuits. Prediction is essential for life; to interact meaningfully with a changing world, an organism must overcome the sensory and motor delays that plague any network containing chemical synapses. Efficient prediction can begin as early as the retina [Palmer et al. 2015], and reading out these predictive bits is feasible via a simple, biologically plausible learning rule [Sederberg et al. 2018]. Nevertheless, the circuit features that underlie efficient prediction in the retina and cortex are yet to be fully established. Can we identify mechanisms that are vital for the success of prediction? Are those mechanisms universal across different organisms? Furthermore, which circuit features are implemented by the brain when efficient prediction is achieved? In this talk, I will show results from our initial effort to probe how electrotonic coupling in neuronal networks, i.e., the segregation between synaptic input and lateral electrical connections in a given neuronal network, can influence the capacity for prediction in the network. By contrasting how segregation works in two drastically different systems, i.e., the fly visual system and the reconstructed blue-brain neocortex column, we argue that prediction is a general design principle that can be used to shed light on a wide spectrum of features, ranging from the morphology of a single cell to the emergence of functional motifs in networks of pyramidal neurons.
[Fairhall, Shea-Brown, Barreiro 2012] Fairhall A, Shea-Brown E, Barreiro A. Curr Opin Neurobiol. 2012 Aug;22(4):653-9.
[Palmer et al. 2015] Palmer SE, Marre O, Berry MJ, Bialek W. Proc Natl Acad Sci USA. 2015 Jun.
[Sederberg et al. 2018] Sederberg AJ, MacLean JN, Palmer SE. Proc Natl Acad Sci USA. 2018 Jan.
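As a hedged side note (my illustration, not from the talk): the predictive information being coded efficiently is the mutual information between a signal's past and its future. For a stationary Gaussian AR(1) stimulus this has a simple closed form, which a short simulation reproduces.

```python
# Predictive information of a Gaussian AR(1) process x_{t+1} = rho*x_t + noise.
# By the Markov property, I(past; future) = I(x_t; x_{t+1}) = -0.5*log2(1 - rho^2) bits.
# The process parameters are hypothetical.
import numpy as np

rho, n = 0.9, 200_000
rng = np.random.default_rng(8)
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

r = np.corrcoef(x[:-1], x[1:])[0, 1]
print(-0.5 * np.log2(1 - r**2))        # empirical estimate
print(-0.5 * np.log2(1 - rho**2))      # closed form, ~1.2 bits for rho = 0.9
```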

Photos

MDPI Entropy Best ECR Presentation winner Siwei Wang
MDPI Entropy Best ECR Presentation winner Rainer Engelken
Justin Gardner delivers opening talk

Previous workshops

This workshop has been run at CNS for over a decade now -- links to the websites for the previous workshops in this series are below:

  1. CNS*2017 Workshop, July 19-20, 2017, Antwerp, Belgium.
  2. CNS*2016 Workshop, July 6-7, 2016, Jeju, South Korea.
  3. CNS*2015 Workshop, July 22-23, 2015, Prague, Czech Republic.
  4. CNS*2014 Workshop, July 30-31, 2014, Québec City, Canada.
  5. CNS*2013 Workshop, July 17-18, 2013, Paris, France.
  6. CNS*2012 Workshop, July 25-26, 2012, Atlanta/Decatur, GA, USA.
  7. CNS*2011 Workshop, July 27-28, 2011, Stockholm, Sweden.
  8. CNS*2010 Workshop, July 29-30, 2010, San Antonio, TX, USA.
  9. CNS*2009 Workshop, July 22-23, 2009, Berlin, Germany.
  10. CNS*2008 Workshop, July 23-24, 2008, Portland, OR, USA.
  11. CNS*2007 Workshop, July 11-12, 2007, Toronto, Canada.
  12. CNS*2006 Workshop, June 19-20, 2006, Edinburgh, U.K.