27-28 February, 2013
This workshop examines the intersection between the study of computation in biological cognition and the design of artificial cognitive systems, from the perspective of information processing in complex systems.
Computational neuroscience has produced statistically robust tools for analysing brain imaging data, revealing much about how different brain regions interact to create outcomes. A topical area is investigating the mechanisms that give rise to complex information processing, in terms of how information is stored and transferred across brain networks. It is well understood that biological cognition differs vastly from the von Neumann computing paradigm, involving an enormous number of simple, distributed units. From this perspective, there is much scope for complex systems science to provide insights, including in areas such as measures of information dynamics, network structure and inference, and synchronization.
From another perspective, traditional computation faces the challenge of matching the performance of biological computation. This challenge must be met in order to deliver next-generation leaps in performance and to handle future problems. Several approaches are being pursued, with much hype around "Big Data" and the large-scale Blue Brain project. Again, however, there is much scope for complex systems science to further the field, e.g. in biologically inspired hardware and software, combinations of distributed computing and data processing, and principled approaches to guiding the emergence of intelligence.
This workshop seeks to bring together active researchers from these communities to consider these issues and to discuss current research in the area, future directions, and challenges.
Please register your attendance via this form.
The workshop will be held in the Auditorium of:
CSIRO Life Sciences Centre, which is part of Riverside Corporate Park. Building 53 is on the corner of Julius Ave West and Delhi Rd (follow the signs to Reception for this building; the entrance faces Delhi Rd). Attendees will need to sign in at CSIRO reception.
The site is easily accessible on foot from North Ryde train station. Alternatively, limited parking is available on site: either in front of Building 53 (take the first driveway on the left after entering Julius Ave West from Delhi Rd), or follow Julius Ave West through a roundabout and take the second driveway on the left to enter another CSIRO area.
The following are some hotels close to North Ryde (a train or bus journey from the conference location):
Our program is shown below, with keynote talks lasting one hour and all other talks lasting 30 minutes. The finish time on Thursday is 16:45.
You can also download a PDF copy of the program.
Wednesday, Feb. 27:
Keynote: Michael Wibral (Goethe University, Frankfurt, Germany) - "Information theory in the wild"
Michael Harré (The University of Sydney) - "Non-stationary monkeys: Transfer entropy as a behavioral measure in economic games"
Tjeerd Boonstra (The University of New South Wales) - "Heteroclinic cycles as a model for brain activity underlying movement sequences"
Cliff Kerr (SUNY Downstate Medical Center, NY, USA) - "Multiscale modeling of cortical information flow in Parkinson's disease"
12:45 - Lunch (self-organised)
Keynote: Frieder Stolzenburg (Harz University of Applied Sciences, Wernigerode, Germany) - "Neural Learning with Applications in Object Recognition and Harmony Perception"
The University of Queensland - "Computing with spikes"
Saeed Afshar (The University of Western Sydney) - "Neuromorphic Architectures for View-Invariant Object Recognition"
16:15 - Break
Centre National de la Recherche Scientifique (CNRS), France - "Links between Granger causality and directed information theory"
19:30 - Workshop dinner - The Ranch, North Ryde (map)

Thursday, Feb. 28:
Keynote: Thomas Nowotny (University of Sussex, UK) - "How the fine spatio-temporal structure of the odour plume may help bees to recognize odor objects"
Mark McDonnell (The University of South Australia) - "Stochastic pooling networks embedded in cortical networks of excitatory and inhibitory neurons"
Somwrita Sarkar (The University of Sydney) - "Eigenvalue spectra for hierarchically modular neural networks"
Stewart Heitmann (The University of New South Wales) - "Transforming cortical wave patterns into motor movement"
12:45 - Lunch (self-organised)
Keynote: Ivan Tanev (Doshisha University, Japan) - "Genetic Transposition in Incremental Genetic Programming"
Kevin Brooks - "Contrast and stimulus complexity moderate the relationship between spatial frequency and perceived speed: Implications for MT models of speed perception"
Astrid Zeman - "The Müller-Lyer Illusion in a Deep Neural Network"
16:15 - Joseph Lizier (CSIRO ICT Centre) - "Multivariate construction of effective computational networks from observational data"
Keynote: Michael Wibral - "Information theory in the wild"
Information theoretic quantities measure key elements of distributed computation in neural systems, such as the storage and transfer of information, and thereby help us to better understand the computational algorithm implemented by the network under investigation.
Information theoretic approaches have recently raised great interest in neuroscience, especially because they do not require a model of the neural system. This is important, as our current knowledge about neural systems is often still too limited to rely on modeling alone. Examples of model-free, information theoretic analyses of information transfer and storage in real-world neural data from magnetoencephalography (MEG) and invasive local field potential recordings will form the first part of the keynote, demonstrating their applicability to experimental data. In contrast, the second part will try to answer the question of what role information theoretic methods will play when, one day, our knowledge suffices for detailed modeling of large neural systems like the human brain. Here, I will turn to David Marr's classic tri-level hypothesis to explain how duplicating the dynamics of a neural system via detailed modeling amounts to the possibility of perfect measurements at the level of the system's biophysical implementation, but does not entail an understanding of the information processing algorithms implemented in the system's dynamics. The missing link between the dynamics simulated at the biophysical level and the computational algorithms implemented by these dynamics can be provided by information theoretic methods. This will make them an indispensable tool for the investigation of large-scale, detailed neural models, and the fact that such models can generate large numbers of samples will further improve the reliability and usefulness of these methods.
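As a toy illustration of the model-free measures discussed in this keynote, the following sketch estimates transfer entropy between two binary time series with a simple plug-in (histogram) estimator and history length 1. The data and estimator choices here are illustrative assumptions, not taken from the talk; analyses of real neural recordings use far more careful estimators.

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(src, dst):
    """Plug-in estimate of transfer entropy TE(src -> dst) in bits,
    using history length 1 for both source and destination."""
    n = len(dst) - 1
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))  # (y_next, y_past, x_past)
    pairs_yx = Counter(zip(dst[:-1], src[:-1]))          # (y_past, x_past)
    pairs_yy = Counter(zip(dst[1:], dst[:-1]))           # (y_next, y_past)
    singles = Counter(dst[:-1])                          # y_past
    te = 0.0
    for (y1, y0, x0), count in triples.items():
        p_joint = count / n
        p_full = count / pairs_yx[(y0, x0)]              # p(y_next | y_past, x_past)
        p_self = pairs_yy[(y1, y0)] / singles[y0]        # p(y_next | y_past)
        te += p_joint * log2(p_full / p_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]          # y copies x with a one-step lag
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(round(te_xy, 2), round(te_yx, 2))
```

Since Y simply copies X with a one-step lag, Y's next value is fully predictable from X's past but not from Y's own past: the estimate approaches 1 bit in the forward direction and roughly 0 bits in reverse.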
Michael Harré - "Non-stationary monkeys: Transfer entropy as a behavioral measure in economic games"
In this talk I'll present an analysis of a recent behavioral study, carried out at Yale Medical School, in which three Macaque monkeys played economic games. One of the underlying principles of economic decision theory is that choices are based on the optimal, or approximately optimal, integration of information, which is then 'mapped' to behaviours. I will demonstrate that Macaque monkeys continuously use their behavioural strategies as a tool to probe their environment, looking for strategies that give them an advantage. As a result, their behaviours are non-stationary and often not even approximately optimal, implying that they cognitively integrate and then map information to choices in many different ways that can vary significantly between individuals. I'll discuss these results in terms of adaptive behaviour in stationary and non-stationary environments, and the insights that are possible regarding neural signals when considered with and without behavioural data.
Tjeerd Boonstra - "Heteroclinic cycles as a model for brain activity underlying movement sequences"
Rhythmic bimanual tapping is a well-established paradigm to study movement coordination, showing close agreement between theory and experiment. Patterns of movement coordination have been studied using coupled oscillator models operating at the movement frequency. Corresponding electrophysiological studies have shown modulations of cortical beta activity (15-30Hz) nested within the slower movement cycles. Here we consider a form of winnerless competition known as heteroclinic cycles as a potential model for temporal patterning in movement-related brain activity. We study a coupled phase oscillators model in a region of parameter space where heteroclinic cycles between cluster states are robustly observed. We investigate the role of the model parameters on the timing of switching behaviour. To model bimanual tapping we propose a new coupling function to temporally coordinate two ensembles of coupled oscillators. The model shows nested oscillations within an ensemble and n:m frequency coupling between ensembles. These computational results are in correspondence with the experimental data from polyrhythmic tapping and may be relevant for the timing of movement sequences in general.
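For readers unfamiliar with the class of models mentioned in this abstract, the sketch below simulates a plain globally coupled Kuramoto ensemble of phase oscillators and shows it synchronising. This is only a minimal illustration of the model family: the coupling function and cluster-state dynamics that produce heteroclinic cycles in the talk are more elaborate and are not reproduced here, and all parameters are illustrative assumptions.

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt):
    """One Euler step of the globally coupled Kuramoto model."""
    n = len(phases)
    return [(th + dt * (w + (K / n) * sum(math.sin(tj - th) for tj in phases)))
            % (2 * math.pi)
            for th, w in zip(phases, omegas)]

def order_parameter(phases):
    """Magnitude of the mean phase vector: 1 = full synchrony, ~0 = incoherence."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

random.seed(1)
n = 50
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omegas = [random.gauss(1.0, 0.05) for _ in range(n)]  # similar natural frequencies
r_start = order_parameter(phases)
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, K=1.0, dt=0.05)
r_end = order_parameter(phases)
print(round(r_start, 2), round(r_end, 2))
```

With coupling well above the critical value for this narrow frequency spread, the ensemble locks and the order parameter rises from near zero towards one.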
Cliff Kerr - "Multiscale modeling of cortical information flow in Parkinson's disease"
The basal ganglia play a crucial role in the execution of movements, as demonstrated by the severe motor deficits that accompany the neuronal degeneration underlying Parkinson's disease (PD). Since motor commands originate from the cortex, an important functional question is how the basal ganglia influence cortical information flow, and how this influence becomes pathological in PD. To address this issue, we developed a composite spiking neuronal network/neural field model. Both network and field models have been separately validated in previous work. Spikes generated by the field model were then used to drive the network model. We then explored the effects that these drives had on the information flow and dynamics of the network. Compared to the network driven by the healthy field model, the PD-driven network had lower firing rates and increased power at low frequencies, consistent with clinical PET and EEG findings. The PD-driven network also showed significant reductions in Granger causality from the main "input" layer of the cortex (layer 4) to the main "output" layer (layer 5). This represents a possible explanation for some of the characteristics of parkinsonism, such as bradykinesia. These results demonstrate that the brain's large-scale oscillatory environment, represented here by the field model, strongly influences the information processing that occurs within its subnetworks.
Keynote: Frieder Stolzenburg - "Neural Learning with Applications in Object Recognition and Harmony Perception"
The fields of neural computation and artificial neural networks have developed greatly in recent decades. Since technical, physical, and cognitive processes all evolve in time, we should consider neural networks that allow us to model the synthesis and analysis of continuous, and possibly periodic, processes in time, in addition to computing discrete classification functions. This work in progress is motivated by two application scenarios: programming the behavior of autonomous robots, and musical harmony perception. The talk begins by introducing the first scenario, namely object recognition with multicopters. Here, the image recognition procedure employs methods from machine learning (clustering and decision trees) and computer vision (image segmentation and contour signatures). After that, the topic of musical harmony perception is introduced, taking into account recent results from psychophysics and neuroacoustics, in particular that the periodicities of complex chords can be detected in the human brain. The last part of the talk introduces a continuous-time neural network architecture (without recurrence) that is suitable for modeling both scenarios.
Saeed Afshar - "Neuromorphic Architectures for View-Invariant Object Recognition"
For over 50 years, computer scientists, computational neuroscientists, psychologists and, more recently, neuromorphic engineers have been attempting to model, replicate and understand view-invariant vision. In this presentation we describe a neuromorphic model of view-invariant vision that was developed with the computational constraints of simple vertebrates in mind, rather than the more complex mammalian models. The result is a real-time, view-invariant object detection and recognition model that can be implemented in simple analogue or digital hardware. The applications of this neuromorphic model are diverse, ranging from simple interacting robots to complex UAVs and missile-interception technology. The model also emphasizes the utility of the neuromorphic approach: focusing on functionality while minimizing hardware resources. Its performance indicates that it may be useful in understanding more complex visual recognition systems in mammals, and particularly in humans.
Joseph Lizier - "Multivariate construction of effective computational networks from observational data"
We introduce a new method for inferring effective network structure given a multivariate time-series of activation levels of the nodes in a network. For each destination node in the network, the method identifies the set of source nodes which can be used to provide the most statistically significant information regarding outcomes of the destination, and are thus inferred as those source information nodes from which the destination is computed. This is done using incrementally conditioned transfer entropy measurements, gradually building the set of source nodes for a destination conditioned on the previously identified sources. Our method is model-free and non-linear, but more importantly it handles multivariate interactions between sources in creating outcomes at destinations (synergies), rejects spurious connections for correlated sources (redundancies), and incorporates measures to avoid combinatorial explosions in the number of source combinations evaluated. We apply the method to autoregressive dynamics as well as probabilistic Boolean networks, demonstrating the utility of the method in revealing significant proportions of the underlying structural network given only short time-series of the network dynamics, particularly in comparison to other methods.
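The role of multivariate conditioning can be illustrated with a toy synergy example. This is a hedged sketch of the underlying idea, not the paper's algorithm or estimators: a destination computed as the XOR of two sources carries essentially no pairwise information about either source alone, but once one source is in the conditioning set, the other is fully revealed, which is why incrementally conditioned measures can recover such synergistic connections where pairwise measures fail.

```python
from collections import Counter
from math import log2
import random

def entropy(symbols):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z), plug-in estimate in bits."""
    return (entropy(list(zip(x, z))) + entropy(list(zip(y, z)))
            - entropy(list(zip(x, y, z))) - entropy(list(z)))

random.seed(2)
n = 5000
s1 = [random.randint(0, 1) for _ in range(n)]
s2 = [random.randint(0, 1) for _ in range(n)]
dest = [a ^ b for a, b in zip(s1, s2)]  # destination is the XOR of two sources

# Pairwise, each source alone looks uninformative about the destination...
mi_alone = cond_mutual_info(s1, dest, [0] * n)
# ...but conditioned on the already selected source it carries a full bit.
mi_given_s2 = cond_mutual_info(s1, dest, s2)
print(round(mi_alone, 2), round(mi_given_s2, 2))
```

The first estimate is close to 0 bits, the second close to 1 bit, mirroring the abstract's point about synergies between sources.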
Keynote: Thomas Nowotny - "How the fine spatio-temporal structure of the odour plume may help bees to recognize odor objects"
In this talk I will present our recent models of odour-background segregation in the honeybee antennal lobe. The basis of this work is a set of recent behavioural experiments demonstrating that honeybees can distinguish a mixture of odours in which one component is delayed by only a few milliseconds ("asynchronous mixture") from the same mixture in which both components arrive in synchrony ("synchronous mixture"). To explain this ability, we hypothesised that a winner-take-all inhibitory network of local neurons in the antennal lobe has a symmetry-breaking effect that allows it to generate lasting differences in the response patterns of projection neurons when the mixture is asynchronous. I will present data from a detailed data-driven model of the bee antennal lobe that reproduces a large set of experimentally observed odour responses and demonstrates that our hypothesis is consistent with current knowledge of the olfactory circuits in the bee brain. This work introduces a new aspect to how animals may use the information available to them to make sense of the complex odorant scenes they experience every day.
Mark McDonnell - "Stochastic pooling networks embedded in cortical networks of excitatory and inhibitory neurons"
Stochastic Pooling Networks (SPNs) are a useful model for understanding and explaining how nonlinear lossy compression, random noise and redundancy can interact in surprising ways to enable quantized encoding of signals. SPNs occur in systems ranging from macroscopic social networks to neuron populations and nanoscale electronics, and support various unexpected emergent features, such as suprathreshold stochastic resonance, which is an effect where there exists an optimal amount of system noise for minimising encoding distortion. Previous work on suprathreshold stochastic resonance in populations of neurons has assumed very regular feedforward network topologies, and these networks are clearly identifiable as SPNs. Here I demonstrate that SPNs can be observed embedded within more complex neuronal networks with recurrent feedback synapses, such as models of cortical networks.
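A minimal sketch of the suprathreshold stochastic resonance effect described in this abstract, under illustrative assumptions of our own (identical zero thresholds, Gaussian signal and noise, and linear correlation as a crude stand-in for the information-theoretic fidelity measures used in the literature):

```python
import math
import random

def spn_output(signal, n_units, noise_sd):
    """Pooled output of a stochastic pooling network: n_units binary threshold
    units (threshold 0) each see the signal plus independent Gaussian noise,
    and their binary outputs are summed."""
    return [sum(1 for _ in range(n_units) if x + random.gauss(0, noise_sd) > 0)
            for x in signal]

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / math.sqrt(va * vb)

random.seed(3)
signal = [random.gauss(0, 1) for _ in range(4000)]
# With no noise all 31 units fire identically, giving a crude 1-bit code;
# with moderate independent noise the pooled sum tracks the signal more closely.
r_quiet = correlation(signal, spn_output(signal, 31, 0.0))
r_noisy = correlation(signal, spn_output(signal, 31, 1.0))
print(round(r_quiet, 2), round(r_noisy, 2))
```

Counterintuitively, the noisy population encodes the signal with higher fidelity than the noiseless one, since independent noise decorrelates the units and turns the pooled sum into a graded response.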
Somwrita Sarkar - "Eigenvalue spectra for hierarchically modular neural networks"
The dynamics of neural networks are strongly influenced by synaptic connectivity, which can be characterized by the eigenvalues of the connectivity matrix. Previous research has focused on the spectral properties of random connectivity matrices. In this seminar, we show how to derive spectra for modular and hierarchically modular networks. Questions will then be thrown open to discuss the implications for dynamics on hierarchical neural networks.
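As a generic numerical illustration of how modular structure shows up in a connectivity spectrum (a hedged sketch with assumed parameters, not the analytical derivation presented in the talk):

```python
import numpy as np

def modular_adjacency(n_modules, module_size, p_in, p_out, rng):
    """Symmetric 0/1 adjacency matrix with dense modules and sparse cross-links."""
    n = n_modules * module_size
    module = np.repeat(np.arange(n_modules), module_size)
    prob = np.where(module[:, None] == module[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    return (upper | upper.T).astype(float)

rng = np.random.default_rng(4)
A = modular_adjacency(n_modules=3, module_size=40, p_in=0.5, p_out=0.05, rng=rng)
eigenvalues = np.sort(np.linalg.eigvalsh(A))[::-1]
# One Perron eigenvalue plus (n_modules - 1) "community" eigenvalues detach
# from the random-matrix bulk, so the spectrum reflects the modular structure.
detached = int(np.sum(eigenvalues > 10))
print(detached)
```

For these parameters the expected detached eigenvalues sit near 24 and 18, well clear of the bulk (radius roughly 7 here), so counting eigenvalues above the gap recovers the number of modules.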
Stewart Heitmann - "Transforming cortical wave patterns into motor movement"
Travelling waves of neuronal oscillations have been observed in many cortical regions, including the motor cortex. Such waves are often modulated in a task-dependent fashion although their precise functional role remains unknown. We propose that the morphology of such waves may be exploited by the brain to encode information as spatiotemporal oscillation patterns. These wave patterns may then be decoded by neurons with dendrites that project into the cortex in a spatially tuned manner. We use numerical models to explore this proposal in the context of the descending human motor system where the axons of large layer 5 pyramidal neurons (Betz cells) descend the spinal cord to monosynaptically innervate the motor neurons. Motor cortex is simulated by a two-dimensional field of phase-coupled oscillators that is capable of generating self-organised waves with a given spatial wavelength and orientation. We then investigate how the topology of the pyramidal cell receptor field can tune the cell's responses to specific wave patterns, even when those patterns are highly degraded. Furthermore, by transforming the output of the motor neurons into muscle unit action potentials, we demonstrate that wave patterns in cortex can effectively evoke specific movements in a simulated biomechanical limb. The resulting model replicates key findings of the descending motor system during simple motor tasks, including variable interspike intervals and weak corticospinal coherence. These findings provide an integrated neuronal account of encoding and decoding motor commands that also complements active research on the problem of 'reading out' oscillatory neuronal activity.
Keynote: Ivan Tanev - "Genetic Transposition in Incremental Genetic Programming"
We will present a study of the cumulative effect of bloat control and of seeding the initial population in Genetic Programming, inspired by genetic transposition (GT), on the efficiency of incremental evolution of a simulated snake-like robot (Snakebot). In the proposed incremental implementation of genetic programming (IGP), the task of coevolving the locomotion gaits and sensing of the bot in a challenging environment is decomposed into two sub-tasks, implemented as two consecutive evolutionary stages. First, we use GP with three approaches to bloat management - (i) linear parametric parsimony pressure, (ii) lexicographic parsimony pressure, and (iii) no bloat control - to evolve three pools of well-moving, sensorless Snakebots. During the second stage of IGP, we use these pools to seed the initial population of Snakebots with attached sensors, applying two methods of seeding: canonical seeding and GT-inspired seeding. The seeded populations are subjected to coevolution of the locomotion control and sensing morphology in a challenging environment. The empirical results indicate that the efficiency of the first stage of IGP is similar for all bloat control techniques. However, the bloated bots contribute to a much more efficient second stage of evolution. Compared to canonical seeding, the GT-inspired seeding with bloated Snakebots yields an approximately five times higher probability of success of IGP. We speculate that the observed speed-up can be attributed to the neutral code introduced by both the bloat and the GT-inspired seed. This code could be used by IGP as an evolutionary playground in which to experiment with developing novel sensory traits without damaging the already evolved locomotion abilities of the bot.
Kevin Brooks - "Contrast and stimulus complexity moderate the relationship between spatial frequency and perceived speed: Implications for MT models of speed perception"
Area MT in extrastriate visual cortex is widely believed to be responsible for the perception of object speed. Recent physiological data show that many cells in macaque visual area MT change their speed preferences with a change in stimulus spatial frequency (N. J. Priebe, C. R. Cassanello, & S. G. Lisberger, 2003) and that this effect can accurately predict the dependence of perceived speed on spatial frequency demonstrated in a related psychophysical study (N. J. Priebe & S. G. Lisberger, 2004). For more complex compound gratings and high contrast stimuli, MT cell speed preferences show sharper tuning and less dependence on spatial frequency (Priebe et al., 2003), allowing us to predict that such stimuli should produce speed percepts that are less vulnerable to spatial frequency variations. We investigated the perceived speed of simple sine wave gratings and more complex compound gratings (formed from 2 sine wave components) in response to changes in contrast and spatial frequency. In all cases, high contrast stimuli appeared to translate more rapidly. In addition, high spatial frequencies appeared faster; the opposite effect to that predicted by changes in MT cell spatial frequency preferences. Complex grating stimuli were somewhat "protected" from the effect of spatial frequency (compared to simple gratings), as predicted. However, contrary to predictions, the effect of spatial frequency was larger in high (compared to low) contrast gratings. Our data demonstrate that the previously established links between changes in MT cells' speed preferences and human speed perception are more complex than first thought.
Astrid Zeman - "The Müller-Lyer Illusion in a Deep Neural Network"
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We tested system performance in judging relative line lengths for a control set of images versus illusory images. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections.
First image credited to dow_at_uoregon.edu, obtained here (distributed without restrictions).
Second image credited to Simon Cockel, obtained here; distributed under Creative Commons License 2.0