Project description:Gamma oscillations are widely seen in the awake and sleeping cerebral cortex, but the exact role of these oscillations is still debated. Here, we used biophysical models to examine how Gamma oscillations may participate in the processing of afferent stimuli. We constructed conductance-based network models of Gamma oscillations, based on different cell types found in cerebral cortex. The models were adjusted to extracellular unit recordings in humans, where Gamma oscillations always coexist with the asynchronous firing mode. We considered three different mechanisms to generate Gamma: first, a mechanism based on the interaction between pyramidal neurons and interneurons (PING); second, a mechanism in which Gamma is generated by interneuron networks (ING); and third, a mechanism that relies on Gamma oscillations generated by pacemaker chattering neurons (CHING). We find that all three mechanisms generate features consistent with human recordings, but that the ING mechanism is most consistent with the firing rate change inside Gamma bursts seen in the human data. We next evaluated the responsiveness and resonant properties of these networks, contrasting Gamma oscillations with the asynchronous mode. We find that for both slowly-varying stimuli and precisely-timed stimuli, the responsiveness is generally lower during Gamma compared to asynchronous states, while resonant properties are similar around the Gamma band. We could not find conditions where Gamma oscillations were more responsive. We therefore predict that asynchronous states provide the highest responsiveness to external stimuli, while Gamma oscillations tend to overall diminish responsiveness.
Project description:Interictal high-frequency oscillations (HFO) detected in electroencephalography recordings have been proposed as biomarkers of epileptogenesis, seizure propensity, disease severity, and treatment response. Automatic HFO detectors typically analyze the data offline using complex time-consuming algorithms, which limits their clinical application. Neuromorphic circuits offer the possibility of building compact and low-power processing systems that can analyze data on-line and in real time. In this review, we describe a fully automated detection pipeline for HFO that uses, for the first time, spiking neural networks and neuromorphic technology. We demonstrated that our HFO detection pipeline can be applied to recordings from different modalities (intracranial electroencephalography, electrocorticography, and scalp electroencephalography) and validated its operation in a custom-designed neuromorphic processor. Our HFO detection approach resulted in high accuracy and specificity in the prediction of seizure outcome in patients implanted with intracranial electroencephalography and electrocorticography, and in the prediction of epilepsy severity in patients recorded with scalp electroencephalography. Our research provides a further step toward the real-time detection of HFO using compact and low-power neuromorphic devices. The real-time detection of HFO in the operating room may improve the seizure outcome of epilepsy surgery, while the use of our neuromorphic processor for non-invasive therapy monitoring might allow for more effective medication strategies to achieve seizure control. Therefore, this work has the potential to improve the quality of life of patients with epilepsy by improving epilepsy diagnostics and treatment.
Project description:We propose a novel, scalable, and accurate method for detecting neuronal ensembles from a population of spiking neurons. Our approach offers a simple yet powerful tool to study ensemble activity. It relies on clustering synchronous population activity (population vectors), allows the participation of neurons in different ensembles, has few parameters to tune, and is computationally efficient. To validate the performance and generality of our method, we generated synthetic data, where we found that our method accurately detects neuronal ensembles over a wide range of simulation parameters. We found that our method outperforms current alternative methodologies. We used spike trains of retinal ganglion cells obtained from multi-electrode array recordings under a simple ON-OFF light stimulus to test our method. We found consistent stimulus-evoked ensemble activity intermingled with spontaneously active ensembles and irregular activity. Our results suggest that early visual system activity could be organized in distinguishable functional ensembles. We provide a Graphical User Interface, which facilitates the use of our method by the scientific community.
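To make the general idea concrete, here is a minimal sketch of ensemble detection by clustering synchronous population vectors. This is not the authors' implementation: the synthetic data, the coactivity threshold, and the plain k-means with farthest-point initialization are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 20 neurons, two ground-truth ensembles
n_neurons, n_bins = 20, 500
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(float)
for t in range(0, n_bins, 50):    # ensemble A (neurons 0-9) bursts
    spikes[:10, t] = 1.0
for t in range(25, n_bins, 50):   # ensemble B (neurons 10-19) bursts
    spikes[10:, t] = 1.0

# Keep only synchronous population vectors (bins with many coactive cells)
sync = spikes.sum(axis=0) > 5
vectors = spikes[:, sync].T       # shape: (n_sync_bins, n_neurons)

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(vectors, k=2)
# A cluster centre close to 1 for a neuron marks it as an ensemble member
ensembles = [set(np.where(c > 0.5)[0]) for c in centers]
```

On this toy data the two recovered ensembles coincide with the two planted groups of neurons; in the method described above, the clustering step additionally allows neurons to participate in more than one ensemble.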
Project description:The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
Project description:State-of-the-art computer vision systems use frame-based cameras that sample the visual scene as a series of high-resolution images. These are then processed using convolutional neural networks built from neurons with continuous outputs. Biological vision systems use a quite different approach, where the eyes (cameras) sample the visual scene continuously, often with a non-uniform resolution, and generate neural spike events in response to changes in the scene. The resulting spatio-temporal patterns of events are then processed through networks of spiking neurons. Such event-based processing offers advantages in terms of focusing constrained resources on the most salient features of the perceived scene, and those advantages should also accrue to engineered vision systems based upon similar principles. Event-based vision sensors, and event-based processing exemplified by the SpiNNaker (Spiking Neural Network Architecture) machine, can be used to model the biological vision pathway at various levels of detail. Here we use this approach to explore structural synaptic plasticity as a possible mechanism whereby biological vision systems may learn the statistics of their inputs without supervision, pointing the way to engineered vision systems with similar online learning capabilities.
Project description:In computer simulations of spiking neural networks, it is often assumed that any two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activity, the total duration of activity, the maximum population rate, and the occurrence time of the maximum rate. The results are organized in phase diagrams. These phase diagrams give insight into the space of parameters - the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights - and can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independently of external stimuli.
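As a concrete illustration of the "common values" mentioned above, the following sketch builds one random network configuration with 2% connection probability and a 20/80 inhibitory/excitatory split. The network size and the weight values are assumptions made for the sketch, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000                  # network size (assumed for the sketch)
p_conn = 0.02             # any pair connected with probability 2%
frac_inh = 0.2            # 20% inhibitory, 80% excitatory
w_exc, w_inh = 0.1, -0.4  # hypothetical synaptic weights

n_inh = int(frac_inh * n)
is_inh = np.zeros(n, dtype=bool)
is_inh[:n_inh] = True     # first 20% of neurons are inhibitory

# Erdos-Renyi connectivity; the sign is set by the presynaptic cell type
conn = rng.random((n, n)) < p_conn
np.fill_diagonal(conn, False)   # no self-connections
weights = np.where(conn, np.where(is_inh[:, None], w_inh, w_exc), 0.0)
```

A parameter sweep of the kind described in the abstract would then vary `p_conn`, `frac_inh`, and the weight magnitudes, simulate each network's pulse response, and record the resulting measures in a phase diagram.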
Project description:Somatosensation is composed of two distinct modalities: touch, arising from sensors in the skin, and proprioception, resulting primarily from sensors in the muscles, combined with these same cutaneous sensors. In contrast to the wealth of information about touch, we know considerably less about the nature of the signals giving rise to proprioception at the cortical level. Likewise, while there is considerable interest in developing encoding models of touch-related neurons for application to brain machine interfaces, much less emphasis has been placed on an analogous proprioceptive interface. Here we investigate the use of Artificial Neural Networks (ANNs) to model the relationship between the firing rates of single neurons in area 2, a largely proprioceptive region of somatosensory cortex (S1), and several types of kinematic variables related to arm movement. To gain a better understanding of how these kinematic variables interact to create the proprioceptive responses recorded in our datasets, we train ANNs under different conditions, each involving a different set of input and output variables. We explore the kinematic variables that provide the best network performance, and find that the addition of information about joint angles and/or muscle lengths significantly improves the prediction of neural firing rates. Our results thus provide new insight regarding the complex representations of limb motion in S1: the firing rates of neurons in area 2 may be more closely related to the activity of peripheral sensors than to extrinsic hand position. In addition, we conduct numerical experiments to determine the sensitivity of ANN models to various choices of training design and hyper-parameters. Our results provide a baseline and new tools for future research that utilizes machine learning to better describe and understand the activity of neurons in S1.
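To make the modeling setup concrete, here is a minimal sketch in the spirit of this approach, not the authors' architecture or data: a one-hidden-layer network, written in plain NumPy, fit to a synthetic mapping from 2-D hand velocity to a single neuron's firing rate. The data-generating function, network size, and training settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for area 2 data: firing rate as a
# nonlinear function of 2-D hand velocity (vx, vy), plus noise
X = rng.normal(size=(1000, 2))
y = (np.maximum(0.0, 1.5 * X[:, 0] - 0.8 * X[:, 1])
     + 0.05 * rng.normal(size=1000))[:, None]

# One-hidden-layer ReLU network trained with full-batch gradient descent
W1 = rng.normal(size=(2, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)) * 0.5
b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(1000):
    h = np.maximum(0.0, X @ W1 + b1)       # hidden activations
    err = (h @ W2 + b2) - y                # prediction error
    losses.append(float((err ** 2).mean()))
    # Backpropagation by hand (mean-squared-error gradients)
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

In the study described above, the inputs would instead be recorded kinematic variables (hand position, joint angles, muscle lengths) and the targets recorded firing rates, with different input sets compared by their prediction performance.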
Project description:In biological neural systems, different neurons are capable of self-organizing to form different neural circuits for achieving a variety of cognitive functions. However, the current design paradigm of spiking neural networks is based on structures derived from deep learning. Such structures are dominated by feedforward connections without taking into account different types of neurons, which significantly prevents spiking neural networks from realizing their potential on complex tasks. It remains an open challenge to apply the rich dynamical properties of biological neural circuits to model the structure of current spiking neural networks. This paper provides a more biologically plausible evolutionary space by combining feedforward and feedback connections with excitatory and inhibitory neurons. We exploit the local spiking behavior of neurons to adaptively evolve neural circuits such as forward excitation, forward inhibition, feedback inhibition, and lateral inhibition by the local law of spike-timing-dependent plasticity, and update the synaptic weights in combination with global error signals. By using the evolved neural circuits, we construct spiking neural networks for image classification and reinforcement learning tasks. Using the brain-inspired Neural circuit Evolution strategy (NeuEvo) with rich neural circuit types, the evolved spiking neural network greatly enhances capability on perception and reinforcement learning tasks. NeuEvo achieves state-of-the-art performance on the CIFAR10, DVS-CIFAR10, DVS-Gesture, and N-Caltech101 datasets and achieves advanced performance on ImageNet. Combined with on-policy and off-policy deep reinforcement learning algorithms, it achieves performance comparable with artificial neural networks. The evolved spiking neural circuits lay the foundation for the evolution of complex, functional networks.
Project description:Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using isomorphic binary spikes. While Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons (Cao et al., 2015; Diehl et al., 2015) to obtain reasonable performance, these SNNs use Poisson spiking mechanisms with exceedingly high firing rates compared to their biological counterparts. Here we show how spiking neurons that employ a form of neural coding can be used to construct SNNs that match high-performance ANNs and match or exceed the state-of-the-art in SNNs on important benchmarks, while requiring firing rates compatible with biological findings. For this, we use spike-based coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in fast adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out competitive classification in deep neural networks without further modifications. Adaptive spike-based coding additionally allows for the dynamic control of neural coding precision: we show empirically how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention as studied in neuroscience. AdSNNs thus hold promise as a novel and sparsely active model for neural computation that naturally fits temporally continuous and asynchronous applications.
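A toy illustration of the adaptation idea follows: a generic adaptive leaky integrate-and-fire sketch, not the paper's specific neuron model or its derived transfer function. Each spike raises the firing threshold, which then decays back, capping the attainable firing rate; all constants here are assumptions.

```python
def adaptive_lif(i_input, t_steps=2000, dt=1e-3, tau_m=0.02,
                 theta0=1.0, beta=0.5, tau_adapt=0.1):
    """Hypothetical fast-adapting LIF neuron: each spike raises the
    threshold by beta; the threshold decays back to theta0 with
    time constant tau_adapt, limiting the steady-state firing rate."""
    v, theta, spikes = 0.0, theta0, 0
    for _ in range(t_steps):
        v += dt / tau_m * (i_input - v)             # leaky integration
        theta += dt / tau_adapt * (theta0 - theta)  # threshold decay
        if v >= theta:
            spikes += 1
            v = 0.0
            theta += beta   # spike-triggered threshold increase
    return spikes / (t_steps * dt)  # firing rate in Hz

rates = [adaptive_lif(i) for i in (1.5, 3.0, 6.0, 12.0)]
```

With adaptation enabled (`beta > 0`), the rate grows with input but stays well below the rate of the same neuron without adaptation, which is the property the abstract exploits to keep firing rates in a biologically plausible range.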
Project description:The adaptive changes in synaptic efficacy that occur between spiking neurons have been demonstrated to play a critical role in learning for biological neural networks. Despite this source of inspiration, many learning-focused applications using Spiking Neural Networks (SNNs) retain static synaptic connections, preventing additional learning after the initial training period. Here, we introduce a framework for simultaneously learning the underlying fixed weights and the rules governing the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in SNNs through gradient descent. We further demonstrate the capabilities of this framework on a series of challenging benchmarks, learning the parameters of several plasticity rules including BCM, Oja's, and their respective sets of neuromodulatory variants. The experimental results demonstrate that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks that a traditional SNN fails to solve, even in the presence of significant noise. These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task, where near-minimal degradation in performance is observed in the presence of novel conditions not seen during the initial training period.
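One of the named rules, Oja's rule, can be written down in a few lines. This sketch (with assumed synthetic data and a fixed learning rate, and without the gradient-descent meta-learning of the framework described above) shows the rule extracting the leading principal component of its input while keeping the weight norm bounded:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic input stream whose leading principal component lies along [1, 1]
base = rng.normal(size=(5000, 1)) @ np.array([[1.0, 1.0]])
x = base + 0.1 * rng.normal(size=(5000, 2))

# Oja's rule: dw = eta * y * (x - y * w); the -y^2 * w term bounds |w|
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for xi in x:
    y = w @ xi                    # linear neuron output
    w += eta * y * (xi - y * w)   # Hebbian term with self-normalization
```

In the framework described in the abstract, parameters such as `eta` (and the neuromodulatory terms of the rule's variants) would themselves be meta-learned by gradient descent rather than fixed by hand.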