Project description:Spiking neural networks (SNNs) promise to bridge the gap between artificial neural networks (ANNs) and biological neural networks (BNNs) by exploiting biologically plausible neurons that offer faster inference, lower energy expenditure, and event-driven information processing. However, implementing SNNs in future neuromorphic hardware requires hardware encoders analogous to sensory neurons, which convert external and internal stimuli into spike trains according to specific neural encoding algorithms with inherent stochasticity. Unfortunately, conventional solid-state transducers are inadequate for this purpose, necessitating the development of neural encoders to serve the growing needs of neuromorphic computing. Here, we demonstrate a biomimetic device based on a dual-gated MoS2 field-effect transistor (FET) capable of encoding analog signals into stochastic spike trains following various neural encoding algorithms such as rate-based, spike-timing-based, and spike-count-based encoding. Two important aspects of neural encoding, namely dynamic range and encoding precision, are also captured in our demonstration. Furthermore, the encoding energy was found to be as frugal as ≈1-5 pJ/spike. Finally, we show fast (≈200 timesteps) encoding of the MNIST data set using our biomimetic device, followed by inference with more than 91% accuracy using a trained SNN.
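A minimal software sketch of the rate-based encoding idea described above, using NumPy to turn normalized pixel intensities into stochastic (Bernoulli) spike trains over ≈200 timesteps; the image data and parameters are placeholders, and the device physics of the MoS2 FET encoder is not modelled.

```python
import numpy as np

def rate_encode(image, n_steps=200, max_rate=1.0, seed=None):
    """Stochastic rate-based encoding: each pixel's intensity sets the
    per-timestep firing probability, yielding a Bernoulli spike train
    of shape (n_steps, n_pixels)."""
    rng = np.random.default_rng(seed)
    p = np.clip(image.ravel().astype(float), 0.0, 1.0) * max_rate
    return (rng.random((n_steps, p.size)) < p).astype(np.uint8)

# Example: encode one normalized 28x28 image (stand-in for an MNIST digit).
image = np.random.rand(28, 28)
spikes = rate_encode(image, n_steps=200, seed=0)
print(spikes.shape, spikes.mean())   # (200, 784), average firing probability
```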
Project description:Accurately predicting brain responses to various stimuli poses a significant challenge in neuroscience. Despite recent breakthroughs in neural encoding using convolutional neural networks (CNNs) in fMRI studies, critical gaps remain between the computational rules of traditional artificial neurons and those of real biological neurons. To address this issue, a spiking CNN (SCNN)-based framework is presented in this study to achieve neural encoding in a more biologically plausible manner. The framework uses an unsupervised SCNN to extract visual features of image stimuli and employs a receptive-field-based regression algorithm to predict fMRI responses from the SCNN features. Experimental results on handwritten characters, handwritten digits, and natural images demonstrate that the proposed approach achieves remarkably good encoding performance and can be utilized for "brain reading" tasks such as image reconstruction and identification. This work suggests that SNNs can serve as a promising tool for neural encoding.
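A toy sketch of the regression stage: ridge regression from stimulus feature vectors to voxel responses. The random features below stand in for SCNN outputs, the receptive-field weighting used in the study is not reproduced, and the variable names and regularization value are assumptions.

```python
import numpy as np

def fit_voxel_ridge(features, responses, alpha=1.0):
    """Ridge regression from stimulus features (n_stimuli x n_features) to
    voxel responses (n_stimuli x n_voxels): W = (X'X + alpha*I)^-1 X'Y."""
    X, Y = features, responses
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Toy data standing in for SCNN features (X) and fMRI voxel responses (Y).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))
Y = X @ rng.standard_normal((64, 10)) + 0.1 * rng.standard_normal((100, 10))
W = fit_voxel_ridge(X, Y, alpha=1.0)
print(np.corrcoef((X @ W)[:, 0], Y[:, 0])[0, 1])  # encoding accuracy, voxel 0
```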
Project description:Despite advances in artificial intelligence models, neural networks still cannot achieve human performance, partly due to differences in how information is encoded and processed compared with the human brain. Information in an artificial neural network (ANN) is represented statistically and processed as a fitting function, enabling the handling of structural patterns in image, text, and speech processing. However, substantial changes to the statistical characteristics of the data, for example reversing the background of an image, dramatically reduce performance. Here, we propose a quantum superposition spiking neural network (QS-SNN) inspired by quantum mechanisms and phenomena in the brain, which can handle reversal of image background color. The QS-SNN incorporates quantum theory into brain-inspired spiking neural network models from a computational perspective, resulting in more robust performance than traditional ANN models, especially when processing noisy inputs. The results presented here will inform future efforts to develop brain-inspired artificial intelligence.
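A loose illustration of why a superposition-style encoding can be insensitive to background reversal: each pixel x is written as the amplitude pair (sqrt(x), sqrt(1-x)), so the reversed image differs only by a swap of the two components. This is a simplified stand-in, not the QS-SNN's actual spiking or phase encoding.

```python
import numpy as np

def superposition_encode(image):
    """Encode each pixel x as a two-component 'superposition' amplitude
    (sqrt(x), sqrt(1-x)). The original image and its background-reversed
    complement (1-x) then differ only by swapping the two components, so a
    network that treats the pair symmetrically sees them alike."""
    x = np.clip(image.astype(float), 0.0, 1.0)
    return np.stack([np.sqrt(x), np.sqrt(1.0 - x)], axis=-1)

img = np.random.rand(28, 28)
enc, enc_rev = superposition_encode(img), superposition_encode(1.0 - img)
print(np.allclose(enc, enc_rev[..., ::-1]))  # True: reversal is a component swap
```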
Project description:Neural modelling tools are increasingly employed to describe, explain, and predict the human brain's behavior. Among them, spiking neural networks (SNNs) enable the simulation of neural activity at the level of single neurons, but their use is often limited by the processing power and memory they require. Emerging applications with a low energy budget (e.g. implanted neuroprostheses) motivate the exploration of new strategies able to capture the relevant principles of neuronal dynamics in reduced and efficient models. The recent Leaky Integrate-and-Fire with Latency (LIFL) spiking neuron model combines realistic neuronal features with efficiency, a combination of characteristics that may prove appealing for SNN-based brain modelling. In this paper we introduce FNS, the first LIFL-based SNN framework, which combines spiking/synaptic modelling with an event-driven approach, allowing us to define heterogeneous neuron groups and multi-scale connectivity with delayed connections and plastic synapses. FNS enables precise, multi-threaded simulations, integrating a novel parallelization strategy and a mechanism of periodic dumping. We evaluate the performance of FNS in terms of simulation time and memory usage, and compare it with that of neuronal models with a similar neurocomputational profile implemented in NEST, showing that FNS performs better in both respects. FNS can be advantageously used to explore the interaction within and between populations of spiking neurons, even for long time scales and with a limited hardware configuration.
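A bare-bones event-driven leaky integrate-and-fire sketch using a priority queue of spike deliveries, in the spirit of the framework described above; the parameters are invented, and the LIFL model's state-dependent spike latency and synaptic plasticity are omitted, so this illustrates the event-driven style rather than FNS itself.

```python
import heapq
import math

# Assumed parameters: membrane time constant (ms), threshold, weight, delay (ms).
TAU, V_TH, W, DELAY = 20.0, 1.0, 0.6, 1.5

def run(events, n_neurons, connections, t_max=100.0):
    """Event-driven LIF: membrane potentials decay analytically between spike
    deliveries, so only spike events are processed, not fixed timesteps."""
    v = [0.0] * n_neurons          # membrane potentials
    last = [0.0] * n_neurons       # time of last update per neuron
    queue = list(events)           # (delivery time, target neuron)
    heapq.heapify(queue)
    spikes = []
    while queue:
        t, j = heapq.heappop(queue)
        if t > t_max:
            break
        v[j] = v[j] * math.exp(-(t - last[j]) / TAU) + W   # decay, then integrate
        last[j] = t
        if v[j] >= V_TH:           # fire, reset, and propagate with delay
            v[j] = 0.0
            spikes.append((t, j))
            for k in connections.get(j, []):
                heapq.heappush(queue, (t + DELAY, k))
    return spikes

print(run([(0.0, 0), (0.5, 0)], n_neurons=3, connections={0: [1], 1: [2]}))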
Project description:Graph layout algorithms used in network visualization represent the first and the most widely used tool to unveil the inner structure and the behavior of complex networks. Current network visualization software relies on the force-directed layout (FDL) algorithm, whose high computational complexity makes the visualization of large real networks computationally prohibitive and traps large graphs into high energy configurations, resulting in hard-to-interpret "hairball" layouts. Here we use Graph Neural Networks (GNN) to accelerate FDL, showing that deep learning can address both limitations of FDL: it offers a 10 to 100 fold improvement in speed while also yielding layouts which are more informative. We analytically derive the speedup offered by GNN, relating it to the number of outliers in the eigenspectrum of the adjacency matrix, predicting that GNNs are particularly effective for networks with communities and local regularities. Finally, we use GNN to generate a three-dimensional layout of the Internet, and introduce additional measures to assess the layout quality and its interpretability, exploring the algorithm's ability to separate communities and the link-length distribution. The novel use of deep neural networks can help accelerate other network-based optimization problems as well, with applications from reaction-diffusion systems to epidemics.
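For reference, one conventional force-directed (Fruchterman-Reingold-style) update of the kind a GNN surrogate would be trained to replace; the constants and the toy graph below are arbitrary, and the GNN itself is not shown.

```python
import numpy as np

def fdl_step(pos, adj, k=1.0, step=0.05):
    """One force-directed layout (FDL) update: every node pair repels with
    magnitude k^2/d, connected pairs attract with magnitude d^2/k. A GNN
    surrogate would map the graph directly to low-energy coordinates,
    replacing many such O(n^2) sweeps."""
    diff = pos[:, None, :] - pos[None, :, :]     # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, 1.0)                  # avoid division by zero
    rep = (k**2 / dist**2)[..., None] * diff     # repulsion, all pairs
    att = -(adj * dist / k)[..., None] * diff    # attraction, edges only
    return pos + step * (rep + att).sum(axis=1)

# Toy 3-node path graph with random initial positions.
rng = np.random.default_rng(1)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
pos = rng.standard_normal((3, 2))
for _ in range(50):
    pos = fdl_step(pos, adj)
print(pos)
```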
Project description:We have developed a new deep boosted molecular dynamics (DBMD) method. Probabilistic Bayesian neural network models were implemented to construct boost potentials that exhibit a Gaussian distribution with minimized anharmonicity, thereby allowing for accurate energetic reweighting and enhanced sampling of molecular simulations. DBMD was demonstrated on model systems of alanine dipeptide and fast-folding protein and RNA structures. For alanine dipeptide, 30 ns DBMD simulations captured up to 83-125 times more backbone dihedral transitions than 1 μs conventional molecular dynamics (cMD) simulations and accurately reproduced the original free energy profiles. Moreover, DBMD sampled multiple folding and unfolding events within 300 ns simulations of the chignolin model protein and identified low-energy conformational states comparable to previous simulation findings. Finally, DBMD captured a general folding pathway of three hairpin RNAs with the GCAA, GAAA, and UUCG tetraloops. Based on a deep learning neural network, DBMD provides a powerful and generally applicable approach to boosting biomolecular simulations. DBMD is available open source in OpenMM at https://github.com/MiaoLab20/DBMD/.
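A numerical sketch of why a Gaussian boost potential permits accurate reweighting: when the boost dU is approximately Gaussian, the exponential reweighting factor can be recovered from a second-order cumulant expansion. The temperature, units, and toy data below are assumptions; this is not the DBMD code.

```python
import numpy as np

KB_T = 0.593  # kcal/mol at ~298 K (assumed)

def cumulant_reweight(bin_counts, boosts_per_bin):
    """Second-order cumulant-expansion reweighting: for Gaussian dU,
    ln<exp(dU/kT)> ~= <dU>/kT + var(dU)/(2 kT^2), so the unbiased free energy
    F = -kT ln p can be recovered from biased bin populations."""
    free = []
    for count, du in zip(bin_counts, boosts_per_bin):
        du = np.asarray(du, dtype=float)
        log_w = du.mean() / KB_T + du.var() / (2.0 * KB_T**2)
        free.append(-KB_T * (np.log(count) + log_w))
    free = np.array(free)
    return free - free.min()

# Toy data: bin populations from a boosted run, per-frame boosts in each bin.
rng = np.random.default_rng(2)
counts = np.array([500, 300, 200])
boosts = [rng.normal(2.0, 0.5, n) for n in counts]
print(cumulant_reweight(counts, boosts))
```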
Project description:We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, in which a spiking neuron layer with mutual inhibition serves as the feature extractor. We demonstrate that the classification accuracy of the proposed model reaches values comparable with those of existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated; results showed nearly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.
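A rough sketch of a spiking feature extractor with mutual inhibition of the kind described above: per-neuron spike counts become the feature vector handed to an artificial classifier. All weights, constants, and the toy sEMG window are invented, and the published model's details are not reproduced.

```python
import numpy as np

def wta_spike_features(x, w_in, n_steps=100, tau=20.0, v_th=1.0, inhib=0.5):
    """Leaky integrate-and-fire layer with lateral (mutual) inhibition: each
    timestep, neurons integrate the rectified sEMG sample; firing neurons reset
    and suppress the others. Spike counts form the output feature vector."""
    n_out = w_in.shape[1]
    v = np.zeros(n_out)
    counts = np.zeros(n_out)
    for t in range(n_steps):
        v = v * (1.0 - 1.0 / tau) + np.abs(x[t % len(x)]) @ w_in
        fired = v >= v_th
        if fired.any():
            counts += fired
            v[fired] = 0.0
            v[~fired] -= inhib * fired.sum()   # mutual inhibition
            v = np.maximum(v, 0.0)
    return counts

rng = np.random.default_rng(3)
emg = rng.standard_normal((100, 8))    # toy 8-channel sEMG window
w_in = rng.random((8, 16)) * 0.1       # random input weights (assumed)
print(wta_spike_features(emg, w_in))
```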
Project description:In computational neuroscience, hypotheses are often formulated as bottom-up mechanistic models of the systems in question, consisting of differential equations that can be numerically integrated forward in time. Candidate models can then be validated by comparison against experimental data. The outputs of neural network models depend on neuron parameters, connectivity parameters, and other model inputs. Successful model fitting requires sufficient exploration of the model parameter space, which can be computationally demanding. Additionally, identifying degeneracy in the parameters, i.e., different combinations of parameter values that produce similar outputs, is of interest, as these define the subset of parameter values consistent with the data. In this computational study, we apply metamodels to a two-population recurrent spiking network of point neurons, the so-called Brunel network. Metamodels are data-driven approximations to more complex models with more desirable computational properties, which can be run considerably faster than the original model. Specifically, we apply and compare two different metamodelling techniques, masked autoregressive flows (MAF) and deep Gaussian process regression (DGPR), to estimate the power spectra of two different signals: the population spiking activities and the local field potential (LFP). We find that the metamodels accurately model the power spectra in the asynchronous irregular regime, and that the DGPR metamodel provides a more accurate representation of the simulator than the MAF metamodel. Using the metamodels, we estimate the posterior probability distributions over parameters given observed simulator outputs, separately for the LFP and the population spiking activities. We find that these distributions correctly identify parameter combinations that give similar model outputs, and that some parameters are significantly more constrained by observing the LFP than by observing the population spiking activities.
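A minimal metamodelling sketch using an ordinary Gaussian process regressor as a stand-in for the paper's DGPR/MAF metamodels: a surrogate is trained on parameter-output pairs and then queried far faster than the simulator. The parameter names (g, eta), their ranges, and the placeholder "simulator" are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fake_simulator(theta):
    """Placeholder for the spiking-network simulator: maps two parameters
    (here labelled g, eta by assumption) to one power-spectrum summary value."""
    g, eta = theta
    return np.sin(g) + 0.5 * eta**2

# Training designs sampled from an assumed parameter range.
rng = np.random.default_rng(4)
thetas = rng.uniform([3.0, 1.0], [8.0, 4.0], size=(200, 2))
y = np.array([fake_simulator(t) for t in thetas])

# Fit the surrogate (single-layer GP here, not a deep GP or flow).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(thetas, y)

mean, std = gp.predict(np.array([[5.0, 2.0]]), return_std=True)
print(mean, std)   # fast surrogate prediction with an uncertainty estimate
```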
Project description:We demonstrate that recently introduced ultra-compact neurons (UCNs) with a minimal number of components can be interconnected to implement a functional spiking neural network. For concreteness we focus on the Jeffress model, a classic neuro-computational model proposed in the 1940s to explain sound directionality detection by animals and humans. In addition, we introduce a long-axon neuron whose architecture is inspired by the Hodgkin-Huxley axon delay line and in which UCNs implement the nodes of Ranvier. We then interconnect two of these neurons to an output layer of UCNs, which detect coincidences between spikes propagating down the long axons. This functional spiking neural circuit with biological relevance is built from identical UCN blocks, which are simple enough to be made with off-the-shelf electronic components. Our work realizes a new, accessible, and affordable physical model platform, where neuroscientists can construct arbitrary mid-size spiking neuronal networks in a Lego-like fashion that operate in continuous time. This should enable them to address, in a novel experimental manner, fundamental questions about the nature of the neural code and to test predictions from mathematical models and algorithms of basic neurobiology research. The present work aims to open a new experimental field of basic research in spiking neural networks to a potentially large community at the crossroads of neurobiology, dynamical systems, theoretical neuroscience, condensed matter physics, neuromorphic engineering, artificial intelligence, and complex systems.
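A compact software illustration of the Jeffress coincidence-detection principle the circuit above implements in hardware: spikes from the two ears travel along opposing delay lines, and the detector whose relative delay compensates the interaural time difference responds most strongly. The toy spike trains and delay range are invented for illustration.

```python
import numpy as np

def jeffress_itd(left, right, max_delay=5):
    """Delay-line coincidence detection (Jeffress model): score each candidate
    relative delay by the number of coincident spikes; the best-scoring delay
    estimates the interaural time difference (ITD)."""
    delays = np.arange(-max_delay, max_delay + 1)
    scores = np.array([int(np.sum(np.roll(left, d) * right)) for d in delays])
    return delays, scores

# Toy spike trains: the right ear's signal lags the left by 3 timesteps.
rng = np.random.default_rng(5)
left = (rng.random(200) < 0.2).astype(int)
right = np.roll(left, 3)
delays, scores = jeffress_itd(left, right)
print(delays[np.argmax(scores)])   # -> 3, the encoded ITD
```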