Project description:Changes to the structure and function of neural networks are thought to underlie the evolutionary adaptation of animal behaviours. Among the many developmental phenomena that generate such change, programmed cell death (PCD) appears to play a key role. We show that cell death occurs continuously throughout insect neurogenesis and happens soon after neurons are born. Mimicking an evolutionary route to increased cell numbers, we artificially block PCD in the medial neuroblast lineage of Drosophila melanogaster, which results in the production of 'undead' neurons with complex arborisations and distinct neurotransmitter identities. Activation of these 'undead' neurons and recordings of neural activity in behaving animals demonstrate that they are functional. Focusing on two dipterans that have lost flight during evolution, we reveal that reductions in populations of flight interneurons are likely caused by increased cell death during development. Our findings suggest that the evolutionary modulation of death-based patterning could generate novel network configurations.
Project description:GyroWheel is an integrated device that can provide three-axis control torques and two-axis angular rate sensing for small spacecraft. The large tilt angle of its rotor and its de-tuned spin rate lead to complex, nonlinear dynamics as well as difficulties in measuring angular rates. In this paper, the problem of angular rate sensing with the GyroWheel is investigated. First, a simplified rate-sensing equation is introduced and the error characteristics of that method are analyzed. Based on the analysis results, a rate-sensing principle built on torque-balance theory is developed, and a practical way to estimate the angular rates within the whole operating range of the GyroWheel is provided using neural networks optimized by an explicit genetic algorithm. The angular rates can be determined from the measurable quantities of the GyroWheel (tilt angles, spin rate and torque-coil currents) together with the weights and biases of the neural networks. Finally, simulation results are presented to illustrate the effectiveness of the proposed angular rate sensing method.
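The following is a minimal sketch, not the paper's implementation, of the kind of mapping described above: a small feed-forward network takes the GyroWheel's measurable quantities (two tilt angles, spin rate, two torque-coil currents) as inputs and produces the two sensed angular rates, with its weights selected by a simple genetic algorithm. The layer sizes, GA settings and synthetic data are assumptions.

# Hedged sketch (not the paper's code): GA-optimized MLP for rate estimation.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 5, 16, 2                       # 5 measurables -> 2 angular rates
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # flattened weight count

def forward(w, X):
    """Evaluate the MLP for a flat weight vector w on inputs X (n_samples x 5)."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w, X, y):
    return -np.mean((forward(w, X) - y) ** 2)       # negative MSE: higher is better

# Placeholder training data; in practice this would come from the torque-balance
# model or calibrated GyroWheel telemetry.
X = rng.normal(size=(512, N_IN))
y = rng.normal(size=(512, N_OUT))

pop = rng.normal(scale=0.5, size=(60, N_W))         # initial population of weight vectors
for gen in range(200):
    scores = np.array([fitness(w, X, y) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]         # keep the 20 fittest individuals
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(N_W) < 0.5                # uniform crossover
        child = np.where(mask, a, b) + rng.normal(scale=0.02, size=N_W)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w, X, y) for w in pop])]
rate_estimate = forward(best, X[:1])                # estimated angular rates for one sample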
Project description:Changes in an organization's external behavior are driven by its internal structure and interactions. These external behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks can therefore be used to monitor the dynamics of organizational behavior. Although many different methods have been used to detect changes in organizational networks, they usually ignore the correlation between the internal structure and external events. Event-related change detection takes this correlation into account and can be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes can provide early warnings and enable faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We demonstrated the feasibility of the proposed method by comparing its performance with that of other methods on two cases. The results suggest that the proposed method can identify organizational events based on the correlation between organizational networks and events, and that it not only achieves higher precision but is also more robust than previously used techniques.
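As an illustration of the optimization step, here is a minimal sketch, assuming placeholder graph-metric features and PSO constants, of Particle Swarm Optimization searching the weight space of a small feed-forward change/no-change classifier; in the study itself PSO and GAs are combined with back-propagation training of the BPNNs.

# Hedged sketch (not the authors' implementation): PSO over classifier weights.
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID = 8, 12                                  # e.g. 8 graph metrics per time window
N_W = N_IN * N_HID + N_HID + N_HID + 1

def predict(w, X):
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
    W2 = w[N_IN * N_HID + N_HID:-1]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # P(event-related change)

def loss(w, X, y):
    p = np.clip(predict(w, X), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy

# Placeholder data: rows are windows of network metrics, labels mark known events.
X = rng.normal(size=(300, N_IN))
y = (rng.random(300) < 0.3).astype(float)

# Standard PSO update: velocity blends inertia, personal best and global best.
n_part = 40
pos = rng.normal(scale=0.5, size=(n_part, N_W))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p, X, y) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for it in range(300):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p, X, y) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best cross-entropy:", pbest_val.min())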
Project description:Face-selective neurons are observed in the primate visual pathway and are considered the basis of face detection in the brain. However, it has been debated whether this neuronal selectivity can arise innately or whether it requires training from visual experience. Here, using a hierarchical deep neural network model of the ventral visual stream, we suggest a mechanism by which face-selectivity arises in the complete absence of training. We found that units selective to faces emerge robustly in randomly initialized networks and that these units reproduce many characteristics observed in monkeys. This innate selectivity also enables the untrained network to perform face-detection tasks. Intriguingly, we observed that units selective to various non-face objects can also arise innately in untrained networks. Our results imply that the random feedforward connections in early, untrained deep neural networks may be sufficient for initializing primitive visual selectivity.
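A minimal sketch of the core measurement, under an assumed architecture and stimuli: responses of units in a randomly initialized, untrained convolutional network are compared between face and non-face inputs with a d'-like selectivity index. The study used a hierarchical model of the ventral stream and real face/non-face image sets; everything below is a placeholder.

# Hedged sketch (not the authors' code): face-selectivity in an untrained CNN.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small untrained convolutional hierarchy; weights keep their random initialization.
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7, stride=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # one response per top-layer unit
)
net.eval()

# Placeholder stimuli: in practice, face images vs. assorted non-face object images.
faces = torch.rand(200, 3, 128, 128)
objects = torch.rand(200, 3, 128, 128)

with torch.no_grad():
    r_face = net(faces)                             # (200, 128) unit responses
    r_obj = net(objects)

# d'-like selectivity index per unit: higher means stronger preference for faces.
mu_f, mu_o = r_face.mean(0), r_obj.mean(0)
var_f, var_o = r_face.var(0), r_obj.var(0)
fsi = (mu_f - mu_o) / torch.sqrt(0.5 * (var_f + var_o) + 1e-8)

face_units = (fsi > 1.0).nonzero().flatten()
print(f"{len(face_units)} of 128 untrained units exceed the selectivity threshold")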
Project description:Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
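The sketch below illustrates the general optoelectronic split under assumptions not taken from the paper: the first convolution stands in for the optical element and is constrained to a non-negative point-spread function (incoherent intensities cannot be negative), while a tiny electronic stage completes the classification. Kernel sizes, channel counts and the squaring trick are placeholders.

# Hedged sketch (not the paper's design): hybrid optical/electronic two-layer CNN.
import torch
import torch.nn as nn

class OptoElectronicNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Raw parameter for the optical PSF; squared below to enforce non-negativity.
        self.psf_raw = nn.Parameter(torch.randn(8, 1, 15, 15) * 0.1)
        self.electronic = nn.Sequential(             # cheap electronic back end
            nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, n_classes),
        )

    def forward(self, x):
        psf = self.psf_raw ** 2                      # non-negative "optical" kernels
        x = nn.functional.conv2d(x, psf, padding=7)  # convolution assumed done in optics
        return self.electronic(x)

net = OptoElectronicNet()
images = torch.rand(16, 1, 32, 32)                   # placeholder grayscale batch
logits = net(images)
print(logits.shape)                                  # torch.Size([16, 10])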
Project description:BACKGROUND: MicroRNAs (miRNAs) play important roles in a variety of biological processes by regulating gene expression at the post-transcriptional level. The discovery of new miRNAs has therefore become an important task in biological research. Since the experimental identification of miRNAs is time-consuming, many computational tools have been developed to identify miRNA precursors (pre-miRNAs). Most of these computational methods are based on traditional machine learning, and their performance depends heavily on the selected features, which are usually determined by domain experts. To develop easily implemented methods with better performance, we investigated different deep learning architectures for pre-miRNA identification. RESULTS: In this work, we applied convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to predict human pre-miRNAs. We combined the sequences with the predicted secondary structures of pre-miRNAs as input features of our models, avoiding manual feature extraction and selection. The models were easily trained on the training dataset with low generalization error and therefore performed satisfactorily on the test dataset. Prediction results on the same benchmark dataset showed that our models outperformed or were highly comparable to other state-of-the-art methods in this area. Furthermore, our CNN model trained on the human dataset achieved high prediction accuracy on data from other species. CONCLUSIONS: Deep neural networks (DNNs) can be utilized for human pre-miRNA detection with high performance. Complex features of RNA sequences can be automatically extracted by CNNs and RNNs and used for pre-miRNA prediction. With proper regularization, our deep learning models, although trained on a comparatively small dataset, had strong generalization ability.
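A minimal sketch of the input encoding and model shape, with an assumed alphabet, padding length and architecture: the nucleotide sequence and a dot-bracket secondary structure (e.g. from RNAfold) are one-hot encoded channel-wise and passed to a small 1D CNN.

# Hedged sketch (not the authors' model): sequence + structure encoding for a 1D CNN.
import torch
import torch.nn as nn

SEQ_ALPHABET = "ACGU"
STRUCT_ALPHABET = "(.)"
MAX_LEN = 180                                        # pad/trim candidates to this length

def encode(seq, struct):
    """One-hot encode sequence and dot-bracket structure into a (7, MAX_LEN) tensor."""
    x = torch.zeros(len(SEQ_ALPHABET) + len(STRUCT_ALPHABET), MAX_LEN)
    for i, (s, t) in enumerate(zip(seq[:MAX_LEN], struct[:MAX_LEN])):
        x[SEQ_ALPHABET.index(s), i] = 1.0
        x[len(SEQ_ALPHABET) + STRUCT_ALPHABET.index(t), i] = 1.0
    return x

model = nn.Sequential(
    nn.Conv1d(7, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, 1),                                # logit: pre-miRNA vs. not
)

seq = "GC" * 20 + "GAAA" + "GC" * 20                 # synthetic hairpin-like toy example
struct = "(" * 40 + "." * 4 + ")" * 40
x = encode(seq, struct).unsqueeze(0)                 # add batch dimension
prob = torch.sigmoid(model(x))
print(prob.item())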
Project description:Prostate cancer is one of the most common forms of cancer and the third leading cause of cancer death in North America. As an integrated part of computer-aided detection (CAD) tools, diffusion-weighted magnetic resonance imaging (DWI) has been intensively studied for accurate detection of prostate cancer. With the significant success of deep convolutional neural networks (CNNs) in computer vision tasks such as object detection and segmentation, different CNN architectures are increasingly being investigated in the medical imaging research community as promising solutions for designing more accurate CAD tools for cancer detection. In this work, we developed and implemented an automated CNN-based pipeline for detection of clinically significant prostate cancer (PCa) for a given axial DWI image and for each patient. DWI images of 427 patients were used as the dataset, comprising 175 patients with PCa and 252 patients without PCa. To measure the performance of the proposed pipeline, a test set of 108 (out of 427) patients was set aside and not used in the training phase. The proposed pipeline achieved an area under the receiver operating characteristic curve (AUC) of 0.87 (95% confidence interval (CI): 0.84-0.90) at the slice level and 0.84 (95% CI: 0.76-0.91) at the patient level.
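A minimal sketch of how slice-level CNN probabilities can be rolled up to a patient-level score and evaluated with ROC AUC; the max-pooling aggregation rule and the synthetic scores below are assumptions, not the published pipeline, in which a trained CNN produces the slice probabilities from DWI images.

# Hedged sketch (not the published pipeline): slice-to-patient aggregation and AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Placeholder: per-slice probabilities and labels for a small test cohort.
patients = []
for pid in range(30):
    has_pca = pid % 3 == 0                           # roughly one third positive
    n_slices = rng.integers(15, 25)
    p_slice = rng.beta(2 + 3 * has_pca, 5, size=n_slices)   # stand-in CNN outputs
    y_slice = np.full(n_slices, has_pca, dtype=int)
    patients.append((p_slice, y_slice, int(has_pca)))

# Slice-level AUC over all slices pooled together.
all_p = np.concatenate([p for p, _, _ in patients])
all_y = np.concatenate([y for _, y, _ in patients])
print("slice-level AUC:", roc_auc_score(all_y, all_p))

# Patient-level AUC: summarize each patient by the maximum slice probability,
# i.e. a patient is flagged if any single slice looks suspicious.
patient_scores = [p.max() for p, _, _ in patients]
patient_labels = [lab for _, _, lab in patients]
print("patient-level AUC:", roc_auc_score(patient_labels, patient_scores))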
Project description:Background: A common yet still manual task in basic biology research, high-throughput drug screening and digital pathology is identifying the number, location, and type of individual cells in images. Object detection methods can be useful for identifying individual cells as well as their phenotype in one step. State-of-the-art deep learning for object detection is poised to improve the accuracy and efficiency of biological image analysis. Results: We created Keras R-CNN to bring leading computational research to the everyday practice of bioimage analysts. Keras R-CNN implements deep learning object detection techniques using Keras and Tensorflow (https://github.com/broadinstitute/keras-rcnn). We demonstrate the command line tool's simplified Application Programming Interface on two important biological problems, nucleus detection and malaria stage classification, and show its potential for identifying and classifying a large number of cells. For malaria stage classification, we compare results with expert human annotators and find comparable performance. Conclusions: Keras R-CNN is a Python package that performs automated cell identification for both brightfield and fluorescence images and can process large image sets. Both the package and image datasets are freely available on GitHub and the Broad Bioimage Benchmark Collection.
Project description:fMRI data decomposition techniques have advanced significantly from shallow models, such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL), to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpreting the decomposed networks remains an open question due to the lack of functional brain atlases, the absence of correspondence between decomposed or reconstructed networks across subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNNs), has an extraordinary ability to accommodate spatial object patterns; for example, our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mislabelled training brain networks. However, training data preparation is one of the biggest obstacles for such supervised deep learning models of functional brain network recognition, since manual labelling is tedious and time-consuming and can even introduce label mistakes. For mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments on fMRI data from 1099 brains in ABIDE-II showed the great promise of our IO-CNN framework.
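A minimal sketch of the general loop described above, with an assumed network, thresholds and 2D placeholder data rather than 3D fMRI-derived maps: weak labels are initialized by a cheap heuristic, a CNN is trained on them, and the labels are refreshed from the CNN's confident predictions over several rounds.

# Hedged sketch (not the IO-CNN code): iterative training with weak label initialization.
import torch
import torch.nn as nn

torch.manual_seed(0)
maps = torch.rand(500, 1, 32, 32)                    # placeholder 2D "network maps"

def weak_label(x):
    """Stand-in for the automatic weak-label initializer (e.g. spatial overlap
    with a template network); returns a 0/1 label per map."""
    return (x.mean(dim=(1, 2, 3)) > 0.5).long()

labels = weak_label(maps)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for outer in range(3):                               # iterative optimization rounds
    for epoch in range(5):                           # train on the current labels
        opt.zero_grad()
        loss = loss_fn(cnn(maps), labels)
        loss.backward()
        opt.step()
    with torch.no_grad():                            # refresh labels where confident
        probs = torch.softmax(cnn(maps), dim=1)
        conf, pred = probs.max(dim=1)
        new_labels = torch.where(conf > 0.9, pred, labels)
        changed = int((new_labels != labels).sum())
        labels = new_labels
    print(f"round {outer}: loss={loss.item():.3f}, labels changed={changed}")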
Project description:Calpains are a family of calcium-activated proteases involved in numerous disorders. Notably, previous studies have shown that calpain activity was substantially increased in various models for inherited retinal degeneration (RD). In the present study, we tested the capacity of the calpain-specific substrate t-BOC-Leu-Met-CMAC to detect calpain activity in living retina, in organotypic retinal explant cultures derived from wild-type mice, as well as from rd1 and RhoP23H/+ RD-mutant mice. Test conditions were refined until the calpain substrate readily detected large numbers of cells in the photoreceptor layer of RD retina but not in wild-type retina. At the same time, the calpain substrate was not obviously toxic to photoreceptor cells. Comparison of calpain activity with immunostaining for activated calpain-2 furthermore suggested that individual calpain isoforms may be active in distinct temporal stages of photoreceptor cell death. Notably, calpain-2 activity may be a relatively short-lived event, occurring only towards the end of the cell-death process. Finally, our results support the development of calpain activity detection as a novel in vivo biomarker for RD suitable for combination with non-invasive imaging techniques.