Project description: In this paper, a study is conducted to explore the ability of deep learning to recognize pulmonary diseases from electronically recorded lung sounds. The selected dataset included a total of 103 patients whose stethoscope lung sounds were recorded locally at King Abdullah University Hospital, Jordan University of Science and Technology, Jordan. In addition, data from 110 patients were added to the dataset from the publicly available challenge database of the International Conference on Biomedical and Health Informatics (ICBHI). Initially, all signals were checked to have a sampling frequency of 4 kHz and segmented into 5 s segments. Several preprocessing steps were then undertaken to ensure smoother, less noisy signals: wavelet smoothing, displacement artifact removal, and z-score normalization. The deep learning network architecture consisted of two stages: convolutional neural networks (CNN) and bidirectional long short-term memory (BDLSTM) units. Training of the model was evaluated with tenfold cross-validation using several performance metrics, including Cohen's kappa, accuracy, sensitivity, specificity, precision, and F1-score. The developed algorithm achieved the highest average accuracy of 99.62% with a precision of 98.85% in classifying patients by pulmonary disease type using CNN + BDLSTM. Furthermore, a total agreement of 98.26% was obtained between the predictions and the original classes within the training scheme. This study paves the way towards implementing deep learning models in clinical settings to assist clinicians in decision making related to the recognition of pulmonary diseases. Supplementary information: The online version contains supplementary material available at 10.1007/s12652-021-03184-y.
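A minimal sketch of the two-stage CNN + BDLSTM pipeline described above, written with tf.keras. Only the 4 kHz sampling rate and the 5 s segment length come from the description; the layer counts, filter sizes, and number of disease classes are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES = 8        # assumption: number of pulmonary disease classes
SEG_LEN = 4000 * 5   # 5-s segments sampled at 4 kHz (from the description)

model = models.Sequential([
    layers.Input(shape=(SEG_LEN, 1)),
    # CNN stage: extract local acoustic features from the raw segment
    layers.Conv1D(16, kernel_size=64, strides=4, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=32, strides=2, activation="relu"),
    layers.MaxPooling1D(4),
    # BDLSTM stage: model temporal context in both directions
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Segments would be wavelet-smoothed and z-score normalized before being fed to the network, per the preprocessing steps listed above.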
Project description: Forecasting lower limb trajectories can improve the operation of assistive devices and minimise the risk of tripping and balance loss. The aim of this work was to examine four Long Short-Term Memory (LSTM) neural network architectures (Vanilla, Stacked, Bidirectional, and Autoencoder) in predicting the future trajectories of lower limb kinematics, i.e. Angular Velocity (AV) and Linear Acceleration (LA). Kinematics data (LA and AV) of the foot, shank, and thigh were collected from 13 male and 3 female participants (28 ± 4 years old, 1.72 ± 0.07 m in height, 66 ± 10 kg in mass) who walked for 10 minutes at their preferred walking speed (4.34 ± 0.43 km·h⁻¹) and at an imposed speed (5 km·h⁻¹, 15.4% ± 7.6% faster) on a 0% gradient treadmill. The sliding window technique was adopted for training and testing the LSTM models, with kinematics time-series data totalling 10,500 strides. Results based on leave-one-out cross-validation suggest that the LSTM autoencoder is the best predictor of lower limb kinematics trajectories up to 0.1 s ahead. The normalised mean squared error, evaluated on trajectory predictions at each time step, ranged from 2.82% to 5.31% for the LSTM autoencoder. The ability to predict future lower limb motions may have a wide range of applications, including the design and control of bionics, allowing improved human-machine interfaces and mitigating the risk of falls and balance loss.
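The sliding-window setup and the autoencoder variant lend themselves to a short tf.keras sketch. The 18-channel input (three segments × three-axis AV and LA), the 100-sample input window, and the 10-step prediction horizon are assumptions not stated in the description.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CH = 18      # assumption: 3 segments x (3-axis AV + 3-axis LA)
WIN_IN = 100   # assumption: input window length (samples)
WIN_OUT = 10   # assumption: samples covering the ~0.1 s horizon

def sliding_windows(series, win_in=WIN_IN, win_out=WIN_OUT):
    """Split a (time, channels) array into input/target window pairs."""
    X, Y = [], []
    for t in range(len(series) - win_in - win_out + 1):
        X.append(series[t:t + win_in])
        Y.append(series[t + win_in:t + win_in + win_out])
    return np.array(X), np.array(Y)

model = models.Sequential([
    layers.Input(shape=(WIN_IN, N_CH)),
    layers.LSTM(64),                             # encoder: compress the window
    layers.RepeatVector(WIN_OUT),                # seed the decoder sequence
    layers.LSTM(64, return_sequences=True),      # decoder
    layers.TimeDistributed(layers.Dense(N_CH)),  # per-step kinematics output
])
model.compile(optimizer="adam", loss="mse")
```

Training then amounts to `X, Y = sliding_windows(walking_data)` followed by `model.fit(X, Y)`, with one participant held out per fold for leave-one-out cross-validation.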
Project description: Image analysis in histopathology provides insights into the microscopic examination of tissue for disease diagnosis, prognosis, and biomarker discovery. Particularly for cancer research, precise classification of histopathological images is the ultimate objective of the image analysis. Here, the time-frequency time-space long short-term memory network (TF-TS LSTM), developed for the classification of time series, is applied to classifying histopathological images. The deep learning model is empowered by the use of sequential time-frequency and time-space features extracted from the images. Furthermore, unlike conventional classification practice, a class-modeling strategy is designed to leverage the learning power of the TF-TS LSTM. Tests on several datasets of histopathological images with haematoxylin-and-eosin and immunohistochemistry stains demonstrate the strong capability of the artificial intelligence (AI)-based approach to produce very accurate classification results. The proposed approach has the potential to be an AI tool for robust classification of histopathological images.
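As a rough illustration of sequence-based image classification in the spirit of the TF-TS LSTM, the sketch below treats each image row as one time step of an LSTM classifier. The actual time-frequency and time-space feature extraction in the paper is more elaborate; the patch size and class count here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

H, W = 128, 128  # assumption: image patch size after resizing
N_CLASSES = 2    # assumption: e.g. tumour vs. non-tumour

model = models.Sequential([
    # input shape (H, W): each of the H rows is one LSTM time step
    # with W pixel intensities as features
    layers.Input(shape=(H, W)),
    layers.LSTM(128),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```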
Project description: Current practice in building QSAR models usually involves computing a set of descriptors for the training set compounds, applying a descriptor-selection algorithm, and finally using a statistical fitting method to build the model. In this study, we explored the prospects of building good-quality, interpretable QSARs for large and diverse datasets without using any pre-calculated descriptors. We used different forms of Long Short-Term Memory (LSTM) neural networks to achieve this, trained directly on either traditional SMILES codes or a new linear molecular notation developed as part of this work. Three endpoints were modeled: Ames mutagenicity, inhibition of P. falciparum Dd2, and inhibition of Hepatitis C Virus, with training sets ranging from 7,866 to 31,919 compounds. To boost the interpretability of the prediction results, an attention-based machine learning mechanism, used jointly with a bidirectional LSTM, was applied to detect structural alerts for the mutagenicity dataset. Traditional fragment-descriptor-based models were used for comparison. According to the results of the external and cross-validation experiments, the overall prediction accuracies of the LSTM models were close to those of the fragment-based models. However, the LSTM models were superior in predicting test chemicals that are dissimilar to the training set compounds, a coveted quality of QSAR models in real-world applications. In summary, it is possible to build QSAR models using LSTMs without pre-computed traditional descriptors, and such models are far from being "black boxes." We hope that this study will be helpful in bringing large, descriptor-less QSARs into mainstream use.
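A hedged sketch of the attention-plus-bidirectional-LSTM idea for SMILES input, in tf.keras. The vocabulary size, sequence length, unit counts, and the simple additive attention used here are assumptions; the per-character attention weights are what would be inspected when looking for structural alerts.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB = 40     # assumption: size of the SMILES character vocabulary
MAX_LEN = 200  # assumption: maximum SMILES length after padding

inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB, 64)(inp)                     # character embeddings
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# simple additive attention: one weight per character position;
# high-weight substructures hint at structural alerts
score = layers.Dense(1, activation="tanh")(h)            # (batch, MAX_LEN, 1)
alpha = layers.Softmax(axis=1)(score)                    # weights sum to 1 over positions
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, alpha])
out = layers.Dense(1, activation="sigmoid")(context)     # e.g. Ames: mutagenic or not
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```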
Project description: The networks proposed here show how neurons can be connected to form flip-flops, the basic building blocks in sequential logic systems. The novel neural flip-flops (NFFs) are explicit, dynamic, and can generate known phenomena of short-term memory. For each network design, all neurons, connections, and types of synapses are shown explicitly. The neurons' operation depends only on explicitly stated, minimal properties of excitation and inhibition. This operation is dynamic in the sense that the level of neuron activity is the only cellular change, making the NFFs' operation consistent with the speed of most brain functions. Memory tests have shown that certain neurons fire continuously at a high frequency while information is held in short-term memory. These neurons exhibit seven characteristics associated with memory formation, retention, retrieval, termination, and errors. One of the neurons in each of the NFFs produces all of the characteristics. This neuron and a second neighboring neuron together predict eight unknown phenomena. These predictions can be tested by the same methods that led to the discovery of the first seven phenomena. NFFs, together with a decoder from a previous paper, suggest a resolution to the longstanding controversy of whether short-term memory depends on neurons firing persistently or in brief, coordinated bursts. Two novel NFFs are composed of two and four neurons. Their designs follow directly from a standard electronic flip-flop design by moving each negation symbol from one end of the connection to the other. This does not affect the logic of the network, but it changes the logic of each component to a logic function that can be implemented by a single neuron. This transformation is reversible and is apparently new to engineering as well as neuroscience.
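As a toy illustration of the latching behaviour described above, the sketch below simulates two cross-inhibiting units in discrete time, each firing unless inhibited, which is NOR-like logic. This is a generic bistable latch under those assumptions, not the paper's exact NFF wiring.

```python
# Toy discrete-time latch: two units with mutual inhibition.
# Each unit fires (1) unless inhibited by the other unit or by its
# control input -- NOR-like logic implementable by a single neuron.
def step(q, qbar, s, r):
    q = int(not (r or qbar))   # Q fires unless inhibited by R or Q'
    qbar = int(not (s or q))   # Q' fires unless inhibited by S or the new Q
    return q, qbar

q, qbar = 0, 1                 # start in the "0" state
pulses = [(0, 0), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0)]  # set, hold, reset
for s, r in pulses:
    q, qbar = step(q, qbar, s, r)
    print(f"S={s} R={r} -> Q={q}")
# Q latches to 1 one step after the set pulse, holds it with no input
# (the short-term memory), and returns to 0 after the reset pulse.
```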
Project description: Cardiac arrhythmia is a leading cause of cardiovascular disease, with a high fatality rate worldwide. Timely diagnosis of cardiac arrhythmias, characterized by an irregular and fast heart rate, may help lower the risk of stroke. Electrocardiogram (ECG) signals have been widely used to identify arrhythmias because they are acquired non-invasively. However, the manual process is error-prone and time-consuming. A better alternative is to utilize deep learning models for early automatic identification of cardiac arrhythmia, thereby enhancing diagnosis and treatment. In this article, a novel deep learning model combining a convolutional neural network and bidirectional long short-term memory is proposed for arrhythmia classification. Specifically, the classification comprises five classes: non-ectopic (N), supraventricular ectopic (S), ventricular ectopic (V), fusion (F), and unknown (Q) beats. The proposed model is trained, validated, and tested on the MIT-BIH and St. Petersburg datasets separately, and performance was measured in terms of precision, accuracy, recall, specificity, and F1-score. The results show that the proposed model achieves training, validation, and testing accuracies of 100%, 98%, and 98%, respectively, on the MIT-BIH dataset; lower accuracies were observed on the St. Petersburg dataset. The performance of the proposed model on the MIT-BIH dataset is also compared with that of existing models based on the same dataset.
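The per-class metrics named above can all be derived from a confusion matrix. Below is a small sketch using scikit-learn and NumPy over the five beat classes; the labels and predictions are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["N", "S", "V", "F", "Q"]
y_true = np.array([0, 1, 2, 2, 3, 4, 0])  # placeholder ground-truth labels
y_pred = np.array([0, 1, 2, 1, 3, 4, 0])  # placeholder model predictions

cm = confusion_matrix(y_true, y_pred, labels=range(len(CLASSES)))
for i, name in enumerate(CLASSES):
    tp = cm[i, i]
    fn = cm[i].sum() - tp           # beats of this class missed
    fp = cm[:, i].sum() - tp        # other beats mislabelled as this class
    tn = cm.sum() - tp - fn - fp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0          # recall = sensitivity
    spec = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    print(f"{name}: P={prec:.2f} R={rec:.2f} Spec={spec:.2f} F1={f1:.2f}")
```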
Project description: Visual short-term memory (VSTM) relies on a distributed network including sensory-related, posterior regions of the brain and frontal areas associated with attention and cognitive control. To characterize the fine temporal details of processing within this network, we recorded event-related potentials (ERPs) while human subjects performed a recognition-memory task. The task's difficulty was graded by varying the perceptual similarity between the items held in memory and the probe used to access memory. The evaluation of VSTM's contents against a test stimulus produced clear similarity-dependent differences in ERPs as early as 156 ms after probe onset. Posterior recording sites were the first to reflect the difficulty of the analysis, preceding their frontal counterparts by about 50 ms. Our results suggest an initial feed-forward interaction underlying stimulus-memory comparisons, consistent with the idea that visual areas contribute to temporary storage of visual information for use in ongoing tasks. This study provides a first look into early neural activity underlying the processing of visual information in short-term memory.
Project description: Behavioral research has led to the view that items in short-term memory can be parsed into two categories: a single item in the focus of attention that is available for immediate cognitive processing and a small set of other items that are in a heightened state of activation but require retrieval for further use. We examined this distinction by using an item-recognition task. The results show that the item in the focus of attention is represented by increased activation in inferior temporal representational cortices relative to other information in short-term memory. Functional connectivity analyses suggest that activation of these inferior temporal regions is maintained via frontal- and posterior-parietal contributions. By contrast, other items in short-term memory demand retrieval mechanisms that are represented by increased activation in the medial temporal lobe and left mid-ventrolateral prefrontal cortex. These results show that there are two distinctly different sorts of access to information in short-term memory, and that access by retrieval operations makes use of neural machinery similar to that used in long-term memory retrieval.
Project description: Seizure prediction could improve quality of life for patients by removing uncertainty and providing an opportunity for acute treatments. Most seizure prediction models use feature engineering to process the EEG recordings. Long Short-Term Memory (LSTM) neural networks are a recurrent neural network architecture that can capture temporal dynamics and, therefore, potentially analyze EEG signals without feature engineering. In this study, we tested whether LSTMs could classify unprocessed EEG recordings to make seizure predictions. Long-term intracranial EEG data from 10 patients were used. 10-s segments of EEG were input to LSTM models that were trained to classify the EEG signal. The final seizure prediction was generated from 5 outputs of the LSTM model over 50 s and combined with time information to account for seizure cycles. The LSTM models made predictions significantly better than a random predictor. When compared with other publications using the same dataset, our model performed better than several others and was comparable to the best models published to date. Furthermore, this framework could still produce predictions significantly better than chance when the experimental paradigm design was altered, without the need to redo feature engineering. Removing the need for feature engineering is an advance on previously published models. This framework can be applied to many different patients' needs and a variety of acute interventions, and it opens the possibility of personalized seizure predictions that can be altered to meet daily needs.
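A hedged sketch of the aggregation step described above: five consecutive per-segment LSTM probabilities are pooled over the 50-s horizon and modulated by a circadian prior to reflect seizure cycles. The pooling rule, prior, and alarm threshold below are assumptions, not the authors' exact scheme.

```python
import numpy as np

def predict_risk(lstm_probs, hour_of_day, circadian_prior):
    """lstm_probs: five per-segment seizure probabilities (one per 10 s).
    circadian_prior: 24-element array of relative seizure likelihood
    by hour, standing in for the paper's 'time information'."""
    base = float(np.mean(lstm_probs))            # pool the 5 segment scores
    return base * circadian_prior[hour_of_day]   # modulate by seizure cycle

prior = np.ones(24)
prior[2:6] = 1.5                                 # assumed nocturnal risk peak
risk = predict_risk([0.2, 0.4, 0.7, 0.6, 0.8],
                    hour_of_day=3, circadian_prior=prior)
alarm = risk > 0.5                               # assumed alarm threshold
print(risk, alarm)
```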
Project description: Antibiotic resistance is an increasing public health threat. To combat it, a fast method to determine the antibiotic susceptibility of infecting pathogens is required. Here we present an optical imaging-based method that tracks the motion of single bacterial cells and generates a model to classify active and inactive cells based on the motion patterns of the individual cells. The model includes an image-processing algorithm to segment individual bacterial cells and track their motion over time, and a deep learning algorithm (a Long Short-Term Memory network) to learn and determine whether a bacterial cell is active or inactive. By applying the model to human urine specimens spiked with an Escherichia coli lab strain, we show that the method can accurately perform antibiotic susceptibility testing in as little as 30 minutes for five commonly used antibiotics.
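A minimal tf.keras sketch of the classification stage: per-cell motion time series, as produced by the tracking step, fed to an LSTM that outputs the probability that a cell is active. The frame count, feature set, and unit count are assumptions; the segmentation and tracking code is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES = 120  # assumption: frames of tracked motion per cell
N_FEATS = 2     # assumption: per-frame x/y centroid displacement

model = models.Sequential([
    layers.Input(shape=(N_FRAMES, N_FEATS)),
    layers.LSTM(32),                          # summarize the motion pattern
    layers.Dense(1, activation="sigmoid"),    # P(cell is active)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Susceptibility would then be read out by comparing the fraction of active cells with and without antibiotic exposure over the 30-minute window.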