Project description: Background: Neglected tropical diseases (NTDs) primarily affect the poorest populations, often living in remote rural areas, urban slums, or conflict zones. Arboviruses, spread by mosquitoes, are a significant NTD category. Dengue, Chikungunya, and Zika are three arboviruses that affect a large proportion of the population in Latin and South America. Clinical diagnosis of these arboviral diseases is difficult because several arboviruses with similar symptoms circulate concurrently and because serologic tests are often inaccurate owing to cross-reactivity and co-infection with other arboviruses. Objective: The goal of this paper is to present evidence on the state of the art of studies investigating the automatic classification of arboviral diseases to support clinical diagnosis based on Machine Learning (ML) and Deep Learning (DL) models. Method: We carried out a Systematic Literature Review (SLR) in which Google Scholar was searched to identify key papers on the topic. From an initial 963 records (956 from the string-based search and seven from a single backward-snowballing step), only 15 relevant papers were identified. Results: Current research focuses on the binary classification of Dengue, primarily using tree-based ML algorithms. Only one paper used DL. Five papers presented solutions for multi-class problems, covering Dengue (and its variants) and Chikungunya. No papers were identified that investigated models to differentiate between Dengue, Chikungunya, and Zika. Conclusions: An efficient clinical decision support system for arboviral diseases can improve the quality of the entire clinical process, increasing the accuracy of diagnosis and of the associated treatment. It should help physicians in their decision-making and, consequently, improve the use of resources and patients' quality of life.
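The review finds tree-based ML over tabular clinical data to be the dominant approach for binary Dengue classification. As a hedged illustration only, here is a minimal random-forest sketch in scikit-learn; the feature set, labels, and data are hypothetical placeholders, not taken from any reviewed study.

```python
# Minimal sketch of tree-based binary classification of Dengue vs. other
# febrile illness. All features and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical clinical features: e.g. fever days, platelet count, rash, joint pain.
X = rng.random((500, 4))
y = (X[:, 1] < 0.4).astype(int)  # stand-in label: 1 = Dengue, 0 = other

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```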
Project description: Although advances in deep learning systems for image-based medical diagnosis demonstrate their potential to augment clinical decision-making, the effectiveness of physician-machine partnerships remains an open question, in part because physicians and algorithms are both susceptible to systematic errors, especially for the diagnosis of underrepresented populations. Here we present results from a large-scale digital experiment involving board-certified dermatologists (n = 389) and primary-care physicians (n = 459) from 39 countries to evaluate the accuracy of diagnoses submitted by physicians in a store-and-forward teledermatology simulation. In this experiment, physicians were presented with 364 images spanning 46 skin diseases and asked to submit up to four differential diagnoses. Specialists and generalists achieved diagnostic accuracies of 38% and 19%, respectively, but both specialists and generalists were four percentage points less accurate when diagnosing images of dark skin as compared to light skin. Decision support from a fair deep learning system improved the diagnostic accuracy of both specialists and generalists by more than 33%, but exacerbated the gap in the diagnostic accuracy of generalists across skin tones. These results demonstrate that well-designed physician-machine partnerships can enhance the diagnostic accuracy of physicians, illustrating that success in improving overall diagnostic accuracy does not necessarily address bias.
Project description: Alzheimer’s disease is an incurable neurodegenerative disease that mainly affects memory in older people. It occurs worldwide and mainly affects people older than 65 years. Early and accurate diagnosis of this disease is needed. Manual diagnosis by health specialists is error-prone and time-consuming given the large number of patients presenting with the disease. Various techniques have been applied to the diagnosis and classification of Alzheimer’s disease, but more accurate early-diagnosis solutions are needed. The model proposed in this research is a deep learning-based solution using the DenseNet-169 and ResNet-50 CNN architectures for the diagnosis and classification of Alzheimer’s disease. The proposed model classifies Alzheimer’s disease into Non-Dementia, Very Mild Dementia, Mild Dementia, and Moderate Dementia. DenseNet-169 outperformed ResNet-50 in both the training and testing phases: its training and testing accuracies were 0.977 and 0.8382, versus 0.8870 and 0.8192 for ResNet-50. The proposed model is usable for real-time analysis and classification of Alzheimer’s disease.
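As an illustration of the transfer-learning setup described above, the sketch below loads a pretrained DenseNet-169 from torchvision and replaces its classifier head for the four dementia classes. The input size, optimizer, and training details are assumptions; the abstract does not specify them.

```python
# Hedged sketch: pretrained DenseNet-169 adapted for 4-class dementia staging.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Non-Dementia, Very Mild, Mild, Moderate

model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Standard fine-tuning skeleton (data loading and the loop body omitted).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
criterion = nn.CrossEntropyLoss()

dummy = torch.randn(2, 3, 224, 224)   # stand-in for a batch of brain-scan slices
logits = model(dummy)
print(logits.shape)                   # torch.Size([2, 4])
```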
Project description: The increasing rates of neurodevelopmental disorders (NDs) are a growing concern for pregnant women, parents, and the clinicians caring for healthy infants and children. NDs can originate during embryonic development for several reasons. Up to three in 1000 pregnant women carry embryos with brain defects; hence, early detection of embryonic neurodevelopmental disorders (ENDs) is necessary. Related work on embryonic ND classification is very limited and is based on conventional machine learning (ML) methods for feature extraction and classification. The feature extraction in these methods is handcrafted and has several drawbacks. Deep learning methods can learn an optimal representation from raw images without separate image enhancement, segmentation, and feature extraction steps, leading to an effective classification process. This article proposes a new deep learning-based framework for the detection of END. To the best of our knowledge, this is the first study to use deep learning techniques for detecting END. The framework consists of four stages: transfer learning, deep feature extraction, feature reduction, and classification. The framework relies on feature fusion. The results showed that the proposed framework was capable of identifying END from embryonic MRI images of various gestational ages. To verify its efficiency, the results were compared with related work that used embryonic images, and the performance of the proposed framework was competitive. This means the proposed framework can be successfully used for detecting END.
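The four stages named above (transfer learning, deep feature extraction, feature reduction, classification) plus feature fusion can be sketched as follows. The specific backbones (ResNet-50 and DenseNet-169), PCA for reduction, and an SVM classifier are illustrative assumptions, not necessarily the article's choices.

```python
# Hedged sketch of a fused deep-feature pipeline: two pretrained backbones
# as feature extractors, concatenation as fusion, PCA as reduction, SVM as
# the final classifier. Data below are synthetic stand-ins for MRI slices.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import SVC

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
resnet.fc = nn.Identity()              # expose 2048-d deep features
densenet = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1).eval()
densenet.classifier = nn.Identity()    # expose 1664-d deep features

def fused_features(batch):
    """Concatenate deep features from both backbones (feature fusion)."""
    with torch.no_grad():
        return torch.cat([resnet(batch), densenet(batch)], dim=1).numpy()

X = fused_features(torch.randn(32, 3, 224, 224))  # 32 fake embryonic MRI slices
y = [0, 1] * 16                                   # fake END / normal labels

X_reduced = PCA(n_components=16).fit_transform(X)  # feature reduction
clf = SVC().fit(X_reduced, y)                      # classification stage
print(clf.score(X_reduced, y))
```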
Project description: Objective: Alzheimer's disease (AD) is the most common neurodegenerative disorder and has one of the most complex pathogeneses, making effective and clinically actionable decision support difficult. The objective of this study was to develop a novel multimodal deep learning framework to aid medical professionals in AD diagnosis. Materials and methods: We present a Multimodal Alzheimer's Disease Diagnosis framework (MADDi) to accurately detect the presence of AD and mild cognitive impairment (MCI) from imaging, genetic, and clinical data. MADDi is novel in its use of cross-modal attention, which captures interactions between modalities, a method not previously explored in this domain. We perform multi-class classification, a challenging task considering the strong similarities between MCI and AD. We compare with previous state-of-the-art models, evaluate the importance of attention, and examine the contribution of each modality to the model's performance. Results: MADDi classifies MCI, AD, and controls with 96.88% accuracy on a held-out test set. When examining the contribution of different attention schemes, we found that the combination of cross-modal attention with self-attention performed best, while a model with no attention layers performed worst, with a 7.9% difference in F1-scores. Discussion: Our experiments underlined the importance of structured clinical data in helping machine learning models contextualize and interpret the remaining modalities. Extensive ablation studies showed that any multimodal mixture of input features without access to structured clinical information suffered marked performance losses. Conclusion: This study demonstrates the merit of combining multiple input modalities via cross-modal attention to deliver highly accurate AD diagnostic decision support.
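A minimal sketch of the cross-modal attention pattern MADDi is built around: queries from one modality attend to keys and values from another. Embedding dimensions, the fusion head, and the pairing of modalities below are assumptions for illustration.

```python
# Hedged sketch of cross-modal attention over three modality embeddings.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_mod, context_mod):
        # Queries come from one modality, keys/values from another.
        out, _ = self.attn(query_mod, context_mod, context_mod)
        return out

dim = 64
img = torch.randn(8, 1, dim)    # imaging embedding (fake)
gen = torch.randn(8, 1, dim)    # genetic embedding (fake)
cli = torch.randn(8, 1, dim)    # structured clinical embedding (fake)

block = CrossModalBlock(dim)
img_attends_cli = block(img, cli)   # imaging attends to clinical data
gen_attends_cli = block(gen, cli)   # genetics attends to clinical data

fused = torch.cat([img_attends_cli, gen_attends_cli, cli], dim=-1).flatten(1)
logits = nn.Linear(fused.shape[1], 3)(fused)  # MCI / AD / control
print(logits.shape)                            # torch.Size([8, 3])
```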
Project description: Prompt diagnostics and appropriate cancer therapy necessitate the use of gene expression databases. Integrating analytical methods can enhance detection precision by capturing intricate patterns and subtle connections in the data. This study proposes an integrated diagnostic approach combining Empirical Bayes Harmonization (EBS), Jensen-Shannon Divergence (JSD), deep learning, and contour mathematics for cancer detection using gene expression data. EBS preprocesses the gene expression data, while JSD measures the distributional differences between cancerous and non-cancerous samples, providing invaluable insights into gene expression patterns. Deep learning (DL) models are employed for automatic deep feature extraction and to discern complex patterns in the data. Contour mathematics is applied to visualize decision boundaries and regions in the high-dimensional feature space. JSD supplies significant information to the deep learning model, directing it to concentrate on pertinent features associated with cancerous samples, while contour visualization elucidates the model's decision-making process, bolstering interpretability. The combination of JSD, deep learning, and contour mathematics in gene expression analysis presents a promising pathway for precise cancer detection: it taps into the power of deep learning for feature extraction while employing JSD to pinpoint distributional differences and contour mathematics for visual elucidation. The outcomes underscore its potential as a formidable instrument for cancer detection, furnishing crucial insights for timely diagnostics and tailor-made treatment strategies.
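The JSD step can be sketched as follows: score each gene by the Jensen-Shannon divergence between its class-conditional expression distributions and keep the most discriminative genes. The histogram binning, gene count, and data below are illustrative assumptions.

```python
# Hedged sketch: per-gene Jensen-Shannon divergence between cancerous and
# non-cancerous expression distributions, used as a feature-relevance score.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)
expr = rng.normal(size=(200, 50))        # 200 samples x 50 genes (synthetic)
labels = rng.integers(0, 2, size=200)    # 1 = cancerous, 0 = non-cancerous

def gene_jsd(values, labels, bins=20):
    """JS divergence between class-conditional histograms of one gene."""
    edges = np.histogram_bin_edges(values, bins=bins)
    p, _ = np.histogram(values[labels == 1], bins=edges, density=True)
    q, _ = np.histogram(values[labels == 0], bins=edges, density=True)
    # scipy returns the JS distance; squaring gives the divergence.
    return jensenshannon(p + 1e-12, q + 1e-12) ** 2

scores = np.array([gene_jsd(expr[:, g], labels) for g in range(expr.shape[1])])
top_genes = np.argsort(scores)[::-1][:10]  # most class-discriminative genes
print(top_genes)
```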
Project description: Early diagnosis of coronavirus disease 2019 (COVID-19) is essential for controlling the pandemic. COVID-19 has been spreading rapidly all over the world, and no vaccine is yet available. Fast and accurate COVID-19 screening is possible using computed tomography (CT) scan images. The deep learning techniques used in the proposed method are based on convolutional neural networks (CNNs). Our manuscript focuses on differentiating COVID-19 from non-COVID-19 CT scan images using different deep learning techniques. A self-developed model named CTnet-10 was designed for COVID-19 diagnosis, achieving an accuracy of 82.1%. We also tested DenseNet-169, VGG-16, ResNet-50, InceptionV3, and VGG-19; VGG-19 proved superior, with an accuracy of 94.52%. Automated diagnosis of COVID-19 from CT scan images can give doctors a quick and efficient method for COVID-19 screening.
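As a sketch of the best-performing setup reported above, the snippet below adapts a pretrained VGG-19 for the binary COVID/non-COVID decision. Preprocessing and training details are assumptions; CTnet-10's architecture is not specified in the abstract, so it is not reproduced here.

```python
# Hedged sketch: pretrained VGG-19 with its final layer replaced for
# binary COVID-19 vs. non-COVID-19 CT classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
model.eval()

ct_batch = torch.randn(4, 3, 224, 224)  # stand-in for preprocessed CT slices
with torch.no_grad():
    probs = torch.softmax(model(ct_batch), dim=1)
print(probs[:, 1])  # predicted probability of COVID-19 per slice
```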
Project description: Background: Dento-maxillofacial deformities are common problems. Orthodontic-orthognathic surgery is the primary treatment, but accurate diagnosis and careful surgical planning are essential for optimal outcomes. This study aimed to establish and verify a machine learning-based decision support system for the treatment of dento-maxillofacial malformations. Methods: Patients (n = 574) with dento-maxillofacial deformities who underwent spiral CT from January 2015 to August 2020 were enrolled to train diagnostic models based on five different machine learning algorithms; the diagnostic performances were compared with expert diagnoses. Accuracy, sensitivity, specificity, and area under the curve (AUC) were calculated. The adaptive artificial bee colony algorithm was employed to formulate the orthognathic surgical plan, which was subsequently evaluated by maxillofacial surgeons in a cohort of 50 patients. The objective evaluation included the difference in bone position between the artificial intelligence (AI)-generated and actual surgical plans, along with discrepancies in postoperative cephalometric analysis outcomes. Results: The binary relevance extreme gradient boosting model performed best, with diagnostic success rates above 90% for six different kinds of dento-maxillofacial deformities; the exception was maxillary overdevelopment (89.27%). AUC was above 0.88 for all diagnostic types. The median score for the surgical plans was 9 and improved after human-computer interaction. There was no statistically significant difference between the actual and AI-generated plans. Conclusions: Machine learning algorithms are effective for the diagnosis and surgical planning of dento-maxillofacial deformities and help improve diagnostic efficiency, especially in lower-tier medical centers.
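The binary relevance extreme gradient boosting model can be sketched as one independent XGBoost classifier per deformity type, for example via scikit-learn's MultiOutputClassifier. The feature layout and label set below are illustrative assumptions, not the study's actual cephalometric inputs.

```python
# Hedged sketch of binary relevance XGBoost: one binary classifier per
# deformity label, allowing co-occurring diagnoses. Data are synthetic.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
X = rng.random((574, 30))                # cephalometric-style measurements (fake)
Y = rng.integers(0, 2, size=(574, 6))    # 6 deformity labels, possibly co-occurring

# Binary relevance: fit an independent binary XGBoost model per label.
model = MultiOutputClassifier(XGBClassifier(n_estimators=100, eval_metric="logloss"))
model.fit(X, Y)
print(model.predict(X[:3]))              # one 0/1 prediction per deformity type
```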
Project description: Mental Status Assessment (MSA) holds significant importance in psychiatry. In recent years, several studies have leveraged Electroencephalogram (EEG) technology to gauge an individual's mental state or level of depression. This study introduces a novel multi-tier ensemble learning approach that integrates multiple EEG bands for mental state or depression assessment. Initially, the EEG signal is divided into eight sub-bands, and a Long Short-Term Memory (LSTM)-based Deep Neural Network (DNN) model is trained for each band. Subsequently, the multi-band EEG frequency models are integrated, and the mental state or depression level is evaluated, through a two-tier ensemble learning approach based on Multiple Linear Regression (MLR). The authors conducted numerous experiments to validate the performance of the proposed method under different evaluation metrics. For clarity and conciseness, the research employs a simple, commercially available single-channel EEG sensor, positioned at FP1, to collect data from 57 subjects (49 depressed and 18 healthy subjects). The obtained results (accuracy 0.897, F1-score 0.921, precision 0.935, negative predictive value 0.829, recall 0.908, specificity 0.875, and AUC 0.8917) provide evidence of the superior performance of the proposed method compared with other ensemble learning techniques. This method is not only effective but also has the potential to significantly enhance the accuracy of depression assessment.
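A compact sketch of the two-tier scheme described above: one LSTM model per EEG sub-band (tier 1), with the per-band scores combined by multiple linear regression (tier 2). Band extraction, window length, and the stacking layout are assumptions; the LSTMs below are untrained stand-ins.

```python
# Hedged sketch: per-band LSTM scorers stacked with an MLR combiner.
import torch
import torch.nn as nn
import numpy as np
from sklearn.linear_model import LinearRegression

class BandLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 1)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

n_bands, n_subjects, T = 8, 57, 256
band_models = [BandLSTM() for _ in range(n_bands)]

# Stand-in sub-band signals; real input would come from band-pass filtering FP1.
signals = torch.randn(n_bands, n_subjects, T, 1)

# Tier 1: per-band depression scores from each LSTM (untrained here).
with torch.no_grad():
    scores = np.stack([m(signals[b]).numpy() for b, m in enumerate(band_models)], axis=1)

# Tier 2: MLR combines the 8 band scores into the final assessment.
y = np.random.randint(0, 2, size=n_subjects)  # fake depressed/healthy labels
mlr = LinearRegression().fit(scores, y)
print((mlr.predict(scores) > 0.5).astype(int)[:10])
```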
Project description: Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. Early and accurate diagnosis of AD is crucial for effective intervention and disease management. In recent years, deep learning techniques have shown promising results in medical image analysis, including AD diagnosis from neuroimaging data. However, the lack of interpretability in deep learning models hinders their adoption in clinical settings, where explainability is essential for gaining trust and acceptance from healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer's disease that leverages deep transfer learning and ensemble modeling. The proposed framework aims to enhance the interpretability of deep learning models by incorporating XAI techniques, allowing clinicians to understand the decision-making process and providing valuable insights into disease diagnosis. Using popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performances on a comprehensive dataset. The proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), demonstrated superior accuracy, precision, recall, and F1 scores compared to the individual models, reaching up to 95%. To further enhance interpretability and transparency in Alzheimer's diagnosis, we introduced a novel model achieving an accuracy of 96%. This model incorporates explainable AI techniques, including saliency maps and Grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only contributes to the model's accuracy but also provides clinicians and researchers with visual insight into the neural regions influencing the diagnosis. Our findings showcase the potential of combining deep transfer learning with explainable AI for Alzheimer's disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.
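A hedged sketch of ensemble inference in the spirit of Ensemble-1 (VGG16 + VGG19): average the two networks' softmax outputs. The fusion rule, the class count, and the omission of the Grad-CAM wiring are assumptions made for brevity; the abstract does not specify them.

```python
# Hedged sketch: softmax-averaging ensemble of two pretrained CNNs.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed dementia-stage classes

def adapt(vgg):
    """Replace the final layer for the assumed class count."""
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)
    return vgg.eval()

vgg16 = adapt(models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1))
vgg19 = adapt(models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1))

x = torch.randn(2, 3, 224, 224)  # stand-in neuroimaging batch
with torch.no_grad():
    probs = (torch.softmax(vgg16(x), 1) + torch.softmax(vgg19(x), 1)) / 2
print(probs.argmax(dim=1))       # ensemble prediction per image
```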