Project description: Objective: This study aims to develop and validate a convolutional neural network (CNN)-based algorithm for the automatic selection of informative frames in flexible laryngoscopic videos. The classifier has the potential to aid in the development of computer-aided diagnosis systems and to reduce data processing time for clinician-computer scientist teams. Methods: A dataset of 22,132 laryngoscopic frames was extracted from 137 flexible laryngostroboscopic videos from 115 patients; 55 videos were from healthy patients with no laryngeal pathology and 82 were from patients with vocal fold polyps. The extracted frames were manually labeled as informative or uninformative by two independent reviewers based on vocal fold visibility, lighting, focus, and camera distance, yielding 18,114 informative and 4,018 uninformative frames. The dataset was split into training and test sets. A pre-trained ResNet-18 model was fine-tuned via transfer learning to classify frames as informative or uninformative, with hyperparameters set using cross-validation. The primary outcome was precision for the informative class; secondary outcomes were precision, recall, and F1-score for all classes. The frame processing rates of the model and a human annotator were also compared. Results: On a hold-out test set of 4,438 frames, the classifier achieved an informative-frame precision, recall, and F1-score of 94.4%, 90.2%, and 92.3%, respectively, and processed frames 16 times faster than a human annotator. Conclusion: The CNN-based classifier demonstrates high precision in identifying informative frames in flexible laryngostroboscopic videos and has the potential to aid researchers in dataset creation for computer-aided diagnosis systems by automatically extracting relevant frames from laryngoscopic videos.
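As a rough illustration of the transfer-learning setup this description outlines, here is a minimal PyTorch/torchvision sketch: an ImageNet-pretrained ResNet-18 with its final layer replaced by a two-way head (informative vs. uninformative). The optimizer, learning rate, and batch shapes are illustrative assumptions, not the study's reported settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and swap the classifier head
# for a 2-way output: informative (1) vs. uninformative (0).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative lr

def train_step(frames, labels):
    """One optimization step on a batch of laryngoscopic frames."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for preprocessed video frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(frames, labels))
```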
Project description: Lung cancer is one of the deadliest diseases worldwide, representing about 26% of all cancers in 2017, and the five-year cure rate is only 18% despite great recent progress in diagnosis and treatment. Lung nodule classification is a key step before diagnosis, especially since automatic classification can assist clinicians by providing a valuable second opinion. Modern computer vision and machine learning technologies enable fast and reliable CT image classification, and the area has attracted intense interest for its efficiency and labor savings. This paper presents a systematic review of the state of the art in automatic lung nodule classification, covering published works selected from the Web of Science, IEEE Xplore, and DBLP databases up to June 2018. Each paper is critically reviewed with respect to its objective, methodology, dataset, and performance evaluation; mainstream algorithms are described and their generic structures summarized. Our review reveals that deep learning-based lung nodule classification has become dominant owing to its excellent performance. We conclude that consistency of research objectives and integration of data deserve more attention, and that collaboration among developers, clinicians, and other parties should be strengthened.
Project description: Active pulmonary tuberculosis (ATB), which is more infectious and has a higher mortality rate than non-active pulmonary tuberculosis (non-ATB), must be diagnosed accurately and promptly to prevent the disease from spreading and causing deaths. However, traditional differential diagnosis of active pulmonary tuberculosis involves bacteriological testing, sputum culturing, and radiological image reading, which are time-consuming and labour-intensive. An artificial intelligence model for ATB differential diagnosis would therefore offer great assistance in clinical practice. In this study, computed tomography (CT) images and corresponding clinical information of 1,160 ATB patients and 1,131 non-ATB patients were collected and divided into training, validation, and testing sets. A three-dimensional (3D) Nested UNet model was used to delineate lung field regions in the CT images, and three pre-trained deep learning models (3D VGG-16, 3D EfficientNet, and 3D ResNet-50) were used for the classification and differential diagnosis task. We also collected an external testing set of 100 ATB and 100 non-ATB cases for further validation. On the internal and external testing sets, the 3D ResNet-50 model outperformed the other models, reaching AUCs of 0.961 and 0.946, respectively. It also reached higher diagnostic accuracy than experienced radiologists, while reading and diagnosing CT images 10 times faster than human experts. The model can additionally visualize clinician-interpretable lung lesion regions important for differential diagnosis, making it a powerful tool for assisting ATB diagnosis. In conclusion, we developed an auxiliary tool to differentiate active from non-active pulmonary tuberculosis, which has broad prospects for bedside application.
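For a sense of what the 3D classification stage looks like in code, here is a minimal sketch using MONAI's 3D ResNet-50 factory as a stand-in for the paper's model; the single-channel input, two-class output, and toy volume size are assumptions for illustration, not the study's configuration.

```python
import torch
from monai.networks.nets import resnet50  # MONAI 3D-capable ResNet factory

# One-channel CT volume in, two classes out (ATB vs. non-ATB).
model = resnet50(spatial_dims=3, n_input_channels=1, num_classes=2)

# Toy volume: (batch, channel, depth, height, width); real inputs would be
# the lung-field regions delineated by the segmentation model.
volume = torch.randn(1, 1, 64, 96, 96)
logits = model(volume)
probs = torch.softmax(logits, dim=1)  # per-class probability
```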
Project description: Background: A colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to distinguish colorectal images of the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. Method: We trained a CNN on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images categorized into seven anatomical locations: the terminal ileum, the cecum, ascending colon to transverse colon, descending colon to sigmoid colon, the rectum, the anus, and indistinguishable parts. The images were taken during total colonoscopies performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses of endoscopists and those of the CNN. The main outcomes were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images. Results: The constructed CNN recognized the anatomical locations of colonoscopy images with the following areas under the curve: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for ascending colon to transverse colon; 0.846 for descending colon to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. Overall, the CNN correctly recognized 66.6% of test images. Conclusion: We constructed a CNN system with clinically relevant performance for recognizing the anatomical locations of colonoscopy images, a first step toward a CAD system that supports colonoscopy and provides quality assurance for the procedure.
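The per-location AUCs quoted above imply a one-vs-rest evaluation over the seven categories. Here is a short scikit-learn sketch of that metric under assumed array layouts (integer labels and a per-class probability matrix); the class ordering and the toy data are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Assumed label ordering for the seven anatomical categories.
classes = ["terminal ileum", "cecum", "ascending-transverse",
           "descending-sigmoid", "rectum", "anus", "indistinguishable"]

def per_class_auc(y_true, y_score):
    """One-vs-rest ROC AUC for each anatomical location."""
    y_bin = label_binarize(y_true, classes=list(range(len(classes))))
    return {name: roc_auc_score(y_bin[:, i], y_score[:, i])
            for i, name in enumerate(classes)}

# Toy predictions standing in for the CNN's test-set outputs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, size=200)
y_score = rng.dirichlet(np.ones(7), size=200)  # rows sum to 1
print(per_class_auc(y_true, y_score))
```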
Project description: Tumor histology is an important predictor of therapeutic response and outcomes in lung cancer. Tissue sampling for pathologist review is the most reliable method of histology classification; however, recent advances in deep learning for medical image analysis suggest that radiologic data can further describe disease characteristics and support risk stratification. In this study, we propose a radiomics approach to predicting non-small cell lung cancer (NSCLC) tumor histology from non-invasive, standard-of-care computed tomography (CT) data. We trained and validated convolutional neural networks (CNNs) on a dataset of 311 early-stage NSCLC patients receiving surgical treatment at Massachusetts General Hospital (MGH), focusing on the two most common histological types: adenocarcinoma (ADC) and squamous cell carcinoma (SCC). The CNNs predicted tumor histology with an AUC of 0.71 (p = 0.018). We also found that applying machine learning classifiers such as k-nearest neighbors (kNN) and support vector machines (SVM) to CNN-derived quantitative radiomics features yielded comparable discriminative performance, with AUC of up to 0.71 (p = 0.017). Our best-performing CNN functioned as a robust probabilistic classifier on heterogeneous test sets, with qualitatively interpretable visual explanations for its predictions. Deep learning-based radiomics can identify histological phenotypes in lung cancer; it has the potential to augment existing approaches and serve as a corrective aid for diagnosticians.
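The "classical classifier on CNN-derived features" variant can be sketched as below: penultimate-layer activations of a CNN used as radiomics-style features, then an SVM fitted on them. The ResNet-18 backbone, patch sizes, and toy labels are assumptions standing in for the study's own network and data.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pretrained backbone with the classifier head removed, so the forward
# pass returns the 512-d penultimate features.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, H, W) tumor patches -> (N, 512) feature matrix."""
    return backbone(batch).numpy()

# Toy patches and histology labels (ADC = 0, SCC = 1).
patches = torch.randn(12, 3, 224, 224)
feats = extract_features(patches)
labels = [0, 1] * 6
clf = SVC().fit(feats, labels)  # kNN would slot in the same way
print(clf.predict(feats))
```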
Project description: Computer-aided diagnostic (CAD) systems provide fast and reliable diagnoses from medical images. In this paper, a CAD system is proposed that analyzes lung CT images, automatically segments the lungs, and classifies each lung as normal or cancerous. Using a lung CT dataset from 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Second, histogram analysis is combined with thresholding and morphological operations to segment the lung regions and extract each lung separately. Third, the amplitude-modulation frequency-modulation (AM-FM) method is used to extract features from the regions of interest (ROIs). The most significant AM-FM features are then selected using partial least squares regression (PLSR) for the classification step. Finally, k-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers are applied to the selected AM-FM features, and the performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that the proposed CAD system successfully differentiates between normal and cancerous lungs, achieving 95% accuracy with the linear classifier.
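A loose SciPy/scikit-image sketch of the denoising and segmentation steps is given below. Otsu thresholding stands in for the paper's histogram-based threshold, and the filter and structuring-element sizes are illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.signal import wiener
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import binary_closing, disk

def segment_lungs(ct_slice):
    """Denoise a CT slice, threshold it, and label the lung regions."""
    denoised = wiener(ct_slice.astype(float), mysize=5)   # Wiener prefilter
    mask = denoised < threshold_otsu(denoised)            # lungs are dark on CT
    mask = binary_closing(mask, disk(3))                  # fill small gaps
    return label(mask)                                    # one label per region

# Toy slice standing in for a real CT image.
ct_slice = np.random.rand(128, 128)
regions = segment_lungs(ct_slice)
print(regions.max(), "candidate regions")
```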
Project description: The novel coronavirus, SARS-CoV-2, causes COVID-19 and can be deadly. The ease of its propagation, coupled with its high capacity to cause severe illness and death in infected individuals, makes it a hazard to the community. Chest X-rays are one of the most common, yet most difficult to interpret, radiographic examinations for early diagnosis of coronavirus-related infections. They carry a considerable amount of anatomical and physiological information, but it is sometimes difficult even for an expert radiologist to extract it. Automatic classification using deep learning models can help assess these infections swiftly. Deep CNN models, namely MobileNet, ResNet50, and InceptionV3, were applied in different variations: training from scratch, fine-tuning with adjustment of the learned weights of all layers, and fine-tuning with learned weights plus data augmentation. Fine-tuning with augmentation produced the best results among the pretrained models. The two best-performing models (MobileNet and InceptionV3), selected for ensemble learning, produced accuracies and F-scores of 95.18% and 90.34%, and 95.75% and 91.47%, respectively. The proposed hybrid ensemble model, generated by merging these deep models, achieved a classification accuracy and F-score of 96.49% and 92.97%. On a separately held-out test dataset, the ensemble achieved an accuracy of 94.19% and an F-score of 88.64%. Automatic classification using deep ensemble learning can help radiologists correctly identify coronavirus-related infections in chest X-rays; this swift, computer-aided diagnosis can help save lives and minimize the social and economic impact on society.
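The description does not specify how the two models are merged; a common choice is soft voting, sketched below under that assumption. The mobilenet and inception arguments are assumed to be the fine-tuned Keras models, each ending in a softmax over the infection classes.

```python
import numpy as np

def ensemble_predict(x, mobilenet, inception):
    """Soft-voting ensemble: average the two CNNs' class probabilities
    and return the highest-probability class per image."""
    p1 = mobilenet.predict(x)   # (N, n_classes) softmax outputs
    p2 = inception.predict(x)
    return np.argmax((p1 + p2) / 2.0, axis=1)
```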
Project description: Background: Accurate cephalometric analysis plays a vital role in diagnosis and subsequent surgical planning in orthognathic and orthodontic treatment. However, manual digitization of anatomical landmarks in computed tomography (CT) suffers from low accuracy, poor repeatability, and excessive time consumption, and landmark detection is more difficult in individuals with dentomaxillofacial deformities than in normal individuals. This study therefore aims to develop a deep learning model that automatically detects landmarks in CT images of patients with dentomaxillofacial deformities. Methods: Craniomaxillofacial (CMF) CT data of 80 patients with dentomaxillofacial deformities were collected for model development. In each CT image, 77 anatomical landmarks digitized by experienced CMF surgeons were set as the ground truth. 3D UX-Net, a cutting-edge medical image segmentation network, was adopted as the backbone of the model architecture. Moreover, a new region division pattern for CMF structures was designed as a training strategy to optimize the use of computational resources and image resolution. To evaluate the model, several experiments were conducted comparing it with the manual digitization approach. Results: The training and validation sets included 58 and 22 samples, respectively. The developed model accurately detected the 77 landmarks on bone, soft tissue, and teeth with a mean error of 1.81 ± 0.89 mm. Removing the region division before training significantly increased the prediction error (2.34 ± 1.01 mm). For manual digitization, the inter-observer and intra-observer variations were 1.27 ± 0.70 mm and 1.01 ± 0.74 mm, respectively. In all divided regions except the Teeth Region (TR), the model's landmark detection was equivalent to that of experienced CMF surgeons (p > 0.05). Conclusions: The developed model demonstrated excellent performance in detecting craniomaxillofacial landmarks when benchmarked against expert manual digitization, and the region division pattern designed in this study remarkably improved detection accuracy.
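The "mean error in mm" figures above correspond to a mean Euclidean landmark distance, sketched below. The (77, 3) coordinate layout and the voxel-spacing conversion are assumptions for illustration.

```python
import numpy as np

def mean_landmark_error(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """pred, gt: (77, 3) voxel coordinates; spacing: mm per voxel axis.
    Returns mean and std of per-landmark distances in mm."""
    diff_mm = (pred - gt) * np.asarray(spacing)
    errors = np.linalg.norm(diff_mm, axis=1)  # Euclidean distance per landmark
    return errors.mean(), errors.std()

# Toy landmark sets standing in for model predictions and ground truth.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 200, size=(77, 3))
pred = gt + rng.normal(0, 2.0, size=(77, 3))
print(mean_landmark_error(pred, gt, spacing=(0.5, 0.5, 0.5)))
```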
Project description: COVID-19 clinical presentation and prognosis are highly variable, ranging from asymptomatic and paucisymptomatic cases to acute respiratory distress syndrome and multi-organ involvement. We developed a hybrid machine learning/deep learning model that classifies patients into two outcome categories, non-ICU and ICU (intensive care admission or death), using 558 patients admitted to a northern Italian hospital between February and May 2020. A fully 3D patient-level CNN classifier trained on baseline CT images serves as a feature extractor. The extracted features, together with laboratory and clinical data, are fed into a Boruta feature-selection algorithm with SHAP game-theoretical values. A classifier is then built on the reduced feature space using the CatBoost gradient boosting algorithm, reaching a probabilistic AUC of 0.949 on a hold-out test set. The model aims to provide clinical decision support to physicians, giving the probability of belonging to each outcome class together with case-based SHAP interpretation of feature importance.
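A minimal sketch of the final stage is given below: a CatBoost classifier on the reduced feature set, with per-case SHAP values for interpretation. The Boruta selection step is omitted, and the toy feature matrix stands in for the selected CNN, laboratory, and clinical features.

```python
import numpy as np
from catboost import CatBoostClassifier, Pool

# Toy stand-in for the reduced feature space: 558 patients, 20 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(558, 20))
y = rng.integers(0, 2, size=558)  # 0 = non-ICU, 1 = ICU/death

model = CatBoostClassifier(iterations=200, verbose=False)
model.fit(X, y)

# Probability of the severe-outcome class, for decision support.
icu_prob = model.predict_proba(X)[:, 1]

# Case-based interpretation: per-patient, per-feature SHAP contributions
# (last column holds the expected value / base score).
shap_values = model.get_feature_importance(data=Pool(X, y), type="ShapValues")
print(icu_prob[:5], shap_values.shape)
```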