Project description:Purpose: To assess the accuracy and efficacy of a semi-automated deep learning algorithm (DLA)-assisted approach to detect vision-threatening diabetic retinopathy (DR). Methods: We developed a two-step semi-automated DLA-assisted approach to grade fundus photographs for vision-threatening referable DR. Study images were obtained from the Lingtou Cohort Study and were captured at participant enrollment in 2009-2010 ("baseline images") and at annual follow-up visits between 2011 and 2017. First, a validated DLA automatically graded baseline images for referable DR and classified them as positive, negative, or ungradable. Next, each positive image, all other available images from patients who had a positive image, and a 5% random sample of all negative images were selected and regraded by trained human graders. A reference standard diagnosis was assigned once all graders reached a consistent grading outcome, or by a senior ophthalmologist's final diagnosis. The semi-automated DLA-assisted approach thus combined initial DLA screening with subsequent human grading of images identified as high risk. This approach was further validated on the follow-up image datasets, and its time and economic costs were evaluated against fully manual human grading. Results: For the evaluation of baseline images, a total of 33,115 images were included and automatically graded by the DLA. 2,604 images (480 positive results, 624 other available images from participants with a positive result, and 1,500 randomly sampled negative images) were selected and regraded by human graders. The DLA achieved an area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy of 0.953, 0.970, 0.879, and 88.6%, respectively. In further validation on the follow-up image datasets, a total of 88,363 images were graded using this semi-automated approach, with human grading performed on 8,975 selected images. The DLA achieved an AUC, sensitivity, and specificity of 0.914, 0.852, and 0.853, respectively. Compared with fully manual human grading, the semi-automated DLA-assisted approach achieved estimated time and economic cost savings of 75.6% and 90.1%, respectively. Conclusions: The DLA described in this study achieved high accuracy, sensitivity, and specificity in grading fundus images for referable DR. Validated against long-term follow-up datasets, the semi-automated DLA-assisted approach accurately identified suspect cases and minimized misdiagnosis while balancing safety, time, and economic cost.
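The selection rule in this two-step approach (refer all DLA-positive images, all other images from patients with a positive image, and a 5% random sample of negatives for human regrading) can be illustrated with a short sketch. The `dla_grade` field and record layout below are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of the two-step triage logic: DLA grades all images, then a
# subset is routed to human graders. Record fields are hypothetical.
import random

def select_for_human_regrading(records, negative_sample_rate=0.05, seed=42):
    """records: list of dicts like {"patient_id": ..., "image_id": ..., "dla_grade": ...},
    where dla_grade is "positive", "negative", or "ungradable"."""
    positives = [r for r in records if r["dla_grade"] == "positive"]
    positive_patients = {r["patient_id"] for r in positives}

    # All other available images from patients who had at least one positive image.
    other_from_positive_patients = [
        r for r in records
        if r["patient_id"] in positive_patients and r["dla_grade"] != "positive"
    ]

    # A random sample of DLA-negative images (from the remaining patients) for quality assurance.
    negatives = [r for r in records
                 if r["dla_grade"] == "negative" and r["patient_id"] not in positive_patients]
    rng = random.Random(seed)
    negative_sample = rng.sample(negatives, k=int(len(negatives) * negative_sample_rate))

    return positives + other_from_positive_patients + negative_sample
```

Everything not returned by this function would keep its DLA grade, which is where the reported time and cost savings come from.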
Project description:Importance: Convolutional neural networks have recently been applied to ophthalmic diseases; however, the rationale for the outputs generated by these systems is inscrutable to clinicians. A visualization tool is needed that would enable clinicians to understand important exposure variables in real time. Objective: To systematically visualize the convolutional neural networks of 2 validated deep learning models for the detection of referable diabetic retinopathy (DR) and glaucomatous optic neuropathy (GON). Design, Setting, and Participants: The GON and referable DR algorithms were previously developed and validated (holdout method) using 48,116 and 66,790 retinal photographs, respectively, derived from a third-party database (LabelMe) of deidentified photographs from various clinical settings in China. In the present cross-sectional study, a random sample of 100 true-positive photographs and all false-positive cases from each of the GON and DR validation data sets were selected. All data were collected from March to June 2017. The original color fundus images were processed using an adaptive kernel visualization technique. The images were preprocessed by applying a sliding window with a size of 28 × 28 pixels and a stride of 3 pixels to crop images into smaller subimages and produce a feature map. Threshold scales were adjusted to optimal levels for each model to generate heat maps highlighting localized landmarks on the input image. A single optometrist allocated each image to predefined categories based on the generated heat map. Main Outcomes and Measures: Visualization regions of the fundus. Results: In the GON data set, 90 of 100 true-positive cases (90%; 95% CI, 82%-95%) and 15 of 22 false-positive cases (68%; 95% CI, 45%-86%) displayed heat map visualization within regions of the optic nerve head only. Lesions typically seen in cases of referable DR (exudate, hemorrhage, or vessel abnormality) were identified as the most important prognostic regions in 96 of 100 true-positive DR cases (96%; 95% CI, 90%-99%). In 39 of 46 false-positive DR cases (85%; 95% CI, 71%-94%), the heat map displayed visualization of nontraditional fundus regions with or without retinal venules. Conclusions and Relevance: These findings suggest that this visualization method can highlight traditional regions in disease diagnosis, substantiating the validity of the deep learning models investigated. This visualization technique may promote the clinical adoption of these models.
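The sliding-window preprocessing described above (28 × 28 pixel window, stride of 3) can be sketched as a patch-scoring heat map. The `score_patch` callable stands in for the trained model's output on a cropped subimage and is an assumption, not the authors' implementation.

```python
# Minimal sketch of a sliding-window heat map over a fundus image.
import numpy as np

def heat_map(image, score_patch, window=28, stride=3, threshold=0.5):
    """image: HxWx3 NumPy array; score_patch: callable returning a score per subimage.
    Returns a thresholded feature map highlighting the most salient regions."""
    h, w = image.shape[:2]
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    feature_map = np.zeros((rows, cols), dtype=np.float32)

    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            patch = image[y:y + window, x:x + window]
            feature_map[i, j] = score_patch(patch)  # model score for this subimage

    # Keep only values above the threshold, analogous to adjusting threshold scales
    # to generate heat maps of localized landmarks.
    return np.where(feature_map >= threshold, feature_map, 0.0)
```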
Project description:Objective: To develop and validate a real-world screening, guideline-based deep learning (DL) system for referable diabetic retinopathy (DR) detection. Design: This is a multicentre platform development study based on retrospective, cross-sectional data sets. Images were labelled by two-level certified graders as the ground truth. According to the UK DR screening guideline, a DL model based on colour retinal images with five-dimensional classifiers, namely image quality, retinopathy, maculopathy gradability, maculopathy and photocoagulation, was developed. Referable decisions were generated by integrating the output of all classifiers and reported at the image, eye and patient level. The performance of the DL system was compared with that of DR experts. Setting: DR screening programmes from three hospitals and the Lifeline Express Diabetic Retinopathy Screening Program in China. Participants: 83,465 images of 39,836 eyes from 21,716 patients were annotated, of which 53,211 images were used as the development set and 30,254 images were used as the external validation set, split by centre and period. Main outcomes: Accuracy, F1 score, sensitivity, specificity, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), Cohen's unweighted κ and Gwet's AC1 were calculated to evaluate the performance of the DL algorithm. Results: In the external validation set, the five classifiers achieved an accuracy of 0.915-0.980, F1 score of 0.682-0.966, sensitivity of 0.917-0.978, specificity of 0.907-0.981, AUROC of 0.9639-0.9944 and AUPRC of 0.7504-0.9949. Referable DR at the three levels was detected with an accuracy of 0.918-0.967, F1 score of 0.822-0.918, sensitivity of 0.970-0.971, specificity of 0.905-0.967, AUROC of 0.9848-0.9931 and AUPRC of 0.9527-0.9760. With reference to the ground truth, the DL system showed comparable performance (Cohen's κ: 0.86-0.93; Gwet's AC1: 0.89-0.94) with three DR experts (Cohen's κ: 0.89-0.96; Gwet's AC1: 0.91-0.97) in detecting referable lesions. Conclusions: The automatic DL system for the detection of referable DR based on the UK guideline achieved high accuracy in multidimensional classifications and is suitable for large-scale, real-world DR screening.
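The aggregation of the five classifier outputs into image-, eye- and patient-level referable decisions could look like the sketch below. The classifier names follow the abstract, but the dictionary keys and the specific decision rule are illustrative assumptions, not the published integration logic.

```python
# Minimal sketch of integrating per-image classifier outputs into referable decisions.
from collections import defaultdict

def image_referable(pred):
    """pred: dict with hypothetical boolean outputs of the five classifiers for one image."""
    if not pred["image_quality_adequate"]:
        return True  # poor-quality images are conservatively referred
    return (pred["retinopathy_referable"]
            or pred["maculopathy_referable"]
            or pred["photocoagulation_present"]
            or not pred["maculopathy_gradable"])

def aggregate(predictions):
    """predictions: list of dicts with keys patient_id, eye, plus classifier outputs.
    Returns referable flags at the eye and patient level."""
    eye_level, patient_level = defaultdict(bool), defaultdict(bool)
    for p in predictions:
        referable = image_referable(p)
        eye_level[(p["patient_id"], p["eye"])] |= referable
        patient_level[p["patient_id"]] |= referable
    return eye_level, patient_level
```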
Project description:Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. Integration with the clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months; the percentages of participants recommended to be screened at 1, 2, 3, 4 and 5 years were 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while the rate of delayed detection of progression to vision-threatening DR was 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.
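The concordance index reported above measures how well predicted risk ordering agrees with observed time-to-progression under right censoring. The sketch below computes Harrell's concordance index; variable names are illustrative and this is not the DeepDR Plus evaluation code.

```python
# Minimal sketch of Harrell's concordance index for time-to-event predictions.
def concordance_index(times, events, risks):
    """times: observed times; events: 1 if progression observed, 0 if censored;
    risks: predicted risk scores (higher = earlier predicted progression)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:  # subject i progressed before subject j was last observed
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Example: a perfectly ordered prediction yields a concordance index of 1.0.
print(concordance_index([2, 5, 3], [1, 0, 1], [0.9, 0.1, 0.5]))
```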
Project description:Objective: To evaluate the ability of capillary nonperfusion parameters on OCT angiography (OCTA) to predict the development of clinically significant outcomes in eyes with referable nonproliferative diabetic retinopathy (NPDR). Design: Prospective longitudinal observational study. Subjects: In total, 59 patients (74 eyes) with treatment-naive moderate and severe (referable) NPDR. Methods: Patients were imaged with OCTA at baseline and then followed up for 1 year. We evaluated 2 OCTA capillary nonperfusion metrics, vessel density (VD) and geometric perfusion deficits (GPDs), in the superficial capillary plexus, middle capillary plexus (MCP), and deep capillary plexus (DCP). We compared the predictive accuracy of baseline OCTA metrics for clinically significant diabetic retinopathy (DR) outcomes at 1 year. Main outcome measures: Significant clinical outcomes at 1 year, defined as 1 or more of the following: vitreous hemorrhage, center-involving diabetic macular edema, and initiation of treatment with pan-retinal photocoagulation or anti-VEGF injections. Results: Overall, 49 patients (61 eyes) returned for the 1-year follow-up. Geometric perfusion deficits and VD in the MCP and DCP correlated with clinically significant outcomes at 1 year (P < 0.001). Eyes with these outcomes had lower VD and higher GPD, indicating worse nonperfusion of the deeper retinal layers than in eyes that remained free from complication. These differences remained significant (P = 0.046 to < 0.001) when OCTA parameters were incorporated into models that also considered sex, baseline corrected visual acuity, and baseline DR severity. The adjusted receiver operating characteristic curve for DCP GPD achieved an area under the curve (AUC) of 0.929, with a sensitivity of 89% and a specificity of 98%. In a separate analysis focusing on high-risk proliferative diabetic retinopathy outcomes, MCP and DCP GPD and VD remained significantly predictive, with AUCs and sensitivities comparable to the pooled analysis. Conclusions: Evidence of deep capillary nonperfusion at baseline in eyes with clinically referable NPDR can predict short-term DR complications with high accuracy, suggesting that deep retinal ischemia plays an important pathophysiologic role in DR progression. Our results suggest that OCTA may provide additional prognostic benefit to clinical DR staging in high-risk eyes.
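The adjusted ROC analysis described above (an OCTA metric plus sex, baseline visual acuity, and DR severity predicting the 1-year outcome) can be sketched with a logistic model. Column ordering and the operating-point rule (Youden's J) are assumptions for illustration, not the study's statistical methods.

```python
# Minimal sketch of an adjusted ROC analysis for a baseline OCTA metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def adjusted_auc(X, y):
    """X: array of [DCP_GPD, sex, baseline_VA, DR_severity] per eye (hypothetical columns);
    y: 1 if a clinically significant outcome occurred by 1 year, else 0."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, scores)

    # Report sensitivity/specificity at the point maximizing Youden's J.
    fpr, tpr, _ = roc_curve(y, scores)
    j = np.argmax(tpr - fpr)
    return auc, tpr[j], 1 - fpr[j]
```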
Project description:Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC was 1.000 vs 1.000. Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC was 0.985 vs 0.918. DL effectively detects the laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
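A multi-output curation model of this kind typically shares an image backbone and attaches one head per task. The sketch below uses a standard ResNet-18 backbone with four heads; the backbone choice, head sizes and class counts are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of a multi-output image-curation network in PyTorch.
import torch.nn as nn
import torchvision.models as models

class CurationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # reuse the backbone as a 512-d feature extractor
        self.backbone = backbone
        self.laterality = nn.Linear(512, 3)  # right / left / unidentifiable
        self.retinal = nn.Linear(512, 2)     # retinal / non-retinal
        self.field = nn.Linear(512, 3)       # macula / nasal / other retinal field
        self.gradability = nn.Linear(512, 2) # gradable / ungradable

    def forward(self, x):
        f = self.backbone(x)
        return {
            "laterality": self.laterality(f),
            "retinal_presence": self.retinal(f),
            "retinal_field": self.field(f),
            "gradability": self.gradability(f),
        }
```

Training such a model would sum a per-task loss over the four heads, which is one common way to obtain the multi-output behaviour described above.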
Project description:Diabetes is one of the leading causes of morbidity and mortality in the United States and worldwide. Traditionally, diabetes detection from retinal images has been performed only using relevant retinopathy indications. This research aimed to develop an artificial intelligence (AI) machine learning model that can detect the presence of diabetes from fundus imagery of eyes without any diabetic eye disease. A machine learning algorithm was trained on the EyePACS dataset, consisting of 47,076 images. Patients were also divided into cohorts based on disease duration, each cohort consisting of patients diagnosed within the timeframe in question (e.g., 15 years) and healthy participants. The algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.86 for detecting diabetes per patient visit, averaged across camera models, and an AUC of 0.83 for detecting diabetes per image. The results suggest that diabetes may be diagnosed non-invasively using fundus imagery alone. This may enable diabetes diagnosis at the point of care, as well as in other accessible venues, facilitating the diagnosis of the many undiagnosed people with diabetes.
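The two AUC levels reported above (per image and per patient visit) imply that per-image scores are pooled into a single score per visit before evaluation. The sketch below shows one simple pooling rule (mean over images); the field names and the pooling choice are assumptions for illustration.

```python
# Minimal sketch of pooling per-image diabetes scores to the per-visit level.
from collections import defaultdict
from statistics import mean

def per_visit_scores(image_scores):
    """image_scores: list of dicts like {"visit_id": ..., "score": float},
    where score is the model's per-image probability of diabetes."""
    by_visit = defaultdict(list)
    for rec in image_scores:
        by_visit[rec["visit_id"]].append(rec["score"])
    return {visit: mean(scores) for visit, scores in by_visit.items()}
```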
Project description:Purpose: To develop and validate machine learning-based classifiers based on simple non-ocular metrics for detecting referable diabetic retinopathy (RDR) in a large-scale Chinese population-based survey. Methods: Data from 1,418 patients with diabetes mellitus among the 8,952 rural residents screened in the population-based Dongguan Eye Study were used for model development and validation. Eight algorithms [extreme gradient boosting (XGBoost), random forest, naïve Bayes, k-nearest neighbor (KNN), AdaBoost, LightGBM, artificial neural network (ANN), and logistic regression] were used to build models for detecting RDR in individuals with diabetes. The area under the receiver operating characteristic curve (AUC) and its 95% confidence interval (95% CI) were estimated using five-fold cross-validation as well as an 80:20 training-validation split. Results: The 10 most important features in the machine learning models were duration of diabetes, HbA1c, systolic blood pressure, triglyceride, body mass index, serum creatinine, age, educational level, duration of hypertension, and income level. Based on these top 10 variables, the XGBoost model achieved the best discriminative performance, with an AUC of 0.816 (95% CI: 0.812, 0.820). The AUCs for logistic regression, AdaBoost, naïve Bayes, and random forest were 0.766 (95% CI: 0.756, 0.776), 0.754 (95% CI: 0.744, 0.764), 0.753 (95% CI: 0.743, 0.763), and 0.705 (95% CI: 0.697, 0.713), respectively. Conclusions: A machine learning-based classifier that used 10 easily obtained non-ocular variables was able to effectively detect RDR patients. The importance scores of the variables provide insight for preventing the occurrence of RDR. Screening for RDR with machine learning provides a useful complementary tool for clinical practice in resource-poor areas with limited ophthalmic infrastructure.
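An XGBoost classifier on the ten non-ocular variables listed above, evaluated with five-fold cross-validated AUC, could be set up as in the sketch below. The feature column names and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of an XGBoost RDR classifier with five-fold cross-validated AUC.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Hypothetical column order for the ten non-ocular predictors.
FEATURES = ["diabetes_duration", "hba1c", "systolic_bp", "triglyceride", "bmi",
            "serum_creatinine", "age", "education_level", "hypertension_duration",
            "income_level"]

def evaluate_rdr_classifier(X, y):
    """X: array with columns in FEATURES order; y: 1 if referable DR, else 0."""
    model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aucs = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
    return aucs.mean(), aucs.std()
```

A bootstrap over the cross-validated AUCs is one common way to obtain the kind of 95% CI reported above.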
Project description:Purpose: To examine the existing literature on methods for diabetic retinopathy (DR) recognition using deep learning (DL) and machine learning (ML) techniques, and to address the difficulties posed by the various datasets used for DR research. Approach: DR is a progressive illness and can lead to vision loss. Early identification of DR lesions is therefore helpful and can prevent retinal damage. However, it is a complex task because DR is asymptomatic in its early stages, and traditional approaches rely on ophthalmologists. Recently, studies on automated DR identification based on image processing, ML, and DL have been reported. We analyze the recent literature and provide a comparative study that also covers the limitations of the literature and directions for future work. Results: A comparative analysis of the databases used, the performance metrics employed, and the ML and DL techniques recently adopted for DR detection based on various DR features is presented. Conclusion: Our review discusses the methods employed in DR detection, along with the technical and clinical challenges encountered (which are missing from existing reviews) and future directions to assist researchers in the field of retinal imaging.
Project description:As the prevalence of diabetes increases, millions of people need to be screened for diabetic retinopathy (DR). Remarkable advances in technology have made it possible to use artificial intelligence to screen for DR from retinal images with high accuracy and reliability, reducing human labor by processing large amounts of data in less time. We developed a fully automated classification algorithm to diagnose DR and identify referable status from optical coherence tomography angiography (OCTA) images using a convolutional neural network (CNN) model, and verified its feasibility by comparing its performance with that of a conventional machine learning model. Ground truths for the classifications were established based on ultra-widefield fluorescein angiography to increase the accuracy of data annotation. The proposed CNN classifier achieved an accuracy of 91-98%, a sensitivity of 86-97%, a specificity of 94-99%, and an area under the curve of 0.919-0.976. In the external validation, overall similar performance was achieved. The results were similar regardless of the size and depth of the OCTA images, indicating that DR could be satisfactorily classified even with images covering only a narrow area of the macular region and a single retinal image slab. CNN-based classification using OCTA is expected to create a novel diagnostic workflow for DR detection and referral.
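A CNN classifier over single-channel OCTA en face images of the kind described above could take the shape sketched below. The layer sizes, input format and number of classes are illustrative assumptions rather than the published architecture.

```python
# Minimal sketch of a CNN classifier for OCTA en face images in PyTorch.
import torch.nn as nn

class OctaDRClassifier(nn.Module):
    def __init__(self, n_classes=3):  # e.g. no DR / non-referable DR / referable DR (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, 1, H, W) single OCTA slab as a grayscale image
        f = self.features(x).flatten(1)
        return self.classifier(f)
```

The global average pooling layer keeps the head independent of input resolution, which is consistent with the observation above that classification quality was similar across OCTA image sizes.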