Replication and Refinement of an Algorithm for Automated Drusen Segmentation on Optical Coherence Tomography.
ABSTRACT: Here, we investigate the extent to which re-implementing a previously published algorithm for OCT-based drusen quantification permits replicating the reported accuracy on an independent dataset. We also refined the algorithm to increase its accuracy. Following a systematic literature search, an algorithm was selected based on its reported excellent results, and several steps were added to improve its accuracy. The replicated and refined algorithms were evaluated on an independent dataset with the same metrics as in the original publication. The accuracy of the refined algorithm (overlap ratio 36-52%) was significantly greater than that of the replicated one (overlap ratio 25-39%). In particular, the refinement improved the separation of the retinal pigment epithelium and the ellipsoid zone. However, accuracy was still lower than previously reported on different data (overlap ratio 67-76%). This is the first replication study of an algorithm for OCT image analysis. Its results indicate that current standards for algorithm validation do not provide a reliable estimate of algorithm performance on images that differ with respect to patient selection and image quality. To contribute to improved reproducibility in this field, we publish both our replication and the refinement, as well as an exemplary dataset.
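For readers unfamiliar with the overlap-ratio metric quoted above, the sketch below computes an intersection-over-union style overlap between two binary drusen masks. The exact definition used in the original publication may differ; the function name and the toy masks are purely illustrative.

```python
import numpy as np

def overlap_ratio(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two binary drusen masks (assumed definition)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# toy example with two 4x4 masks
a = np.array([[0, 1, 1, 0]] * 4)
b = np.array([[0, 0, 1, 1]] * 4)
print(f"overlap ratio: {overlap_ratio(a, b):.2f}")  # 0.33
```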
Project description: Background: Optical coherence tomography (OCT) is an innovative imaging technique that generates high-resolution intracoronary images. In recent years, the need for more precise analysis of coronary artery disease to achieve optimal treatment has made intravascular imaging an area of primary importance in interventional cardiology. One of the main challenges in OCT image analysis is accurate detection of the lumen, which is important for further prognosis. Method: In this research, we present a new approach to segmentation of the lumen in OCT images. The proposed work focuses on designing an efficient automatic algorithm comprising the following steps: preprocessing (removal of artifacts: speckle noise, circular rings, and the guide wire), conversion between polar and Cartesian coordinates, and the segmentation algorithm itself. Results: The implemented method was tested on 667 OCT frames. The lumen border was extracted with high correlation compared to the ground truth: an ICC of 0.97 (0.97-0.98). Conclusions: The proposed algorithm allows for fully automated lumen segmentation on optical coherence tomography images. This tool may be applied to automated quantitative lumen analysis.
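The polar-to-Cartesian conversion mentioned in the pipeline can be illustrated with OpenCV's cv2.warpPolar (available in recent OpenCV versions). The snippet is a minimal sketch assuming the raw intracoronary frame stores A-lines as rows (angle) and depth as columns; function names and image sizes are illustrative, and the paper's preprocessing and segmentation steps are not reproduced.

```python
import cv2
import numpy as np

def polar_to_cartesian(polar_frame: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Map an (angle x depth) polar OCT frame into a square Cartesian image."""
    center = (out_size / 2.0, out_size / 2.0)
    max_radius = out_size / 2.0
    # WARP_INVERSE_MAP tells warpPolar to go from polar back to Cartesian
    return cv2.warpPolar(polar_frame, (out_size, out_size), center, max_radius,
                         cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)

def cartesian_to_polar(cart_frame: np.ndarray, n_angles: int = 360, depth: int = 512) -> np.ndarray:
    """Forward mapping: Cartesian image to an (angle x depth) representation."""
    h, w = cart_frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    return cv2.warpPolar(cart_frame, (depth, n_angles), center, min(h, w) / 2.0,
                         cv2.WARP_POLAR_LINEAR)
```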
Project description: Purpose: To benchmark the human and machine performance of spectral-domain (SD) and swept-source (SS) optical coherence tomography (OCT) image segmentation, i.e., pixel-wise classification, for the compartments vitreous, retina, choroid, and sclera. Methods: A convolutional neural network (CNN) was trained on OCT B-scan images annotated by a senior ground-truth expert retina specialist to segment the posterior eye compartments. Independent benchmark data sets (30 SDOCT and 30 SSOCT) were manually segmented by three classes of graders with varying levels of ophthalmic proficiency. Nine graders contributed to benchmark an additional 60 images in three consecutive runs. Inter-human and intra-human class agreement was measured and compared to the CNN results. Results: The CNN training data consisted of a total of 6210 manually segmented images derived from 2070 B-scans (1046 SDOCT and 1024 SSOCT; 630 C-scans). The CNN segmentation showed high agreement with all grader groups. For all compartments and groups, the mean Intersection over Union (IOU) score of CNN compartmentalization versus the group graders' compartmentalization was higher than the mean score for intra-grader-group comparison. Conclusion: The proposed deep learning segmentation algorithm (CNN) for automated eye compartment segmentation in OCT B-scans (SDOCT and SSOCT) is on par with manual segmentations by human graders.
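As a rough illustration of the Intersection over Union (IOU) score used for the comparison, the sketch below averages per-compartment IoU over the four classes of a label map. The class indices and the handling of classes absent from both maps are assumptions, not taken from the study.

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int = 4) -> float:
    """Mean per-class IoU; class indices are illustrative
    (0=vitreous, 1=retina, 2=choroid, 3=sclera)."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps: skip rather than score it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```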
Project description: Purpose: To investigate the thickness of retinal layers and its association with final visual acuity, using spectral-domain optical coherence tomography (SD-OCT) of the macular area in macula-off rhegmatogenous retinal detachment (RRD) patients after successful macular re-attachment. Methods: In this retrospective study, a total of 24 eyes with macula-off RRD were enrolled. All patients underwent vitrectomy to repair the RRD. Outer plexiform layer (OPL), outer nuclear layer (ONL), photoreceptor layer (PR), and retinal pigment epithelium (RPE) thicknesses were measured with the Spectralis (Heidelberg Engineering, Heidelberg, Germany) SD-OCT using automated segmentation software. The relationship between the thickness of each retinal layer and postoperative visual acuity on the logarithm of the minimum angle of resolution (LogMAR) scale was analyzed. Results: OPL and RPE thicknesses were not significantly different between the retinal detachment eyes and fellow eyes (P = 0.839 and 0.999, respectively). The ONL and photoreceptor layers were significantly thinner in the retinal detachment eyes (P < 0.001 and 0.001, respectively). In the univariate regression analysis, preoperative best-corrected visual acuity (BCVA), ONL thickness, and photoreceptor thickness were associated with postoperative BCVA (P = 0.003, < 0.001, and 0.024, respectively). In the final multiple linear regression model, ONL thickness was the only variable significantly associated with postoperative BCVA (P = 0.044). Conclusions: Segmented ONL and photoreceptor layers of retinal detachment eyes were significantly thinner than those of fellow eyes. Segmental analysis of retinal layers in the macular region may provide valuable information for evaluating RRD, and ONL thickness can be used as a potential biomarker to predict visual outcome after RRD repair.
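The regression analysis described above can be sketched with statsmodels. The snippet uses synthetic stand-in data and hypothetical variable names, so it only shows the form of the modelling (predictors regressed against postoperative LogMAR BCVA), not the study's actual measurements or results.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative only: synthetic data standing in for the study's measurements.
rng = np.random.default_rng(0)
onl_thickness = rng.normal(70, 10, 24)   # hypothetical ONL thickness, in microns
pr_thickness = rng.normal(60, 8, 24)     # hypothetical photoreceptor thickness, in microns
logmar_bcva = 1.0 - 0.01 * onl_thickness + rng.normal(0, 0.1, 24)

# Multiple linear regression of postoperative BCVA on the layer thicknesses
X = sm.add_constant(np.column_stack([onl_thickness, pr_thickness]))
model = sm.OLS(logmar_bcva, X).fit()
print(model.summary())  # coefficients and p-values for each predictor
```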
Project description: Over the past two decades, a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexity of the retinal features commonly observed in clinical settings. In addition, no appropriate OCT dataset with ground truth exists that reflects the realities of everyday retinal features seen in clinical practice. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison with manual labelings specific to each study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue on OCT images, and evaluates and compares the performance of these software tools against a common ground truth.
Project description: Objective: To study the automated segmentation of retinal layers using spectral-domain optical coherence tomography (OCT) and the impact of manually correcting segmentation mistakes. Methods: This retrospective, cross-sectional, comparative study compared automated segmentation of macular thickness using Spectralis™ OCT technology (Heidelberg Engineering, Heidelberg, Germany) versus manual segmentation in eyes with no macular changes, cystoid macular edema (CME), and choroidal neovascularization (CNV). Automated segmentation of macular thickness was manually corrected by two independent examiners and reanalyzed by them together in case of disagreement. Results: In total, 306 eyes of 254 consecutive patients were evaluated. No statistically significant differences were noted between automated and manual macular thickness measurements in patients with normal maculas, whereas a statistically significant difference in central thickness was found in patients with CNV and with CME. Segmentation mistakes in macular OCTs were present in 5.3% (5 of 95) of the normal macula group, 16.4% (23 of 140) of the CME group, and 66.2% (47 of 71) of the CNV group. The difference between automated and manual macular thickness exceeded 10% in 1.4% (2 of 140) of the CME group and 28.17% (20 of 71) of the CNV group; only one case in the normal group (1 of 95) had a segmentation error greater than 10%. Conclusion: The evaluation of automatically segmented OCT images revealed appropriate delimitation of macular thickness in patients with no macular changes or with CME, since the frequency and magnitude of the segmentation mistakes had low impact on clinical evaluation of the images. Conversely, automated macular thickness segmentation in patients with CNV showed a high frequency and magnitude of mistakes, with potential impact on clinical analysis.
Project description: Purpose: To evaluate the performance of the Pegasus-OCT (Visulytix Ltd) multiclass automated fluid segmentation algorithms on independent spectral-domain optical coherence tomography data sets. Methods: The Pegasus automated fluid segmentation algorithms were applied to three data sets with edematous pathology, comprising 750, 600, and 110 b-scans, respectively. Intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) were automatically segmented by Pegasus-OCT for each b-scan where ground truth from the data set owners was available. Detection performance was assessed by calculating sensitivities and specificities, while Dice coefficients were used to assess agreement between the segmentation methods. Results: For two data sets, IRF detection yielded promising sensitivities (0.98 and 0.94) and specificities (1.00 and 0.98) but less consistent agreement with the ground truth (Dice coefficients 0.81 and 0.59); likewise, SRF detection showed high sensitivity (0.86 and 0.98) and specificity (0.83 and 0.89) but less consistent agreement (0.59 and 0.78). PED detection on the first data set showed moderate agreement (0.66) with high sensitivity (0.97) and specificity (0.98). IRF detection in a third data set yielded less favorable agreement (0.46-0.57) and sensitivity (0.59-0.68), attributed to image quality and ground-truth grader discordance. Conclusions: The Pegasus automated fluid segmentation algorithms were able to detect IRF, SRF, and PED in SD-OCT b-scans acquired across multiple independent data sets. The Dice coefficients and sensitivity and specificity values indicate the potential for application to automated detection and monitoring of retinal diseases such as age-related macular degeneration and diabetic macular edema. Translational relevance: The potential of Pegasus-OCT for automated fluid quantification and differentiation of IRF, SRF, and PED in OCT images has application to both clinical practice and research.
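The evaluation combines per-b-scan detection (sensitivity and specificity) with pixel-level agreement (Dice). The sketch below shows one plausible way to compute both from paired lists of binary fluid masks; the exact scoring rules used for Pegasus-OCT, such as how empty masks are handled, are assumptions.

```python
import numpy as np

def detection_and_dice(pred_masks, truth_masks):
    """Per-b-scan detection stats plus mean Dice over scans containing fluid."""
    tp = fp = tn = fn = 0
    dices = []
    for pred, truth in zip(pred_masks, truth_masks):
        p_has, t_has = bool(pred.any()), bool(truth.any())
        tp += p_has and t_has
        fp += p_has and not t_has
        tn += (not p_has) and (not t_has)
        fn += (not p_has) and t_has
        if p_has or t_has:
            dices.append(2 * np.logical_and(pred, truth).sum()
                         / (pred.sum() + truth.sum()))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity, float(np.mean(dices))
```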
Project description: Retinal segmentation is a prerequisite for quantifying retinal structural features and diagnosing related ophthalmic diseases. The Canny operator is widely regarded as one of the most effective boundary detection operators and is often used to obtain the initial boundary of the retina in retinal segmentation. However, the traditional Canny operator is susceptible to vascular shadows, vitreous artifacts, and noise interference in retinal segmentation, causing serious misdetections or missed detections. This paper proposes an improved Canny operator for automatic segmentation of retinal boundaries. The improved algorithm addresses the problems of the traditional Canny operator by adding a multi-point boundary search step to the original method and adjusting the convolution kernel. The algorithm was used to segment retinal images of healthy subjects and age-related macular degeneration (AMD) patients; eleven retinal boundaries were identified and compared with the results of manual segmentation by ophthalmologists. The average difference between the automatic and manual methods was 2-6 microns (1-2 pixels) for healthy subjects and 3-10 microns (1-3 pixels) for AMD patients. A qualitative method was also used to verify the accuracy and stability of the algorithm: the percentage of "perfect segmentation" and "good segmentation" was 98% in healthy subjects and 94% in AMD patients. The algorithm can be used alone or in combination with other methods as an initial boundary detection algorithm. It is easy to understand and improve, and may become a useful tool for analyzing and diagnosing eye diseases.
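For context, a baseline Canny edge map of the kind the improved operator builds on might be obtained as follows. The paper's multi-point boundary search and adjusted convolution kernel are not reproduced here; the smoothing and threshold parameters are illustrative.

```python
import cv2
import numpy as np

def initial_boundaries(bscan: np.ndarray) -> np.ndarray:
    """Standard Canny edge map on a denoised 8-bit B-scan (baseline step only)."""
    smoothed = cv2.GaussianBlur(bscan, (5, 5), sigmaX=1.5)   # suppress speckle noise
    edges = cv2.Canny(smoothed, threshold1=30, threshold2=90)
    return edges  # binary edge map used as an initial retinal boundary estimate

# usage: edges = initial_boundaries(cv2.imread("bscan.png", cv2.IMREAD_GRAYSCALE))
```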
Project description: Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible with the introduction of optical coherence tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retinal depiction. In this study we developed a fully automated algorithm for segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into training and test sets. The training set was used to develop a convolutional neural network (CNN). The performance of the algorithm was established by cross-validation and comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved model performance. The proposed model identified RORA with performance matching that of human experts and has the potential to rapidly identify atrophy with high consistency.
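The reported Dice, sensitivity, and precision can be computed from pixel counts of a binary RORA mask as sketched below; the definitions are the standard ones, the handling of empty masks is an assumption, and this is not the authors' evaluation code.

```python
import numpy as np

def rora_metrics(pred: np.ndarray, truth: np.ndarray):
    """Dice, sensitivity, and precision for a binary atrophy mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dice, sensitivity, precision
```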
Project description: Purpose: To use a deep learning model to develop a fully automated method (fully semantic network and graph search [FS-GS]) of retinal segmentation for optical coherence tomography (OCT) images from patients with Stargardt disease. Methods: Eighty-seven manually segmented (ground truth) OCT volume scan sets (5171 B-scans) from 22 patients with Stargardt disease were used for training, validation, and testing of a novel retinal boundary detection approach (FS-GS) that combines a fully semantic deep learning segmentation method, which generates a per-pixel class prediction map, with a graph-search method to extract retinal boundary positions. Performance was evaluated using the mean absolute boundary error and the differences in two clinical metrics (retinal thickness and volume) compared with the ground truth. The performance of a separate deep learning method and two publicly available software algorithms was also evaluated against the ground truth. Results: FS-GS showed excellent agreement with the ground truth, with a boundary mean absolute error of 0.23 and 1.12 pixels for the internal limiting membrane and the base of the retinal pigment epithelium or Bruch's membrane, respectively. The mean differences in thickness and volume across the central 6 mm zone were 2.10 µm and 0.059 mm³. The performance of the proposed method was more accurate and consistent than that of the publicly available OCTExplorer and AURA tools. Conclusions: The FS-GS method delivers good performance in segmentation of OCT images of pathologic retinas in Stargardt disease. Translational relevance: Deep learning models can provide a robust method for retinal segmentation and support a high-throughput analysis pipeline for measuring retinal thickness and volume in Stargardt disease.
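The graph-search step that turns a per-pixel prediction map into boundary positions can be illustrated with a generic column-wise dynamic program. This is not the authors' FS-GS implementation; the probability-to-cost mapping and the smoothness penalty are assumptions made for the sake of a self-contained example.

```python
import numpy as np

def trace_boundary(prob: np.ndarray, smoothness: float = 2.0) -> np.ndarray:
    """Find one boundary row per column of a boundary-probability map
    (rows = depth, columns = A-scans) by minimum-cost path search."""
    cost = -np.log(prob + 1e-6)                 # low cost where probability is high
    n_rows, n_cols = cost.shape
    acc = cost.copy()                           # accumulated path cost
    back = np.zeros((n_rows, n_cols), dtype=int)
    rows = np.arange(n_rows)
    for c in range(1, n_cols):
        # transition penalty grows with the vertical jump from the previous column
        jump = smoothness * np.abs(rows[:, None] - rows[None, :])
        total = acc[:, c - 1][None, :] + jump   # total[r, r_prev]
        back[:, c] = np.argmin(total, axis=1)
        acc[:, c] += np.min(total, axis=1)
    # backtrack the minimum-cost path from the last column
    boundary = np.zeros(n_cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary  # boundary[c] = row index of the layer boundary in column c
```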
Project description: Purpose: To generate the first open dataset of retinal parafoveal optical coherence tomography angiography (OCTA) images with associated ground-truth manual segmentations, and to establish a standard for OCTA image segmentation by surveying a broad range of state-of-the-art vessel enhancement and binarization procedures. Methods: Handcrafted filters and neural network architectures were used to perform vessel enhancement. Thresholding methods and machine learning approaches were applied to obtain the final binarization. Evaluation was performed using pixelwise metrics and newly proposed topological metrics. Finally, we compared the error in the computation of clinically relevant vascular network metrics (e.g., foveal avascular zone area and vessel density) across segmentation methods. Results: Our results show that, for the set of images considered, deep learning architectures (U-Net and CS-Net) achieve the best performance (Dice = 0.89). For applications where manually segmented data are not available to retrain these approaches, our findings suggest that optimally oriented flux (OOF) is the best handcrafted filter (Dice = 0.86). Moreover, our results show up to 25% differences in vessel density accuracy depending on the segmentation method used. Conclusions: In this study, we derive and validate the first open dataset of retinal parafoveal OCTA images with associated ground-truth manual segmentations. Our findings should be taken into account when comparing the results of clinical studies and performing meta-analyses. Finally, we release our data and source code to support standardization efforts in OCTA image segmentation. Translational relevance: This work establishes a standard for OCTA retinal image segmentation and introduces the importance of evaluating segmentation performance in terms of clinically relevant metrics.
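One of the clinically relevant metrics mentioned, vessel density, can be computed from a binarized OCTA image as sketched below. The definition used here, the fraction of vessel pixels within an analysis region, is a common convention and may differ from the exact definition used in the study.

```python
import numpy as np

def vessel_density(binary_mask: np.ndarray, roi=None) -> float:
    """Proportion of vessel pixels within the analysis region (whole image by default)."""
    mask = binary_mask.astype(bool)
    region = np.ones_like(mask, dtype=bool) if roi is None else roi.astype(bool)
    return float(mask[region].mean())

# toy usage: density of a small binarized patch
print(vessel_density(np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])))  # 0.556
```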