
Dataset Information


Automated vs. human evaluation of corneal staining.


ABSTRACT:

Background and purpose

Corneal fluorescein staining is one of the most important diagnostic tests in dry eye disease (DED). Nevertheless, the result of this examination depends on the grader. So far, no method for automated quantification of corneal staining is commercially available. The aim of this study was to develop a software-assisted grading algorithm and to compare it with a group of human graders of variable clinical experience in patients with DED.

Methods

Fifty images of eyes stained with 2 µl of 2% fluorescein, presenting different severities of superficial punctate keratopathy in patients with DED, were taken under standardized conditions. An algorithm for detecting and counting superficial punctate keratitis was developed in ImageJ using a training dataset of 20 randomly selected images. The test dataset of 30 images was then analyzed (1) by the ImageJ algorithm and (2) by 22 graders, all ophthalmologists with different levels of experience. All graders evaluated the images using the Oxford grading scheme for corneal staining at baseline and after 6-8 weeks. Intrarater agreement was also evaluated by adding a mirrored version of every original image to the set of images during the second grading.
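The paper does not publish the ImageJ macro itself, but the described approach (segment the fluorescein signal, then count discrete bright spots) resembles ImageJ's "Analyze Particles" workflow. Below is a minimal, hypothetical Python sketch of such a pipeline; the threshold and size limits are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of punctate-particle counting, analogous to
# ImageJ's threshold + "Analyze Particles" steps. All parameter
# values (threshold, size limits) are illustrative assumptions.
import numpy as np
from scipy import ndimage

def count_punctate_particles(green_channel, threshold=128,
                             min_size=2, max_size=200):
    """Count bright punctate spots in the fluorescein (green) channel.

    green_channel : 2-D uint8 array (fluorescein fluoresces green)
    threshold     : intensity cut-off separating stain from background
    min_size, max_size : plausible particle areas in pixels
    """
    binary = green_channel > threshold            # segment stained pixels
    labels, n = ndimage.label(binary)             # connected components
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = (sizes >= min_size) & (sizes <= max_size)
    return int(keep.sum())                        # spots within size range

# Synthetic example: blank image with three bright spots
img = np.zeros((50, 50), dtype=np.uint8)
img[10:13, 10:13] = 200
img[30:33, 30:33] = 200
img[40:42, 5:7] = 200
print(count_punctate_particles(img))  # 3
```

The size filter mirrors the study's need to count only punctate lesions while ignoring single-pixel noise and large confluent regions.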

Results

The count of particles detected by the algorithm correlated significantly (n = 30; p < 0.01) with the estimated true Oxford grade (Sr = 0.91). Overall, human graders showed only moderate intrarater agreement (K = 0.426), while software-assisted grading was perfectly repeatable (K = 1.0). Little difference in intrarater agreement was found between specialists and non-specialists (K = 0.436 vs. K = 0.417). The highest interrater agreement (75.6%) was seen in the most experienced grader, a cornea specialist with 29 years of experience; the lowest (25.6%) was seen in a resident with only 2 years of experience.
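The K values above are Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal illustration of how intrarater kappa is computed from two grading sessions (the Oxford grades below are made up for demonstration, not study data):

```python
# Illustrative unweighted Cohen's kappa between two grading sessions.
# The grade lists are hypothetical examples, not data from the study.
from collections import Counter

def cohens_kappa(first, second):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    assert len(first) == len(second)
    n = len(first)
    # Observed agreement: fraction of identical grades
    observed = sum(a == b for a, b in zip(first, second)) / n
    # Chance agreement: product of marginal frequencies per category
    c1, c2 = Counter(first), Counter(second)
    expected = sum(c1[c] * c2[c] for c in set(first) | set(second)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Oxford grades (0-5) from one grader's two sessions
session1 = [0, 1, 2, 2, 3, 4, 5, 1, 2, 3]
session2 = [0, 1, 2, 3, 3, 4, 4, 1, 1, 3]
print(round(cohens_kappa(session1, session2), 3))  # 0.634
```

A kappa of 1.0, as reported for the software, means the two sessions produced identical grades for every image; values near 0.4 indicate only moderate consistency.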

Conclusion

The variance in human grading of corneal staining, though small, is likely to have little impact on clinical management and thus seems acceptable. While human graders give results sufficient for clinical application, software-assisted grading of corneal staining ensures higher consistency and is therefore preferable for re-evaluating patients, e.g., in clinical trials.

SUBMITTER: Kourukmas R 

PROVIDER: S-EPMC9325848 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC7685888 | biostudies-literature
| S-EPMC6396684 | biostudies-literature
| S-EPMC6787640 | biostudies-literature
| S-EPMC6245840 | biostudies-literature
| S-EPMC3616690 | biostudies-literature
| S-EPMC6458884 | biostudies-literature
| S-EPMC8310432 | biostudies-literature
| S-EPMC3627485 | biostudies-literature
| S-EPMC8502234 | biostudies-literature
| S-EPMC4048234 | biostudies-literature