
Dataset Information


A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: A retrospective study.


ABSTRACT:

Background

The overall prognosis of oral cancer remains poor because over half of patients are diagnosed at advanced stages. Previously reported screening and earlier detection methods for oral cancer still largely rely on health workers' clinical experience, and no standardised method has yet been established. We aimed to develop a rapid, non-invasive, cost-effective, and easy-to-use deep learning approach for identifying oral cavity squamous cell carcinoma (OCSCC) patients using photographic images.

Methods

We developed an automated deep learning algorithm using cascaded convolutional neural networks to detect OCSCC from photographic images. We included all biopsy-proven OCSCC photographs and normal controls from 44,409 clinical images collected at 11 hospitals across China between April 12, 2006, and Nov 25, 2019. We trained the algorithm on a randomly selected part of this dataset (development dataset) and used the rest for testing (internal validation dataset). Additionally, we curated an external validation dataset comprising clinical photographs from six representative journals in the field of dentistry and oral surgery. We also compared the performance of the algorithm with that of seven oral cancer specialists on a clinical validation dataset. We used the pathological reports as the gold standard for OCSCC identification. We evaluated the algorithm performance on the internal, external, and clinical validation datasets by calculating the area under the receiver operating characteristic curves (AUCs), accuracy, sensitivity, and specificity with two-sided 95% CIs.
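As an illustration of what a cascaded pipeline of this kind can look like, the minimal PyTorch sketch below chains two image classifiers and multiplies their probabilities into a single OCSCC score. The backbone (ResNet-50), the two-stage layout, the input size, and all names are illustrative assumptions; the abstract does not specify the authors' exact architecture.

import torch
import torch.nn as nn
from torchvision import models

class CascadedOCSCCClassifier(nn.Module):
    """Two CNNs chained in series (one reading of a "cascade"): stage 1 scores
    whether a photograph looks suspicious at all, stage 2 scores OCSCC vs.
    normal; the final score is the product of the two probabilities."""
    def __init__(self):
        super().__init__()
        self.stage1 = models.resnet50(weights=None)   # hypothetical backbone choice
        self.stage1.fc = nn.Linear(self.stage1.fc.in_features, 2)
        self.stage2 = models.resnet50(weights=None)
        self.stage2.fc = nn.Linear(self.stage2.fc.in_features, 2)

    def forward(self, x):
        p_suspicious = torch.softmax(self.stage1(x), dim=1)[:, 1]  # P(suspicious)
        p_ocscc = torch.softmax(self.stage2(x), dim=1)[:, 1]       # P(OCSCC | image)
        return p_suspicious * p_ocscc                               # cascaded score in [0, 1]

if __name__ == "__main__":
    model = CascadedOCSCCClassifier().eval()
    dummy = torch.rand(1, 3, 224, 224)   # stands in for a preprocessed intraoral photograph
    with torch.no_grad():
        print(float(model(dummy)))       # per-image OCSCC probability

In practice each stage would be trained separately on the development dataset and the per-image score thresholded to obtain the sensitivity and specificity reported below.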

Findings

1469 intraoral photographic images were used to validate our approach. The deep learning algorithm achieved an AUC of 0·983 (95% CI 0·973-0·991), sensitivity of 94·9% (0·915-0·978), and specificity of 88·7% (0·845-0·926) on the internal validation dataset (n = 401), and an AUC of 0·935 (0·910-0·957), sensitivity of 89·6% (0·847-0·942), and specificity of 80·6% (0·757-0·853) on the external validation dataset (n = 402). In a secondary analysis on the internal validation dataset, the algorithm achieved an AUC of 0·995 (0·988-0·999), sensitivity of 97·4% (0·932-1·000), and specificity of 93·5% (0·882-0·979) in detecting early-stage OCSCC. On the clinical validation dataset (n = 666), our algorithm achieved comparable performance to that of the average oral cancer expert in terms of accuracy (92·3% [0·902-0·943] vs 92·4% [0·912-0·936]), sensitivity (91·0% [0·879-0·941] vs 91·7% [0·898-0·934]), and specificity (93·5% [0·909-0·960] vs 93·1% [0·914-0·948]). The algorithm also achieved significantly better performance than that of the average medical student (accuracy of 87·0% [0·855-0·885], sensitivity of 83·1% [0·807-0·854], and specificity of 90·7% [0·889-0·924]) and the average non-medical student (accuracy of 77·2% [0·757-0·787], sensitivity of 76·6% [0·743-0·788], and specificity of 77·9% [0·759-0·797]).
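For readers reproducing such metrics on their own scores, the sketch below shows one common way to compute AUC, sensitivity, and specificity with two-sided 95% CIs from per-image predictions, using scikit-learn and a percentile bootstrap. The bootstrap procedure, threshold, and synthetic data are illustrative assumptions, not the authors' exact statistical method.

import numpy as np
from sklearn.metrics import roc_auc_score

def sensitivity_specificity(y_true, y_score, threshold=0.5):
    # Binarise scores at an assumed operating threshold, then count outcomes.
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def bootstrap_ci(metric_fn, y_true, y_score, n_boot=2000, seed=0):
    # Percentile bootstrap over resampled validation images.
    rng = np.random.default_rng(seed)
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                                 # AUC needs both classes present
        stats.append(metric_fn(y_true[idx], y_score[idx]))
    return np.percentile(stats, [2.5, 97.5])

if __name__ == "__main__":
    # Synthetic stand-in for model scores on a validation set.
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 400)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 400), 0, 1)

    auc = roc_auc_score(y_true, y_score)
    sens, spec = sensitivity_specificity(y_true, y_score)
    auc_lo, auc_hi = bootstrap_ci(roc_auc_score, y_true, y_score)
    print(f"AUC {auc:.3f} (95% CI {auc_lo:.3f}-{auc_hi:.3f}), "
          f"sensitivity {sens:.3f}, specificity {spec:.3f}")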

Interpretation

Automated detection of OCSCC by a deep-learning-powered algorithm is a rapid, non-invasive, low-cost, and convenient method that yielded performance comparable to that of human specialists and has the potential to be used as a clinical tool for fast screening, earlier detection, and assessment of therapeutic efficacy in oral cancer.

SUBMITTER: Fu Q 

PROVIDER: S-EPMC7599313 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

S-EPMC9117994 | biostudies-literature
S-EPMC10859441 | biostudies-literature
S-EPMC8022578 | biostudies-literature
S-EPMC10898548 | biostudies-literature
S-EPMC6171227 | biostudies-literature
S-EPMC8369362 | biostudies-literature
S-EPMC10449747 | biostudies-literature
S-EPMC10988262 | biostudies-literature
S-EPMC8573410 | biostudies-literature
2012-10-10 | GSE30788 | GEO