ABSTRACT: Objectives
To perform a systematic review of the design and reporting of imaging studies applying convolutional neural network models to radiological cancer diagnosis.
Methods
A comprehensive search of PubMed, Embase, MEDLINE and Scopus was performed for published studies applying convolutional neural network models to radiological cancer diagnosis from January 1, 2016, to August 1, 2020. Two independent reviewers measured compliance with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Compliance was defined as the proportion of applicable CLAIM items satisfied.
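To make the compliance metric concrete, the following minimal Python sketch computes per-study CLAIM compliance as the proportion of applicable checklist items satisfied. The item names and the "yes"/"no"/"n/a" coding are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch (hypothetical item names and coding, not from the paper):
# CLAIM compliance = satisfied items / applicable items for one study.

def claim_compliance(items: dict) -> float:
    """Return the fraction of applicable CLAIM items marked 'yes'.

    Items coded 'n/a' are excluded from the denominator.
    """
    applicable = [v for v in items.values() if v != "n/a"]
    if not applicable:
        raise ValueError("Study has no applicable CLAIM items.")
    return sum(v == "yes" for v in applicable) / len(applicable)

# Example: 2 of 3 applicable items satisfied -> compliance of about 0.67.
study = {
    "eligibility_criteria": "yes",
    "demographics": "no",
    "test_partition": "yes",
    "public_code": "n/a",
}
print(round(claim_compliance(study), 2))  # 0.67
```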
Results
One hundred eighty-six of 655 screened studies were included. Many studies did not meet current design and reporting guidelines. Twenty-seven percent of studies documented eligibility criteria for their data (50/186, 95% CI 21-34%), 31% reported demographics for their study population (58/186, 95% CI 25-39%) and 49% of studies assessed model performance on test data partitions (91/186, 95% CI 42-57%). Median CLAIM compliance was 0.40 (IQR 0.33-0.49). Compliance correlated positively with publication year (ρ = 0.15, p = .04) and journal H-index (ρ = 0.27, p < .001). Clinical journals demonstrated higher mean compliance than technical journals (0.44 vs. 0.37, p < .001).
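Analyses of this kind can be reproduced with standard statistical tooling. The sketch below runs on synthetic data and assumes Spearman's ρ for the year and H-index correlations; the clinical-versus-technical comparison is shown with a Mann-Whitney U test as one plausible choice, not necessarily the exact procedure used in the paper.

```python
# Illustrative sketch on synthetic data (not the study's dataset):
# Spearman correlations of compliance with publication year and journal
# H-index, plus a clinical vs. technical journal comparison. The group
# test shown (Mann-Whitney U) is an assumption about the analysis.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(42)
n = 186
compliance = rng.uniform(0.2, 0.7, size=n)    # per-study CLAIM compliance
pub_year = rng.integers(2016, 2021, size=n)   # publication year (2016-2020)
h_index = rng.integers(10, 300, size=n)       # journal H-index
is_clinical = rng.random(n) < 0.5             # clinical vs. technical journal

rho_year, p_year = spearmanr(compliance, pub_year)
rho_h, p_h = spearmanr(compliance, h_index)
_, p_group = mannwhitneyu(compliance[is_clinical], compliance[~is_clinical])

print(f"year:    rho={rho_year:+.2f}, p={p_year:.3f}")
print(f"H-index: rho={rho_h:+.2f}, p={p_h:.3f}")
print(f"clinical vs technical journals: p={p_group:.3f}")
```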
Conclusions
Our findings highlight opportunities for improved design and reporting of convolutional neural network research for radiological cancer diagnosis.
Key points
• Imaging studies applying convolutional neural networks (CNNs) for cancer diagnosis frequently omit key clinical information, including eligibility criteria and population demographics.
• Fewer than half of imaging studies assessed model performance on explicitly unobserved test data partitions.
• Design and reporting standards have improved in CNN research for radiological cancer diagnosis, though many opportunities remain for further progress.
SUBMITTER: O'Shea RJ
PROVIDER: S-EPMC8452579 | biostudies-literature
REPOSITORIES: biostudies-literature