
Dataset Information


An evaluation of DistillerSR's machine learning-based prioritization tool for title/abstract screening - impact on reviewer-relevant outcomes.


ABSTRACT:

BACKGROUND: Systematic reviews often require substantial resources, partly due to the large number of records identified during searching. Although artificial intelligence may not be ready to fully replace human reviewers, it may accelerate screening and reduce the screening burden. Using DistillerSR (May 2020 release), we evaluated the performance of the prioritization simulation tool to determine the reduction in screening burden and the time savings.

METHODS: Using a true recall @ 95%, response sets from 10 completed systematic reviews were used to evaluate: (i) the reduction in screening burden; (ii) the accuracy of the prioritization algorithm; and (iii) the hours saved when a modified screening approach was implemented. To account for variation in the simulations, and to introduce randomness (through shuffling the references), 10 simulations were run for each review. Means, standard deviations, medians and interquartile ranges (IQR) are presented.

RESULTS: Among the 10 systematic reviews, using true recall @ 95% there was a median reduction in screening burden of 47.1% (IQR: 37.5 to 58.0%). A median of 41.2% (IQR: 33.4 to 46.9%) of the excluded records needed to be screened to achieve true recall @ 95%. The median title/abstract screening hours saved using a modified screening approach at true recall @ 95% was 29.8 h (IQR: 28.1 to 74.7 h). This increased to a median of 36 h (IQR: 32.2 to 79.7 h) when also counting the time saved by not retrieving and screening the full texts of the remaining 5% of records not yet identified as included at title/abstract screening. Across the 100 simulations (10 per review), none of these remaining 5% of records was a final included study in the systematic review. Compared with true recall @ 100%, stopping at true recall @ 95% reduced the screening burden by a median of 40.6% (IQR: 38.3 to 54.2%).

CONCLUSIONS: The prioritization tool in DistillerSR can reduce screening burden. A modified or stop-screening approach once a true recall @ 95% is achieved appears to be a valid method for rapid reviews, and perhaps systematic reviews. This needs to be further evaluated in prospective reviews using the estimated recall.
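
For readers unfamiliar with the "true recall @ 95%" stopping rule, the sketch below illustrates one way the reduction in screening burden could be computed from a prioritized record list: screen records in ranked order until 95% of the truly included records have been found, then report the fraction of records that never needed screening. This is a hedged, illustrative example only; the function name, inputs, and toy data are hypothetical and do not represent DistillerSR's actual implementation.

```python
# Illustrative sketch (not DistillerSR's implementation): given records in the
# order produced by a prioritization algorithm, find how far down the list
# screening must proceed to capture 95% of the truly included records, and
# report the resulting reduction in screening burden.
import math

def screening_burden_at_recall(ranked_labels, target_recall=0.95):
    """ranked_labels: booleans in prioritized order, True = truly included.

    Returns (records_screened, burden_reduction), where burden_reduction is
    the fraction of records that never need title/abstract screening.
    """
    total_included = sum(ranked_labels)
    # Number of included records that must be found to reach the target recall.
    needed = math.ceil(target_recall * total_included)
    if needed == 0:
        return 0, 1.0

    found = 0
    for screened, is_included in enumerate(ranked_labels, start=1):
        if is_included:
            found += 1
        if found >= needed:
            return screened, 1 - screened / len(ranked_labels)
    return len(ranked_labels), 0.0

# Toy example: 10 records, 2 true includes ranked near the top by the tool.
labels = [True, False, True, False, False, False, False, False, False, False]
screened, saved = screening_burden_at_recall(labels)
print(f"Screened {screened} of {len(labels)} records; "
      f"burden reduced by {saved:.0%}")
```

In this toy run, both true includes are found within the first three records, so 70% of the list never needs title/abstract screening; the hours-saved figures in the abstract follow from multiplying such reductions by an assumed per-record screening time.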

SUBMITTER: Hamel C 

PROVIDER: S-EPMC7559198 | biostudies-literature | 2020 Oct

REPOSITORIES: biostudies-literature


Publications

An evaluation of DistillerSR's machine learning-based prioritization tool for title/abstract screening - impact on reviewer-relevant outcomes.

Hamel C, Kelly SE, Thavorn K, Rice DB, Wells GA, Hutton B

BMC Medical Research Methodology, 2020 Oct 15, issue 1



Similar Datasets

| S-EPMC5848519 | biostudies-literature
| S-EPMC8017894 | biostudies-literature
| S-EPMC4217707 | biostudies-literature
| S-EPMC7694314 | biostudies-literature
2021-06-01 | GSE171549 | GEO
| S-EPMC8686081 | biostudies-literature
| S-EPMC6958795 | biostudies-literature
2021-07-26 | GSE175955 | GEO
| S-EPMC9811792 | biostudies-literature
| S-EPMC10817549 | biostudies-literature