
Dataset Information


Assessment of a method to detect signals for updating systematic reviews.


ABSTRACT:

Background

Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals of when a systematic review needs updating have face validity, but no proposed method has been assessed for predictive validity.

Methods

The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009, 11 of which were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report. Four CERs were judged to be high priority for updating, four medium priority, and three low priority. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority.

Results

The 9 CERs included 149 individual conclusions, 84% of which had matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged to be low priority for updating had no substantive changes to their conclusions in the updated report. Agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74.
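
As a point of reference for the agreement statistic reported above, the following minimal Python sketch shows how an unweighted Cohen's kappa can be computed from paired categorical ratings. The priority labels in the example are hypothetical illustrations only and do not reproduce the study's data.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of category labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given the same label by both raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of the marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: predicted vs. actual update priority for nine reviews.
predicted = ["high", "high", "high", "high", "medium", "medium", "medium", "low", "low"]
actual = ["high", "high", "high", "medium", "medium", "medium", "medium", "low", "low"]
print(round(cohens_kappa(predicted, actual), 2))  # agreement on a three-level priority scale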

Conclusions

These results provide some support for the validity of a surveillance system for detecting signals indicating when a systematic review needs updating.

SUBMITTER: Shekelle PG 

PROVIDER: S-EPMC3937021 | biostudies-literature | 2014 Feb

REPOSITORIES: biostudies-literature


Publications

Assessment of a method to detect signals for updating systematic reviews.

Paul G Shekelle, Aneesa Motala, Breanne Johnsen, Sydne J Newberry

Systematic Reviews, 2014 Feb 14



Similar Datasets

| S-EPMC1569863 | biostudies-other
| S-EPMC3874670 | biostudies-literature
| S-EPMC4141462 | biostudies-literature
| S-EPMC5875625 | biostudies-literature
| S-EPMC4289543 | biostudies-literature
| S-EPMC9006561 | biostudies-literature
| S-EPMC5941703 | biostudies-literature
| S-EPMC10788252 | biostudies-literature
| S-EPMC4466215 | biostudies-literature