Dataset Information

Early warning score validation methodologies and performance metrics: a systematic review.


ABSTRACT:

Background

Early warning scores (EWS) have been developed as clinical prognostication tools to identify acutely deteriorating patients. In recent years, there has been a proliferation of studies describing the development and validation of novel machine learning-based EWS. Systematic reviews of published studies that evaluate the performance of both well-established and novel EWS have reached conflicting conclusions. A possible reason is heterogeneity in the validation methods applied. In this review, we aim to examine the methodologies and metrics used in studies that perform EWS validation.

Methods

A systematic review of all eligible studies from the MEDLINE database and other sources was performed. Studies were eligible if they validated at least one EWS and reported associations between EWS scores and inpatient mortality, intensive care unit (ICU) transfers, or cardiac arrest (CA) in adult patients. Two reviewers independently performed full-text review and data abstraction using a standardized data worksheet based on the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) checklist. Meta-analysis was not performed due to heterogeneity.

Results

The key differences in validation methodologies identified were (1) the validation dataset used, (2) the outcomes of interest, (3) the case definition, timing of EWS use, and aggregation methods, and (4) the handling of missing values. In terms of case definition, among the 48 eligible studies, 34 used the patient-episode case definition, 12 used the observation-set case definition, and 2 performed validation using both. Of those that used the patient-episode case definition, 18 studies validated the EWS at a single point in time, mostly using the first recorded observation. The review also found more than 10 different performance metrics reported across the studies.
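To make the distinction between the two case definitions concrete, the sketch below scores hypothetical patient episodes both ways and computes AUROC, one of the most commonly reported metrics. This is an illustrative sketch only, not a method from the review: the episode data, the outcome labelling convention for the observation-set definition, and the rank-based AUROC formulation are all assumptions for demonstration.

```python
# Illustrative sketch (hypothetical data): AUROC of an EWS under the
# patient-episode vs. observation-set case definitions.

def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U formulation: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical episodes: (EWS observations over the stay, inpatient death?)
episodes = [
    ([2, 5, 7], True),
    ([1, 1, 2], False),
    ([6, 8, 9], True),
    ([0, 3, 1], False),
    ([4, 2, 2], False),
]

# Patient-episode case definition: one case per episode, here scored by
# the first recorded observation (as in many of the reviewed studies).
first = [(obs[0], died) for obs, died in episodes]
pos = [s for s, died in first if died]
neg = [s for s, died in first if not died]
print(f"AUROC (patient episode, first obs): {auroc(pos, neg):.3f}")

# Observation-set case definition: every observation is a case, labelled
# with the episode outcome (one common convention; definitions vary).
flat = [(s, died) for obs, died in episodes for s in obs]
pos_all = [s for s, died in flat if died]
neg_all = [s for s, died in flat if not died]
print(f"AUROC (observation set): {auroc(pos_all, neg_all):.3f}")
```

The two definitions can yield noticeably different AUROC values on the same data, which is one reason the review finds cross-study comparisons difficult.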

Conclusions

The methodologies and performance metrics used in studies validating EWS were heterogeneous, making it difficult to interpret and compare EWS performance. Standardizing EWS validation methodology and reporting could address this issue.

SUBMITTER: Fang AHS 

PROVIDER: S-EPMC7301346 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC8239612 | biostudies-literature
| S-EPMC6544303 | biostudies-literature
| S-EPMC7892287 | biostudies-literature
| S-EPMC7926898 | biostudies-literature