ABSTRACT
Background: Many measures of prediction accuracy have been developed. However, the most popular ones in typical medical outcome prediction settings require additional investigation of calibration.
Methods: We show how rescaling the Brier score produces a measure that combines discrimination and calibration in one value and improves interpretability by adjusting for a benchmark model. We have called this measure the index of prediction accuracy (IPA). The IPA permits a common interpretation across binary, time-to-event, and competing-risk outcomes. We illustrate this measure using example datasets.
Results: The IPA is simple to compute, and example code is provided; a minimal computational sketch also follows this abstract. The values of the IPA appear very interpretable.
Conclusions: IPA should be a prominent measure reported in studies of medical prediction model performance. However, IPA is only a measure of average performance and, by default, does not measure the utility of a medical decision.
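The rescaling described in the Methods amounts to IPA = 1 - Brier(model) / Brier(benchmark), where the benchmark (null) model assigns every subject the overall event rate. The paper supplies its own example code; the sketch below is an independent Python illustration of the binary-outcome case only (time-to-event and competing-risk outcomes additionally require an inverse-probability-of-censoring-weighted Brier score), using hypothetical data.

import numpy as np

def brier_score(y, p):
    # Mean squared difference between the 0/1 outcome and the predicted risk.
    return np.mean((y - p) ** 2)

def ipa(y, p):
    # Index of prediction accuracy: one minus the model's Brier score
    # divided by the Brier score of the benchmark (null) model that
    # predicts the overall event rate for everyone.
    bs_model = brier_score(y, p)
    bs_null = brier_score(y, np.full_like(p, y.mean(), dtype=float))
    return 1.0 - bs_model / bs_null

# Hypothetical data: observed outcomes and a model's predicted risks.
y = np.array([0, 1, 0, 1, 1])
p = np.array([0.1, 0.8, 0.3, 0.6, 0.9])
print(f"IPA = {ipa(y, p):.3f}")  # 1 = perfect; 0 = no better than the benchmark

A value of 0 means the model predicts no better than the benchmark, and a negative IPA indicates a model that predicts worse than the benchmark.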
SUBMITTER: Kattan MW
PROVIDER: S-EPMC6460739 | biostudies-literature | 2018
REPOSITORIES: biostudies-literature
Kattan Michael W, Gerds Thomas A
Diagnostic and Prognostic Research, 2018-05-04