ABSTRACT: Objective
The study sought to evaluate the overall performance of hospitals that used the Computerized Physician Order Entry Evaluation Tool in both 2017 and 2018, along with their performance on fatal orders and nuisance orders.
Materials and methods
We evaluated 1599 hospitals that took the test in both 2017 and 2018 using their overall percentage scores on the test, along with the percentage of fatal orders appropriately alerted on and the percentage of nuisance orders incorrectly alerted on.
Results
Hospitals showed overall improvement: the mean score rose from 58.1% in 2017 to 66.2% in 2018. Fatal order performance improved slightly, from 78.8% to 83.0% (P < .001), while nuisance order performance was essentially unchanged (89.0% to 89.7%; P = .43). Hospitals that alerted on one or more nuisance orders had overall scores approximately 3 percentage points higher.
Discussion
Despite the improvement in overall scores from 2017 to 2018, there was comparatively little improvement in fatal order performance, suggesting that hospitals are not targeting the deadliest orders first. Nuisance order performance showed almost no improvement, and some hospitals may be achieving higher scores by overalerting, suggesting that the thresholds at which alerts fire are set too low.
Conclusions
Although hospitals improved overall from 2017 to 2018, there remains substantial room for improvement on both fatal and nuisance orders. Hospitals that incorrectly alerted on one or more nuisance orders had slightly higher overall performance, suggesting that some hospitals may be achieving higher scores at the cost of overalerting, which has the potential to cause clinician burnout and even worsen safety.
SUBMITTER: Co Z
PROVIDER: S-EPMC7647300 | biostudies-literature
REPOSITORIES: biostudies-literature