
Dataset Information


Validation of a detailed scoring checklist for use during advanced cardiac life support certification.


ABSTRACT: Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, before setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by nonexpert raters during simulations of American Heart Association (AHA) Megacodes.

The reliability of scores generated from a detailed set of checklists, when used by 4 nonexpert raters, was tested by grading team leader performance in 8 Megacode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and a group of nonexpert raters. The videos were reviewed "continuously" and "with pauses." The grading made by 2 content experts served as the reference standard, and 4 nonexpert raters were used to test the reliability of the checklists.

Our results demonstrate that nonexpert raters are able to produce reliable grades when using the checklists under consideration, with excellent intrarater reliability and agreement with a reference standard. The results also demonstrate that nonexpert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA Megacodes when using our checklist in a continuous mode, because measures of agreement in total scoring were very strong [Lin's (Biometrics 1989;45:255-268) concordance correlation coefficient, 0.96; intraclass correlation coefficient, 0.97].

We have shown that our checklists can yield reliable scores, are appropriate for use by nonexpert raters, and can be used during continuous assessment of team leader performance during the review of a simulated Megacode. This checklist may be more appropriate for use by advanced cardiac life support instructors during Megacode assessments than the current tools provided by the AHA.
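The agreement statistic cited above, Lin's concordance correlation coefficient (CCC), combines precision (Pearson correlation) with accuracy (closeness of paired scores to the line of perfect agreement). The sketch below is a minimal, illustrative Python implementation of Lin's CCC; the rater scores in the example are hypothetical and are not data from this study.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient (Biometrics 1989;45:255-268).

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variances per Lin's definition.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    covariance = ((x - mean_x) * (y - mean_y)).mean()
    return 2 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

# Hypothetical checklist totals: reference-standard grades vs. one nonexpert rater
reference_scores = [38, 42, 35, 40, 44, 37, 41, 39]
nonexpert_scores = [37, 43, 35, 41, 43, 36, 42, 38]
print(f"Lin's CCC: {lins_ccc(reference_scores, nonexpert_scores):.3f}")
```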

SUBMITTER: McEvoy MD 

PROVIDER: S-EPMC3467004 | biostudies-literature | 2012 Aug

REPOSITORIES: biostudies-literature


Publications

Validation of a detailed scoring checklist for use during advanced cardiac life support certification.

Matthew D McEvoy, Jeremy C Smalley, Paul J Nietert, Larry C Field, Cory M Furse, John W Blenko, Benjamin G Cobb, Jenna L Walters, Allen Pendarvis, Nishita S Dalal, John J Schaefer

Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 2012 Aug, issue 4



Similar Datasets

| S-EPMC7723912 | biostudies-literature
| S-EPMC8371248 | biostudies-literature
| S-EPMC8804484 | biostudies-literature
| S-EPMC5523088 | biostudies-literature
| S-EPMC5125185 | biostudies-literature
| S-EPMC8217732 | biostudies-literature
| S-EPMC4314335 | biostudies-literature
| S-EPMC5831729 | biostudies-literature
| S-EPMC7267993 | biostudies-literature
| S-EPMC4182114 | biostudies-literature