Dataset Information

Evaluating Capabilities of Large Language Models: Performance of GPT4 on Surgical Knowledge Assessments.


ABSTRACT:

Background

Artificial intelligence (AI) has the potential to dramatically alter healthcare by enhancing how we diagnose and treat disease. One promising AI model is ChatGPT, a large general-purpose language model trained by OpenAI. The chat interface has shown robust, human-level performance on several professional and academic benchmarks. We sought to probe its performance and stability over time on surgical case questions.

Methods

We evaluated the performance of ChatGPT-4 on two surgical knowledge assessments: the Surgical Council on Resident Education (SCORE) and a second commonly used knowledge assessment, referred to as Data-B. Questions were entered in two formats: open-ended and multiple choice. ChatGPT outputs were assessed for accuracy and insights by surgeon evaluators. We categorized reasons for model errors and the stability of performance on repeat encounters.

Results

A total of 167 SCORE and 112 Data-B questions were presented to the ChatGPT interface. ChatGPT correctly answered 71% and 68% of multiple-choice SCORE and Data-B questions, respectively. For both open-ended and multiple-choice questions, approximately two-thirds of ChatGPT responses contained non-obvious insights. Common reasons for inaccurate responses included: inaccurate information in a complex question (n=16, 36.4%); inaccurate information in a fact-based question (n=11, 25.0%); and accurate information with a circumstantial discrepancy (n=6, 13.6%). Upon repeat query, the answer selected by ChatGPT varied for 36.4% of inaccurate questions; the response accuracy changed for 6/16 questions.

Conclusion

Consistent with prior findings, we demonstrate robust near or above human-level performance of ChatGPT within the surgical domain. Unique to this study, we demonstrate a substantial inconsistency in ChatGPT responses with repeat query. This finding warrants future consideration and presents an opportunity to further train these models to provide safe and consistent responses. Without mental and/or conceptual models, it is unclear whether language models such as ChatGPT would be able to safely assist clinicians in providing care.

SUBMITTER: Beaulieu-Jones BR 

PROVIDER: S-EPMC10371188 | biostudies-literature | 2023 Jul

REPOSITORIES: biostudies-literature

Publications

Evaluating Capabilities of Large Language Models: Performance of GPT4 on Surgical Knowledge Assessments.

Brendin R Beaulieu-Jones, Sahaj Shah, Margaret T Berrigan, Jayson S Marwaha, Shuo-Lun Lai, Gabriel A Brat

medRxiv: the preprint server for health sciences, 2023-07-24


Similar Datasets

| S-EPMC11754395 | biostudies-literature
| S-EPMC10396962 | biostudies-literature
| S-EPMC11322688 | biostudies-literature
| S-EPMC10449915 | biostudies-literature
| S-EPMC10168498 | biostudies-literature
| S-EPMC10909174 | biostudies-literature
| S-EPMC10831180 | biostudies-literature
| S-EPMC11551352 | biostudies-literature
| S-EPMC11293997 | biostudies-literature
| S-EPMC11301122 | biostudies-literature