
Dataset Information


Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator.


ABSTRACT:

Objective

Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods.

Materials and methods

We designed a physician survey presenting a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control), and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association.
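As a rough illustration of this analysis step, the sketch below runs a Mantel-Haenszel-style test of association on hypothetical, dichotomized survey responses (understood vs. did not understand, trusted vs. did not trust), stratified by the 3 ML outputs shown to physicians. It uses statsmodels' StratifiedTable; the counts, the dichotomization, and the choice of strata are assumptions for illustration only and do not reproduce the authors' actual analysis of ordinal survey ratings.

```python
# Minimal sketch of a Cochran-Mantel-Haenszel-style stratified analysis,
# assuming hypothetical 2x2 tables (one per ML output / stratum).
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Rows = understood yes/no, columns = trusted yes/no.
# Counts are illustrative only, not study data.
tables = [
    np.array([[40, 10],   # control output (no explanation of its logic)
              [15, 35]]),
    np.array([[48,  7],   # model-agnostic explainability method 1
              [12, 33]]),
    np.array([[50,  6],   # model-agnostic explainability method 2
              [10, 34]]),
]

st = StratifiedTable(tables)

# Mantel-Haenszel test of the null hypothesis that understanding and trust
# are unassociated within every stratum (common odds ratio = 1).
result = st.test_null_odds(correction=True)
print("CMH statistic:", result.statistic)
print("P value:", result.pvalue)
print("Pooled (MH) odds ratio:", st.oddsratio_pooled)
```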

Results

The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability, and between these measures and trust.

Conclusions

Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations, but the explainability method did not alter intended physician behavior.

SUBMITTER: Diprose WK 

PROVIDER: S-EPMC7647292 | biostudies-literature | 2020 Apr

REPOSITORIES: biostudies-literature


Publications

Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator.

William K Diprose, Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, Reece Robinson

Journal of the American Medical Informatics Association (JAMIA), 1 April 2020, issue 4



Similar Datasets

S-EPMC6404456 | biostudies-literature
S-EPMC4428824 | biostudies-literature
S-EPMC6678298 | biostudies-literature
S-EPMC10535779 | biostudies-literature
S-EPMC6500604 | biostudies-other
S-EPMC7099019 | biostudies-literature
S-EPMC9333244 | biostudies-literature
S-EPMC9086002 | biostudies-literature
PXD050577 | 2024-03-13
S-EPMC10227067 | biostudies-literature