
Dataset Information


Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study.


ABSTRACT:

Background

It is believed that artificial intelligence (AI) will be an integral part of health care services in the near future and will be incorporated into several aspects of clinical care such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are among the most important beneficiaries who will potentially interact with these technologies and applications; thus, patients' perceptions may affect the widespread adoption of clinical AI. Patients need assurance that AI clinical applications will not harm them and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, the possible dimensions of concern and risk should be addressed before AI is integrated into routine clinical care.

Objective

The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes, and how their perceptions differ across three health care service encounter scenarios.

Methods

We designed a 2×3 experiment that crossed health condition type (ie, acute or chronic) with three types of clinical encounters between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States.

Results

The interactions between the types of health care service encounters and health conditions significantly influenced individuals' perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases.

Conclusions

The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, various risks remain associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident when AI applications are used as a recommendation system under a physician's experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns about implementing and using AI applications. This study could provide researchers and managers with critical insights into the determinants of individuals' intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI clinical applications.

SUBMITTER: Esmaeilzadeh P 

PROVIDER: S-EPMC8663518 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC6697547 | biostudies-literature
| S-EPMC9396444 | biostudies-literature
| S-EPMC8800095 | biostudies-literature
| S-EPMC8430862 | biostudies-literature
| S-EPMC7424481 | biostudies-literature
| S-EPMC8277302 | biostudies-literature
| S-EPMC9522339 | biostudies-literature
| S-EPMC10182456 | biostudies-literature
| S-EPMC8713099 | biostudies-literature
| S-EPMC9748798 | biostudies-literature