Dataset Information

Clinician Perspectives on Using Computational Mental Health Insights From Patients' Social Media Activities: Design and Qualitative Evaluation of a Prototype.


ABSTRACT:

Background

Previous studies have suggested that social media data, along with machine learning algorithms, can be used to generate computational mental health insights. These computational insights have the potential to support clinician-patient communication during psychotherapy consultations. However, how clinicians perceive and envision using computational insights during consultations has been underexplored.

Objective

The aim of this study is to understand clinician perspectives regarding computational mental health insights from patients' social media activities. We focus on the opportunities and challenges of using these insights during psychotherapy consultations.

Methods

We developed a prototype that can analyze consented patients' Facebook data and visually represent these computational insights. We incorporated the insights into two existing clinician-facing assessment tools: the Hamilton Depression Rating Scale and the Global Functioning: Social Scale. The design intent is that a clinician verbally interviews a patient (eg, How was your mood in the past week?) while reviewing relevant insights from the patient's social media activities (eg, the number of depression-indicative posts). Using the prototype, we conducted interviews (n=15) and 3 focus groups (n=13) with mental health clinicians: psychiatrists, clinical psychologists, and licensed clinical social workers. The transcribed qualitative data were analyzed using thematic analysis.
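The abstract does not describe how the prototype computes its insights; the following is a minimal, hypothetical sketch of the kind of aggregate such a tool might surface (eg, depression-indicative posts per week). The keyword-based classifier and all function names here are illustrative assumptions, not the study's actual machine learning model.

```python
# Hypothetical sketch (not the authors' implementation): count "depression-indicative"
# posts per week from a consented patient's post history, the kind of aggregate a
# clinician-facing dashboard might display alongside an assessment item.
from collections import Counter
from datetime import datetime, timedelta


def is_depression_indicative(text: str) -> bool:
    """Placeholder for a trained classifier; keyword matching is for illustration only."""
    keywords = {"hopeless", "worthless", "can't sleep", "exhausted", "alone"}
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)


def weekly_indicative_counts(posts, weeks=4, now=None):
    """Count depression-indicative posts per week over the last `weeks` weeks.

    `posts` is an iterable of (timestamp: datetime, text: str) tuples.
    Returns a dict mapping weeks-ago (0 = current week) to a count.
    """
    now = now or datetime.now()
    counts = Counter()
    for timestamp, text in posts:
        weeks_ago = (now - timestamp) // timedelta(weeks=1)
        if 0 <= weeks_ago < weeks and is_depression_indicative(text):
            counts[weeks_ago] += 1
    return {week: counts.get(week, 0) for week in range(weeks)}


if __name__ == "__main__":
    sample = [
        (datetime.now() - timedelta(days=2), "Feeling hopeless again, can't sleep."),
        (datetime.now() - timedelta(days=10), "Great dinner with friends tonight!"),
        (datetime.now() - timedelta(days=12), "So exhausted and alone lately."),
    ]
    print(weekly_indicative_counts(sample))
```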

Results

Clinicians reported that the prototype can support clinician-patient collaboration in setting the agenda, communicating symptoms, and navigating patients' verbal reports. They suggested potential use scenarios, such as reviewing the prototype before consultations and using it when patients missed their consultations. They also speculated about potential negative consequences: patients may feel that they are being monitored, which may have negative effects, and the use of the prototype may add to clinicians' already difficult-to-manage workload. Finally, our participants expressed concerns regarding the prototype: they were unsure whether patients' social media accounts represented their actual behaviors; they wanted to learn how and when the machine learning algorithm could fail to meet their expectations of trust; and they were worried about situations where they could not properly respond to the insights, especially emergencies arising outside of clinical settings.

Conclusions

Our findings support the touted potential of computational mental health insights derived from patients' social media account data, especially in the context of psychotherapy consultations. However, sociotechnical issues, such as algorithmic transparency and institutional support, should be addressed in future endeavors to design implementable and sustainable technology.

SUBMITTER: Yoo DW 

PROVIDER: S-EPMC8663497 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC6875652 | biostudies-literature
| S-EPMC8673716 | biostudies-literature
| S-EPMC7333070 | biostudies-literature
| S-EPMC10965491 | biostudies-literature
| S-EPMC10123591 | biostudies-literature
| S-EPMC7442947 | biostudies-literature
| S-EPMC6224760 | biostudies-literature
| S-EPMC10975755 | biostudies-literature
| S-EPMC7147779 | biostudies-literature
| S-EPMC11234137 | biostudies-literature