
Dataset Information


The Promise and Pitfalls of Using Crowdsourcing in Research Prioritization for Back Pain: Cross-Sectional Surveys.


ABSTRACT:

Background

The involvement of patients in research better aligns evidence generation to the gaps that patients themselves face when making decisions about health care. However, obtaining patients' perspectives is challenging. Amazon's Mechanical Turk (MTurk) has gained popularity over the past decade as a crowdsourcing platform for reaching large numbers of individuals who perform tasks for a small reward, at small cost to the investigator. The appropriateness of such crowdsourcing methods in medical research has yet to be clarified.

Objective

The goals of this study were to (1) understand how those on MTurk who screen positive for back pain prioritize research topics compared with those who screen negative for back pain, and (2) determine the qualitative differences in open-ended comments between groups.

Methods

We conducted cross-sectional surveys on MTurk to assess participants' back pain and allow them to prioritize research topics. We paid respondents US $0.10 to complete the 24-point Roland Morris Disability Questionnaire (RMDQ), which we used to categorize participants as those "with back pain" (RMDQ score ≥7) and those "without back pain" (RMDQ score <7). We then offered both groups an opportunity to rank their top 5 (of 18) research topics for an additional US $0.75. We compared demographic information and research priorities between the 2 groups and performed qualitative analyses on the free-text commentary that participants provided.
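The screening rule described above can be expressed as a short sketch. This is a minimal illustration in Python; only the RMDQ cutoff (a score of 7 or more counting as "back pain") comes from the abstract, while the field names, worker IDs, and example scores are assumptions for illustration.

# Minimal sketch of the screening logic described in the Methods.
# Only the RMDQ cutoff (>= 7 screens positive for back pain) comes from the
# abstract; the worker IDs, scores, and field names are illustrative.

RMDQ_CUTOFF = 7

def classify_respondent(rmdq_score: int) -> str:
    """Label a respondent from their 24-point RMDQ score (0-24)."""
    return "back pain" if rmdq_score >= RMDQ_CUTOFF else "no back pain"

screened = [
    {"worker_id": "A1", "rmdq_score": 12},  # would be invited to the back-pain arm
    {"worker_id": "B2", "rmdq_score": 3},   # would be invited to the no-back-pain arm
]

for respondent in screened:
    respondent["group"] = classify_respondent(respondent["rmdq_score"])
    print(respondent["worker_id"], respondent["group"])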

Results

We conducted 2 screening waves. We first screened 2189 individuals for back pain over 33 days and invited 480 (21.93%) who screened positive to complete the prioritization, of whom 350 (72.9% of eligible) did. We later screened 664 individuals over 7 days and invited 474 (71.4%) without back pain to complete the prioritization, of whom 397 (83.7% of eligible) did. Participants with back pain who completed the prioritization were comparable with those without back pain in terms of age, education, marital status, and employment. The group with back pain had a higher proportion of women (234, 67.2% vs 229, 57.8%; P=.02). The groups' rank lists of research priorities were highly correlated: the Spearman correlation coefficient was .88 when considering topics ranked in the top 5. The 2 groups agreed on 4 of the top 5 and 9 of the top 10 research priorities.
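The reported agreement between the two groups' priority lists is a standard rank-correlation computation. The sketch below shows how such a coefficient could be computed with scipy.stats.spearmanr over two rank vectors; the rank values are made-up placeholders rather than the study's data, and only the method (Spearman correlation across the 18 topics) follows the abstract.

# Sketch: Spearman correlation between two groups' rankings of the same
# 18 research topics. The rank values are illustrative placeholders; the
# abstract reports a coefficient of .88 for topics ranked in the top 5.
from scipy.stats import spearmanr

ranks_back_pain    = [1, 2, 3, 4, 5, 6, 8, 7, 9, 10, 12, 11, 13, 14, 15, 16, 17, 18]
ranks_no_back_pain = [1, 3, 2, 4, 6, 5, 7, 8, 10, 9, 11, 12, 14, 13, 15, 16, 18, 17]

rho, p_value = spearmanr(ranks_back_pain, ranks_no_back_pain)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")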

Conclusions

Crowdsourcing platforms such as MTurk support efforts to efficiently reach large groups of individuals to obtain input on research activities. In the context of back pain, a prevalent and easily understood condition, the rank list of those with back pain was highly correlated with that of those without back pain. However, subtle differences in the content and quality of free-text comments suggest supplemental efforts may be needed to augment the reach of crowdsourcing in obtaining perspectives from patients, especially from specific populations.

SUBMITTER: Bartek MA 

PROVIDER: S-EPMC5650676 | biostudies-literature | 2017 Oct

REPOSITORIES: biostudies-literature


Publications

The Promise and Pitfalls of Using Crowdsourcing in Research Prioritization for Back Pain: Cross-Sectional Surveys.

Matthew A Bartek, Anjali R Truitt, Sierra Widmer-Rodriguez, Jordan Tuia, Zoya A Bauer, Bryan A Comstock, Todd C Edwards, Sarah O Lawrence, Sarah E Monsell, Donald L Patrick, Jeffrey G Jarvik, Danielle C Lavallee

Journal of Medical Internet Research, 2017-10-06, issue 10


Similar Datasets

| S-EPMC3799069 | biostudies-literature
| S-EPMC4189631 | biostudies-literature
| S-EPMC4161855 | biostudies-other
| S-EPMC5491135 | biostudies-literature
| S-EPMC8721077 | biostudies-literature
| S-EPMC4681766 | biostudies-literature
| S-EPMC4039787 | biostudies-literature
| S-EPMC3988828 | biostudies-literature
| S-EPMC2908106 | biostudies-other
| S-EPMC6341699 | biostudies-literature