
Dataset Information


Using crowdsourcing to evaluate published scientific literature: methods and example.


ABSTRACT: Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4% and 8% of ratings failing manual review in the different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and the opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity, and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, the data were aggregated and analyzed, and showed no significant association between popular opinion and the attention a paper received, as measured by SCImago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low cost, crowdsourcing has the potential to evaluate published literature in a cost-effective, quick, and reliable manner using existing, easily accessible resources.
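For illustration, the chance-corrected agreement statistic reported above (Cohen's kappa) can be computed from two raters' screening decisions as in the following minimal Python sketch. The cohen_kappa helper and the example labels are hypothetical and are not the authors' analysis code.

from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude screening decisions from two microworkers:
rater_a = ["include", "include", "exclude", "include", "exclude", "exclude", "include", "exclude"]
rater_b = ["include", "exclude", "exclude", "include", "exclude", "include", "include", "exclude"]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.3f}")  # 0.500 for these toy labels

Kappa values between roughly 0.41 and 0.60 are conventionally described as moderate agreement, which is how the reported 0.586 is characterized in the abstract.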

SUBMITTER: Brown AW 

PROVIDER: S-EPMC4079692 | biostudies-literature | 2014

REPOSITORIES: biostudies-literature


Publications

Using crowdsourcing to evaluate published scientific literature: methods and example.

Andrew W. Brown, David B. Allison

PLoS ONE, 2 July 2014 (issue 7)



Similar Datasets

| S-EPMC5897790 | biostudies-literature
| S-EPMC8175269 | biostudies-literature
| S-EPMC8362035 | biostudies-literature
| S-EPMC6492164 | biostudies-literature
| S-EPMC5860084 | biostudies-literature
| S-EPMC5362196 | biostudies-literature
| S-EPMC8675915 | biostudies-literature
| S-EPMC7226051 | biostudies-literature
| S-EPMC6341248 | biostudies-literature
| S-EPMC9587980 | biostudies-literature