
Dataset Information


Estimating the deep replicability of scientific findings using human and artificial intelligence.


ABSTRACT: Replicability tests of scientific papers show that the majority of papers fail replication. Moreover, failed papers circulate through the literature as quickly as replicating papers. This dynamic weakens the literature, raises research costs, and demonstrates the need for new approaches to estimating a study's replicability. Here, we trained an artificial intelligence model to estimate a paper's replicability using ground-truth data on studies that had passed or failed manual replication tests, and then tested the model's generalizability on an extensive set of out-of-sample studies. The model predicts replicability better than the base rate of reviewers and about as well as prediction markets, the best present-day method for predicting replicability. In out-of-sample tests on manually replicated papers from diverse disciplines and methods, the model reached strong accuracy levels of 0.65 to 0.78. Exploring the reasons behind the model's predictions, we found no evidence of bias based on topics, journals, disciplines, base rates of failure, persuasion words, or novelty words such as "remarkable" or "unexpected." We did find that the model's accuracy is higher when it is trained on a paper's text rather than its reported statistics, and that n-grams, higher-order word combinations that humans have difficulty processing, correlate with replication. We discuss how combining human and machine intelligence can raise confidence in research, provide research self-assessment techniques, and create methods scalable and efficient enough to review the ever-growing number of publications, a task that entails extensive human resources to accomplish with prediction markets and manual replication alone.
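As an illustration of the text-based approach the abstract describes, the sketch below trains a simple n-gram classifier on paper texts labeled with manual replication outcomes. The toy corpus, the TF-IDF plus logistic-regression pipeline, and all variable names are illustrative assumptions for exposition only; this is not the authors' published model or training data.

```python
# Minimal sketch of a text-based replicability classifier: n-gram
# features from a paper's text, a binary label from manual replication
# tests (1 = replicated, 0 = failed). Illustrative assumption only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: real inputs would be full paper texts
# paired with ground-truth pass/fail replication outcomes.
train_texts = [
    "we report a large effect that held across two direct replications",
    "participants showed a small effect in a single underpowered sample",
    "the association was robust in a preregistered confirmatory analysis",
    "an exploratory analysis suggested a surprising interaction effect",
]
train_labels = [1, 0, 1, 0]

# Unigrams plus bigrams stand in for the higher-order word combinations
# (n-grams) that the abstract reports correlate with replication.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# Score an unseen text; predict_proba returns an estimated replication
# probability rather than a hard pass/fail call.
new_text = ["the effect was confirmed in a preregistered replication"]
print(model.predict_proba(new_text)[0, 1])
```

In this setup the continuous probability output could be compared against prediction-market prices or reviewer base rates, in the spirit of the comparisons reported in the abstract.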

SUBMITTER: Yang Y 

PROVIDER: S-EPMC7245108 | biostudies-literature | 2020 May

REPOSITORIES: biostudies-literature


Publications

Estimating the deep replicability of scientific findings using human and artificial intelligence.

Yang Yang, Wu Youyou, Brian Uzzi

Proceedings of the National Academy of Sciences of the United States of America, 2020 May 4; issue 20


Similar Datasets

| S-EPMC9552145 | biostudies-literature
| S-EPMC9755280 | biostudies-literature
| S-EPMC10045890 | biostudies-literature
| S-EPMC8621095 | biostudies-literature
| S-EPMC7607084 | biostudies-literature
| S-EPMC8128724 | biostudies-literature
| S-EPMC9035975 | biostudies-literature
| S-EPMC7762465 | biostudies-literature
| S-EPMC8371476 | biostudies-literature
| S-EPMC5537099 | biostudies-other