
Dataset Information

Low agreement among reviewers evaluating the same NIH grant applications.


ABSTRACT: Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers' evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers' ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers "translated" a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.

SUBMITTER: Pier EL 

PROVIDER: S-EPMC5866547 | biostudies-literature | 2018 Mar

REPOSITORIES: biostudies-literature

Publications

Low agreement among reviewers evaluating the same NIH grant applications.

Pier EL, Brauer M, Filut A, Kaatz A, Raclaw J, Nathan MJ, Ford CE, Carnes M

Proceedings of the National Academy of Sciences of the United States of America, 2018 Mar 5; (12)


Similar Datasets

| S-EPMC2219790 | biostudies-other
| S-EPMC8268663 | biostudies-literature
| S-EPMC11216609 | biostudies-literature
| S-EPMC6699508 | biostudies-literature
| S-EPMC212789 | biostudies-literature
| S-EPMC10553257 | biostudies-literature
2024-10-10 | PXD050548 | Pride
2010-04-29 | GSE21534 | GEO
| S-EPMC8485516 | biostudies-literature
| S-EPMC6466482 | biostudies-other