Selection bias, vote counting, and money-priming effects: A comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015).
ABSTRACT: When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a "vote counting" approach to decide whether the effect is reliable; that is, to simply compare the number of successful and unsuccessful replications. Vohs's (2015) response to the absence of money-priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy for assessing the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices. In the present comment, we show that a range of meta-analytic tools reveal irregularities in the money-priming literature discussed by Rohrer et al. and Vohs, all of which point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases were minimized through preregistration or transparent reporting. Our major conclusion is that the simple proportion of significant findings is a poor guide to the reliability of research and that preregistered replications are an essential means of assessing the reliability of money-priming effects.
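The contrast between vote counting and bias-sensitive meta-analysis can be made concrete with a minimal sketch. The snippet below uses invented effect sizes and standard errors (not data from the money-priming literature) and a simplified Egger-style funnel-asymmetry regression; it is an illustration of the general idea, not a reproduction of the meta-analytic tools applied in the comment.

```python
# Minimal sketch: vote counting versus an Egger-style asymmetry check.
# The effect sizes and standard errors below are invented placeholders,
# not data from Rohrer et al. (2015), Vohs (2015), or the money-priming
# literature.
import numpy as np
from scipy import stats

effects = np.array([0.45, 0.38, 0.52, 0.30, 0.05, -0.02, 0.41, 0.08])  # study-level effect estimates
ses = np.array([0.20, 0.18, 0.25, 0.15, 0.10, 0.09, 0.22, 0.11])       # their standard errors

# Vote counting: tally studies reaching p < .05 in a two-sided z test.
z = effects / ses
p = 2 * stats.norm.sf(np.abs(z))
print(f"Vote count: {int((p < .05).sum())} of {len(p)} studies significant")

# Egger-style regression: standardized effect (z) on precision (1 / SE).
# An intercept far from zero signals funnel-plot asymmetry, a signature of
# selection bias that a simple vote count cannot detect. A formal Egger test
# would evaluate whether the intercept differs significantly from zero.
precision = 1 / ses
slope, intercept, r, p_slope, se_slope = stats.linregress(precision, z)
print(f"Egger-style intercept: {intercept:.2f}")
```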
SUBMITTER: Vadillo MA
PROVIDER: S-EPMC4831299 | biostudies-other | 2016 May
REPOSITORIES: biostudies-other