The quest for an optimal alpha.
ABSTRACT: Researchers who analyze data within the framework of null hypothesis significance testing must choose a critical "alpha" level, α, to use as a cutoff for deciding whether a given set of data demonstrates the presence of a particular effect. In most fields, α = 0.05 has traditionally been used as the standard cutoff. Many researchers have recently argued for a change to a more stringent evidence cutoff such as α = 0.01, 0.005, or 0.001, noting that this change would tend to reduce the rate of false positives, which are of growing concern in many research areas. Other researchers oppose this proposed change, however, because it would correspondingly tend to increase the rate of false negatives. We show how a simple statistical model can be used to explore the quantitative tradeoff between reducing false positives and increasing false negatives. In particular, the model shows how the optimal α level depends on numerous characteristics of the research area, and it reveals that although α = 0.05 would indeed be approximately the optimal value in some realistic situations, the optimal α could actually be substantially larger or smaller in other situations. The importance of the model lies in making it clear what characteristics of the research area have to be specified to make a principled argument for using one α level rather than another, and the model thereby provides a blueprint for researchers seeking to justify a particular α level.
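The following is a minimal illustrative sketch of the kind of tradeoff the abstract describes, not the authors' exact decision-theoretic model. It assumes a two-sided one-sample z-test approximation for power and hypothetical research-area characteristics (base rate of true effects p_real, effect size, sample size, and relative costs of false positives and false negatives), then scans α for the value that minimizes the expected decision cost per study.

```python
import numpy as np
from scipy.stats import norm

def power_two_sided(alpha, effect_size, n):
    """Approximate power of a two-sided one-sample z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    delta = effect_size * np.sqrt(n)  # noncentrality parameter
    return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

def expected_cost(alpha, p_real, effect_size, n, cost_fp=1.0, cost_fn=1.0):
    """Expected cost per study: false positives can occur only when the null
    is true; false negatives can occur only when the effect is real."""
    power = power_two_sided(alpha, effect_size, n)
    return (1 - p_real) * alpha * cost_fp + p_real * (1 - power) * cost_fn

# Hypothetical research-area characteristics (assumptions, not from the paper)
p_real, effect_size, n = 0.2, 0.5, 40   # base rate, Cohen's d, sample size
alphas = np.linspace(1e-4, 0.2, 2000)
costs = [expected_cost(a, p_real, effect_size, n) for a in alphas]
best_alpha = alphas[int(np.argmin(costs))]
print(f"Cost-minimizing alpha under these assumptions: {best_alpha:.3f}")
```

Changing the assumed base rate, effect size, sample size, or cost ratio shifts the cost-minimizing α, which is the point the abstract makes: the optimal cutoff depends on characteristics of the research area rather than being fixed at 0.05.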
SUBMITTER: Miller J
PROVIDER: S-EPMC6314595 | biostudies-literature | 2019
REPOSITORIES: biostudies-literature