Project description: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.
We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies.
Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
Project description:
Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making.
Methodology/principal findings: In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies.
Conclusions: This update does not change the conclusions of the original review, in which 16 studies were included. It provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
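Both versions of this review summarise outcome reporting bias as an odds ratio: the odds that a statistically significant outcome is fully reported relative to the odds for a non-significant outcome. A minimal sketch of that computation, using hypothetical counts rather than data from the included studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table of outcome reporting:
    a = significant, fully reported      b = significant, not fully reported
    c = non-significant, fully reported  d = non-significant, not fully reported"""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: OR = (80*50)/(20*50) = 4.0, inside the 2.2-4.7 range reported
print(odds_ratio_ci(80, 20, 50, 50))
```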
Project description: The accuracy of a diagnostic test, which is often quantified by a pair of measures such as sensitivity and specificity, is critical for medical decision making. Separate studies of an investigational diagnostic test can be combined through meta-analysis; however, such an analysis can be threatened by publication bias. To the best of our knowledge, there is no existing method that accounts for publication bias in the meta-analysis of diagnostic tests involving bivariate outcomes. In this paper, we extend the Copas selection model from univariate outcomes to bivariate outcomes for the correction of publication bias when the probability of a study being published can depend on its sensitivity, specificity, and the associated standard errors. We develop an expectation-maximization algorithm for the maximum likelihood estimation under the proposed selection model. We investigate the finite sample performance of the proposed method through simulation studies and illustrate the method by assessing a meta-analysis of 17 published studies of a rapid diagnostic test for influenza.
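The paper builds on the Copas selection model, in which a study is published only when a latent propensity z_i = gamma0 + gamma1/s_i + delta_i is positive, with delta_i correlated (rho) with the study's sampling error. Below is a minimal sketch of the univariate version of that likelihood, maximized by direct numerical optimization rather than the paper's EM algorithm; the effect estimates, standard errors, and fixed selection parameters are all hypothetical:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

y = np.array([0.8, 1.1, 0.5, 1.4, 0.9])   # observed effect estimates (hypothetical)
s = np.array([0.2, 0.3, 0.4, 0.5, 0.25])  # their standard errors
gamma0, gamma1 = -0.5, 0.3                # selection parameters, held fixed here
                                          # (as in a Copas sensitivity analysis)

def neg_loglik(params):
    mu, log_tau, rho = params
    tau = np.exp(log_tau)
    sigma = np.sqrt(tau**2 + s**2)        # marginal SD of y_i under random effects
    u = gamma0 + gamma1 / s               # mean of the latent publication propensity
    rho_i = rho * s / sigma               # corr(y_i, z_i) implied by the model
    resid = (y - mu) / sigma
    v = (u + rho_i * resid) / np.sqrt(1 - rho_i**2)
    ll = (norm.logpdf(resid) - np.log(sigma)   # density of the observed effect
          + norm.logcdf(v) - norm.logcdf(u))   # correction for selective publication
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.9, np.log(0.1), 0.0],
               bounds=[(None, None), (None, None), (-0.99, 0.99)])
print(fit.x)  # bias-corrected pooled effect, log(tau), rho
```

The bivariate extension in the paper replaces the scalar effect with the (sensitivity, specificity) pair and lets the selection probability depend on both components and their standard errors.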
Project description: Research on goal priming asks whether the subtle activation of an achievement goal can improve task performance. Studies in this domain employ a range of priming methods, such as surreptitiously displaying a photograph of an athlete winning a race, and a range of dependent variables including measures of creativity and workplace performance. Chen, Latham, Piccolo and Itzchakov (Chen et al. 2021 J. Appl. Psychol. 70, 216-253) recently undertook a meta-analysis of this research and reported positive overall effects in both laboratory and field studies, with field studies yielding a moderate-to-large effect that was significantly larger than that obtained in laboratory experiments. We highlight a number of issues with Chen et al.'s selection of field studies and then report a new meta-analysis (k = 13, N = 683) that corrects these. The new meta-analysis reveals suggestive evidence of publication bias and low power in goal priming field studies. We conclude that the available evidence falls short of demonstrating goal priming effects in the workplace, and offer proposals for how future research can provide stronger tests.
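For context, a meta-analysis of this size (k = 13) typically pairs a random-effects pooled estimate with a funnel-asymmetry check such as Egger's regression, one common route to the kind of suggestive publication bias evidence reported here. A sketch with hypothetical effect sizes and standard errors, not Chen et al.'s data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical standardized effects and SEs for k = 13 field studies
d  = np.array([0.5, 0.3, 0.8, 0.1, 0.6, 0.4, 0.7, 0.2, 0.9, 0.3, 0.5, 0.6, 0.4])
se = np.array([0.3, 0.2, 0.4, 0.2, 0.3, 0.25, 0.35, 0.2, 0.45, 0.25, 0.3, 0.3, 0.25])

# DerSimonian-Laird random-effects pooling
w_fixed = 1 / se**2
d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)
Q = np.sum(w_fixed * (d - d_fixed)**2)
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)      # between-study variance estimate
w = 1 / (se**2 + tau2)
d_re = np.sum(w * d) / np.sum(w)
print(f"pooled d = {d_re:.2f}, tau^2 = {tau2:.3f}")

# Egger's regression: standardized effect on precision; a non-zero intercept
# indicates funnel-plot asymmetry consistent with publication bias.
egger = sm.OLS(d / se, sm.add_constant(1 / se)).fit()
print(egger.params, egger.pvalues)
```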
Project description: In this study, we explore the potential for publication bias using market simulation results that estimate the effect of US ethanol expansion on corn prices. We provide a new test of whether the publication process routes market simulation results into one of two narratives: food-versus-fuel or greenhouse gas (GHG) emissions. Our research question is whether model results with either high price or large land-use impacts are favored for publication in one body of literature or the other. In other words, a model that generates larger price effects might be more readily published in the food-versus-fuel literature, while a model that generates larger land use change and GHG emissions might find a home in the GHG emissions literature. We develop a test for publication bias based on matching narrative and normalized price effects from simulated market models. As such, our approach differs from past studies of publication bias, which typically focus on statistically estimated parameters. This focus could have broad implications: if more future studies assess publication bias in quantitative results that are not statistically estimated parameters, important inferences about publication bias could be drawn. More specifically, such a body of literature could explore whether practices common to statistical methods, or to other methods, tend to encourage or deter publication bias. In the present case, our findings do not detect a relationship between food-versus-fuel or GHG narrative orientation and corn price effects. The results are relevant to debates about biofuel impacts, and our approach can inform the publication bias literature more generally.
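The test described above can be framed as a regression of each model's normalized price effect on an indicator for the narrative in which it was published; a non-zero coefficient would indicate routing of large price effects into the food-versus-fuel literature. A schematic sketch on synthetic data (the authors' actual matching and normalization procedure is richer than this):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                           # hypothetical number of model results
food_vs_fuel = rng.integers(0, 2, n)             # 1 = food-versus-fuel outlet, 0 = GHG
price_effect = rng.normal(0.15, 0.05, n)         # normalized corn-price effect per study

# If publication routed large price effects into the food-versus-fuel literature,
# the coefficient on the narrative indicator would be positive and significant.
res = sm.OLS(price_effect, sm.add_constant(food_vs_fuel)).fit()
print(res.summary().tables[1])
```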
Project description:
Background: Systematic reviews and meta-analyses of pre-clinical studies, in vivo animal experiments in particular, can influence clinical care. Publication bias is one of the major threats to validity in systematic reviews and meta-analyses. Previous empirical studies suggested that systematic reviews and meta-analyses became more prevalent up to 2010 and found evidence of compromised methodological rigor, with a trend towards improvement. We aim to comprehensively summarize and update the evidence base on systematic reviews and meta-analyses of animal studies, their methodological quality, and their assessment of publication bias in particular.
Methods/design: The objectives of this systematic review are as follows:
• To investigate the epidemiology of published systematic reviews of animal studies to the present.
• To examine methodological features of systematic reviews and meta-analyses of animal studies, with special attention to the assessment of publication bias.
• To investigate the influence of systematic reviews of animal studies on clinical research by examining citations of the systematic reviews by clinical studies.
Eligible studies for this systematic review are systematic reviews and meta-analyses that summarize in vivo animal experiments with the purpose of reviewing animal evidence to inform human health. We will exclude genome-wide association studies and animal experiments whose main purpose is to learn more about fundamental biology, physical functioning, or behavior. In addition to including systematic reviews and meta-analyses identified by other empirical studies, we will systematically search Ovid Medline, Embase, ToxNet, and ScienceDirect from 2009 to January 2013 for further eligible studies, without language restrictions. Two reviewers working independently will assess titles, abstracts, and full texts for eligibility and extract relevant data from included studies. Data reporting will involve a descriptive summary of the meta-analyses and systematic reviews.
Discussion: Results are expected to be publicly available later in 2013 and may form the basis for recommendations to improve the quality of systematic reviews and meta-analyses of animal studies and their use with respect to clinical care.
Project description:
Background: Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests for publication bias exist, no in-depth evaluations are available that examine which test performs best in different research settings.
Methods: Four tests for publication bias, Egger's test (FAT), p-uniform, the test of excess significance (TES), and the caliper test, were evaluated in a Monte Carlo simulation. Two types of publication bias were simulated, each at three degrees (0%, 50%, 100%). The type of publication bias was defined either as file-drawer bias, meaning the repeated analysis of new datasets until a significant result is obtained, or as p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of primary studies available to the publication bias tests (K = 100, 1,000) were varied.
Results: All tests evaluated were able to identify publication bias in both the file-drawer and the p-hacking conditions. The false positive rates were unbiased, with the exception of the 15% and 20% caliper tests. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was slightly better, except under effect heterogeneity. The caliper tests (CTs), however, were inferior to the other tests under effect homogeneity and had decent statistical power only in conditions with 1,000 primary studies.
Discussion: The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, as well as under p-hacking, the TES is the first alternative to the FAT. The 5% caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, as may be found when publication bias is examined in a discipline-wide setting where primary studies cover different research problems.
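As an illustration of this kind of Monte Carlo design, the sketch below simulates 100% file-drawer selection (only significant, positive effect estimates are published, with the true effect at zero) and records how often the FAT flags the resulting meta-analysis. It is simplified relative to the simulation described above: study effect estimates are drawn directly rather than estimated from simulated primary datasets.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_meta(k=100, beta=0.0):
    """One meta-analysis of k published studies under 100% file-drawer selection:
    only significant, positive effect estimates make it into the literature."""
    effects, ses = [], []
    while len(effects) < k:
        se = rng.uniform(0.1, 0.5)      # study precision varies across studies
        b = rng.normal(beta, se)        # the study's effect estimate
        if b / se < 1.96:               # non-significant: stays in the file drawer
            continue
        effects.append(b)
        ses.append(se)
    e, s = np.array(effects), np.array(ses)
    fat = sm.OLS(e / s, sm.add_constant(1 / s)).fit()  # Egger regression
    return fat.pvalues[0] < 0.05        # FAT rejects if the intercept is non-zero

rejections = sum(one_meta() for _ in range(200))
print(f"FAT rejection rate under 100% file-drawer selection: {rejections / 200:.2f}")
```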
Project description: Publication and related biases constitute serious threats to the validity of research synthesis. If research syntheses are based on a biased selection of the available research, there is an increased risk of producing misleading results. The purpose of this study is to explore the extent of positive outcome bias, time-lag bias, and place-of-publication bias in published research on the effects of psychological, social, and behavioral interventions. The results are based on 527 Swedish outcome trials published in peer-reviewed journals between 1990 and 2019. We found no difference in the number of studies reporting significant versus non-significant findings, nor in the number of studies reporting strong effect sizes, in the published literature. We found no evidence of time-lag bias or place-of-publication bias. The average reported effect size remained constant over time, as did the proportion of studies reporting significant effects.
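A time-lag-bias check of the kind reported above can be run as a regression of coded effect size on publication year; a slope near zero is evidence that effects do not shrink, and significant findings do not cluster, early in the period. A sketch on a hypothetical stand-in for the 527 coded trials:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
year = rng.integers(1990, 2020, 527)         # publication year per trial (hypothetical)
effect = rng.normal(0.4, 0.2, 527)           # coded effect size per trial (hypothetical)

# Regress effect size on years since 1990; time-lag bias predicts a negative slope.
trend = sm.OLS(effect, sm.add_constant(year - 1990)).fit()
print(f"slope per year = {trend.params[1]:.4f} (p = {trend.pvalues[1]:.2f})")
```

The same regression with a significance indicator as the outcome (a linear probability model, or a logit) checks whether the proportion of significant findings changes over time.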