Project description: Random-effects meta-analyses of observational studies can produce biased estimates if the synthesized studies are subject to unmeasured confounding. We propose sensitivity analyses quantifying the extent to which unmeasured confounding of specified magnitude could reduce the proportion of scientifically meaningful true effect sizes to below a given threshold. We also develop converse methods to estimate the strength of confounding capable of reducing the proportion of scientifically meaningful true effects to below a chosen threshold. These methods apply when a "bias factor" is assumed to be normally distributed across studies or is assessed across a range of fixed values. Our estimators are derived from recently proposed sharp bounds on confounding bias within a single study that make no assumptions about the unmeasured confounders themselves or the functional form of their relationships with the exposure and outcome of interest. We provide an R package, EValue, and a free website that compute point estimates and inference and produce plots for conducting such sensitivity analyses. These methods facilitate principled use of random-effects meta-analyses of observational studies to assess the strength of causal evidence for a hypothesis.
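As a concrete illustration of the first of these analyses, the base-R sketch below implements the normal-bias calculation implied by the description: if the log bias factor is normally distributed across studies, the confounding-corrected true effects are again normal, and the proportion above a threshold is a tail probability. The function and argument names here are ours; the EValue package's confounded_meta() function provides this kind of computation with accompanying inference.

```r
# A minimal base-R sketch, assuming the setup described above: the pooled
# log risk ratio yr and heterogeneity t2 come from a random-effects
# meta-analysis, and the log bias factor is N(muB, sigB^2) across studies.
# Then the confounding-corrected true effects are N(yr - muB, t2 - sigB^2),
# and the proportion of true effects above a threshold q (log scale, for an
# apparently causative pooled effect) is a normal tail probability.
prop_meaningful <- function(yr, t2, muB, sigB, q) {
  stopifnot(t2 > sigB^2)  # corrected heterogeneity must remain positive
  1 - pnorm((q - (yr - muB)) / sqrt(t2 - sigB^2))
}

# Example (values hypothetical): pooled RR = 1.5 with tau^2 = 0.15; what
# proportion of true RRs would still exceed 1.1 if the bias factor averaged
# RR = 1.3 across studies?
prop_meaningful(yr = log(1.5), t2 = 0.15, muB = log(1.3), sigB = 0.1,
                q = log(1.1))
```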
Project description: BACKGROUND: Mediation analysis is a powerful tool for understanding mechanisms, but conclusions about direct and indirect effects will be invalid if there is unmeasured confounding of the mediator-outcome relationship. Sensitivity analysis methods allow researchers to assess the extent of this bias but are not always used. One particularly straightforward technique that requires minimal assumptions is nonetheless difficult to interpret, and so would benefit from a more intuitive parameterization. METHODS: We conducted an exhaustive numerical search over simulated mediation effects, calculating the proportion of scenarios in which a bound for unmeasured mediator-outcome confounding held under an alternative parameterization. RESULTS: In over 99% of cases, the bound for the bias held when we described the strength of confounding directly via the confounder-mediator relationship instead of via the conditional exposure-confounder relationship. CONCLUSIONS: Researchers can conduct sensitivity analysis using a method that describes the strength of the confounder-outcome relationship and the approximate strength of the confounder-mediator relationship that, together, would be required to explain away a direct or indirect effect.
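To make the conclusion concrete, here is a minimal sketch of how such a bound is typically applied, assuming a Ding-and-VanderWeele-style bounding factor under the alternative parameterization described above; the function and argument names are our illustration, not the paper's exact formula.

```r
# A minimal sketch, assuming a Ding-and-VanderWeele-style bounding factor
# under the alternative parameterization described above: RRuy is the
# confounder-outcome risk ratio and RRum the (approximate)
# confounder-mediator risk ratio. Function and argument names are ours.
bias_bound <- function(RRuy, RRum) {
  RRuy * RRum / (RRuy + RRum - 1)  # maximal bias on the risk-ratio scale
}

# Example (hypothetical values): confounding of this strength could shift an
# observed direct-effect RR of 1.4 down to 1.4 / B; if that crosses the
# null, the effect could be explained away.
B <- bias_bound(RRuy = 2, RRum = 1.8)
c(bound = B, adjusted_estimate = 1.4 / B)
```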
Project description: It is often of interest to decompose the total effect of an exposure into a component that acts on the outcome through some mediator and a component that acts independently through other pathways. Said another way, we are interested in the direct and indirect effects of the exposure on the outcome. Even if the exposure is randomly assigned, it is often infeasible to randomize the mediator, so mediator-outcome confounding may not be fully controlled. We develop a sensitivity analysis technique that can bound the direct and indirect effects without parametric assumptions about the unmeasured mediator-outcome confounding.
Project description: Evidence for the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded due to undiagnosed diseases that may affect weight loss. We used data from the Nurses' Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss strategies. We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Several additional sensitivity analyses were conducted. The estimated 26-year risk of CHD did not change under weight loss strategies using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no weight loss ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss, with and without restricting the weight loss strategy to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. We estimated that maintaining or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-aged US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations, but these did not prevent us from estimating a beneficial effect of weight loss on diabetes.
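A minimal sketch of the lag-time device described above, under our own hypothetical data layout: with a k-year lag, the exposure history is shifted forward so that weight loss cannot affect the outcome until k years after it occurs.

```r
# A minimal sketch (data layout and names hypothetical): shift a per-person
# yearly exposure vector forward by k periods before it enters the outcome
# model, treating the first k periods as unexposed.
lag_exposure <- function(x, k) {
  c(rep(0, min(k, length(x))), head(x, max(0, length(x) - k)))
}

# Example: one participant's yearly weight-loss indicator, lagged 6 years
wl <- c(0, 1, 1, 1, 0, 0, 1, 1, 1, 1)
lag_exposure(wl, k = 6)
```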
Project description: Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for the additive, risk-ratio, and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with a constant effect on the outcome across treatment levels.
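The closing remark can be made concrete. In the binary-confounder, constant-effect setting, the multiplicative bias in the observed risk ratio reduces to a one-line formula; the sketch below (argument names ours) computes it and uses it to bias-adjust an observed estimate.

```r
# A minimal sketch of the simple special case noted above: a binary
# unmeasured confounder U with a constant risk ratio RRud for the outcome
# across treatment levels. The multiplicative bias in the observed risk
# ratio depends only on RRud and the prevalence of U among the treated (p1)
# and untreated (p0).
rr_bias <- function(RRud, p1, p0) {
  (1 + (RRud - 1) * p1) / (1 + (RRud - 1) * p0)
}

# Example (hypothetical values): U doubles the outcome risk and is twice as
# common among the treated; divide the observed RR by the bias factor.
B <- rr_bias(RRud = 2, p1 = 0.6, p0 = 0.3)
c(bias_factor = B, adjusted_rr = 1.8 / B)
```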
Project description: Background: Assessing the real-world comparative effectiveness of common interventions is challenged by unmeasured confounding. Objective: To determine whether the mortality benefit shown for drug-eluting stents (DES) over bare metal stents (BMS) in observational studies persists after controlling for, and testing for, confounding. Data sources/study setting: Retrospective observational study involving 38,019 patients, 65 years or older, admitted for an index percutaneous coronary intervention receiving DES or BMS in Pennsylvania in 2004-2005 and followed up for death through 3 years. Study design: Analysis was at the patient level. Mortality was analyzed with Cox proportional hazards models allowing for stratification by disease severity or DES use propensity, accounting for clustering of patients. Instrumental variables analysis used lagged physician stent usage to proxy for the focal stent type decision. A method originating in work by Cornfield and others in 1954 and popularized by Greenland in 1996 was used to assess robustness to confounding. Principal findings: DES was associated with a significantly lower adjusted risk of death at 3 years in Cox and in instrumented analyses. An implausibly strong hypothetical unobserved confounder would be required to fully explain these results. Conclusions: Confounding by indication can bias observational studies. No strong evidence of such selection bias was found in the reduced risk of death among elderly patients receiving DES instead of BMS in a Pennsylvania statewide population.
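For readers unfamiliar with the Cornfield-style approach mentioned above, the sketch below shows its basic logic; the example hazard ratio is hypothetical, not taken from the study.

```r
# A minimal sketch of the Cornfield-style logic: for an unmeasured binary
# confounder to fully explain an observed risk ratio, its associations with
# both treatment and outcome must each be at least as strong as the observed
# association; protective estimates are inverted first.
cornfield_min_strength <- function(rr) {
  rr <- if (rr < 1) 1 / rr else rr
  c(min_RR_confounder_treatment = rr, min_RR_confounder_outcome = rr)
}

# Example: a hypothetical hazard ratio of 0.75 for DES vs. BMS would require
# a confounder associated with both stent choice and mortality at RR >= 1.33
# to be fully explained away.
cornfield_min_strength(0.75)
```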
Project description: Models based on the Ornstein-Uhlenbeck process have become standard for the comparative study of adaptation. Cooper et al. (2016) have cast doubt on this practice by claiming statistical problems with fitting Ornstein-Uhlenbeck models to comparative data. Specifically, they claim that statistical tests of Brownian motion may have inflated Type I error rates and that such error rates are exacerbated by measurement error. In this note, we argue that these results have little relevance to the estimation of adaptation with Ornstein-Uhlenbeck models, for three reasons. First, we point out that Cooper et al. (2016) did not consider the detection of distinct optima (e.g., for different environments), and therefore did not evaluate the standard test for adaptation. Second, we show that consideration of parameter estimates, and not just statistical significance, will usually lead to correct inferences about evolutionary dynamics. Third, we show that bias due to measurement error can be corrected for by standard methods. We conclude that Cooper et al. (2016) have not identified any statistical problems specific to Ornstein-Uhlenbeck models, and that their cautions against the use of these models in comparative analyses are unfounded and misleading. [adaptation; Ornstein-Uhlenbeck model; phylogenetic comparative method]
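As background for readers unfamiliar with the models under discussion, the sketch below (ours, not part of the note) simulates the two processes by Euler-Maruyama discretization: Brownian motion drifts freely, while the Ornstein-Uhlenbeck process is pulled toward an optimum theta at rate alpha, which is what makes it a model of adaptation.

```r
# A sketch contrasting Brownian motion (alpha = 0, no pull) with an
# Ornstein-Uhlenbeck process that is attracted toward an optimum theta.
set.seed(1)
simulate_ou <- function(n, alpha, theta, sigma, x0 = 0, dt = 0.01) {
  x <- numeric(n)
  x[1] <- x0
  for (t in 2:n) {
    x[t] <- x[t - 1] + alpha * (theta - x[t - 1]) * dt +
      sigma * sqrt(dt) * rnorm(1)
  }
  x
}

bm <- simulate_ou(1000, alpha = 0, theta = 0, sigma = 1)  # pure drift
ou <- simulate_ou(1000, alpha = 5, theta = 2, sigma = 1)  # settles near 2
c(bm_end = tail(bm, 1), ou_late_mean = mean(ou[501:1000]))
```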
Project description: An important concern in an observational study is whether or not there is unmeasured confounding, that is, unmeasured ways in which the treatment and control groups differ before treatment that affect the outcome. We develop a test of whether there is unmeasured confounding when an instrumental variable (IV) is available. An IV is a variable that is independent of the unmeasured confounding and encourages a subject to take one treatment level versus another, while having no effect on the outcome beyond its encouragement of a certain treatment level. We show what types of unmeasured confounding can be tested for with an IV and develop a test for this type of unmeasured confounding that has the correct type I error rate. We show that the widely used Durbin-Wu-Hausman test can have an inflated type I error rate when there is treatment effect heterogeneity. Additionally, we show that our test provides more insight into the nature of the unmeasured confounding than the Durbin-Wu-Hausman test. We apply our test to an observational study of the effect of a premature infant being delivered in a high-level neonatal intensive care unit (one with mechanical assisted ventilation and high volume) versus a lower-level unit, using as an IV the excess travel time from a mother's home to the nearest high-level unit compared with the nearest lower-level unit.
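For context, the Durbin-Wu-Hausman test discussed above is easy to state in its control-function form: regress treatment on the instrument, then test the first-stage residual in the outcome regression. The simulated sketch below illustrates that standard test, not the authors' proposed one; the data-generating values are hypothetical.

```r
# A simulated sketch of the Durbin-Wu-Hausman test in control-function form.
set.seed(2)
n <- 5000
z <- rbinom(n, 1, 0.5)            # instrument, e.g. dichotomized excess travel time
u <- rnorm(n)                     # unmeasured confounder
d <- rbinom(n, 1, plogis(z + u))  # treatment driven by the IV and by u
y <- d + u + rnorm(n)             # outcome confounded through u

stage1 <- lm(d ~ z)               # first stage: treatment on instrument
res <- resid(stage1)
dwh <- lm(y ~ d + res)            # a significant residual term signals
summary(dwh)$coefficients["res", ]  # confounding, under constant effects
```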
Project description: In the absence of a randomized experiment, a key assumption for drawing causal inference about treatment effects is ignorable treatment assignment. Violations of the ignorability assumption may lead to biased treatment effect estimates. Sensitivity analysis helps gauge how causal conclusions would be altered in response to the potential magnitude of departure from the ignorability assumption. However, sensitivity analysis approaches for unmeasured confounding in the context of multiple treatments and binary outcomes are scarce. We propose a flexible Monte Carlo sensitivity analysis approach for causal inference in such settings. We first derive the general form of the bias introduced by unmeasured confounding, with emphasis on theoretical properties uniquely relevant to multiple treatments. We then propose methods to encode the impact of unmeasured confounding on potential outcomes and to adjust the estimates of causal effects so that the presumed unmeasured confounding is removed. Our proposed methods embed nested multiple imputation within the Bayesian framework, allowing for seamless integration of uncertainty about the values of the sensitivity parameters with the sampling variability, as well as the use of Bayesian Additive Regression Trees for modeling flexibility. Expansive simulations validate our methods and provide insight into sensitivity analysis with multiple treatments. We use SEER-Medicare data to demonstrate sensitivity analysis using three treatments for early-stage non-small cell lung cancer. The methods developed in this work are readily available in the R package SAMTx.
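A minimal sketch of the general Monte Carlo logic described above, not the SAMTx interface: draw the sensitivity parameter encoding the presumed unmeasured confounding, adjust the outcomes, re-estimate the contrast, and pool over draws so that uncertainty in the sensitivity parameter propagates into the final estimate. A continuous outcome and two arms stand in for the binary-outcome, BART-based procedure; all names and values are hypothetical.

```r
# A minimal two-arm stand-in for Monte Carlo sensitivity analysis.
set.seed(3)
mc_sens <- function(y, trt, draws = 200, c_mean = 0.1, c_sd = 0.05) {
  effects <- replicate(draws, {
    c_par <- rnorm(1, c_mean, c_sd)      # draw of the sensitivity parameter
    y_adj <- y - c_par * (trt == "B")    # remove presumed bias from arm B
    mean(y_adj[trt == "B"]) - mean(y_adj[trt == "A"])
  })
  c(estimate = mean(effects), mc_sd = sd(effects))
}

trt <- sample(c("A", "B"), 1000, replace = TRUE)
y <- rnorm(1000, mean = ifelse(trt == "B", 0.3, 0))  # 0.2 effect + 0.1 bias
mc_sens(y, trt)  # recovers roughly 0.2 after the adjustment
```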