Project description:Random-effects meta-analyses of observational studies can produce biased estimates if the synthesized studies are subject to unmeasured confounding. We propose sensitivity analyses quantifying the extent to which unmeasured confounding of specified magnitude could reduce to below a certain threshold the proportion of true effect sizes that are scientifically meaningful. We also develop converse methods to estimate the strength of confounding capable of reducing the proportion of scientifically meaningful true effects to below a chosen threshold. These methods apply when a "bias factor" is assumed to be normally distributed across studies or is assessed across a range of fixed values. Our estimators are derived using recently proposed sharp bounds on confounding bias within a single study that do not make assumptions regarding the unmeasured confounders themselves or the functional form of their relationships with the exposure and outcome of interest. We provide an R package, EValue, and a free website that compute point estimates and inference and produce plots for conducting such sensitivity analyses. These methods facilitate principled use of random-effects meta-analyses of observational studies to assess the strength of causal evidence for a hypothesis.
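The EValue package itself is in R; as a language-agnostic illustration of the single-study computation underlying this style of sensitivity analysis, here is a minimal Python sketch of the E-value formula for a risk ratio (the function name and example values are ours, and the meta-analytic extensions described in the abstract go beyond this sketch):

```python
import math

def e_value(rr: float) -> float:
    """Single-study E-value: the minimum strength of association, on the
    risk-ratio scale, that an unmeasured confounder would need with both
    the exposure and the outcome to fully explain away an observed risk
    ratio, assuming no other bias."""
    if rr < 1.0:
        rr = 1.0 / rr  # for protective estimates, invert first
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed risk ratio of 2.0 requires a confounder associated with both
# exposure and outcome by a risk ratio of about 3.41 to explain it away.
print(round(e_value(2.0), 2))
```

The square-root term makes the required confounder strength grow faster than the observed risk ratio itself, which is why modest estimates are often easy to explain away while large ones are not.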
Project description:BACKGROUND:Mediation analysis is a powerful tool for understanding mechanisms, but conclusions about direct and indirect effects will be invalid if there is unmeasured confounding of the mediator-outcome relationship. Sensitivity analysis methods allow researchers to assess the extent of this bias but are not always used. One particularly straightforward technique that requires minimal assumptions is nonetheless difficult to interpret, and so would benefit from a more intuitive parameterization. METHODS:We conducted an exhaustive numerical search over simulated mediation effects, calculating the proportion of scenarios in which a bound for unmeasured mediator-outcome confounding held under an alternative parameterization. RESULTS:In over 99% of cases, the bound for the bias held when we described the strength of confounding directly via the confounder-mediator relationship instead of via the conditional exposure-confounder relationship. CONCLUSIONS:Researchers can conduct sensitivity analysis using a method that describes the strength of the confounder-outcome relationship and the approximate strength of the confounder-mediator relationship that, together, would be required to explain away a direct or indirect effect.
Project description:It is often of interest to decompose the total effect of an exposure into a component that acts on the outcome through some mediator and a component that acts independently through other pathways. Said another way, we are interested in the direct and indirect effects of the exposure on the outcome. Even if the exposure is randomly assigned, it is often infeasible to randomize the mediator, leaving the mediator-outcome confounding not fully controlled. We develop a sensitivity analysis technique that can bound the direct and indirect effects without parametric assumptions about the unmeasured mediator-outcome confounding.
Project description:Evidence for the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded due to undiagnosed diseases that may affect weight loss. We used data from the Nurses' Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss strategies. We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Several sensitivity analyses were conducted. The estimated 26-year risk of CHD did not change under weight loss strategies using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no weight loss ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss, with and without restricting the weight loss strategy to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. We estimated that maintaining or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-aged US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations, but these did not prevent us from estimating a beneficial effect of weight loss on diabetes.
Project description:Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for additive, risk-ratio and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with constant effect on the outcome across treatment levels.
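For the simple setting highlighted in the last sentence, a binary unmeasured confounder with a constant effect on the outcome across treatment levels, the risk-ratio bias adjustment can be sketched as follows. This is a standard form of that formula rather than code from the paper, and all names and example values are ours:

```python
def bias_factor(gamma: float, p1: float, p0: float) -> float:
    """Bias factor on the risk-ratio scale for a binary confounder U with
    constant outcome risk ratio gamma across treatment levels, where p1
    and p0 are the prevalences of U among the treated and the untreated."""
    return (1.0 + (gamma - 1.0) * p1) / (1.0 + (gamma - 1.0) * p0)

def adjusted_rr(rr_obs: float, gamma: float, p1: float, p0: float) -> float:
    """Risk ratio adjusted for the hypothesized confounder: the observed
    risk ratio divided by the bias factor."""
    return rr_obs / bias_factor(gamma, p1, p0)

# A confounder that doubles outcome risk (gamma = 2) and is present in 60%
# of the treated but 30% of the untreated shrinks an observed RR of 1.8:
print(adjusted_rr(1.8, 2.0, 0.6, 0.3))
```

Repeating the calculation over a grid of (gamma, p1, p0) values is the usual way such a formula is turned into a full sensitivity analysis.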
Project description:Background: Assessing the real-world comparative effectiveness of common interventions is challenged by unmeasured confounding. Objective: To determine whether the mortality benefit shown for drug-eluting stents (DES) over bare metal stents (BMS) in observational studies persists after controlling and testing for confounding. Data sources/study setting: Retrospective observational study involving 38,019 patients, 65 years or older, admitted for an index percutaneous coronary intervention receiving DES or BMS in Pennsylvania in 2004-2005, followed up for death through 3 years. Study design: Analysis was at the patient level. Mortality was analyzed with Cox proportional hazards models allowing for stratification by disease severity or DES use propensity, accounting for clustering of patients. Instrumental variables analysis used lagged physician stent usage to proxy for the focal stent type decision. A method originating in work by Cornfield and others in 1954 and popularized by Greenland in 1996 was used to assess robustness to confounding. Principal findings: DES was associated with a significantly lower adjusted risk of death at 3 years in Cox and in instrumented analyses. An implausibly strong hypothetical unobserved confounder would be required to fully explain these results. Conclusions: Confounding by indication can bias observational studies. No strong evidence of such selection biases was found in the reduced risk of death among elderly patients receiving DES instead of BMS in a Pennsylvanian state-wide population.
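The robustness check cited here traces to Cornfield-style reasoning: how strong would a confounder have to be to fully account for the observed association? A compact Python sketch of that question uses the sharp bounding factor of Ding and VanderWeele as an illustrative stand-in (our choice of formula; the study's exact method may differ):

```python
def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    """Maximum bias, on the risk-ratio scale, that an unmeasured confounder
    can induce given its association with the exposure (rr_eu) and with the
    outcome (rr_ud); both parameters are assumed to be >= 1."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def can_explain_away(rr_obs: float, rr_eu: float, rr_ud: float) -> bool:
    """Could a confounder of this strength fully account for rr_obs > 1?"""
    return bounding_factor(rr_eu, rr_ud) >= rr_obs

# A confounder doubling both associations cannot explain away an RR of 1.5:
print(can_explain_away(1.5, 2.0, 2.0))
```

Sweeping (rr_eu, rr_ud) over a grid shows how implausibly strong a hypothetical confounder must be before an observed effect could be an artifact of confounding by indication.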
Project description:An important concern in an observational study is whether or not there is unmeasured confounding, that is, unmeasured ways in which the treatment and control groups differ before treatment, which affect the outcome. We develop a test of whether there is unmeasured confounding when an instrumental variable (IV) is available. An IV is a variable that is independent of the unmeasured confounding and encourages a subject to take one treatment level versus another, while having no effect on the outcome beyond its encouragement of a certain treatment level. We show what types of unmeasured confounding can be tested for with an IV and develop a test for this type of unmeasured confounding that has correct type I error rate. We show that the widely used Durbin-Wu-Hausman test can have inflated type I error rates when there is treatment effect heterogeneity. Additionally, we show that our test provides more insight into the nature of the unmeasured confounding than the Durbin-Wu-Hausman test. We apply our test to an observational study of the effect of a premature infant being delivered in a high-level neonatal intensive care unit (one with mechanical assisted ventilation and high volume) versus a lower level unit, using the excess travel time a mother lives from the nearest high-level unit to the nearest lower-level unit as an IV.
Project description:The article tackles the practice of testing latent variable models. The analysis covered recently published studies from 11 psychology journals varying in orientation and impact. Seventy-five studies that matched the criterion of applying some of the latent modeling techniques were reviewed. Results indicate a general tendency to ignore the model test (χ²), followed by acceptance of the approximate-fit hypothesis without the detailed model examination that would yield relevant empirical evidence. Because such a procedure has reduced sensitivity for confronting theory with data, there is an almost invariable tendency to accept the theoretical model. This absence of model test consequences, manifested in frequently unsubstantiated neglect of evidence speaking against the model, raises the perilous question of whether such empirical testing of latent structures, the way it is widely applied, makes sense at all.
Project description:We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions.
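As an illustrative sketch (our construction, not the authors' model), the following simulation shows how response noise alone can make an expected-value maximizer's choice proportions look like weak or risk-averse preferences:

```python
import math
import random

random.seed(1)

def p_choose_risky(ev_risky: float, ev_safe: float, tau: float = 10.0) -> float:
    """Logistic ('Fechner') response noise: the expected-value difference is
    read through a noisy comparison, so each choice becomes probabilistic."""
    return 1.0 / (1.0 + math.exp(-(ev_risky - ev_safe) / tau))

# Gamble: 50% chance of 100, else 0 (EV = 50), versus a sure 45.
# A noiseless EV maximizer would take the gamble on every trial.
trials = 100_000
risky = sum(random.random() < p_choose_risky(50.0, 45.0) for _ in range(trials))
print(risky / trials)  # well below 1.0, despite EV-maximizing core preferences
```

Fitting such choice proportions with a model that ignores response noise would wrongly attribute the shortfall from 100% risky choice to curvature of the utility function, i.e. to risk aversion that the simulated agent does not have.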