Project description: Objectives: Motivated by recent calls to use electronic health records for research, we reviewed the application and development of methods for addressing bias from unmeasured confounding in longitudinal data. Study design and setting: Methodological review of the existing literature. We searched MEDLINE and EMBASE for articles addressing the threat to causal inference from unmeasured confounding in nonrandomized longitudinal health data through quasi-experimental analysis. Results: Among the 121 studies included for review, 84 used instrumental variable analysis (IVA), of which 36 used lagged or historical instruments. Difference-in-differences (DiD) and fixed effects (FE) models were found in 29 studies. Five of these combined IVA with DiD or FE to mitigate time-dependent confounding. Other, less frequently used methods included prior event rate ratio adjustment, regression discontinuity nested within pre-post studies, propensity score calibration, perturbation analysis, and negative control outcomes. Conclusion: Well-established econometric methods such as DiD and IVA are commonly used to address unmeasured confounding in nonrandomized longitudinal studies, but researchers often fail to take full advantage of the available longitudinal information. A range of promising new methods has been developed, but further studies are needed to understand their relative performance in different contexts before they can be recommended for widespread use.
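To make the most common design concrete, here is a minimal sketch (on simulated data; the variable names, data-generating process, and effect sizes are hypothetical, not taken from any reviewed study) of a two-period difference-in-differences estimate, which removes time-invariant unmeasured confounding by differencing:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units = 1000

# Simulated two-period panel; treatment assignment is confounded by a
# time-invariant unit effect that we deliberately leave unobserved.
treated_unit = rng.integers(0, 2, n_units)
unit_effect = rng.normal(0, 1, n_units) + 0.8 * treated_unit  # confounding
df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), 2),
    "post": np.tile([0, 1], n_units),
    "treated": np.repeat(treated_unit, 2),
})
# True treatment effect is 1.0, plus a common time trend of 0.3.
df["y"] = (0.3 * df["post"] + 1.0 * df["treated"] * df["post"]
           + np.repeat(unit_effect, 2) + rng.normal(0, 1, 2 * n_units))

# The interaction coefficient recovers the effect because the
# time-invariant confounding is differenced out across periods.
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])  # approximately 1.0
```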
Project description: BACKGROUND: Clinical prediction rules (CPRs) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians' diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. EDUCATIONAL OBJECTIVES: In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and assess pretest probability more accurately in everyday practice. The first tip is designed to demonstrate variability in physicians' estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to various examples and types of CPRs and how to apply them in practice. PILOT TESTING: We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear, and the educational objectives were reached. Potential teaching pitfalls were identified. CONCLUSION: Teaching with these tips will help physicians appreciate the importance of applying evidence to their everyday decisions. In 2 or 3 short teaching sessions, clinicians can also become familiar with the use of CPRs in applying evidence consistently in everyday practice.
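To make the second tip concrete, the sketch below (with illustrative numbers, not drawn from any particular CPR) shows how the same positive test result yields very different posttest probabilities under different pretest estimates, using Bayes' theorem in odds form:

```python
def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability
    using Bayes' theorem in odds form."""
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# The same positive test (LR+ = 5) interpreted under two different
# pretest estimates, as in the teaching exercise:
for pretest in (0.10, 0.60):
    print(f"pretest {pretest:.0%} -> posttest "
          f"{posttest_probability(pretest, 5):.0%}")
# pretest 10% -> posttest 36%; pretest 60% -> posttest 88%
```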
Project description: Decision analysis is a tool that clinicians can use to choose the option that maximizes the overall net benefit to a patient. It is an explicit, quantitative, and systematic approach to decision making under conditions of uncertainty. In this article, we present two teaching tips aimed at helping clinical learners understand the use and relevance of decision analysis. The first tip demonstrates the structure of a decision tree. With this tree, a clinician can identify the optimal choice among complicated options by calculating the probabilities of events and incorporating patient valuations of the possible outcomes. The second tip demonstrates how to address uncertainty in the estimates used in a decision tree. We field tested the tips twice with interns and senior residents. Teacher preparatory time was approximately 90 minutes. The field test utilized a board and a calculator. Two handouts were prepared. Learners identified the importance of incorporating values into the decision-making process, as well as the role of uncertainty. The educational objectives appeared to be reached. These teaching tips introduce clinical learners to decision analysis in a fashion designed to illustrate the principles of clinical reasoning and to show how patient values can be actively incorporated into complex decision making.
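As an illustration of both tips, here is a minimal sketch that folds back a hypothetical two-option decision tree by expected utility and then runs a one-way sensitivity analysis on one uncertain probability (all probabilities and utilities are invented for illustration):

```python
def expected_utility(branches):
    """Fold back one chance node: branches is a list of
    (probability, utility) pairs."""
    return sum(p * u for p, u in branches)

# Hypothetical choice: treat vs. do not treat.
def treat(p_cure):
    return expected_utility([(p_cure, 0.95),       # cured, minor side effects
                             (1 - p_cure, 0.40)])  # not cured despite treatment

def no_treat(p_remission):
    return expected_utility([(p_remission, 1.0),   # spontaneous remission
                             (1 - p_remission, 0.50)])

print(treat(0.7), no_treat(0.3))  # base case: 0.785 vs 0.65, favors treatment

# One-way sensitivity analysis on the uncertain cure probability:
# the preferred option flips once p_cure drops below about 0.45.
for p in (0.4, 0.5, 0.6, 0.7, 0.8):
    better = "treat" if treat(p) > no_treat(0.3) else "no treat"
    print(f"p_cure={p:.1f}: {better}")
```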
Project description: Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When time-varying confounders, which may include past outcomes, are themselves affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
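The sketch below illustrates the general idea on simulated data (the data-generating process, variable names, and model specification are our own illustrative assumptions, not the paper's exact setup): a GEE for the outcome that conditions on prior exposure, the prior outcome, the time-varying covariate, and an estimated propensity score:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, T = 500, 4

# Long-format data with a time-varying confounder L that is affected by
# prior exposure A and affects both future exposure and the outcome Y.
rows = []
for i in range(n):
    A_prev, Y_prev, L = 0.0, 0.0, rng.normal()
    for t in range(T):
        L = 0.5 * L + 0.4 * A_prev + rng.normal()
        A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * L)))
        Y = 0.7 * A + 0.3 * A_prev + 0.5 * L + 0.2 * Y_prev + rng.normal()
        rows.append((i, t, A, A_prev, Y, Y_prev, L))
        A_prev, Y_prev = A, Y
df = pd.DataFrame(rows, columns=["id", "t", "A", "A_prev", "Y", "Y_prev", "L"])

# Step 1: propensity score for current exposure given the observed past.
ps_fit = smf.logit("A ~ L + A_prev + Y_prev", data=df).fit(disp=0)
df["ps"] = ps_fit.predict(df)

# Step 2: SCMM-style GEE for the total effect of A on the subsequent
# outcome, conditioning on the past and adding the propensity score
# (a sketch under our assumed model, not the authors' exact specification).
scmm = smf.gee("Y ~ A + A_prev + Y_prev + L + ps", groups="id", data=df,
               cov_struct=sm.cov_struct.Independence(),
               family=sm.families.Gaussian()).fit()
print(scmm.params["A"])  # close to the true short-term effect of 0.7
```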
Project description: Objectives: In pragmatic trials, the new treatment is compared with usual care (a heterogeneous control arm), which makes the comparison of the new treatment with each treatment within the control arm more difficult. The usual assumption is that we can fully capture the relations between the different quantities. In this paper we use simulation to assess the performance of statistical methods that adjust for confounding when the assumed relations are not true. The true relations contain a mediator and heterogeneity, with or without confounding, but the assumption is that there is no mediator and that confounding and heterogeneity are fully captured. The statistical methods compared include multivariable logistic regression, propensity score, disease risk score, inverse probability weighting, doubly robust inverse probability weighting, and standardisation. Results: The misconception that there is no mediator can lead to misleading estimates of the comparative effectiveness of individual treatments when a method that estimates the conditional causal effect is used. Using a method that estimates the marginal causal effect is a better approach, but not in all scenarios.
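As a minimal illustration of why conditional and marginal estimands can differ (here through the non-collapsibility of the odds ratio; the simulation is our own and far simpler than the paper's scenarios), the sketch below contrasts a covariate-adjusted logistic regression with standardisation (g-computation):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20000

# Simulated data with a confounder C; the true model is logistic, so the
# conditional and marginal odds ratios differ even without bias.
C = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-C)))
Y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * A + 1.5 * C))))
df = pd.DataFrame({"A": A, "C": C, "Y": Y})

# Conditional effect: covariate-adjusted logistic regression.
fit = smf.logit("Y ~ A + C", data=df).fit(disp=0)
print("conditional log-OR:", fit.params["A"])  # approx 1.0

# Marginal effect via standardisation (g-computation): predict every
# subject's risk under A=1 and under A=0, then average over the sample.
r1 = fit.predict(df.assign(A=1)).mean()
r0 = fit.predict(df.assign(A=0)).mean()
marginal_log_or = np.log(r1 / (1 - r1)) - np.log(r0 / (1 - r0))
print("marginal log-OR:", marginal_log_or)  # noticeably smaller than 1.0
```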
Project description: Associations between exposures and outcomes reported in epidemiological studies are typically unadjusted for genetic confounding. We propose a two-stage approach for estimating the degree to which such observed associations can be explained by genetic confounding. First, we assess attenuation of exposure effects in regressions controlling for increasingly powerful polygenic scores. Second, we use structural equation models to estimate genetic confounding using heritability estimates derived from both SNP-based and twin-based studies. We examine associations between maternal education and three developmental outcomes: child educational achievement, body mass index, and attention deficit hyperactivity disorder. Polygenic scores explain between 14.3% and 23.0% of the original associations, while analyses under SNP- and twin-based heritability scenarios indicate that the observed associations could be almost entirely explained by genetic confounding. Thus, caution is needed when interpreting associations from non-genetically informed epidemiological studies. Our approach, akin to a genetically informed sensitivity analysis, can be applied widely.
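A minimal sketch of the first stage on simulated data (the shared genetic factor, effect sizes, and the polygenic score's predictive power are all illustrative assumptions): the attenuation of the exposure coefficient when a polygenic score is added, which understates total genetic confounding whenever the score measures the genetic factor with error:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 10000

# Simulated mother-child pairs: a shared genetic factor G confounds the
# association between maternal education and the child outcome; the
# polygenic score PGS captures G only with error.
G = rng.normal(size=n)
maternal_edu = 0.6 * G + rng.normal(size=n)
child_outcome = 0.3 * maternal_edu + 0.5 * G + rng.normal(size=n)
pgs = 0.5 * G + rng.normal(size=n)  # noisy proxy for the genetic factor
df = pd.DataFrame({"edu": maternal_edu, "out": child_outcome, "pgs": pgs})

# Stage 1: how much does the association attenuate when the polygenic
# score is added to the regression?
b_unadj = smf.ols("out ~ edu", data=df).fit().params["edu"]
b_adj = smf.ols("out ~ edu + pgs", data=df).fit().params["edu"]
print(f"attenuation: {100 * (b_unadj - b_adj) / b_unadj:.1f}%")
# This attenuation is a lower bound on the genetic confounding because
# the PGS is an imperfect measure of G; stage 2 (SEM with heritability
# estimates) extrapolates beyond it.
```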
Project description: Propensity score matching is a common tool for adjusting for observed confounding in observational studies, but it is known to have limitations in the presence of unmeasured confounding. In many settings, researchers are confronted with spatially indexed data where the relative locations of the observational units may serve as a useful proxy for unmeasured confounding that varies according to a spatial pattern. We develop a new method, termed distance adjusted propensity score matching (DAPSm), that incorporates information on units' spatial proximity into a propensity score matching procedure. We show that DAPSm can adjust for both observed and some forms of unobserved confounding, and we evaluate its performance relative to several other reasonable alternatives for incorporating spatial information into propensity score adjustment. The method is motivated by and applied to a comparative effectiveness investigation of power plant emission reduction technologies designed to reduce population exposure to ambient ozone pollution. Ultimately, DAPSm provides a framework for augmenting a "standard" propensity score analysis with information on spatial proximity and provides a transparent and principled way to assess the relative trade-offs of prioritizing observed confounding adjustment versus spatial proximity adjustment.
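A toy sketch of the core idea as we read it (not the authors' implementation; the weight, the rescaling, and the greedy matching order are illustrative choices): match treated and control units on a weighted combination of propensity score difference and standardized spatial distance:

```python
import numpy as np

rng = np.random.default_rng(4)

def daps_match(ps_t, ps_c, xy_t, xy_c, w=0.7):
    """Greedy 1:1 matching on a distance-adjusted score: a weighted
    combination of |propensity score difference| and rescaled spatial
    distance. w=1 recovers plain propensity score matching."""
    ps_diff = np.abs(ps_t[:, None] - ps_c[None, :])
    sp = np.linalg.norm(xy_t[:, None, :] - xy_c[None, :, :], axis=2)
    sp = sp / sp.max()                       # rescale distances to [0, 1]
    daps = w * ps_diff + (1 - w) * sp
    pairs, used = [], set()
    for i in np.argsort(daps.min(axis=1)):   # match easiest units first
        j = next(int(k) for k in np.argsort(daps[i]) if k not in used)
        used.add(j)
        pairs.append((int(i), j))
    return pairs

# Toy data: 20 treated and 60 control units with locations on a unit square.
ps_t, ps_c = rng.uniform(0.2, 0.8, 20), rng.uniform(0.2, 0.8, 60)
xy_t, xy_c = rng.uniform(0, 1, (20, 2)), rng.uniform(0, 1, (60, 2))
print(daps_match(ps_t, ps_c, xy_t, xy_c)[:3])
```

Varying w makes explicit the trade-off the abstract mentions between prioritizing observed confounding adjustment (w near 1) and spatial proximity adjustment (w near 0).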
Project description: Background: The investigation of perceived geographical disease clusters serves as a preliminary step that expedites subsequent etiological studies and analyses of epidemicity. Once disease clusters of statistical significance have been identified, determining whether the detected clusters can be explained by known or suspected risk factors is a logical next step. Models allowing for confounding variables permit investigators to determine whether some risk factors can explain the occurrence of geographical clustering of disease incidence, and to investigate other hidden spatially related risk factors if geographical disease clusters remain after adjusting for those risk factors. Methods: We propose statistical methods for differentiating the incidence intensity of geographical disease clusters of peak and low incidence in a hierarchical manner, adjusted for confounding variables. The methods prioritize the areas with the highest or lowest incidence anomalies and are designed to recognize hierarchical (in intensity) disease clusters of high-risk areas and of low-risk areas within close geographic proximity on a map, with adjustment for known or suspected risk factors. Data on the spatial occurrence of sudden infant death syndrome in North Carolina counties, with race as a confounding variable, were analyzed using the proposed methods. Results: The proposed Poisson model appears better than the one based on the SMR, particularly at facilitating discrimination between the 13 counties with no cases. Our study showed that the difference in the racial distribution of live births explained, to a large extent, the 3 previously identified hierarchical high-intensity clusters, and that a previously hidden small region of 4 mutually adjacent counties with higher race-adjusted rates emerged in the southwest, indicating that unobserved spatially related risk factors may cause the elevated risk. We also showed that a previously hidden large geographical cluster with low race-adjusted rates emerged in the mid-east. Conclusion: With information on the hierarchy in adjusted intensity levels, epidemiologists and public health officials can better prioritize the regions with the highest rates for thorough etiologic studies, seeking hidden spatially related risk factors and precisely moving resources to the areas with the genuinely highest anomalies.
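The sketch below illustrates the adjustment step on simulated county-level data (the data-generating process and variable names are hypothetical, and the spatial-adjacency and hierarchical-clustering machinery is omitted): a Poisson model with a log-births offset whose confounder-adjusted fitted values define the incidence anomalies to be screened for high- and low-intensity clusters:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_counties = 100

# Simulated counties: case counts depend on the number of live births and
# on a confounder (the fraction of births in a higher-risk group),
# loosely mimicking the SIDS/race example.
births = rng.integers(500, 5000, n_counties)
frac_high_risk = rng.uniform(0.05, 0.6, n_counties)
cases = rng.poisson(births * 0.002 * (1 + 3 * frac_high_risk))
df = pd.DataFrame({"cases": cases, "births": births,
                   "frac_high_risk": frac_high_risk})

# Poisson model with a log-births offset; the confounder enters as a
# covariate, so fitted values are confounder-adjusted expected counts.
fit = smf.glm("cases ~ frac_high_risk", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["births"])).fit()

# Adjusted incidence anomalies: counties whose observed counts exceed
# (or fall below) what births and the confounder predict are the
# candidates for high- and low-intensity clusters.
df["adj_ratio"] = df["cases"] / fit.fittedvalues
print(df.nlargest(3, "adj_ratio")[["cases", "adj_ratio"]])
print(df.nsmallest(3, "adj_ratio")[["cases", "adj_ratio"]])
```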