Project description: Background: Although even randomization (that is, an approximately 1:1 randomization ratio across study arms) provides the greatest statistical power, designed uneven randomization (DUR) (for example, 1:2 or 1:3) is used to increase participation rates. To date, no convincing data exist on the impact of DUR on participation rates in trials. The objective of this study is to evaluate the epidemiology of DUR and to explore factors associated with it. Methods: We will search for reports of RCTs published within the past two years in the 25 general medical journals with the highest impact factor according to the Journal Citation Reports (JCR) 2010. Teams of two reviewers will determine eligibility and extract relevant information from eligible RCTs in duplicate using standardized forms. We will report the prevalence of DUR trials and the reported reasons for using DUR, and will perform a linear regression analysis to estimate the association between the randomization ratio and candidate factors, including participation rate, type of informed consent, and clinical area, among others. Discussion: A clearer understanding of RCTs with DUR and its association with trial factors, for example participation rate, can optimize trial design and may have important implications for both researchers and users of the medical literature.
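A minimal sketch of the kind of linear regression described above, assuming a hypothetical extraction table and illustrative column names (none of these appear in the protocol itself):

```python
# Hypothetical sketch: regress the randomization ratio on candidate factors
# extracted from each RCT. File and column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trials.csv")  # hypothetical extraction dataset, one row per RCT

# Randomization ratio expressed as a single number (e.g. 1:2 -> 2.0),
# regressed on participation rate and categorical trial characteristics.
model = smf.ols(
    "randomization_ratio ~ participation_rate + C(consent_type) + C(clinical_area)",
    data=trials,
).fit()
print(model.summary())
```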
Project description: Introduction: This study aims to discuss and assess the impact of three prevalent methodological biases, competing risks, immortal-time bias, and confounding bias, in real-world observational studies evaluating treatment effectiveness. We use a demonstrative observational data example of patients with COVID-19 to assess the impact of these biases and propose potential solutions. Methods: We describe competing risks, immortal-time bias, and time-fixed confounding bias in the context of evaluating treatment effectiveness in hospitalized patients with COVID-19. For our demonstrative analysis, we use observational data from the registry of patients with COVID-19 who were admitted to Bellvitge University Hospital in Spain from March 2020 to February 2021 and met our predefined inclusion criteria. We compare estimates for a single-dose, time-dependent treatment against the standard of care. We analyze treatment effectiveness using common statistical approaches that either ignore or only partially account for these methodological biases. To address these challenges, we emulate a target trial using the clone-censor-weight approach. Results: Overlooking competing-risk bias and employing the naïve Kaplan-Meier estimator yielded overestimated in-hospital death probabilities in patients with COVID-19. Specifically, in the treatment effectiveness analysis, the Kaplan-Meier estimator gave an in-hospital mortality of 45.6% for treated patients and 59.0% for untreated patients. In contrast, within an emulated-trial framework with the weighted Aalen-Johansen estimator, the estimated in-hospital death probabilities were 27.9% in the "X"-treated arm and 40.1% in the non-"X"-treated arm. Immortal-time bias led to an underestimated hazard ratio for the treatment. Conclusion: Overlooking competing risks, immortal-time bias, and confounding bias leads to biased estimates of treatment effects. The naïve Kaplan-Meier method produced the most biased results, overestimating probabilities of the primary outcome in analyses of hospital data from patients with COVID-19; this overestimation could mislead clinical decision-making. Both immortal-time bias and confounding bias must be addressed in assessments of treatment effectiveness. The target trial emulation framework offers a potential solution to all three methodological biases.
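A short sketch of the competing-risks contrast described above, using the third-party lifelines library; the data frame, file name, and event coding are hypothetical stand-ins, and the full clone-censor-weight emulation (cloning, artificial censoring, and inverse-probability weighting) is not shown here:

```python
# Hypothetical event coding: 0 = censored, 1 = in-hospital death (event of
# interest), 2 = discharge alive (competing event).
import pandas as pd
from lifelines import KaplanMeierFitter, AalenJohansenFitter

df = pd.read_csv("covid_cohort.csv")  # hypothetical registry extract

# Naive Kaplan-Meier: discharge is (incorrectly) treated as censoring,
# which inflates the apparent probability of in-hospital death.
km = KaplanMeierFitter()
km.fit(df["days_in_hospital"], event_observed=(df["event"] == 1))
print(1 - km.survival_function_.iloc[-1])  # naive "probability of death"

# Aalen-Johansen: discharge is handled as a competing event, giving the
# cumulative incidence of in-hospital death instead.
aj = AalenJohansenFitter()
aj.fit(df["days_in_hospital"], df["event"], event_of_interest=1)
print(aj.cumulative_density_.iloc[-1])  # cumulative incidence of death
```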
Project description: Background: Pneumonitis is one of the most common adverse events induced by immune checkpoint inhibitors (ICI), accounting for 20% of all ICI-associated deaths. Despite numerous efforts to identify risk factors and develop predictive models, there is no clinically deployed risk prediction model for patient risk stratification or for guiding subsequent monitoring. We believe this is due to systematically suboptimal approaches to study design and methodology in the literature. The nature and prevalence of these different methodological approaches have not been thoroughly examined in prior systematic reviews. Methods: The PubMed, medRxiv, and bioRxiv databases were used to identify studies aimed at risk factor discovery and/or risk prediction model development for ICI-induced pneumonitis (ICI pneumonitis). Studies were then analysed to identify common methodological pitfalls and their contribution to the risk of bias, assessed using the QUIPS and PROBAST tools. Results: Fifty-one manuscripts were eligible for the review, with Japan-based studies over-represented, accounting for nearly half (24/51) of all papers considered. Only 2/51 studies had a low overall risk of bias. Common bias-inducing practices included an unclear diagnostic method or potential misdiagnosis, lack of multiple-testing correction, the use of univariate analysis to select features for multivariable analysis, discretization of continuous variables, and inappropriate handling of missing values. Results from the risk model development studies were also likely to have been overoptimistic owing to the lack of holdout sets. Conclusions: Studies with a low risk of bias in their methodology are lacking in the existing literature. High-quality risk factor identification and risk model development studies are urgently needed to give the best chance of progressing to a clinically deployable risk prediction model. Recommendations and alternative approaches for reducing the risk of bias are also discussed to guide future studies.
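An illustrative simulation (not taken from the review) of one pitfall named above, the lack of a holdout set combined with univariate feature screening: selecting and evaluating on the same data looks far better than performance on unseen data, even when every feature is pure noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))     # 500 noise "biomarkers"
y = rng.integers(0, 2, size=200)    # outcome unrelated to X

# Pitfall: univariate screening on the FULL data, then refit and score on the
# same data -> apparent AUC is inflated despite the features being noise.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(corr)[-10:]
clf = LogisticRegression(max_iter=1000).fit(X[:, top], y)
print("apparent AUC:", roc_auc_score(y, clf.predict_proba(X[:, top])[:, 1]))

# Correct approach: screen and fit on the training split only, evaluate on test.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
corr = np.abs([np.corrcoef(Xtr[:, j], ytr)[0, 1] for j in range(Xtr.shape[1])])
top = np.argsort(corr)[-10:]
clf = LogisticRegression(max_iter=1000).fit(Xtr[:, top], ytr)
print("holdout AUC:", roc_auc_score(yte, clf.predict_proba(Xte[:, top])[:, 1]))
```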
Project description: Latent class analysis is a probabilistic modeling approach that allows clustering of data and statistical inference. There has been a recent upsurge in the application of latent class analysis in the fields of critical care, respiratory medicine, and beyond. In this review, we present a brief overview of the principles behind latent class analysis. Furthermore, in a stepwise manner, we outline the key processes necessary to perform latent class analysis, including some of the challenges and pitfalls faced at each of these steps. The review provides a one-stop shop for investigators seeking to apply latent class analysis to their data.
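A minimal, hypothetical sketch of one step such a workflow typically includes, fitting finite mixtures with an increasing number of classes and comparing BIC. A Gaussian mixture (latent profile analysis) is used here because the indicators are simulated as continuous; categorical indicators would require a different likelihood, and the data and class structure below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two hypothetical subphenotypes differing in their mean biomarker profile.
X = np.vstack([
    rng.normal(loc=[0, 0, 0], scale=1.0, size=(150, 3)),
    rng.normal(loc=[2, 1, -1], scale=1.0, size=(100, 3)),
])

# Fit 1-4 classes and compare BIC to choose the number of classes.
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    print(f"classes={k}  BIC={gm.bic(X):.1f}")

best = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(X)
posterior = best.predict_proba(X)   # class-membership probabilities
labels = best.predict(X)            # most likely class per observation
```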
Project description: Randomized controlled trials provide important evidence to guide clinical practice. These full-scale trials are expensive and time-consuming, and many are never successfully completed. Well-conducted pilot studies help with full-scale trial design and with assessment and optimization of feasibility, and can avoid the waste of resources associated with starting a full-scale trial that will not succeed. They also provide an opportunity for capacity growth and mentorship of new investigators. It is important to appreciate that the usual goal of a pilot trial is assessment of feasibility and refinement of trial design rather than preliminary evidence of efficacy. Indeed, using event rates from a pilot trial to calculate sample sizes can be misleading in therapeutic trials. Misconceptions persist that pilot trials are just "small trials," are easy to perform, and are not worthy of publication. Although many past pilot trials were poorly conducted and not followed by a full-scale trial, following the recommendations of the "CONSORT 2010 statement: extension to randomized pilot and feasibility trials" allows high-quality pilot trials to be performed and reported, greatly improving the chances of successfully completing a practice-changing trial. We propose that pilot trials are a valuable investment and describe the TRIM-Line pilot trial (NCT03506815), which assesses the feasibility of a randomized controlled trial of primary thromboprophylaxis with rivaroxaban in patients with malignancy and central venous catheters, as an illustrative example of how a pilot trial in the area of thrombosis should be designed.
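A hypothetical worked example of why pilot event rates can mislead sample-size calculations: the control-arm event rate estimated from a small pilot carries a wide confidence interval, and the sample size required for the full trial swings dramatically across that interval. The pilot numbers below are invented, and the formula is the standard two-proportion normal approximation rather than anything specific to TRIM-Line.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p_control, rel_reduction=0.30, alpha=0.05, power=0.80):
    """Two-proportion sample size per arm (normal approximation)."""
    p1, p2 = p_control, p_control * (1 - rel_reduction)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

# Hypothetical pilot: 6 events among 50 control patients -> 12% point estimate.
p_hat, n_pilot = 6 / 50, 50
se = sqrt(p_hat * (1 - p_hat) / n_pilot)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se   # roughly 3% to 21%

for p in (lo, p_hat, hi):
    print(f"control event rate {p:.2%} -> {n_per_arm(p)} patients per arm")
```

Running this shows the required sample size per arm varying by roughly an order of magnitude across the pilot's confidence interval, which is why feasibility, not efficacy, is the appropriate target of a pilot trial.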
Project description: Learning new words is difficult. In any naming situation, there are multiple possible interpretations of a novel word. Recent approaches suggest that learners may solve this problem by tracking co-occurrence statistics between words and referents across multiple naming situations (e.g., Yu & Smith, 2007), overcoming the ambiguity in any one situation. Yet there remains debate about the underlying mechanisms. We conducted two experiments in which learners acquired eight word-object mappings from cross-situational statistics while their eye movements were tracked. These experiments addressed four unresolved questions about the learning mechanism. First, eye movements during learning showed that listeners maintain multiple hypotheses for a given word and bring them all to bear in the moment of naming. Second, trial-by-trial analyses of accuracy suggested that listeners accumulate continuous statistics about word-object mappings, over and above any prior hypotheses they hold about a word. Third, consistent, probabilistic context can impede learning, as false associations form between words and highly co-occurring referents. Finally, a number of factors not considered in prior analyses affect observational word learning: knowledge of the foils, spatial consistency of the target object, and the number of trials between presentations of the same word. This evidence suggests that observational word learning may derive from a combination of gradual statistical or associative learning mechanisms and more rapid real-time processes such as competition, mutual exclusivity, and even inference or hypothesis testing.
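A minimal sketch of the co-occurrence-tracking idea behind cross-situational learning: on each ambiguous trial several words and several objects are co-present, and the learner simply accumulates word-object counts, choosing the highest-count referent at test. The toy vocabulary and trials below are invented for illustration and do not reproduce the experiments' design.

```python
from collections import defaultdict

# Each trial: (words heard, objects visible); no within-trial labelling.
trials = [
    (["blick", "dax"], ["ball", "dog"]),
    (["blick", "toma"], ["ball", "cup"]),
    (["dax", "toma"], ["dog", "cup"]),
    (["blick", "dax"], ["ball", "dog"]),
]

counts = defaultdict(lambda: defaultdict(int))
for words, objects in trials:
    for w in words:
        for o in objects:
            counts[w][o] += 1   # every co-present word-object pair gains a count

# At "test", pick the referent with the highest accumulated count per word.
for w in ["blick", "dax", "toma"]:
    best = max(counts[w], key=counts[w].get)
    print(w, "->", best, dict(counts[w]))
```

Although no single trial identifies any mapping, the accumulated counts disambiguate all three words, which is the core associative mechanism the experiments probe.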
Project description: Objective: To describe the characteristics of Covid-19 randomized clinical trials (RCTs) and examine the association between trial characteristics and the likelihood of finding a statistically significant effect. Study design: We conducted a systematic review to identify RCTs (up to October 21, 2020) evaluating drugs or blood products to treat or prevent Covid-19. We extracted trial characteristics (number of centers, funding sources, and sample size) and assessed risk of bias (RoB) using the Cochrane RoB 2.0 tool. We performed logistic regressions to evaluate the association between RoB due to randomization, single- vs. multicenter status, funding source, and sample size, and finding a statistically significant effect. Results: We included 91 RCTs (n = 46,802); 40 (44%) were single-center, 23 (25.3%) enrolled <50 patients, 28 (30.8%) received industry funding, and 75 (82.4%) had high or probably high RoB. Thirty-eight trials (41.8%) reported a statistically significant effect. RoB due to randomization and single-center status were associated with increased odds of finding a statistically significant effect. Conclusions: There is high variability in RoB among Covid-19 trials. Researchers, funders, and knowledge users should be cognizant of the impact of RoB due to randomization and of single-center status when designing, evaluating, and interpreting the results of RCTs. Registration: CRD42020192095.
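A sketch of the type of logistic regression described above; the file and column names are hypothetical stand-ins for the extracted trial characteristics, not the review's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rcts = pd.read_csv("covid_rcts.csv")  # hypothetical extraction dataset, one row per RCT

# Outcome: whether the trial reported a statistically significant effect (0/1),
# modelled on risk-of-bias and design characteristics.
fit = smf.logit(
    "significant_effect ~ high_rob_randomization + single_center + "
    "industry_funded + small_sample",
    data=rcts,
).fit()
print(fit.summary())
print(np.exp(fit.params))  # coefficients exponentiated to odds ratios
```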
Project description: Registry randomised clinical trials (RRCTs) have the potential to provide pragmatic answers to important clinical questions. RRCTs can be embedded into large population-based registries or smaller single-site registries to provide timely answers at a reduced cost compared with traditional randomised controlled trials. RRCTs can take a number of forms in addition to the traditional individual-level randomised trial, including parallel-group trials, platform or adaptive trials, cluster randomised trials, and cluster randomised stepped-wedge trials. From an implementation perspective, it is initially advantageous to embed RRCTs into well-established registries, as these have typically already overcome any issues with end point validation and adjudication. With advances in data linkage and data quality, RRCTs can play an important role in answering clinical questions in a pragmatic, cost-effective way.
Project description: Background: Trajectory analyses are increasingly used to better understand heterogeneity in the development of longitudinal outcomes such as sickness absence, use of medication, income, or other time-varying outcomes. However, several methodological and interpretational challenges accompany trajectory analyses. This methodological study aimed to compare results from two different software packages for identifying trajectories and to discuss methodological aspects of the models and the interpretation of the results. Methods: Group-based trajectory models (GBTM) and latent class growth models (LCGM) were fitted using SAS and Mplus, respectively. The data for the examples were derived from a representative sample of Spanish workers in Catalonia covered by the social security system (n = 166,192). Repeatedly measured sickness absence spells per trimester (n = 96,453) were obtained from the Catalan Institute of Medical Evaluations. The analyses were stratified by sex and by two birth cohorts (1949-1969 and 1970-1990). Results: Neither software package was superior to the other. Four groups was the optimal number of trajectory groups in both packages; however, we detected differences between the two packages in the starting values and shapes of the trajectories, which could lead to different conclusions when the methods are applied. We cover questions related to model fit, selecting the optimal number of trajectory groups, investigating covariates, interpreting the results, and the key pitfalls and strengths of these person-oriented methods. Conclusions: Future studies could address further methodological aspects of these statistical techniques to facilitate epidemiological and other research with longitudinal study designs.
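A rough Python stand-in for the grouping idea behind GBTM/LCGM (the study itself uses SAS and Mplus, which are not scripted here). Each person's simulated quarterly sickness-absence counts are summarised by polynomial coefficients and then clustered; real GBTM/LCGM instead estimate the mixture and the polynomial mean trajectories jointly by maximum likelihood, so this is only an illustration of the trajectory-group concept and of comparing fits across group counts.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
t = np.arange(8)                                      # 8 trimesters of follow-up
low = rng.poisson(0.2, size=(300, 8))                 # hypothetical "stable low" group
rising = rng.poisson(0.3 + 0.25 * t, size=(200, 8))   # hypothetical "increasing" group
Y = np.vstack([low, rising])

# Per-person quadratic fit over time, then cluster the fitted coefficients.
coefs = np.array([np.polyfit(t, y, deg=2) for y in Y])
for k in range(1, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coefs)
    print(f"groups={k}  within-cluster SS={km.inertia_:.1f}")

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
for g in range(2):
    print(f"group {g}: mean trajectory {Y[labels == g].mean(axis=0).round(2)}")
```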