Project description:The analysis was performed in two parts: a descriptive analysis of the response within each adjuvant group and an analysis at the individual subject level. We used blood transcriptional modules to interpret the results.
Project description:Little is known about how best to prioritize various tele-ICU-specific tasks and workflows to maximize operational efficiency. We set out to: 1) develop an operational model that accurately reflects tele-ICU workflows at baseline, 2) identify workflow changes that optimize operational efficiency through discrete-event simulation and multi-class priority queuing modeling, and 3) implement the predicted favorable workflow changes and validate the simulation model through prospective correlation of actual-to-predicted change in performance measures linked to patient outcomes. Setting: Tele-ICU of a large healthcare system in New York State covering nine ICUs across the spectrum of adult critical care. Patients: Seven thousand three hundred eighty-seven adult critically ill patients admitted to a system ICU (1,155 patients pre-intervention in 2016Q1 and 6,232 patients post-intervention from 2016Q3 to 2017Q2). Interventions: Change in tele-ICU workflow process structure and hierarchical process priority based on discrete-event simulation. Measurements and main results: Our discrete-event simulation model accurately reflected the actual baseline average time to first video assessment (TVFA) by both the tele-ICU intensivist (simulated 132.8 ± 6.7 min vs 132 ± 12.2 min actual) and the tele-ICU nurse (simulated 128.4 ± 7.6 min vs 123 ± 9.8 min actual). For a simultaneous priority and process change, the model simulated a reduction in average TVFA to 51.3 ± 1.6 min (tele-ICU intensivist) and 50.7 ± 2.1 min (tele-ICU nurse), less than the sum of the simulated reductions for each change alone, suggesting some interdependence between the two changes. Subsequently implementing both changes simultaneously resulted in actual reductions in average TVFA to values within the 95% CIs of the simulations (50 ± 5.5 min for tele-intensivists and 49 ± 3.9 min for tele-nurses). Conclusions: Discrete-event simulation can accurately predict the effects of contemplated multidisciplinary tele-ICU workflow changes. The value of workflow process and task priority modeling is likely to increase with increasing operational complexity and interdependency.
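The multi-class priority queuing idea above can be sketched with a few lines of discrete-event simulation. The following is a minimal illustration, not the study's model: it assumes Poisson arrivals, exponential service times, a single tele-intensivist, and two priority classes, with made-up rates, and uses the simpy library.

```python
# Minimal discrete-event simulation of a tele-ICU workflow as a multi-class
# priority queue (an illustrative sketch, not the study's model; arrival and
# service parameters are assumed values). Requires the simpy package.
import random
import simpy

RNG = random.Random(42)
SIM_MINUTES = 30 * 24 * 60        # simulate 30 days of operation
ADMIT_MEAN = 90.0                 # mean minutes between new ICU admissions
TASK_MEAN = 45.0                  # mean minutes between routine tasks
SERVICE_MEAN = 20.0               # mean minutes per assessment or task

tvfa = []                         # waits of new admissions (time to first video assessment)

def job(env, intensivist, priority, record):
    arrival = env.now
    with intensivist.request(priority=priority) as req:
        yield req                 # lower priority number is served first
        if record:
            tvfa.append(env.now - arrival)
        yield env.timeout(RNG.expovariate(1.0 / SERVICE_MEAN))

def stream(env, intensivist, mean_gap, priority, record):
    while True:
        yield env.timeout(RNG.expovariate(1.0 / mean_gap))
        env.process(job(env, intensivist, priority, record))

env = simpy.Environment()
intensivist = simpy.PriorityResource(env, capacity=1)
env.process(stream(env, intensivist, ADMIT_MEAN, 0, True))   # new admissions
env.process(stream(env, intensivist, TASK_MEAN, 1, False))   # routine tasks
env.run(until=SIM_MINUTES)
print(f"mean simulated TVFA: {sum(tvfa)/len(tvfa):.1f} min")
```

In a sketch like this, candidate workflow and priority changes are explored by reordering the priority classes or adding capacity and re-running the simulation.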
Project description:Background: In Huntington's disease (HD) clinical trials, recruitment and stratification approaches rely primarily on genetic load and on cognitive and motor assessment scores. They focus less on in vivo brain imaging markers, which reflect neuropathology well before clinical diagnosis. Machine learning methods offer a degree of sophistication that could significantly improve prognosis and stratification by leveraging multimodal biomarkers from large datasets. Models specifically tailored to HD gene expansion carriers could further enhance the efficacy of the stratification process. Objectives: To improve stratification of individuals with Huntington's disease for clinical trials. Methods: We used data from 451 gene-positive individuals with HD (both premanifest and diagnosed) from previously published cohorts (PREDICT, TRACK, TrackON, and IMAGE). We applied whole-brain parcellation to longitudinal brain scans and measured the rate of lateral ventricular enlargement over 3 years, which served as the target variable for our prognostic random forest regression models. The models were trained on various combinations of baseline features, including genetic load, cognitive and motor assessment score biomarkers, and brain imaging-derived features. Furthermore, a simplified stratification model was developed to classify individuals into two homogeneous groups (low risk and high risk) based on their anticipated rate of ventricular enlargement. Results: The predictive accuracy of the prognostic models improved substantially when brain imaging features were integrated alongside genetic load and cognitive and motor biomarkers: a 24% reduction in the cross-validated mean absolute error, yielding an error of 530 mm^3/year. The stratification model had a cross-validated accuracy of 81% in differentiating between moderate and fast progressors (precision = 83%, recall = 80%). Conclusions: This study validated the effectiveness of machine learning in differentiating between low- and high-risk individuals based on the rate of ventricular enlargement. The models were trained exclusively on features from HD individuals, which offers a more disease-specific, simplified, and accurate approach to prognostic enrichment than relying on features extracted from healthy control groups, as done in previous studies. The proposed method has the potential to enhance clinical utility by: i) enabling more targeted recruitment of individuals for clinical trials, ii) improving post-hoc evaluation of individuals, and iii) ultimately leading to better outcomes for individuals through personalized treatment selection.
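As a rough illustration of the prognostic modelling step, the sketch below trains a random forest regressor with scikit-learn, reports a cross-validated mean absolute error, and applies a median split for two-group stratification. The synthetic data, feature layout, and split rule are assumptions, not the published pipeline.

```python
# Sketch of the prognostic modelling step (illustrative only: the synthetic
# data and feature names stand in for the real cohort variables).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 451
# Placeholder baseline features: genetic load, cognitive and motor scores,
# and imaging-derived volumes (here just random columns for illustration).
X = rng.normal(size=(n, 6))
y = 2000 + 400 * X[:, 0] + rng.normal(scale=500, size=n)  # ventricular change, mm^3/yr

model = RandomForestRegressor(n_estimators=500, random_state=0)
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.0f} mm^3/year")

# Simplified two-group stratification on the predicted rate: individuals with
# predicted enlargement above the cohort median are flagged as high risk.
model.fit(X, y)
high_risk = model.predict(X) > np.median(y)
```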
Project description:Background: Leaders play a crucial role in implementing and sustaining changes in clinical practice, yet there is limited evidence on strategies to engage them in team problem solving and communication. Objective: To examine the impact of an intervention focused on facilitating leadership during daily huddles on optimizing team-based care and improving outcomes. Design: Cluster-randomized trial using intention-to-treat analysis to measure the effects of the intervention (n = 13 teams) compared with routine practice (n = 16 teams). Participants: Twenty-nine primary care clinics affiliated with a large integrated health system in the upper Midwest, representing differing practice types and geographic settings. Intervention: Full-day leadership training retreat for team leaders to facilitate care team huddles, plus biweekly coaching calls and two site visits with an assigned coach. Main measures: Primary outcomes of team development and function were collected pre- and post-intervention using surveys. Patient satisfaction and quality outcomes were compared pre- and post-intervention as secondary outcomes. Leadership engagement and adherence to the intervention were also assessed. Key results: A total of 279 pre-intervention and 272 post-intervention surveys were completed. We found no impact on team development (-0.98, 95% CI -3.18 to 1.22), but improved team credibility (0.18, 95% CI 0.00 to 0.35) and worse psychological safety (-0.19, 95% CI -0.38 to 0.00). No differences were observed in patient satisfaction; however, results were mixed among quality outcomes. Post hoc analysis within the intervention group showed that higher adherence to the intervention was associated with improvement in team coordination (0.47, 95% CI 0.18 to 0.76), credibility (0.28, 95% CI 0.02 to 0.53), team learning (0.42, 95% CI 0.10 to 0.74), and knowledge creation (0.74, 95% CI 0.35 to 1.13) compared with teams that were less engaged. Conclusions: Leadership training and facilitation were not associated with better team functioning. Additional components to the tested intervention may be necessary to enhance team functioning. Trial registration: ClinicalTrials.gov identifier NCT03062670; registered February 23, 2017; https://clinicaltrials.gov/ct2/show/NCT03062670.
Project description:Clinical trial planning and site selection require an accurate estimate of the number of eligible patients at each site. In this study, we developed a tool to calculate the proportion of patients who would meet a specific trial's age, baseline severity, and time-to-treatment inclusion criteria. From a sample of 1322 consecutive patients with acute ischemic cerebrovascular syndromes, we developed regression curves relating the proportion of eligible patients to the allowed range of each of the 3 variables. We used half the patients to develop the model and the other half to validate it by comparing predicted versus actual proportions meeting the criteria for 4 current stroke trials. The predicted proportion of patients meeting inclusion criteria ranged from 6% to 28% among the different trials. The proportion of trial-eligible patients predicted from the first half of the data was within 0.4% to 1.4% of the actual proportion of eligible patients. This proportion increased logarithmically with National Institutes of Health Stroke Scale (NIHSS) score and time from onset; lowering the baseline limits of the NIHSS score and extending the treatment window would have the greatest impact on the proportion of patients eligible for a stroke trial. This model helps estimate the proportion of stroke patients eligible for a study based on different upper and lower limits for age, stroke severity, and time to treatment, and it may be a useful tool in clinical trial planning.
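The tool's core calculation can be illustrated as follows: compute the empirical proportion of a sample meeting candidate limits on age, NIHSS score, and onset-to-treatment time, then fit a logarithmic curve to the NIHSS cutoff, mirroring the logarithmic trend reported above. The synthetic sample and the specific limits below are assumptions.

```python
# Illustrative sketch of the eligibility-estimation idea; the patient sample
# here is synthetic and the cutoff values are arbitrary examples.
import numpy as np

rng = np.random.default_rng(1)
n = 1322
age = rng.normal(70, 12, n)            # years
nihss = rng.integers(0, 30, n)         # baseline severity score
onset_min = rng.exponential(300, n)    # minutes from symptom onset

def proportion_eligible(age_max, nihss_min, nihss_max, window_min):
    mask = (age <= age_max) & (nihss >= nihss_min) & (nihss <= nihss_max) \
           & (onset_min <= window_min)
    return mask.mean()

# Proportion eligible rises roughly logarithmically with the upper NIHSS limit.
cutoffs = np.arange(4, 26, 2)
props = [proportion_eligible(80, 4, c, 180) for c in cutoffs]
slope, intercept = np.polyfit(np.log(cutoffs), props, 1)  # props ~ a + b*log(cutoff)
print(dict(zip(cutoffs.tolist(), np.round(props, 3))))
```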
Project description:For HIV-infected children, formulation development, pharmacokinetic (PK) data, and evaluation of early toxicity are critical for licensing new antiretroviral drugs; direct evidence of efficacy in children may not be needed if acceptable safety and PK parameters are demonstrated in children. However, it is important to address questions where adult trial data cannot be extrapolated to children. In this fast-moving area, interventions need to be tailored to resource-limited settings, where most HIV-infected children live, and must take into account the decreasing numbers of younger HIV-infected children following successful prevention of mother-to-child HIV transmission. Innovative randomized controlled trial (RCT) designs enable several questions relevant to children's treatment and care to be answered within the same study. We reflect on key considerations and, with examples, discuss the relative merits of different RCT designs for addressing multiple scientific questions, including parallel multi-arm RCTs, factorial RCTs, and cross-over RCTs. We discuss inclusion of several populations (e.g., untreated and pretreated children; children and adults) in "basket" trials, incorporation of secondary randomizations after enrollment, and use of nested substudies (particularly PK and formulation acceptability) within large RCTs. We review the literature on trial designs across other disease areas in pediatrics and rare diseases and discuss their relevance for addressing questions relevant to HIV-infected children; we provide an example of a Bayesian trial design in prevention of mother-to-child HIV transmission and consider this approach for future pediatric trials. Finally, we discuss the relevance of these approaches to other areas, in particular childhood tuberculosis and hepatitis.
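As a rough sketch of the kind of Bayesian design mentioned above, the snippet below computes a beta-binomial posterior for a transmission endpoint at an interim look; the prior, interim data, and 5% target are assumed values, not those of the cited design.

```python
# Minimal beta-binomial sketch of Bayesian interim monitoring of a
# transmission endpoint (assumed prior, data, and target; illustrative only).
from scipy.stats import beta

prior_a, prior_b = 1, 1            # uniform prior on transmission probability
events, n = 2, 120                 # interim data: 2 transmissions in 120 infants

posterior = beta(prior_a + events, prior_b + n - events)
# Posterior probability that the transmission rate is below a 5% target:
print(f"P(rate < 0.05 | data) = {posterior.cdf(0.05):.3f}")
```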
Project description:The current commonly used single-guide RNA (sgRNA) structure has a shortened duplex compared with the native bacterial clustered regularly interspaced short palindromic repeats RNA (crRNA)–transactivating crRNA (tracrRNA) duplex. Here we show that modifying the sgRNA structure by extending the duplex length and mutating the fourth T of the continuous sequence of Ts (the pause signal for RNA polymerase III [pol III]) to C or G significantly, and sometimes dramatically, improves knockout efficiency in cells. In addition, the new sgRNA structure also significantly increases the efficiency of more challenging genome-editing procedures, such as gene deletion, which is important for inducing loss of function in non-coding genes.
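The scaffold modification described above amounts to a simple sequence edit, sketched below: locate the first run of four or more consecutive Ts and mutate the fourth T to C or G. The example input is a generic scaffold fragment used for illustration, not necessarily the exact published sequence.

```python
# Illustrative helper for the described scaffold change: find the first run of
# four or more consecutive T's in an sgRNA scaffold and mutate the fourth T
# (removing the Pol III pause signal). The example sequence is a placeholder.
import re

def break_pol3_pause(scaffold: str, replacement: str = "C") -> str:
    match = re.search(r"T{4,}", scaffold)
    if match is None:
        return scaffold                       # no pause signal present
    i = match.start() + 3                     # index of the fourth T in the run
    return scaffold[:i] + replacement + scaffold[i + 1:]

print(break_pol3_pause("GTTTTAGAGCTAGAAATAGCAAG"))  # -> GTTTCAGAGCTAGAAATAGCAAG
```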
Project description:AIM: The SONAR trial uses an enrichment design based on the individual response to the selective endothelin receptor antagonist atrasentan in terms of efficacy (the degree of the individual response in the urinary albumin-to-creatinine ratio [UACR]) and safety/tolerability (signs of sodium retention and acute increases in serum creatinine) to assess the effects of this agent on major renal outcomes. The patient population and enrichment results are described here. METHODS: Patients with type 2 diabetes with an estimated glomerular filtration rate (eGFR) of 25 to 75 mL/min/1.73 m^2 and UACR between 300 and 5000 mg/g were enrolled. After a run-in period, eligible patients received 0.75 mg/d of atrasentan for 6 weeks. A total of 2648 responder patients, in whom UACR decreased by ≥30% compared with baseline, were enrolled, as were 1020 non-responders with a UACR decrease of <30%. Patients who experienced a weight gain of >3 kg and in whom brain natriuretic peptide exceeded 300 pg/mL, or who experienced an increase in serum creatinine >20% (0.5 mg/dL), were not randomized. RESULTS: Baseline characteristics were similar for atrasentan responders and non-responders. Upon entry to the study, median UACR was 802 mg/g in responders and 920 mg/g in non-responders. After 6 weeks of treatment with atrasentan, the UACR change was -48.8% (95% CI, -49.8% to -47.9%) in responders and -1.2% (95% CI, -6.4% to 3.9%) in non-responders. Changes in other renal risk markers were similar between responders and non-responders, except for a marginally greater reduction in systolic blood pressure and eGFR in responders. CONCLUSIONS: The enrichment period successfully identified a population with a profound UACR reduction and no clinical signs of sodium retention, in whom a large atrasentan effect on clinically important renal outcomes is possible. The SONAR trial aims to establish whether atrasentan confers renal protection.
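The enrichment logic reads as a simple eligibility filter; a hedged sketch under the stated thresholds follows (the field names are invented, and the exact protocol logic, e.g. how the percentage and absolute creatinine criteria combine, may differ from this reading).

```python
# Illustrative responder/enrichment filter following the thresholds stated
# above (field names are assumptions; not the trial's actual algorithm).
def sonar_enrichment_status(uacr_change_pct, weight_gain_kg,
                            bnp_pg_ml, creat_increase_pct):
    """Classify a patient after the 6-week atrasentan exposure period."""
    # Safety exclusions: signs of sodium retention or acute creatinine rise.
    if (weight_gain_kg > 3 and bnp_pg_ml > 300) or creat_increase_pct > 20:
        return "not randomized"
    # Efficacy: responders show a >=30% fall in UACR from baseline.
    return "responder" if uacr_change_pct <= -30 else "non-responder"

print(sonar_enrichment_status(-48.8, 1.2, 150, 5))   # -> responder
```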
Project description:Lentiviral vector (LV)-based hematopoietic stem cell (HSC) gene therapy is becoming a promising clinical strategy for the treatment of genetic blood diseases. However, the current approach of modifying 1 × 10^8 to 1 × 10^9 CD34+ cells per patient requires large amounts of LV, which is expensive and technically challenging to produce at clinical scale. Modification of bulk CD34+ cells uses LV inefficiently, because the majority of CD34+ cells are short-term progenitors with a limited post-transplant lifespan. Here, we utilized a clinically relevant, immunomagnetic bead (IB)-based method to purify CD34+CD38- cells from human bone marrow (BM) and mobilized peripheral blood (mPB). IB purification of CD34+CD38- cells enriched severe combined immune deficiency (SCID) repopulating cell (SRC) frequency an additional 12-fold beyond standard CD34+ purification and did not affect gene marking of long-term HSCs. Transplant of purified CD34+CD38- cells led to delayed myeloid reconstitution, which could be rescued by the addition of non-transduced CD38+ cells. Importantly, LV modification and transplantation of IB-purified CD34+CD38- cells/non-modified CD38+ cells into immune-deficient mice achieved long-term gene-marked engraftment comparable with modification of bulk CD34+ cells, while utilizing ~7-fold less LV. Thus, we demonstrate a translatable method to improve the clinical and commercial viability of gene therapy for genetic blood cell diseases.
Project description:Background: Sepsis costs and incidence vary dramatically across diagnostic categories, warranting a customized approach to implementing predictive models. Objective: The aim of this study was to optimize the parameters of a sepsis prediction model within distinct patient groups to minimize the excess cost of sepsis care, and to analyze how factors governing end-user response to sepsis alerts affect overall model utility. Methods: We calculated the excess costs of sepsis to the Centers for Medicare and Medicaid Services (CMS) by comparing patients with and without a secondary sepsis diagnosis but with the same primary diagnosis and baseline comorbidities. We optimized the parameters of a sepsis prediction algorithm across different diagnostic categories to minimize these excess costs. At the optima, we evaluated diagnostic odds ratios and analyzed the impact of end-user response factors, such as noncompliance, treatment efficacy, and tolerance for false alarms, on the net benefit of triggering sepsis alerts. Results: Compliance factors contributed significantly to the net benefit of triggering a sepsis alert. However, a customized deployment policy can achieve a significantly higher diagnostic odds ratio and reduced costs of sepsis care. Implementing our optimization routine with powerful predictive models could yield US $4.6 billion in excess cost savings for CMS. Conclusions: We designed a framework for customizing sepsis alert protocols within different diagnostic categories to minimize excess costs, and we analyzed model performance as a function of false alarm tolerance and compliance with model recommendations. We provide a framework that CMS policymakers could use to recommend minimum adherence rates for the early recognition and appropriate care of sepsis, sensitive to hospital department-level incidence rates and national excess costs. Customizing the implementation of clinical predictive models by accounting for behavioral and economic factors may improve their practical benefit.
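The deployment trade-off described here can be framed as minimizing expected cost per patient over the alert threshold. The sketch below is illustrative only: the cost figures, compliance rate, treatment efficacy, and synthetic risk scores are all assumptions, not the study's values.

```python
# Sketch of choosing an alert threshold to minimize expected excess cost,
# accounting for compliance and false-alarm tolerance. All parameter values
# (costs, compliance, efficacy) are illustrative assumptions.
import numpy as np

def expected_cost_per_patient(threshold, scores, labels,
                              excess_cost=15000.0,    # per untreated sepsis episode
                              alarm_cost=150.0,       # workup triggered per alert
                              compliance=0.8,         # fraction of alerts acted upon
                              efficacy=0.4):          # cost averted when acted upon
    alerts = scores >= threshold
    tp = np.mean(alerts & (labels == 1))   # alerted, septic
    fp = np.mean(alerts & (labels == 0))   # alerted, not septic
    averted = tp * compliance * efficacy * excess_cost
    return labels.mean() * excess_cost - averted + (tp + fp) * alarm_cost

rng = np.random.default_rng(7)
labels = (rng.random(10000) < 0.06).astype(int)           # ~6% sepsis incidence
scores = np.clip(rng.normal(0.3 + 0.4 * labels, 0.2), 0, 1)

grid = np.linspace(0, 1, 101)
costs = [expected_cost_per_patient(t, scores, labels) for t in grid]
print(f"cost-minimizing threshold: {grid[int(np.argmin(costs))]:.2f}")
```

In this framing, lowering compliance or efficacy shrinks the averted-cost term, which shifts the optimal threshold and can erase the net benefit of alerting, consistent with the sensitivity analysis described above.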