Project description: The analysis was performed in two parts: a descriptive analysis of the response within each adjuvant group and an analysis at the individual subject level. We used blood transcriptional modules to interpret the results.
Project description: Background: Over-crowded surgical trays result in perioperative inefficiency and unnecessary costs. While methodologies to reduce the size of surgical trays have been described in the literature, they each have their own drawbacks. In this study, we compared three methods: (1) clinician review (CR), (2) mathematical programming (MP), and (3) a novel hybrid model (HM) based on surveys and cost analysis. While CR and MP are well documented, CR can yield suboptimal reductions and MP can be laborious and technically challenging. We hypothesized that our easy-to-implement HM would result in a reduction of surgical instruments in both the laminectomy tray (LT) and basic neurosurgery tray (BNT) comparable to CR and MP. Methods: Three approaches were tested: CR, MP, and HM. We interviewed 5 neurosurgeons and 3 orthopedic surgeons at our institution, who performed a total of 5437 spine cases requiring the use of the LT and BNT over a 4-year period (2017-2021). In CR, surgeons suggested which surgical instruments should be removed. MP was performed via mathematical analysis of 25 observations of LT and BNT use. The HM was performed via a structured survey of the surgeons' estimated instrument usage, followed by a cost-based inflection point analysis. Results: The CR, MP, and HM approaches resulted in total instrument reductions of 41%, 35%, and 38%, respectively, corresponding to total cost savings per annum of $50,211.20, $46,348.80, and $44,417.60, respectively. Conclusions: While hospitals continue to examine perioperative services for potential inefficiencies, surgical inventory will be increasingly scrutinized. Although MP is the most accurate methodology for tray reduction, our results suggest that savings were similar across all three methods. CR and HM are significantly less laborious and are thus practical alternatives.
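As a rough illustration of what a cost-based inflection point analysis might look like, the sketch below ranks instruments by surveyed usage and picks the tray size that minimizes a simple annual cost model. All instrument names, usage fractions, costs, and case volumes are invented assumptions, not figures from the study.

```python
# Hypothetical sketch: survey-driven, cost-based inflection point for tray
# reduction. Instrument names, usage fractions, and costs are illustrative
# assumptions, not data from the study.

REPROCESS_COST = 3.40   # assumed cost to reprocess one tray instrument per case
PEEL_PACK_COST = 9.00   # assumed cost to open a separately packed instrument
CASES_PER_YEAR = 1360   # assumed annual case volume using the tray

# (instrument, surveyed fraction of cases in which it is used)
survey = [
    ("scalpel handle", 0.98), ("needle driver", 0.95), ("Kerrison 2mm", 0.80),
    ("curette", 0.55), ("rongeur", 0.40), ("nerve hook", 0.12),
    ("dural separator", 0.05),
]

def annual_cost(keep_n, items):
    """Total yearly cost if the top-n most used instruments stay on the tray."""
    kept, removed = items[:keep_n], items[keep_n:]
    tray_cost = len(kept) * REPROCESS_COST * CASES_PER_YEAR
    adhoc_cost = sum(use * PEEL_PACK_COST * CASES_PER_YEAR for _, use in removed)
    return tray_cost + adhoc_cost

items = sorted(survey, key=lambda x: x[1], reverse=True)
costs = [(n, annual_cost(n, items)) for n in range(len(items) + 1)]
best_n, best_cost = min(costs, key=lambda c: c[1])
print(f"Inflection point: keep {best_n} instruments, projected cost ${best_cost:,.0f}/yr")
```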
Project description: Background: Although compressed sensing (CS) accelerated cine holds immense potential to replace conventional cardiovascular magnetic resonance (CMR) cine, how to use CS-based cine appropriately during clinical CMR examinations still needs exploring. Methods: A total of 104 patients (mean age 46.5 ± 17.1 years) participated in this prospective study. For each participant, a balanced steady-state free precession (bSSFP) cine was acquired as a reference (bSSFPref), followed by two CS accelerated cine sequences with identical parameters before and after contrast injection. Lastly, a CS accelerated cine sequence with an increased flip angle was obtained. We subsequently compared scanning time, image quality, and biventricular function parameters between these sequences. Results: All CS cine sequences demonstrated significantly shorter acquisition times compared to bSSFPref cine (p < 0.001). The bSSFPref cine showed higher left ventricular ejection fraction (LVEF) than all CS cine sequences (all p < 0.001), but no significant differences in LVEF were observed among the three CS cine sequences. Additionally, CS cine sequences displayed superior global image quality (p < 0.05) and fewer artifacts than bSSFPref cine (p < 0.005). Unenhanced CS cine and enhanced CS cine with increased flip angle showed higher global image quality than the other cine sequences (p < 0.005). Conclusion: Single breath-hold CS cine delivers precise biventricular function parameters and offers a range of benefits, including shorter scan time, better global image quality, and diminished motion artifacts. This approach holds great promise for replacing conventional bSSFP cine and optimizing the CMR examination workflow.
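A minimal sketch of the kind of per-patient (paired) comparison described above, using synthetic values in place of the study measurements; the choice between a paired t-test and a Wilcoxon signed-rank test is an assumption here, not a statement of the study's statistical plan.

```python
# Illustrative paired comparison of LVEF between the reference bSSFP cine and
# a CS cine sequence. All numbers are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 104
lvef_bssfp = rng.normal(60.0, 8.0, n)            # reference bSSFP cine LVEF (%)
lvef_cs = lvef_bssfp - rng.normal(1.5, 1.0, n)   # CS cine LVEF, slightly lower

t, p = stats.ttest_rel(lvef_bssfp, lvef_cs)      # paired t-test
w, p_w = stats.wilcoxon(lvef_bssfp - lvef_cs)    # non-parametric alternative
print(f"paired t-test p = {p:.4f}, Wilcoxon signed-rank p = {p_w:.4f}")
```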
Project description: Background: In Huntington's disease clinical trials, recruitment and stratification approaches primarily rely on genetic load, cognitive, and motor assessment scores. They focus less on in vivo brain imaging markers, which reflect neuropathology well before clinical diagnosis. Machine learning methods offer a degree of sophistication that could significantly improve prognosis and stratification by leveraging multimodal biomarkers from large datasets. Such models, specifically tailored to HD gene expansion carriers, could further enhance the efficacy of the stratification process. Objectives: To improve stratification of Huntington's disease individuals for clinical trials. Methods: We used data from 451 gene-positive individuals with Huntington's disease (both premanifest and diagnosed) from previously published cohorts (PREDICT, TRACK, TrackON, and IMAGE). We applied whole-brain parcellation to longitudinal brain scans and measured the rate of lateral ventricular enlargement over 3 years, which was used as the target variable for our prognostic random forest regression models. The models were trained on various combinations of baseline features, including genetic load, cognitive and motor assessment score biomarkers, as well as brain imaging-derived features. Furthermore, a simplified stratification model was developed to classify individuals into two homogeneous groups (low risk and high risk) based on their anticipated rate of ventricular enlargement. Results: The predictive accuracy of the prognostic models substantially improved by integrating brain imaging features alongside genetic load and cognitive and motor biomarkers: a 24% reduction in the cross-validated mean absolute error, yielding an error of 530 mm³/year. The stratification model had a cross-validated accuracy of 81% in differentiating between moderate and fast progressors (precision = 83%, recall = 80%). Conclusions: This study validated the effectiveness of machine learning in differentiating between low- and high-risk individuals based on the rate of ventricular enlargement. The models were trained exclusively on features from HD individuals, which offers a more disease-specific, simplified, and accurate approach for prognostic enrichment compared to relying on features extracted from healthy control groups, as done in previous studies. The proposed method has the potential to enhance clinical utility by: (i) enabling more targeted recruitment of individuals for clinical trials, (ii) improving post-hoc evaluation of individuals, and (iii) ultimately leading to better outcomes for individuals through personalized treatment selection.
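A minimal sketch of the prognostic modelling step: random forest regression of the ventricular enlargement rate with a cross-validated mean absolute error, followed by thresholding the predictions into two strata. The synthetic features, the number of predictors, and the median cut-point are assumptions for illustration only.

```python
# Sketch of the approach described above on synthetic data. Only the general
# workflow (random forest regression, cross-validated MAE, two-way
# stratification of predictions) follows the description; everything else is
# an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 451
X = rng.normal(size=(n, 6))   # e.g. genetic load, motor/cognitive scores, imaging features (assumed)
y = 2000 + 800 * X[:, 0] + rng.normal(0, 400, n)   # ventricular enlargement rate (mm^3/year)

model = RandomForestRegressor(n_estimators=500, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=5)       # out-of-fold predictions
print("cross-validated MAE:", mean_absolute_error(y, y_hat))

threshold = np.median(y_hat)   # assumed cut-point separating slower vs. faster progressors
high_risk = y_hat >= threshold
print("high-risk fraction:", high_risk.mean())
```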
Project description: Little is known about how best to prioritize various tele-ICU-specific tasks and workflows to maximize operational efficiency. We set out to: 1) develop an operational model that accurately reflects tele-ICU workflows at baseline, 2) identify workflow changes that optimize operational efficiency through discrete-event simulation and multi-class priority queuing modeling, and 3) implement the predicted favorable workflow changes and validate the simulation model through prospective correlation of actual-to-predicted change in performance measures linked to patient outcomes. Setting: Tele-ICU of a large healthcare system in New York State covering nine ICUs across the spectrum of adult critical care. Patients: Seven thousand three hundred eighty-seven adult critically ill patients admitted to a system ICU (1,155 patients pre-intervention in 2016Q1 and 6,232 patients post-intervention from 2016Q3 to 2017Q2). Interventions: Change in tele-ICU workflow process structure and hierarchical process priority based on discrete-event simulation. Measurements and main results: Our discrete-event simulation model accurately reflected the actual baseline average time to first video assessment (TVFA) by both the tele-ICU intensivist (simulated 132.8 ± 6.7 min vs 132 ± 12.2 min actual) and the tele-ICU nurse (simulated 128.4 ± 7.6 min vs 123 ± 9.8 min actual). For a simultaneous priority and process change, the model simulated a reduction in average TVFA to 51.3 ± 1.6 min (tele-ICU intensivist) and 50.7 ± 2.1 min (tele-ICU nurse), less than the sum of the simulated reductions for each change alone, suggesting some degree of interaction between the changes. Subsequently implementing both changes simultaneously resulted in actual reductions in average TVFA to values within the 95% CIs of the simulations (50 ± 5.5 min for tele-intensivists and 49 ± 3.9 min for tele-nurses). Conclusions: Discrete-event simulation can accurately predict the effects of contemplated multidisciplinary tele-ICU workflow changes. The value of workflow process and task priority modeling is likely to increase with increasing operational complexities and interdependencies.
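The sketch below shows a bare-bones multi-class priority-queue, discrete-event simulation for a single tele-intensivist, of the general kind described above; the arrival rates, service times, and priority classes are invented and do not reproduce the study's model.

```python
# Minimal non-preemptive priority-queue simulation of time to first video
# assessment (TVFA). All rates and durations are illustrative assumptions.
import heapq, random

random.seed(1)
SIM_MINUTES = 7 * 24 * 60  # one simulated week

def arrivals(rate_per_hr, priority, service_mean):
    """Generate (arrival_time, priority, service_time) tuples for one task class."""
    t, out = 0.0, []
    while t < SIM_MINUTES:
        t += random.expovariate(rate_per_hr / 60.0)
        out.append((t, priority, random.expovariate(1.0 / service_mean)))
    return out

# priority 0 = new-admission video assessment (served first), 1 = routine task
tasks = arrivals(0.8, 0, 20.0) + arrivals(2.0, 1, 10.0)
tasks.sort()  # chronological arrival order

clock, queue, waits, i = 0.0, [], [], 0
while i < len(tasks) or queue:
    # admit every task that has arrived by the current time into the queue
    while i < len(tasks) and tasks[i][0] <= clock:
        arr, prio, svc = tasks[i]
        heapq.heappush(queue, (prio, arr, svc))
        i += 1
    if not queue:               # clinician idle: jump to the next arrival
        clock = tasks[i][0]
        continue
    prio, arr, svc = heapq.heappop(queue)
    clock += svc                # serve highest-priority, earliest-arrived task
    if prio == 0:
        waits.append(clock - arr)  # admission-to-completed-assessment time

print(f"mean simulated TVFA: {sum(waits) / len(waits):.1f} min")
```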
Project description: Background: Leaders play a crucial role in implementing and sustaining changes in clinical practice, yet there is limited evidence on strategies to engage them in team problem solving and communication. Objective: Examine the impact of an intervention focused on facilitating leadership during daily huddles on optimizing team-based care and improving outcomes. Design: Cluster-randomized trial using intention-to-treat analysis to measure the effects of the intervention (n = 13 teams) compared with routine practice (n = 16 teams). Participants: Twenty-nine primary care clinics affiliated with a large integrated health system in the upper Midwest, representing differing practice types and geographic settings. Intervention: Full-day leadership training retreat for team leaders on facilitating care team huddles, followed by biweekly coaching calls and two site visits with an assigned coach. Main measures: Primary outcomes of team development and function were collected pre- and post-intervention using surveys. Patient satisfaction and quality outcomes were compared pre- and post-intervention as secondary outcomes. Leadership engagement and adherence to the intervention were also assessed. Key results: A total of 279 pre-intervention and 272 post-intervention surveys were completed. We found no impact on team development (-0.98, 95% CI (-3.18, 1.22)), improved team credibility (0.18, 95% CI (0.00, 0.35)), but worse psychological safety (-0.19, 95% CI (-0.38, 0.00)). No differences were observed in patient satisfaction; however, results were mixed among quality outcomes. Post hoc analysis within the intervention group showed that higher adherence to the intervention was associated with improvement in team coordination (0.47, 95% CI (0.18, 0.76)), credibility (0.28, 95% CI (0.02, 0.53)), team learning (0.42, 95% CI (0.10, 0.74)), and knowledge creation (0.74, 95% CI (0.35, 1.13)) compared to teams that were less engaged. Conclusions: Results of this evaluation showed that leadership training and facilitation were not associated with better team functioning. Additional components to the tested intervention may be necessary to enhance team functioning. Trial registration: ClinicalTrials.gov Identifier NCT03062670. Registration Date: February 23, 2017. URL: https://clinicaltrials.gov/ct2/show/NCT03062670.
Project description: Introduction: Due to limited arable land resources, intercropping has emerged as an efficient and sustainable production method for increasing total grain yield per unit land area. Maize-soybean strip intercropping (MSSI) technology is being widely promoted and applied across China. However, the optimal density combination for achieving higher production efficiency of both soybean and maize remains unclear. The objective of this study was to evaluate the differences in yield, economic benefits, and land and nitrogen (N) efficiency in MSSI systems under different densities. Methods: Five maize/soybean density combinations (67,500/97,500 plants ha⁻¹, D1; 67,500/120,000 plants ha⁻¹, D2; 67,500/142,500 plants ha⁻¹, D3; 60,000/142,500 plants ha⁻¹, D4; 52,500/142,500 plants ha⁻¹, D5) were set under the same N input in the field experiment. Results and discussion: The results demonstrated that optimizing density in the intercropping system could enhance production efficiency. Increasing the density of soybean and maize significantly increased the total grain yield (D3 > D2 > D1 > D4 > D5). The D3 treatment, which exhibited the best comprehensive performance, also promoted increases in leaf area index, dry matter accumulation, and N absorption and utilization. Path analysis indicated that density had the most substantial impact on maize yield, while grain number had the greatest influence on soybean yield, with contribution rates of 49.7% and 61.0%, respectively. These results provide valuable insights into optimal field density for summer planting in MSSI, facilitating its wider adoption.
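For illustration, one simple way to obtain contribution rates of the kind reported is to treat standardized regression coefficients as path coefficients and express each as a share of their total; the sketch below does this on synthetic data and is an assumed simplification, not the study's actual path analysis.

```python
# Rough sketch: standardized coefficients of yield on its components, with each
# coefficient's share treated as its "contribution rate". Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 60
density = rng.normal(0, 1, n)
grain_number = 0.5 * density + rng.normal(0, 1, n)
yield_ = 0.7 * density + 0.4 * grain_number + rng.normal(0, 0.5, n)

X = np.column_stack([density, grain_number])
Xs = (X - X.mean(0)) / X.std(0)                   # standardize predictors
ys = (yield_ - yield_.mean()) / yield_.std()      # standardize response
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)    # standardized (path) coefficients
contrib = np.abs(beta) / np.abs(beta).sum() * 100 # share of each predictor, in %
print(dict(zip(["density", "grain number"], contrib.round(1))))
```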
Project description: Clinical trial planning and site selection require an accurate estimate of the number of eligible patients at each site. In this study, we developed a tool to calculate the proportion of patients who would meet a specific trial's age, baseline severity, and time-to-treatment inclusion criteria. From a sample of 1322 consecutive patients with acute ischemic cerebrovascular syndromes, we developed regression curves relating the proportion of patients within each range of the 3 variables. We used half the patients to develop the model and the other half to validate it by comparing predicted versus actual proportions of patients who met the criteria for 4 current stroke trials. The predicted proportion of patients meeting inclusion criteria ranged from 6% to 28% among the different trials. The proportion of trial-eligible patients predicted from the first half of the data was within 0.4% to 1.4% of the actual proportion of eligible patients. This proportion increased logarithmically with the National Institutes of Health Stroke Scale (NIHSS) score and time from onset; lowering the baseline limits of the NIHSS score and extending the treatment window would have the greatest impact on the proportion of patients eligible for a stroke trial. This model helps estimate the proportion of stroke patients eligible for a study based on different upper and lower limits for age, stroke severity, and time to treatment, and it may be a useful tool in clinical trial planning.
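The sketch below illustrates the core of such an eligibility-estimation tool: given a patient sample and a trial's age, NIHSS, and onset-to-treatment limits, it returns the proportion of patients who would qualify. The patient data and the example trial limits are invented for demonstration and are not the study's cohort or any specific trial's criteria.

```python
# Illustrative eligibility calculator over a synthetic patient sample.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
patients = pd.DataFrame({
    "age": rng.normal(70, 12, 1322).clip(18, 100),          # years
    "nihss": rng.integers(0, 30, 1322),                     # NIHSS score
    "onset_to_treatment_min": rng.exponential(180, 1322),   # minutes from onset
})

def eligible_fraction(df, age_max, nihss_min, nihss_max, window_min):
    """Proportion of patients meeting the age, severity, and time-window limits."""
    mask = (
        (df.age <= age_max)
        & df.nihss.between(nihss_min, nihss_max)
        & (df.onset_to_treatment_min <= window_min)
    )
    return mask.mean()

# e.g. a hypothetical trial: age <= 80, NIHSS 6-22, treated within 4.5 hours
print(f"{eligible_fraction(patients, 80, 6, 22, 270):.1%} of sampled patients eligible")
```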
Project description:For HIV-infected children, formulation development, pharmacokinetic (PK) data, and evaluation of early toxicity are critical for licensing new antiretroviral drugs; direct evidence of efficacy in children may not be needed if acceptable safety and PK parameters are demonstrated in children. However, it is important to address questions where adult trial data cannot be extrapolated to children. In this fast-moving area, interventions need to be tailored to resource-limited settings where most HIV-infected children live and take account of decreasing numbers of younger HIV-infected children after successful prevention of mother-to-child HIV transmission. Innovative randomized controlled trial (RCT) designs enable several questions relevant to children's treatment and care to be answered within the same study. We reflect on key considerations, and, with examples, discuss the relative merits of different RCT designs for addressing multiple scientific questions including parallel multi-arm RCTs, factorial RCTs, and cross-over RCTs. We discuss inclusion of several populations (eg, untreated and pretreated children; children and adults) in "basket" trials; incorporation of secondary randomizations after enrollment and use of nested substudies (particularly PK and formulation acceptability) within large RCTs. We review the literature on trial designs across other disease areas in pediatrics and rare diseases and discuss their relevance for addressing questions relevant to HIV-infected children; we provide an example of a Bayesian trial design in prevention of mother-to-child HIV transmission and consider this approach for future pediatric trials. Finally, we discuss the relevance of these approaches to other areas, in particular, childhood tuberculosis and hepatitis.
Project description: Aim: The SONAR trial uses an enrichment design based on the individual response to the selective endothelin receptor antagonist atrasentan in terms of efficacy (the degree of the individual response in the urinary albumin-to-creatinine ratio [UACR]) and safety/tolerability (signs of sodium retention and acute increases in serum creatinine) to assess the effects of this agent on major renal outcomes. The patient population and enrichment results are described here. Methods: Patients with type 2 diabetes with an estimated glomerular filtration rate (eGFR) of 25 to 75 mL/min/1.73 m² and UACR between 300 and 5000 mg/g were enrolled. After a run-in period, eligible patients received 0.75 mg/d of atrasentan for 6 weeks. A total of 2648 responder patients, in whom UACR decreased by ≥30% compared to baseline, were enrolled, as were 1020 non-responders with a UACR decrease of <30%. Patients who experienced a weight gain of >3 kg with a brain natriuretic peptide level ≥300 pg/mL, or an increase in serum creatinine >20% (0.5 mg/dL), were not randomized. Results: Baseline characteristics were similar for atrasentan responders and non-responders. Upon entry to the study, median UACR was 802 mg/g in responders and 920 mg/g in non-responders. After 6 weeks of treatment with atrasentan, the UACR change was -48.8% (95% CI, -49.8% to -47.9%) in responders and -1.2% (95% CI, -6.4% to 3.9%) in non-responders. Changes in other renal risk markers were similar between responders and non-responders, except for a marginally greater reduction in systolic blood pressure and eGFR in responders. Conclusions: The enrichment period successfully identified a population with a profound UACR reduction and without clinical signs of sodium retention, in whom a large atrasentan effect on clinically important renal outcomes is possible. The SONAR trial aims to establish whether atrasentan confers renal protection.
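The run-in enrichment logic lends itself to a simple tabular filter; the sketch below applies the stated thresholds (≥30% UACR reduction for responders; exclusion for weight gain >3 kg with BNP ≥300 pg/mL, or a >20% serum creatinine increase) to a hypothetical dataset with assumed column names and values.

```python
# Sketch of the enrichment classification on an invented run-in dataset.
# Thresholds follow the description above; everything else is assumed.
import pandas as pd

runin = pd.DataFrame({
    "uacr_baseline":    [800, 1200, 950],   # mg/g at start of run-in
    "uacr_week6":       [380, 1150, 600],   # mg/g after 6 weeks of atrasentan
    "weight_gain_kg":   [1.0, 3.5, 0.5],
    "bnp_pg_ml":        [90, 350, 120],
    "creat_change_pct": [5, 10, 25],
})

uacr_change = (runin.uacr_week6 - runin.uacr_baseline) / runin.uacr_baseline * 100
responder = uacr_change <= -30                                   # >=30% UACR reduction
excluded = ((runin.weight_gain_kg > 3) & (runin.bnp_pg_ml >= 300)) | (runin.creat_change_pct > 20)

runin["status"] = "non-responder"
runin.loc[responder, "status"] = "responder"
runin.loc[excluded, "status"] = "not randomized"                 # safety exclusion overrides
print(runin[["status"]])
```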