Ignoring instead of chasing after coagulation factor VII during warfarin management: an interrupted time series study.
ABSTRACT: During warfarin management, variability in prothrombin time-based international normalized ratio (PT-INR) is caused, in part, by clinically inconsequential fluctuations in factor VII (FVII). The new factor II and X (Fiix)-prothrombin time (Fiix-PT) and Fiix-normalized ratio (Fiix-NR), unlike PT-INR, are affected only by reduced FII and FX. We assessed the incidence of thromboembolism (TE) and major bleeding (MB) in all 2667 patients on maintenance-phase warfarin managed at our anticoagulation management service during 30 months: 12 months before and 18 months after replacing PT-INR monitoring with Fiix-NR monitoring. Months 13 to 18 were predefined as transitional months. Using 2-segmented regression, a breakpoint in the monthly incidence of TE became evident 6 months after test replacement, which was followed by a 56% reduction in incidence (from 2.82% to 1.23% per patient-year; P = .019). Three-segmented regression did not find any significant trend in TE incidence (slope, +0.03) prior to test replacement; however, during months 13 to 18 and 19 to 30, the incidence of TE decreased gradually (slope, -0.12; R2 = 0.20; P = .007). The incidence of MB (2.79% per patient-year) did not differ. Incidence comparison during the 12-month Fiix and PT periods confirmed a statistically significant reduction (55-62%) in TE. Fiix monitoring reduced testing, dose adjustments, and normalized ratio variability, and prolonged testing intervals and time in range. We conclude that ignoring FVII during Fiix-NR monitoring in real-world practice stabilizes the anticoagulant effect of warfarin and is associated with a major reduction in TEs without increasing bleeding.
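The 2-segmented regression described above can be illustrated with a minimal sketch: a piecewise-linear model is fitted at each candidate breakpoint and the breakpoint minimizing the residual sum of squares is retained. The study's actual data and code are not available here, so the monthly incidence series, the breakpoint search range, and the use of statsmodels are all assumptions for illustration only.

```python
# Minimal sketch of 2-segmented regression with a data-driven breakpoint.
# The monthly incidence series below is hypothetical, not the study's data.
import numpy as np
import statsmodels.api as sm

months = np.arange(1, 31)                               # 30 study months
rng = np.random.default_rng(0)
incidence = rng.normal(2.8, 0.4, 30)                    # hypothetical % per patient-year
incidence[18:] -= 1.5                                   # hypothetical drop late in the series

def fit_two_segment(t, y, breakpoint):
    """Piecewise-linear fit: baseline slope, level shift, and slope change at the breakpoint."""
    after = (t > breakpoint).astype(float)
    X = sm.add_constant(np.column_stack([t, after, after * (t - breakpoint)]))
    return sm.OLS(y, X).fit()

# Keep the breakpoint that minimizes the residual sum of squares.
fits = {bp: fit_two_segment(months, incidence, bp) for bp in range(6, 25)}
best_bp = min(fits, key=lambda bp: fits[bp].ssr)
print(best_bp, fits[best_bp].params)                    # breakpoint and segment coefficients
```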
Project description:OBJECTIVE:To evaluate the impact of an opioid-sparing pain management protocol on overall opioid consumption and clinical outcomes. METHODS:This was a single-center, quasi-experimental, retrospective, before-and-after cohort study. We used an interrupted time series to analyze changes in the levels and trends of the utilization of different analgesics. We used bivariate comparisons in the before and after cohorts as well as logistic regression and quantile regression for adjusted estimates. RESULTS:We included 988 patients in the preintervention period and 1,838 in the postintervention period. Fentanyl consumption was slightly increasing before the intervention (β = 16; 95%CI 7 to 25; p = 0.002) but decreased substantially in level with the intervention (β = -128; 95%CI -195 to -62; p = 0.001) and then progressively decreased (β = -24; 95%CI -35 to -13; p < 0.001). There was an increasing trend in the utilization of dipyrone. The mechanical ventilation duration was significantly lower (median difference: -1 day; 95%CI -1 to 0; p < 0.001), especially for patients who were mechanically ventilated for a longer time (50th percentile difference: -0.78; 95%CI -1.51 to -0.05; p = 0.036; 75th percentile difference: -2.23; 95%CI -3.47 to -0.98; p < 0.001). CONCLUSION:A pain management protocol could reduce the intensive care unit consumption of fentanyl. This strategy was associated with a shorter mechanical ventilation duration.
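The level and trend coefficients reported above are the kind produced by a standard segmented interrupted time series regression: outcome modelled on time, an indicator for the post-intervention period, and time since the intervention. The sketch below is not the study's code; the monthly series is simulated (with true effects chosen to echo the reported β values), and the variable names and month-12 interruption are assumptions.

```python
# Minimal sketch of a segmented ITS model: trend, immediate level change, and
# post-intervention change in trend. All data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"month": np.arange(1, 25)})
df["post"] = (df["month"] > 12).astype(int)               # 1 after the intervention
df["time_since"] = np.clip(df["month"] - 12, 0, None)     # months since the intervention
df["fentanyl"] = (1000 + 16 * df["month"]                 # pre-intervention trend
                  - 128 * df["post"]                      # immediate level change
                  - 24 * df["time_since"]                 # change in trend
                  + rng.normal(0, 30, 24))

fit = smf.ols("fentanyl ~ month + post + time_since", data=df).fit()
print(fit.params)   # 'post' is the level change, 'time_since' is the trend change
```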
Project description:Introduction: Interrupted Time Series (ITS) studies may be used to assess the impact of an interruption, such as an intervention or exposure. The data from such studies are particularly amenable to visual display and, when clearly depicted, can readily show the short- and long-term impact of an interruption. Further, well-constructed graphs allow data to be extracted using digitizing software, which can facilitate their inclusion in systematic reviews and meta-analyses. Aim: We provide recommendations for graphing ITS data, examine the properties of plots presented in ITS studies, and provide examples employing our recommendations. Methods and results: Graphing recommendations from seminal data visualization resources were adapted for use with ITS studies. The adapted recommendations cover plotting of data points, trend lines, interruptions, additional lines and general graph components. We assessed whether 217 graphs from recently published (2013-2017) ITS studies met our recommendations and found that 130 graphs (60%) had clearly distinct data points, 100 (46%) had trend lines, and 161 (74%) had a clearly defined interruption. Accurate data extraction (requiring distinct points that align with axis tick marks and labels that allow the points to be interpreted) was possible in only 72 (33%) graphs. Conclusion: We found that many ITS graphs did not meet our recommendations and could be improved with simple changes. Our proposed recommendations aim to achieve greater standardization and improvement in the display of ITS data, and facilitate re-use of the data in systematic reviews and meta-analyses.
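As a rough illustration of the general plotting recommendations above (distinct data points, separate pre- and post-interruption trend lines, and a clearly marked interruption), here is a hypothetical matplotlib sketch. It is not taken from the paper; the series, interruption month, and labels are invented.

```python
# Hypothetical ITS graph: distinct points, pre/post trend lines, marked interruption.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
t = np.arange(1, 25)
y = np.where(t <= 12, 50 + 0.5 * t, 48 - 0.8 * (t - 12)) + rng.normal(0, 1.5, 24)

fig, ax = plt.subplots()
ax.plot(t, y, "o", color="black", label="Monthly outcome")               # distinct data points
for mask, label in [(t <= 12, "Pre-interruption trend"),
                    (t > 12, "Post-interruption trend")]:
    slope, intercept = np.polyfit(t[mask], y[mask], 1)
    ax.plot(t[mask], intercept + slope * t[mask], "-", label=label)      # trend lines
ax.axvline(12.5, linestyle="--", color="grey", label="Interruption")     # interruption marker
ax.set_xlabel("Month")
ax.set_ylabel("Outcome")
ax.legend()
plt.show()
```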
Project description:BACKGROUND:As part of a partnership between the Institute for Healthcare Improvement and the Ethiopian Federal Ministry of Health, woreda-based quality improvement collaboratives took place between November 2016 and December 2017 aiming to accelerate reduction of maternal and neonatal mortality in Lemu Bilbilu, Tanqua Abergele and Duguna Fango woredas. Before starting the collaboratives, assessments found inaccuracies in core measures obtained from Health Management Information System reports. METHODS AND RESULTS:Building on the quality improvement collaborative design, data quality improvement activities were added and we used the World Health Organization review methodology to derive a verification factor for the core measures of number of pregnant women that received their first antenatal care visit, number of pregnant women that received antenatal care on at least four visits, number of pregnant women tested for syphilis and number of births attended by skilled health personnel. Impact of the data quality improvement was assessed using interrupted time series analysis. We found accurate data across all time periods for Tanqua Abergele. In Lemu Bilbilu and Duguna Fango, data quality improved for all core metrics over time. In Duguna Fango, the verification factor for number of pregnant women that received their first antenatal care visit improved from 0.794 (95%CI 0.753, 0.836; p<0.001) pre-intervention by 0.173 (95%CI 0.128, 0.219; p<0.001) during the collaborative; and the verification factor for number of pregnant women tested for syphilis improved from 0.472 (95%CI 0.390, 0.554; p<0.001) pre-intervention by 0.460 (95%CI 0.369, 0.552; p<0.001) during the collaborative. In Lemu Bilbilu, the verification factor for number of pregnant women receiving a fourth antenatal visit rose from 0.589 (95%CI 0.513, 0.664; p<0.001) at baseline by 0.358 (95%CI 0.258, 0.458; p<0.001) post-intervention; and skilled birth attendance rose from 0.917 (95%CI 0.869, 0.965) at baseline by 0.083 (95%CI 0.030, 0.136; p<0.001) during the collaborative. CONCLUSIONS:A data quality improvement initiative embedded within the woreda clinical improvement collaboratives improved the accuracy of data used to monitor maternal and newborn health services in Ethiopia.
Project description:Background: The Quality and Outcomes Framework (QOF), a major pay-for-performance programme, was introduced into United Kingdom primary care in April 2004. The impact of this programme on disparities in health care remains unclear. This study examines the following questions: has this pay for performance programme improved the quality of care for coronary heart disease, stroke and hypertension in white, black and south Asian patients? Has this programme reduced disparities in the quality of care between these ethnic groups? Did general practices with different baseline performance respond differently to this programme? Methodology/principal findings: Retrospective cohort study of patients registered with family practices in Wandsworth, London during 2007. Segmented regression analysis of interrupted time series was used to take into account the previous time trend. Primary outcome measures were mean systolic and diastolic blood pressure, and cholesterol levels. Our findings suggest that the implementation of QOF resulted in significant short term improvements in blood pressure control. The magnitude of benefit varied between ethnic groups, with a statistically significant short term reduction in systolic BP in white and black but not in south Asian patients with hypertension. Disparities in risk factor control were attenuated on only a few measures and largely remained intact at the end of the study period. Conclusions/significance: Pay for performance programmes such as the QOF in the UK should set challenging but achievable targets. Specific targets aimed at reducing ethnic disparities in health care may also be needed.
Project description:Objective: To assess the impact of a pay for performance incentive on quality of care and outcomes among UK patients with hypertension in primary care. Design: Interrupted time series. Setting: The Health Improvement Network (THIN) database, United Kingdom. Participants: 470 725 patients with hypertension diagnosed between January 2000 and August 2007. Intervention: The UK pay for performance incentive (the Quality and Outcomes Framework), which was implemented in April 2004 and included specific targets for general practitioners to show high quality care for patients with hypertension (and other diseases). Main outcome measures: Centiles of systolic and diastolic blood pressures over time, rates of blood pressure monitoring, blood pressure control, and treatment intensity at monthly intervals for baseline (48 months) and 36 months after the implementation of pay for performance. Cumulative incidence of major hypertension related outcomes and all cause mortality for subgroups of newly treated (treatment started six months before pay for performance) and treatment experienced (started treatment in year before January 2001) patients to examine different stages of illness. Results: After accounting for secular trends, no changes in blood pressure monitoring (level change 0.85, 95% confidence interval -3.04 to 4.74, P=0.669 and trend change -0.01, -0.24 to 0.21, P=0.615), control (-1.19, -2.06 to 1.09, P=0.109 and -0.01, -0.06 to 0.03, P=0.569), or treatment intensity (0.67, -1.27 to 2.81, P=0.412 and 0.02, -0.23 to 0.19, P=0.706) were attributable to pay for performance. Pay for performance had no effect on the cumulative incidence of stroke, myocardial infarction, renal failure, heart failure, or all cause mortality in both treatment experienced and newly treated subgroups. Conclusions: Good quality of care for hypertension was stable or improving before pay for performance was introduced. Pay for performance had no discernible effects on processes of care or on hypertension related clinical outcomes. Generous financial incentives, as designed in the UK pay for performance policy, may not be sufficient to improve quality of care and outcomes for hypertension and other common chronic conditions.
Project description:Canine coagulation factor VII (FVII) deficiency can be hereditary or acquired and may cause life-threatening bleeding episodes if untreated. FVII procoagulant activity can be measured as FVII activity (FVII:C), but assays for measurement of canine-specific FVII antigen (FVII:Ag) have not been available to date. In this study, a canine-specific ELISA for measurement of FVII:Ag in plasma was developed and validated. The FVII:Ag ELISA correctly diagnosed homozygous and heterozygous hereditary FVII deficiency. Together with activity-based assays, such as FVII:C, the FVII:Ag ELISA should be valuable in the diagnosis of hereditary canine FVII deficiency.
Project description:Background: The Interrupted Time Series (ITS) is a quasi-experimental design commonly used in public health to evaluate the impact of interventions or exposures. Multiple statistical methods are available to analyse data from ITS studies, but no empirical investigation has examined how the different methods compare when applied to real-world datasets. Methods: A random sample of 200 ITS studies identified in a previous methods review was included. Time series data from each of these studies were sought. Each dataset was re-analysed using six statistical methods. Point and confidence interval estimates for level and slope changes, standard errors, p-values and estimates of autocorrelation were compared between methods. Results: From the 200 ITS studies, including 230 time series, 190 datasets were obtained. We found that the choice of statistical method can importantly affect the level and slope change point estimates, their standard errors, the width of confidence intervals and p-values. Statistical significance (categorised at the 5% level) often differed across the pairwise comparisons of methods, ranging from 4 to 25% disagreement. Estimates of autocorrelation differed depending on the method used and the length of the series. Conclusions: The choice of statistical method in ITS studies can lead to substantially different conclusions about the impact of the interruption. Pre-specification of the statistical method is encouraged, and naive conclusions based on statistical significance should be avoided.
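The six re-analysis methods from the review are not reproduced here. As a simplified illustration of why the choice matters, the sketch below fits the same segmented ITS model twice: with plain OLS (independent errors) and with Newey-West autocorrelation-robust standard errors, which is one commonly used adjustment but not necessarily one of the six methods compared. All data are hypothetical.

```python
# One segmented ITS model, two inference choices: plain OLS vs Newey-West SEs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.arange(1, 49)
post = (t > 24).astype(float)
X = sm.add_constant(np.column_stack([t, post, post * (t - 24)]))
y = 10 + 0.2 * t - 3 * post - 0.1 * post * (t - 24) + rng.normal(0, 1, 48)

ols_fit = sm.OLS(y, X).fit()                                         # assumes independent errors
hac_fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})  # Newey-West standard errors
print(ols_fit.bse)   # standard errors under independence
print(hac_fit.bse)   # autocorrelation-robust standard errors (point estimates unchanged)
```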
Project description:Introduction: In March 2016, the Centers for Disease Control and Prevention issued opioid prescribing guidelines for chronic noncancer pain. In response, in April 2016, the North Carolina Medical Board launched the Safe Opioid Prescribing Initiative, an investigative program intended to limit the overprescribing of opioids. This study focuses on the association of the Safe Opioid Prescribing Initiative with immediate and sustained changes in opioid prescribing among all patients who received opioids, and with opioid discontinuation and tapering among patients who received high-dose (>90 milligrams of morphine equivalents), long-term (>90 days) opioid therapy. Methods: Controlled and single interrupted time series analysis of opioid prescribing outcomes before and after the implementation of the Safe Opioid Prescribing Initiative was conducted using deidentified data from the North Carolina Controlled Substances Reporting System from January 2010 through March 2017. Analysis was conducted in 2019-2020. Results: In an average study month, 513,717 patients, including 47,842 who received high-dose, long-term opioid therapy, received 660,912 opioid prescriptions (1.3 prescriptions per patient). There was a 0.52% absolute decline (95% CI= -0.87, -0.19) in patients receiving opioid prescriptions in the month after Safe Opioid Prescribing Initiative implementation. Abrupt discontinuation, rapid tapering, and gradual tapering of opioids among patients who received high-dose, long-term opioid therapy increased by 1% (95% CI= -0.22, 2.23), 2.2% (95% CI=0.91, 3.47), and 1.3% (95% CI=0.96, 1.57), respectively, in the month after Safe Opioid Prescribing Initiative implementation. Conclusions: Although Safe Opioid Prescribing Initiative implementation was associated with an immediate decline in overall opioid prescribing, it was also associated with an unintended immediate increase in discontinuations and rapid tapering among patients who received high-dose, long-term opioid therapy. Better policy communication and prescriber education regarding opioid tapering best practices may help mitigate unintended consequences of statewide policies.
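A controlled interrupted time series, in contrast to a single series, models an intervention series and a comparison series jointly, with group interaction terms capturing the differential level and slope changes. The sketch below shows one common specification of this kind; it is an assumption for illustration, not the study's model, and all data and variable names are invented.

```python
# Sketch of a common controlled ITS specification with hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
month = np.tile(np.arange(1, 37), 2)
group = np.repeat([1, 0], 36)                        # 1 = exposed series, 0 = comparison series
post = (month > 24).astype(int)                      # policy begins at month 24
time_since = np.clip(month - 24, 0, None)
rate = (5 + 0.02 * month
        - 0.5 * group * post                         # differential level change
        - 0.05 * group * time_since                  # differential trend change
        + rng.normal(0, 0.1, 72))
df = pd.DataFrame({"rate": rate, "month": month, "group": group,
                   "post": post, "time_since": time_since})

fit = smf.ols("rate ~ month * group + post * group + time_since * group", data=df).fit()
print(fit.params[["post:group", "time_since:group"]])  # differential level and slope changes
```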
Project description:Background: Detailed intervention descriptions and robust evaluations that test intervention impact, and explore reasons for impact, are an essential part of progressing implementation science. Time series designs enable the impact and sustainability of intervention effects to be tested. When combined with time series designs, qualitative methods can provide insight into intervention effectiveness and help identify areas for improvement for future interventions. This paper describes the development, delivery, and evaluation of a tailored intervention designed to increase primary health care professionals' adoption of a national recommendation that women with mild to moderate postnatal depression (PND) are referred for psychological therapy as a first-stage treatment. Methods: Three factors influencing referral for psychological treatment were targeted using three related intervention components: a tailored educational meeting, a tailored educational leaflet, and changes to an electronic system data template used by health professionals during consultations for PND. Evaluation comprised time series analysis of monthly audit data on percentage referral rates and monthly first prescription rates for anti-depressants. Interviews were conducted with a sample of health professionals to explore their perceptions of the intervention components and to identify possible factors influencing intervention effectiveness. Results: The intervention was associated with a significant, immediate, positive effect on percentage referral rates for psychological treatments. This effect was not sustained over the ten-month follow-on period. Monthly rates of anti-depressant prescriptions remained consistently high after the intervention. Qualitative interview findings suggest that the key messages received from the intervention concerned what constitutes appropriate antidepressant prescribing, which may underlie the lack of impact on prescribing rates. However, an understanding that psychological treatment can have long-term benefits was also cited. Barriers to referral identified before the intervention were cited again afterwards, suggesting the intervention had not successfully tackled the barriers it targeted. Conclusion: A time series design allowed the initial and sustained impact of our intervention to be tested. Combined with qualitative interviews, this provided insight into intervention effectiveness. Future research should test factors influencing intervention sustainability, and promote adoption of the targeted behavior and dis-adoption of competing behaviors where appropriate.
Project description:Background: Following studies reporting sub-optimal gout management, European (EULAR) and British (BSR) guidelines were updated to encourage the prescription of urate-lowering therapy (ULT) with a treat-to-target approach. We investigated whether ULT initiation and urate target attainment have improved following publication of these guidelines, and assessed predictors of these outcomes. Methods: We used the Clinical Practice Research Datalink to assess attainment of the following outcomes in people (n = 129,972) with index gout diagnoses in the UK from 2004-2020: i) initiation of ULT; ii) serum urate ≤360 µmol/L and ≤300 µmol/L; iii) treat-to-target urate monitoring. Interrupted time-series analyses were used to compare trends in outcomes before and after updated EULAR and BSR management guidelines, published in 2016 and 2017, respectively. Predictors of ULT initiation and urate target attainment were modelled using logistic regression and Cox proportional hazards. Findings: 37,529 (28.9%) of 129,972 people with newly diagnosed gout had ULT initiated within 12 months. ULT initiation improved modestly over the study period, from 26.8% for those diagnosed in 2004 to 36.6% in 2019 and 34.7% in 2020. Of people diagnosed in 2020 with a serum urate performed within 12 months, 17.1% attained a urate ≤300 µmol/L, while 36.0% attained a urate ≤360 µmol/L; 18.9% received treat-to-target urate monitoring. No significant improvements in ULT initiation or urate target attainment were observed after updated BSR or EULAR management guidance, relative to before. Comorbidities, including chronic kidney disease (CKD), heart failure and obesity, and diuretic use were associated with increased odds of ULT initiation but decreased odds of attaining urate targets within 12 months: CKD (adjusted OR 1.61 for ULT initiation, 95% CI 1.55 to 1.67; adjusted OR 0.51 for urate ≤300 µmol/L, 95% CI 0.48 to 0.55; both p < 0.001); heart failure (adjusted OR 1.56 for ULT initiation, 95% CI 1.48 to 1.64; adjusted OR 0.85 for urate ≤300 µmol/L, 95% CI 0.76 to 0.95; both p < 0.001); obesity (adjusted OR 1.32 for ULT initiation, 95% CI 1.29 to 1.36; adjusted OR 0.61 for urate ≤300 µmol/L, 95% CI 0.58 to 0.65; both p < 0.001); and diuretic use (adjusted OR 1.49 for ULT initiation, 95% CI 1.44 to 1.55; adjusted OR 0.61 for urate ≤300 µmol/L, 95% CI 0.57 to 0.66; both p < 0.001). Interpretation: Initiation of ULT and attainment of urate targets remain poor for people diagnosed with gout in the UK, despite updated management guidelines. If the evidence-practice gap in gout management is to be bridged, strategies to implement best-practice care are needed. Funding: National Institute for Health Research.
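The adjusted odds ratios above come from logistic regression on patient-level data. As a hedged illustration of how such estimates are produced, the sketch below simulates hypothetical binary predictors (with true effects chosen to echo the reported ORs) and fits a logistic model; it is not the study's code, and all variable names are invented. The Cox proportional hazards models for time-to-event outcomes follow the same general idea with a time-to-ULT outcome.

```python
# Hedged illustration: adjusted odds ratios for ULT initiation via logistic regression.
# Data are simulated; coefficients roughly equal log(1.61), log(1.56), log(1.32), log(1.49).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "ckd": rng.integers(0, 2, n),
    "heart_failure": rng.integers(0, 2, n),
    "obesity": rng.integers(0, 2, n),
    "diuretic": rng.integers(0, 2, n),
})
lin = (-1.0 + 0.48 * df["ckd"] + 0.44 * df["heart_failure"]
       + 0.28 * df["obesity"] + 0.40 * df["diuretic"])      # linear predictor (log-odds)
df["ult_initiated"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

fit = smf.logit("ult_initiated ~ ckd + heart_failure + obesity + diuretic", data=df).fit()
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```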