Project description: Background: A reliable screening tool that could contribute to the identification of women at increased risk of postpartum hemorrhage would be of great clinical significance. Objectives: The aim of this study was to examine the added predictive value of a bleeding assessment tool for postpartum hemorrhage exceeding 1000 mL. Patients/Methods: This was a prospective two-center cohort study among 1147 pregnant women visiting the outpatient clinic or the maternity ward who completed a bleeding assessment tool prior to birth. The condensed MCMDM-1VWD bleeding assessment tool was adapted into a questionnaire that could be used as a self-assessment bleeding tool. A score of ≥4 was considered abnormal. Results: In the 1147 pregnant women in our cohort, bleeding scores ranged from -3 to 13, with a median of 1 (IQR -1 to 3); 197 (17%) women developed postpartum hemorrhage. Among women with a history of postpartum hemorrhage, 29% developed postpartum hemorrhage. Among the 147 women with an abnormal bleeding score (≥4), 27 (18%) developed postpartum hemorrhage, whereas the remaining 170 cases of postpartum hemorrhage occurred in women with a normal bleeding score. Despite the high incidence of postpartum hemorrhage, the ability of the bleeding score to predict postpartum hemorrhage was poor: area under the receiver operating characteristic curve 0.53 (95% CI 0.49-0.58) for postpartum hemorrhage (PPH) ≥1000 mL. Conclusions: A history of significant postpartum hemorrhage was associated with an increased risk of subsequent postpartum hemorrhage. However, screening with a bleeding assessment tool did not discriminate women who will develop postpartum hemorrhage from those who will not.
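The headline AUC is the standard measure of a score's discrimination. As a minimal sketch of how such a figure and its confidence interval can be computed, assuming scikit-learn and purely synthetic data (the score range and ~17% event rate mirror the abstract, but nothing below is study data):

```python
# Sketch: discriminative ability of an integer bleeding score for a
# binary PPH outcome, with a bootstrapped 95% CI. Synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1147
score = rng.integers(-3, 14, size=n)          # bleeding score per woman
pph = rng.binomial(1, 0.17, size=n)           # 1 = PPH >= 1000 mL

auc = roc_auc_score(pph, score)

# Bootstrap a 95% CI for the AUC, as reported in the abstract.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    if pph[idx].min() == pph[idx].max():      # resample must contain both classes
        continue
    boot.append(roc_auc_score(pph[idx], score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An AUC near 0.5, as reported, means the score ranks women with and without PPH essentially at random.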
Project description: Background: This study aimed to analyse the role of the HAS-BLED score with the addition of genotype bins for bleeding risk prediction in warfarin-treated patients with atrial fibrillation (AF). Methods and Results: Consecutive patients with AF on initial warfarin treatment were recruited. For each patient, CYP2C9*3 and VKORC1-1639 A/G genotyping was performed to create three genotype functional bins. The predictive value of the HAS-BLED score with and without the addition of genotype bins was compared. According to carrier status of the genotype bins, the numbers of normal, sensitive, and highly sensitive responders among 526 patients were 64 (12.17%), 422 (80.23%), and 40 (7.60%), respectively. A highly sensitive response was independently associated with clinically relevant bleeding (HR: 3.85, 95% CI: 1.88-7.91, P=0.001) and major bleeding (HR: 3.75, 95% CI: 1.17-11.97, P=0.03). With the addition of genotype bins, the performance of the HAS-BLED score for bleeding risk prediction was significantly improved (c-statistic from 0.60 to 0.64 for clinically relevant bleeding and from 0.64 to 0.70 for major bleeding, P < 0.01). By integrated discrimination improvement, net reclassification improvement, and decision curve analysis, the HAS-BLED score plus genotype bins performed better in predicting any clinically relevant bleeding than the HAS-BLED score alone. Conclusions: Genotypes have incremental predictive value when combined with the HAS-BLED score for the prediction of clinically relevant bleeding in warfarin-treated patients with AF.
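The core comparison, adding a genotype covariate to a clinical score and contrasting c-statistics of the nested models, can be sketched as follows. Variable names and coefficients are illustrative assumptions, not the study's fitted model:

```python
# Sketch: c-statistic of a clinical score alone vs. the score plus a
# genotype bin, via logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 526
has_bled = rng.integers(0, 6, size=n)                  # HAS-BLED 0-5
genotype_bin = rng.choice([0, 1, 2], size=n, p=[0.12, 0.80, 0.08])
logit = -3.0 + 0.35 * has_bled + 0.8 * (genotype_bin == 2)
bleed = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # simulated bleeding

X_base = has_bled.reshape(-1, 1)
X_full = np.column_stack([has_bled, genotype_bin == 2])

for name, X in [("HAS-BLED alone", X_base), ("plus genotype bin", X_full)]:
    model = LogisticRegression().fit(X, bleed)
    p = model.predict_proba(X)[:, 1]
    print(name, f"c-statistic {roc_auc_score(bleed, p):.2f}")
```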
Project description: Background: Patients with atrial fibrillation (AF) treated with oral anticoagulants may be exposed to an increased risk of bleeding events. The HAS-BLED (Hypertension, Abnormal renal and liver function, Stroke, Bleeding, Labile INRs, Elderly, Drugs or alcohol) score is a simple, well-established clinical bleeding-risk prediction score. Recently, a new algorithm-based score was proposed: the GARFIELD-AF (Global Anticoagulant Registry in the FIELD-AF) bleeding score. We compared the HAS-BLED and GARFIELD-AF scores in predicting adjudicated bleeding events in a clinical trial cohort of patients with AF taking anticoagulants, in the first external comparative validation of both scores. Methods and Results: We analyzed patients from the SPORTIF (Stroke Prevention Using an Oral Thrombin Inhibitor in Patients With AF) III and V trials. All patients assigned to the warfarin arm with the information needed to calculate the scores were considered. Outcomes were major, major/clinically relevant nonmajor, and any bleeding. A total of 3550 warfarin-treated patients were available for analysis. Of these patients, 2519 (71.0%) had a HAS-BLED score ≥3, whereas based on the GARFIELD-AF median value, 2056 (57.9%) were categorized as "high score." Both HAS-BLED and GARFIELD-AF C-indexes showed modest predictive value (C-index [95% confidence interval] for major bleeding, 0.58 [0.56-0.60] and 0.56 [0.54-0.57], respectively); however, GARFIELD-AF was not predictive of any bleeding. The GARFIELD-AF bleeding score had significantly lower sensitivity and a negative reclassification for any bleeding compared with HAS-BLED, as assessed by integrated discrimination improvement and net reclassification improvement (both P<0.001). HAS-BLED showed a 5% net benefit for any-bleeding occurrence. Conclusions: The algorithm-based GARFIELD-AF bleeding score did not show any significant improvement in prediction of major and major/clinically relevant nonmajor bleeding compared with the simple HAS-BLED score. For clinical usefulness in prediction of any bleeding, the HAS-BLED score showed a significant net benefit compared with the GARFIELD-AF score.
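The "net benefit" cited for HAS-BLED comes from decision curve analysis. A minimal sketch of the underlying calculation, on synthetic data with an illustrative threshold grid (nothing here is SPORTIF data):

```python
# Sketch: net benefit of acting on predicted risks at a given threshold,
# the quantity plotted in a decision curve.
import numpy as np

def net_benefit(y, p, threshold):
    """Net benefit = TP/n - FP/n * t/(1-t) at risk threshold t."""
    treat = p >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    n = len(y)
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.2, size=3550)               # any-bleeding outcome
p = np.clip(0.2 + 0.1 * rng.standard_normal(3550) + 0.15 * y, 0.01, 0.99)

for t in (0.1, 0.2, 0.3):
    print(f"threshold {t:.1f}: net benefit {net_benefit(y, p, t):.3f}")
```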
Project description: Early non-invasive detection and prediction of graft function after kidney transplantation are essential, since timely interventions might prevent further deterioration. The aim of this study was to analyze the dynamics and predictive value of four urinary biomarkers: kidney injury molecule-1 (KIM-1), heart-type fatty acid binding protein (H-FABP), N-acetyl-β-D-glucosaminidase (NAG), and neutrophil gelatinase-associated lipocalin (NGAL), in a living donor kidney transplantation (LDKT) cohort. Biomarkers were measured up to 9 days after transplantation in 57 recipients participating in the VAPOR-1 trial. The dynamics of KIM-1, NAG, NGAL, and H-FABP changed significantly over the course of 9 days after transplantation. KIM-1 at day 1 and NAG at day 2 after transplantation were significant positive predictors of the estimated glomerular filtration rate (eGFR) at various timepoints after transplantation (p < 0.05), whereas NGAL and NAG at day 1 after transplantation were significant negative predictors (p < 0.05). Multivariable models for eGFR outcome improved after the addition of these biomarker levels. Several donor, recipient, and transplantation factors significantly affected the baseline levels of the urinary biomarkers. In conclusion, urinary biomarkers are of added value for the prediction of graft outcome, but influencing factors such as the timing of measurement and transplantation factors need to be considered.
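One way to read "multivariable models improved after the addition of biomarker levels" is as a nested-model comparison. A minimal sketch with statsmodels on synthetic data; variable names and coefficients are illustrative assumptions, not VAPOR-1's model:

```python
# Sketch: does adding early urinary biomarkers to a baseline covariate
# improve a multivariable eGFR model? Compared via adjusted R^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 57
donor_age = rng.normal(50, 10, n)
kim1_d1 = rng.lognormal(0, 0.5, n)               # KIM-1 at day 1 (synthetic)
ngal_d1 = rng.lognormal(1, 0.5, n)               # NGAL at day 1 (synthetic)
egfr = (60 - 0.3 * donor_age + 4 * np.log(kim1_d1)
        - 3 * np.log(ngal_d1) + rng.normal(0, 8, n))

base = sm.OLS(egfr, sm.add_constant(np.column_stack([donor_age]))).fit()
full = sm.OLS(egfr, sm.add_constant(
    np.column_stack([donor_age, np.log(kim1_d1), np.log(ngal_d1)]))).fit()
print(f"adjusted R2: base {base.rsquared_adj:.2f}, "
      f"with biomarkers {full.rsquared_adj:.2f}")
```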
Project description: Introduction: VTE-BLEED is a validated score for identification of patients at increased risk of major bleeding during extended anticoagulation for venous thromboembolism (VTE). It is unknown whether VTE-BLEED high-risk patients also have an increased risk of recurrent VTE, which would limit the potential usefulness of the score. Methods: This was a post hoc analysis of the randomized, double-blind, placebo-controlled PADIS-PE trial, which randomized patients with a first unprovoked pulmonary embolism (PE) initially treated for 6 months to receive an additional 18 months of warfarin vs. placebo. The primary outcome of this analysis was recurrent VTE during 2-year follow-up after anticoagulant discontinuation, that is, after the initial 6-month treatment in the placebo arm and after 24 months of anticoagulation in the active treatment arm. This rate, adjusted for study treatment allocation, was compared between patients in the high- vs. low-risk VTE-BLEED groups. Results: In complete case analysis (n = 308; 82.4% of the total population), 89 (28.9%) patients were classified as high risk; 44 VTE events occurred after anticoagulant discontinuation during 668 patient-years. The cumulative incidence of recurrent VTE was 16.4% (95% confidence interval [CI], 10.0%-26.1%; 14 events) and 14.6% (95% CI, 10.4%-20.3%; 30 events) in the high-risk and low-risk VTE-BLEED groups, respectively, for an adjusted hazard ratio of 1.16 (95% CI, 0.62-2.19). Conclusion: In this study, patients with unprovoked PE classified as being at high risk of major bleeding by VTE-BLEED did not have a higher incidence of recurrent VTE after cessation of anticoagulant therapy, supporting the potential yield of the score for making management decisions on the optimal duration of anticoagulant therapy.
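The adjusted hazard ratio reported here is the kind of estimate a Cox model produces with treatment allocation as a covariate. A minimal sketch with lifelines on simulated data; column names, event rates, and effect sizes are assumptions:

```python
# Sketch: recurrence hazard in score-defined high- vs. low-risk groups,
# adjusted for randomized treatment arm, via a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 308
high_risk = rng.binomial(1, 0.29, n)          # VTE-BLEED high-risk flag
extended_tx = rng.binomial(1, 0.5, n)         # randomized treatment arm

# Exponential event times with a small high-risk effect; censor at 2 years.
hazard = 0.08 * np.exp(0.15 * high_risk - 0.5 * extended_tx)
t = rng.exponential(1.0 / hazard)
df = pd.DataFrame({
    "high_risk": high_risk,
    "extended_tx": extended_tx,
    "duration": np.minimum(t, 2.0),
    "event": (t <= 2.0).astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)                     # adjusted HR per covariate
```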
Project description: Donor organ biomarkers with sufficient predictive value in liver transplantation (LT) are lacking. We herein evaluate liver viability and mitochondrial bioenergetics for their capacity to predict outcome in LT. We enrolled 43 consecutive patients undergoing LT. Liver biopsy samples taken upon arrival after static cold storage were assessed by histology, real-time confocal imaging analysis (RTCA), and high-resolution respirometry (HRR) for mitochondrial respiration of tissue homogenates. Early allograft dysfunction (EAD) served as the primary endpoint; grafts without EAD were classified as having immediate function (IF). HRR data were analysed with a focus on the efficacy of ATP production, or P-L control efficiency, calculated as 1 - L/P from the capacity of oxidative phosphorylation P and non-phosphorylating (LEAK) respiration L. Twenty-two recipients experienced EAD. Pre-transplant histology was not predictive of EAD. The mean RTCA score was significantly lower in the EAD cohort (-0.75 ± 2.27) than in the IF cohort (0.70 ± 2.08; p = 0.01), indicating decreased cell viability. P-L control efficiency was predictive of EAD (0.76 ± 0.06 in IF vs. 0.70 ± 0.08 in EAD livers; p = 0.02) and correlated with the RTCA score. Both RTCA and P-L control efficiency in biopsy samples taken during cold storage have predictive capacity for outcome in LT. Therefore, RTCA and HRR should be considered for risk stratification, viability assessment, and bioenergetic testing in liver transplantation.
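The P-L control efficiency is a one-line computation once P and L are measured. A minimal sketch, with made-up respirometry values chosen only to reproduce the group means above:

```python
# Sketch: P-L control efficiency = 1 - L/P, where P is OXPHOS capacity
# and L is LEAK (non-phosphorylating) respiration. Values are invented.
def pl_control_efficiency(p_capacity: float, leak: float) -> float:
    """Fraction of OXPHOS capacity coupled to ATP production."""
    if p_capacity <= 0:
        raise ValueError("P capacity must be positive")
    return 1.0 - leak / p_capacity

# Illustrative values in pmol O2 / (s * mg tissue):
print(pl_control_efficiency(p_capacity=100.0, leak=24.0))  # 0.76, ~IF group
print(pl_control_efficiency(p_capacity=100.0, leak=30.0))  # 0.70, ~EAD group
```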
Project description: Background: Diastolic dysfunction (DD), one of the earliest signs of cirrhotic cardiomyopathy (CCM), is included in the revised 2019 CCM criteria. Nonetheless, research regarding the effect of revised-criteria DD on post-liver transplantation (LT) outcomes remains limited. Methods: This retrospective study enrolled patients who underwent LT for decompensated cirrhosis from January 2018 to March 2021. Patients were divided into DD and non-DD groups, and clinical data were collected. Patients were followed up for at least 1 year post-LT; cardiovascular adverse events (AEs) and survival status were recorded. Risk factors were identified using 1:2 propensity score matching (PSM) after adjusting for confounding factors; the caliper value was set to 0.02. Results: Of 231 patients, 153 were diagnosed with DD (male, 81.8%; mean age, 51.5 ± 9.5 years). Nineteen patients with DD died within 1 year post-LT. After PSM, 97 and 60 patients were diagnosed with and without DD, respectively. Patients with DD had longer intensive care unit (ICU) stays, more perioperative cardiovascular AEs, and higher mortality rates than those without DD. In multivariate analysis, interventricular septum (IVS) thickness, left atrial volume index (LAVI), and potassium levels were independent prognostic factors for perioperative cardiovascular AEs, while decreased early diastolic mitral annular tissue velocity (e'), an increased neutrophil-to-lymphocyte ratio (NLR), and tumor markers were predictors of mortality within 1 year post-LT after PSM (P < 0.05). Conclusion: Cardiac DD may contribute to perioperative cardiovascular AEs and mortality post-LT. Clinicians should be alert to DD in patients with decompensated cirrhosis.
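A minimal sketch of 1:2 propensity score matching with a 0.02 caliper, the design described above, using scikit-learn for the propensity model and a greedy nearest-neighbor match. The confounders and coefficients are illustrative, not the study's:

```python
# Sketch: 1:2 PSM with a caliper of 0.02 on the propensity score scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 231
age = rng.normal(51, 9, n)
meld = rng.normal(18, 6, n)
dd = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.08 * age))), n)  # "treated" = DD

# 1) Propensity score: P(DD | confounders).
X = np.column_stack([age, meld])
ps = LogisticRegression().fit(X, dd).predict_proba(X)[:, 1]

# 2) Greedy 1:2 nearest-neighbor matching within the caliper.
caliper = 0.02
treated, controls = np.where(dd == 1)[0], list(np.where(dd == 0)[0])
matches = {}
for i in treated:
    nearest = sorted(controls, key=lambda j: abs(ps[i] - ps[j]))
    picked = [j for j in nearest[:2] if abs(ps[i] - ps[j]) <= caliper]
    for j in picked:
        controls.remove(j)                    # match without replacement
    if picked:
        matches[i] = picked
print(f"matched {len(matches)} DD patients to "
      f"{sum(len(v) for v in matches.values())} controls")
```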
Project description: Background: Early allograft dysfunction (EAD) is correlated with poor patient and graft survival in liver transplantation. However, the power of distinct definitions of EAD in predicting graft survival is unclear. Methods: This retrospective, single-center study reviewed data from 677 recipients undergoing orthotopic liver transplantation between July 2015 and June 2020. The following EAD definitions were compared: the liver graft assessment following transplantation (L-GrAFT) risk score model, the early allograft failure simplified estimation score (EASE), the model for early allograft function (MEAF) score, and the Olthoff criteria. Risk factors for the L-GrAFT7 high-risk group were evaluated with univariate and multivariable logistic regression analysis. Results: L-GrAFT7 had a satisfactory C-statistic of 0.87 for predicting 3-month graft survival, significantly outperforming MEAF (C-statistic = 0.78, P = 0.01) and EAD (C-statistic = 0.75, P < 0.001). L-GrAFT10 and EASE were similar to L-GrAFT7, with no statistically significant differences in predicting survival. Laboratory model for end-stage liver disease (MELD) score and cold ischemia time were risk factors for the L-GrAFT7 high-risk group. Conclusion: The L-GrAFT7 risk score predicts 3-month graft survival better than MEAF and EAD in a Chinese cohort, and might standardize the assessment of early graft function and serve as a surrogate endpoint in clinical trials.
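The risk factor analysis for the L-GrAFT7 high-risk group is a multivariable logistic regression yielding odds ratios and p-values. A minimal sketch with statsmodels on synthetic data; predictor names follow the abstract, but all values and coefficients are invented:

```python
# Sketch: multivariable logistic regression screening risk factors for a
# binary "L-GrAFT7 high-risk" label.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 677
df = pd.DataFrame({
    "lab_meld": rng.normal(20, 8, n),            # laboratory MELD score
    "cit_hours": rng.normal(8, 2, n),            # cold ischemia time (h)
})
logit = -6 + 0.15 * df["lab_meld"] + 0.25 * df["cit_hours"]
df["high_risk"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("high_risk ~ lab_meld + cit_hours", data=df).fit(disp=0)
print(np.exp(fit.params))                        # odds ratios
print(fit.pvalues)
```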
Project description: Background: Patients with hepatocellular carcinoma (HCC) tend to be referred for liver transplantation (LT) at an early stage of cirrhosis, with lower pre-LT Model for End-Stage Liver Disease (MELD) scores. We investigated the impact of high MELD scores on post-LT outcomes in patients with HCC and validated the prognostic significance of the neutrophil-to-lymphocyte ratio (NLR). Patients and Methods: This retrospective single-center cohort study enrolled 230 patients with HCC who underwent living donor liver transplantation (LDLT) from 2004 to 2019 in our institute. We defined a high MELD score as ≥20. Results: The MELD < 20 and MELD ≥ 20 groups comprised 205 and 25 cases, respectively. Although there was no significant difference in disease-free survival between the two groups (p = 0.629), the incidence of septic shock (p = 0.019) was significantly higher in the high-MELD group. The one-, three-, and five-year overall survival rates were not significantly different between the two groups (p = 0.056). In univariate analysis, a high pre-LT NLR was associated with poorer survival in the high-MELD group (p = 0.029, hazard ratio [HR]: 1.07, 90% confidence interval [CI]: 1.02-1.13). An NLR cut-off of 10.7 (≥10.7 vs. <10.7) was predictive of mortality, with an AUC of 0.705 (90% CI: 0.532-0.879). The one-, three-, and five-year post-LT survival rates were significantly higher among recipients with an NLR < 10.7 than among those with an NLR ≥ 10.7 (p = 0.005). Conclusions: A pre-LT MELD score ≥ 20 was associated with a higher risk of post-LT septic shock and mortality. The pre-LT serum NLR is a useful predictor of clinical outcomes in patients with HCC and high MELD scores.
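A cut-off such as NLR 10.7 is typically derived from the ROC curve, for example by maximizing the Youden index. A minimal sketch on synthetic data, assuming scikit-learn; the simulated NLR distribution is an illustrative assumption:

```python
# Sketch: choosing a biomarker cut-off by maximizing the Youden index
# (sensitivity + specificity - 1) along the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(6)
n = 230
died = rng.binomial(1, 0.15, n)                  # simulated mortality
nlr = rng.lognormal(1.6 + 0.6 * died, 0.6)       # higher NLR among deaths

fpr, tpr, thresholds = roc_curve(died, nlr)
best = np.argmax(tpr - fpr)                      # Youden index J
print(f"AUC {roc_auc_score(died, nlr):.3f}, "
      f"optimal cut-off NLR >= {thresholds[best]:.1f}")
```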