Project description: Socioeconomically deprived individuals with renal disease are less likely to receive a live-donor kidney transplant than less-deprived individuals. This qualitative study aimed to identify reasons for the observed socioeconomic disparity in live-donor kidney transplantation. A qualitative study using face-to-face in-depth semistructured interviews. A UK tertiary renal referral hospital and transplant centre. Purposive sampling was used to select deceased-donor transplant recipients from areas of high socioeconomic deprivation (SED) (19 participants), followed by a low SED comparison group (13 participants), aiming for maximum diversity in terms of age, gender, ethnicity, primary renal disease and previous renal replacement therapy. Participants were interviewed following their routine transplant clinic review. Interviews were digitally audio-recorded and transcribed verbatim. Transcripts were coded using NVivo software and analysed using the constant comparison method described in Grounded Theory. Themes common and distinct to each socioeconomic group emerged. Six themes appeared to distinguish between individuals from areas of high and low SED. Four themes were distinct to participants from areas of high SED: (1) Passivity, (2) Disempowerment, (3) Lack of social support and (4) Short-term focus. Two themes were distinct to the low SED group: (1) Financial concerns and (2) Location of donor. Several of the emerging themes from the high SED individuals relate to an individual's lack of confidence and skill in managing their health and healthcare; themes that are in keeping with low levels of patient activation. Inadequate empowerment of socioeconomically deprived individuals by healthcare practitioners was also described. Financial concerns did not emerge as a barrier from interviews with the high SED group. Interventions aiming to redress the observed socioeconomic inequity should be targeted at both patients and clinical teams to increase empowerment and ensure shared decision-making.
Project description: Introduction: Much of the higher risk for end-stage kidney disease (ESKD) in African American individuals relates to ancestry-specific variation in the apolipoprotein L1 gene (APOL1). Relative to kidneys from European American deceased donors, kidneys from African American deceased donors have shorter allograft survival, and African American living kidney donors more often develop ESKD. The National Institutes of Health (NIH)-sponsored APOL1 Long-term Kidney Transplantation Outcomes Network (APOLLO) is prospectively assessing kidney allograft survival from donors with recent African ancestry based on donor and recipient APOL1 genotypes. Methods: APOLLO will evaluate outcomes from 2614 deceased kidney donor-recipient pairs, as well as additional living-kidney donor-recipient pairs and unpaired deceased-donor kidneys. Results: The United Network for Organ Sharing (UNOS), Association of Organ Procurement Organizations, American Society of Transplantation, American Society for Histocompatibility and Immunogenetics, and nearly all U.S. kidney transplant programs, organ procurement organizations (OPOs), and histocompatibility laboratories are participating in this observational study. APOLLO employs a central institutional review board (cIRB) and maintains voluntary partnerships with OPOs and histocompatibility laboratories. A Community Advisory Council composed of African American individuals with a personal or family history of kidney disease has advised the NIH Project Office and Steering Committee since inception. UNOS is providing data for outcome analyses. Conclusion: This article describes unique aspects of the protocol, design, and performance of APOLLO. Results will guide use of APOL1 genotypic data to improve the assessment of quality in deceased-donor kidneys and could increase numbers of transplanted kidneys, reduce rates of discard, and improve the safety of living kidney donation.
Project description: Two apolipoprotein L1 gene (APOL1) renal-risk variants in donors and African American (AA) recipient race are associated with worse allograft survival in deceased-donor kidney transplantation (DDKT) from AA donors. To detect other factors impacting allograft survival from deceased AA kidney donors, APOL1 renal-risk variants were genotyped in additional AA kidney donors. The APOL1 genotypes were linked to outcomes in 478 newly analyzed DDKTs in the Scientific Registry of Transplant Recipients. Multivariate analyses accounting for recipient age, sex, race, panel-reactive antibody level, HLA match, cold ischemia time, donor age, and expanded criteria donation were performed. These 478 transplantations and 675 DDKTs from a prior report were jointly analyzed. Fully adjusted analyses limited to the new 478 DDKTs replicated shorter renal allograft survival in recipients of APOL1 2-renal-risk-variant kidneys (hazard ratio [HR], 2.00; P = 0.03). Combined analysis of 1153 DDKTs from AA donors revealed that donor APOL1 high-risk genotype (HR, 2.05; P = 3 × 10), older donor age (HR, 1.18; P = 0.05), and younger recipient age (HR, 0.70; P = 0.001) adversely impacted allograft survival. Although prolonged allograft survival was seen in many recipients of APOL1 2-renal-risk-variant kidneys, follow-up serum creatinine concentrations were higher than those in recipients of 0/1 APOL1 renal-risk-variant kidneys. A competing risk analysis revealed that APOL1 impacted renal allograft survival, but not recipient survival. Interactions between donor age and APOL1 genotype on renal allograft survival were nonsignificant. Shorter renal allograft survival is reproducibly observed after DDKT from APOL1 2-renal-risk-variant donors. Younger recipient age and older donor age have independent adverse effects on renal allograft survival.
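The adjusted survival analysis described above can be sketched with a Cox proportional hazards model. The snippet below is a minimal illustration using Python's lifelines library on synthetic data; the column names (apol1_high_risk, donor_age, and so on) are hypothetical stand-ins for the registry variables, and the simulated effect sizes are chosen only to mimic the direction of the reported findings, not to reproduce them.

```python
# Sketch of an adjusted allograft-survival model, using lifelines on
# synthetic data. Variable names are illustrative placeholders for the
# Scientific Registry of Transplant Recipients fields.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "apol1_high_risk": rng.integers(0, 2, n),   # donor carries 2 renal-risk variants
    "donor_age":       rng.normal(45, 12, n),
    "recipient_age":   rng.normal(50, 14, n),
    "cold_ischemia_h": rng.normal(20, 6, n),
})
# Simulate graft survival with a built-in APOL1 effect (exp(0.7) ~ HR of 2)
hazard = 0.05 * np.exp(0.7 * df["apol1_high_risk"] + 0.01 * (df["donor_age"] - 45))
df["graft_years"] = rng.exponential(1 / hazard)
df["graft_loss"] = (df["graft_years"] < 10).astype(int)   # censor at 10 years
df["graft_years"] = df["graft_years"].clip(upper=10)

cph = CoxPHFitter()
cph.fit(df, duration_col="graft_years", event_col="graft_loss")
# Hazard ratios with 95% CIs and p-values, one row per covariate
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```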
Project description: Objective and background: Hyperspectral imaging (HSI) is an innovative, noninvasive technique that assesses tissue and organ perfusion and oxygenation. This study aimed to evaluate HSI as a predictive tool for early postoperative graft function and long-term outcomes in living donor (LD) and deceased donor (DD) kidney transplantation (KT). Patients and methods: HSI of kidney allograft parenchyma from 19 LD and 51 DD kidneys was obtained intraoperatively 15 minutes after reperfusion. Using the dedicated HSI TIVITA Tissue System, indices of tissue oxygenation (StO2), perfusion (near-infrared [NIR]), organ hemoglobin (OHI), and tissue water (TWI) were calculated and then analyzed retrospectively. Results: LD kidneys had superior intraoperative HSI values of StO2 (0.78 ± 0.13 versus 0.63 ± 0.24; P = 0.001) and NIR (0.67 ± 0.10 versus 0.56 ± 0.27; P = 0.016) compared to DD kidneys. Delayed graft function (DGF) was observed in 18 cases (26%), in which intraoperative HSI showed significantly lower values of StO2 (0.78 ± 0.07 versus 0.35 ± 0.21; P < 0.001) and NIR (0.67 ± 0.11 versus 0.34 ± 0.32; P < 0.001). Receiver operating characteristic curve analysis demonstrated an excellent predictive value of HSI for the development of DGF, with an area under the curve of 0.967 for StO2 and 0.801 for NIR. Kidney grafts with low StO2 values (cut-off point 0.6) showed reduced renal function, with a low glomerular filtration rate and elevated urea levels in the first two weeks after KT. Three years after KT, graft survival was also inferior in the group with initially low StO2 values. Conclusion: HSI is a useful tool for predicting DGF in living and deceased donor KT and may assist in estimating short-term allograft function. However, further studies with expanded cohorts are needed to evaluate the association between HSI and long-term graft outcomes.
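As a rough illustration of the receiver operating characteristic analysis above, the sketch below computes an AUC and a Youden-index cut-off for StO2 with scikit-learn. The data are synthetic, loosely shaped to the group means reported in this abstract, and are not the study's measurements.

```python
# ROC sketch: evaluating StO2 as a predictor of delayed graft function (DGF).
# Synthetic data only; in the study, StO2 came from intraoperative
# hyperspectral imaging.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 70
dgf = (rng.random(n) < 0.26).astype(int)            # ~26% DGF, as reported
# Lower oxygenation in DGF kidneys, mirroring the reported group means
sto2 = np.where(dgf == 1, rng.normal(0.35, 0.15, n), rng.normal(0.78, 0.08, n))
sto2 = np.clip(sto2, 0.0, 1.0)

# ROC conventions expect higher scores for the positive class, so score -StO2
auc = roc_auc_score(dgf, -sto2)
fpr, tpr, thresholds = roc_curve(dgf, -sto2)
youden_cutoff = -thresholds[np.argmax(tpr - fpr)]   # maximizes sensitivity + specificity
print(f"AUC = {auc:.3f}, optimal StO2 cut-off = {youden_cutoff:.2f}")
```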
Project description: Importance: In the US, live donor (LD) kidney transplant rates have decreased in pediatric recipients. Pediatric patients with kidney failure will likely need more than 1 kidney transplant during their lifetime, but the optimal sequence of transplants (ie, deceased donor [DD] followed by LD, or vice versa) is not known. Objective: To determine whether pediatric recipients should first receive a DD allograft followed by an LD allograft (DD-LD sequence) or an LD allograft followed by a DD allograft (LD-DD sequence). Design, setting, and participants: This decision analytical model examined US pediatric patients with kidney failure included in the US Renal Data System 2019 Report who were waiting for a kidney transplant, received a transplant, or experienced graft failure. Interventions: Kidney transplant sequences of LD-DD vs DD-LD. Main outcomes and measures: Difference in projected life-years between the 2 sequence options. Results: Among patients included in the analysis, the LD-DD sequence provided more net life-years in those 5 years of age (1.82 [95% CI, 0.87-2.77]) and 20 years of age (2.23 [95% CI, 1.31-3.15]) compared with the DD-LD sequence. The net outcomes in patients 10 years of age (0.36 [95% CI, -0.51 to 1.23] additional life-years) and 15 years of age (0.64 [95% CI, -0.15 to 1.39] additional life-years) were not significantly different. However, for those aged 10 years, an LD-DD sequence was favored if eligibility for a second transplant was low (2.09 [95% CI, 1.20-2.98] additional life-years) or if the LD was no longer available (2.32 [95% CI, 1.52-3.12] additional life-years). For those aged 15 years, the LD-DD sequence was favored if eligibility for a second transplant was low (1.84 [95% CI, 0.96-2.72] additional life-years) or if the LD was no longer available (2.49 [95% CI, 1.77-3.27] additional life-years). Access to multiple DD transplants did not compensate for missing the LD opportunity. Conclusions and relevance: These findings suggest that the decreased use of LD kidney transplants in pediatric recipients during the past 2 decades should be scrutinized. Given the uncertainty of future recipient eligibility for retransplant and the future availability of an LD transplant, the LD-DD sequence is likely the better option. This strategy of an LD transplant first would not only benefit pediatric recipients but also allow DD kidneys to be used by others who do not have an LD option.
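To make the sequencing intuition concrete, here is a toy expected life-years calculation; it is not the authors' actual decision analytical model, and every number in it is a hypothetical placeholder. The point it illustrates is that when eligibility for a second transplant is uncertain, placing the longer-lasting LD graft first protects against losing its benefit entirely.

```python
# Toy two-phase life-years comparison for transplant sequencing.
# All graft-survival inputs are invented placeholders, not model inputs.
def sequence_life_years(first_graft_years, second_graft_years,
                        p_second_transplant, dialysis_years_if_none):
    """Expected life-years: first graft, then a second transplant with
    probability p_second_transplant, else remaining years on dialysis."""
    return first_graft_years + (
        p_second_transplant * second_graft_years
        + (1 - p_second_transplant) * dialysis_years_if_none
    )

# Hypothetical inputs: LD grafts last longer than DD grafts on average
ld_dd = sequence_life_years(first_graft_years=20, second_graft_years=12,
                            p_second_transplant=0.8, dialysis_years_if_none=5)
dd_ld = sequence_life_years(first_graft_years=12, second_graft_years=20,
                            p_second_transplant=0.8, dialysis_years_if_none=5)
print(f"LD-DD: {ld_dd:.1f} y, DD-LD: {dd_ld:.1f} y, difference: {ld_dd - dd_ld:+.1f} y")
```

Lowering p_second_transplant widens the gap in favor of LD-DD, which mirrors the finding above that the LD-first sequence is favored when eligibility for retransplant is low.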
Project description: Children receive priority in the allocation of deceased donor kidneys for transplantation in the United States, but because allocation begins locally, geographic differences in population and organ supply may enable variation in pediatric access to transplantation. We assembled a cohort of 3764 individual listings for pediatric kidney transplantation in 2005-2010. For each donor service area, we assigned a category of short (<180 days), medium (181-270 days), or long (>270 days) median waiting time and calculated the ratio of pediatric-quality kidneys to pediatric candidates and the percentage of these kidneys locally diverted to adults. We used multivariable Cox regression analyses to examine the association between donor service area characteristics and time to deceased donor kidney transplantation. The Kaplan-Meier estimate of median waiting time to transplantation was 284 days (95% confidence interval, 263 to 300 days) and varied from 14 to 1313 days across donor service areas. Overall, 29% of pediatric-quality kidneys were locally diverted to adults. Compared with areas with short waiting times, areas with long waiting times had a lower ratio of pediatric-quality kidneys to candidates (3.1 versus 5.9; P<0.001) and more diversions to adults (31% versus 27%; P<0.001). In multivariable regression, a lower kidney-to-candidate ratio remained associated with longer waiting time (hazard ratio, 0.56 for areas with <2:1 versus reference areas with ≥5:1 kidneys/candidates; P<0.01). Large geographic variation in waiting time for pediatric deceased donor kidney transplantation exists and is highly associated with local supply and demand factors. Future organ allocation policy should address this geographic inequity.
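The median waiting time reported above comes from a Kaplan-Meier analysis. A minimal sketch with the lifelines library follows, using synthetic waiting times in which candidates not yet transplanted are treated as censored; nothing here reflects the study's cohort.

```python
# Kaplan-Meier sketch of median waiting time to transplantation.
# Synthetic waiting times; candidates still waiting count as censored.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import median_survival_times

rng = np.random.default_rng(2)
wait_days = rng.exponential(400, 300)        # days from listing
transplanted = rng.random(300) < 0.8         # False = still waiting (censored)

kmf = KaplanMeierFitter()
kmf.fit(wait_days, event_observed=transplanted)
median = kmf.median_survival_time_
ci = median_survival_times(kmf.confidence_interval_)   # 95% CI for the median
print(f"Median waiting time: {median:.0f} days")
print(ci)
```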
Project description: Introduction: Kidney transplantation (KT) remains the treatment of choice for end-stage kidney disease (ESKD), but access to transplantation is limited by a disparity between supply and demand for suitable organs. This organ shortfall has resulted in the use of a wider range of donor kidneys and, in parallel, a reexamination of potential alternative renal replacement therapies. Previous studies comparing Canadian intensive home hemodialysis (IHHD) with deceased donor (DD) KT in the United States reported similar survival, suggesting IHHD might be a plausible alternative. Methods: Using data from the Scientific Registry of Transplant Recipients and an experienced US-based IHHD program in Lynchburg, VA, we retrospectively compared mortality outcomes of a cohort of IHHD patients with those of transplant recipients within the same geographic region between October 1997 and June 2014. Results: We identified 3073 transplant recipients and 116 IHHD patients. Living donor KT (n = 1212) had the highest survival, with a 47% reduction in the risk of death compared with IHHD (hazard ratio [HR]: 0.53; 95% confidence interval [CI]: 0.34-0.83). Survival of IHHD patients did not statistically differ from that of DD transplant recipients (n = 1834) in adjusted analyses (HR: 0.96; 95% CI: 0.62-1.48) or when exclusively compared with marginal (Kidney Donor Profile Index >85%) transplant recipients (HR: 1.35; 95% CI: 0.84-2.16). Conclusion: Our study showed comparable overall survival between IHHD and DD KT. For appropriate patients, IHHD could serve as a bridging therapy to transplant and a tenable long-term renal replacement therapy.
Project description: Background: Anatomic abnormalities increase the risk of deceased donor kidney discard, but their effect on transplant outcomes is understudied. We sought to determine the effect of multiple donor renal arteries on early outcomes after deceased donor kidney transplantation. Methods: For this retrospective cohort study, we identified 1443 kidneys from 832 deceased donors with ≥1 kidney transplanted at our center (2006-2016). We compared the odds of delayed graft function and 90-day graft failure using logistic regression. To reduce potential selection bias, we then repeated the analysis using a paired-kidney cohort, including kidney pairs from 162 donors with one single-artery kidney and one multiartery kidney. Results: Of 1443 kidneys included, 319 (22%) had multiple arteries. Multiartery kidneys experienced longer cold ischemia time, but other characteristics were similar between groups. Delayed graft function (50% multiartery versus 45% one artery, P=0.07) and 90-day graft failure (3% versus 3%, P=0.83) were similar between groups before and after adjusting for donor and recipient characteristics. In the paired-kidney analysis, cold ischemia time was significantly longer for multiartery kidneys compared with single-artery kidneys from the same donor (33.5 versus 26.1 hours, P<0.001), but delayed graft function and 90-day graft failure were again similar between groups. Conclusions: Compared with single-artery deceased donor kidneys, those with multiple renal arteries are harder to place but experience similar rates of delayed graft function and early graft failure.
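The logistic-regression comparison described in the methods can be sketched as follows with statsmodels; the data and variable names here are invented for illustration and do not reflect the center's cohort or its adjustment set.

```python
# Logistic-regression sketch: odds of delayed graft function (DGF) for
# multi-artery versus single-artery kidneys, adjusted for illustrative
# covariates. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "multi_artery":    rng.integers(0, 2, n),
    "cold_ischemia_h": rng.normal(28, 8, n),
    "donor_age":       rng.normal(45, 15, n),
})
# Simulate DGF with a small artery effect and a cold-ischemia effect
logit_p = -0.5 + 0.1 * df["multi_artery"] + 0.03 * (df["cold_ischemia_h"] - 28)
df["dgf"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("dgf ~ multi_artery + cold_ischemia_h + donor_age",
                  data=df).fit(disp=0)
print(np.exp(model.params))       # adjusted odds ratios
print(np.exp(model.conf_int()))   # 95% confidence intervals
```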
Project description: Deceased donor kidneys with acute kidney injury (AKI) are often discarded due to fear of poor outcomes. We performed a multicenter study to determine associations of AKI (increasing admission-to-terminal serum creatinine, by AKI Network stages) with kidney discard, delayed graft function (DGF) and 6-month estimated glomerular filtration rate (eGFR). In 1632 donors, kidney discard risk increased for AKI stages 1, 2 and 3 (compared to no AKI), with adjusted relative risks of 1.28 (1.08-1.52), 1.82 (1.45-2.30) and 2.74 (2.0-3.75), respectively. The adjusted relative risk for DGF also increased by donor AKI stage: 1.27 (1.09-1.49), 1.70 (1.37-2.12) and 2.25 (1.74-2.91), respectively. Six-month eGFR, however, was similar across AKI categories but was lower for recipients with DGF (48 [interquartile range: 31-61] vs. 58 [45-75] ml/min/1.73 m² for no DGF, p < 0.001). There was a significant favorable interaction between donor AKI and DGF, such that 6-month eGFR was progressively better for DGF kidneys with increasing donor AKI (46 [29-60], 49 [32-64], 52 [36-59] and 58 [39-71] ml/min/1.73 m² for no AKI and stages 1, 2 and 3, respectively; interaction p = 0.05). Donor AKI is associated with kidney discard and DGF, but given acceptable 6-month allograft function, clinicians should consider cautious expansion into this donor pool.
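One standard way to obtain adjusted relative risks like those above is modified Poisson regression (log link with robust standard errors). Whether this matches the authors' exact specification is not stated in the abstract, so treat the sketch below, which runs on synthetic data, as illustrative of the general technique only.

```python
# Modified Poisson regression sketch: relative risk of kidney discard by
# donor AKI stage (log link, robust "sandwich" standard errors).
# Synthetic data with a built-in rising discard risk.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1600
df = pd.DataFrame({"aki_stage": rng.integers(0, 4, n)})   # 0 = no AKI
risk = 0.2 * 1.3 ** df["aki_stage"]                        # risk grows with stage
df["discarded"] = (rng.random(n) < risk).astype(int)

model = smf.glm("discarded ~ C(aki_stage)", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(model.params))   # relative risks vs. the no-AKI reference
```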
Project description: Background: In kidney transplantation, nonimmunologic donor-recipient (D-R) pairing is generally not given the same consideration as immunologic matching. The aim of this study was to determine how nonimmunologic D-R pairing relates to independent donor and recipient factors, and to immunologic HLA match, for predicting graft loss. Methods: Seven D-R pairings (race, sex, age, weight, height, cytomegalovirus serostatus, and HLA match) were assessed for their association with the composite outcome of death or kidney graft loss using a Cox regression-based forward stepwise selection model. The best model for predicting graft loss (including nonimmunologic D-R pairings, independent D-R factors, and/or HLA match status) was determined using the Akaike information criterion. Results: Twenty-three thousand two hundred sixty-two (29.9%) people in the derivation data set and 9892 (29.7%) in the validation data set developed the composite outcome of death or graft loss. A model that included both independent and D-R pairing variables best predicted graft loss. The c-indices for the derivation and validation models were 0.626 and 0.629, respectively. Size mismatch (MM) between donor and recipient (>30 kg [D < R] and >15 cm [D < R]) was associated with poor patient and graft survival even with 0 HLA MM, and conversely, an optimal D-R size pairing mitigated the risk of graft loss seen with 6 HLA MM. Conclusions: D-R pairing is valuable in predicting patient and graft outcomes after kidney transplant. D-R size matching could offset the benefit and harm seen with 0 and 6 HLA MM, respectively. This is a novel finding.
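A forward stepwise Cox selection guided by the Akaike information criterion, as described in the methods above, can be sketched with lifelines, which exposes the partial-likelihood AIC as AIC_partial_ and the c-index as concordance_index_. The candidate variables and synthetic data below are hypothetical, chosen only to demonstrate the selection loop.

```python
# Forward stepwise Cox selection by AIC, sketched with lifelines on
# synthetic data. Candidate names are illustrative D-R pairing variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 800
df = pd.DataFrame({
    "weight_mm_30kg": rng.integers(0, 2, n),   # donor >30 kg lighter than recipient
    "height_mm_15cm": rng.integers(0, 2, n),   # donor >15 cm shorter than recipient
    "hla_mm":         rng.integers(0, 7, n),   # 0-6 HLA mismatches
})
hazard = 0.05 * np.exp(0.4 * df["weight_mm_30kg"] + 0.1 * df["hla_mm"])
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 15).astype(int)    # censor at 15 years
df["time"] = df["time"].clip(upper=15)

candidates = ["weight_mm_30kg", "height_mm_15cm", "hla_mm"]
selected, best_aic = [], np.inf
while candidates:
    # Try adding each remaining candidate; keep the one that lowers AIC most
    trials = {}
    for var in candidates:
        cph = CoxPHFitter().fit(df[selected + [var, "time", "event"]],
                                duration_col="time", event_col="event")
        trials[var] = cph.AIC_partial_
    best_var = min(trials, key=trials.get)
    if trials[best_var] >= best_aic:
        break                                   # no candidate improves the model
    best_aic = trials[best_var]
    selected.append(best_var)
    candidates.remove(best_var)

final = CoxPHFitter().fit(df[selected + ["time", "event"]],
                          duration_col="time", event_col="event")
print("Selected:", selected, "| c-index:", round(final.concordance_index_, 3))
```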