Project description: PURPOSE: We previously developed and validated informatic algorithms that used International Classification of Diseases 9th revision (ICD9)-based diagnostic and procedure codes to detect the presence and timing of cancer recurrence (the RECUR Algorithms). In 2015, ICD10 replaced ICD9 as the worldwide coding standard. To understand the impact of this transition, we evaluated the performance of the RECUR Algorithms after incorporating ICD10 codes. METHODS: Using publicly available translation tables along with clinician and other expertise, we updated the algorithms to include ICD10 codes as additional input variables. We evaluated the performance of the algorithms using gold standard recurrence measures associated with a contemporary cohort of patients with stage I to III breast, colorectal, and lung (excluding IIIB) cancer and derived performance measures, including the area under the receiver operating curve, average absolute prediction error, and correct classification rate. These values were compared with the performance measures derived from the validation of the original algorithms. RESULTS: A total of 659 colorectal, 280 lung, and 2,053 breast cancer cases were identified. Area under the receiver operating curve derived from the updated algorithms was 89.0% (95% CI, 82.3% to 95.7%), 88.9% (95% CI, 79.3% to 98.2%), and 80.5% (95% CI, 72.8% to 88.2%) for the colorectal, lung, and breast cancer algorithms, respectively. Average absolute prediction errors for recurrence timing were 2.7 (SE, 11.3%), 2.4 (SE, 10.4%), and 5.6 months (SE, 21.8%), respectively, and timing estimates were within 6 months of actual recurrence for more than 80% of colorectal, more than 90% of lung, and more than 50% of breast cancer cases using the updated algorithm. CONCLUSION: Performance measures derived from the updated and original algorithms had overlapping confidence intervals, suggesting that the ICD9 to ICD10 transition did not affect the RECUR Algorithm performance.
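As a rough illustration of the performance measures named in the entry above (area under the receiver operating curve, average absolute prediction error for recurrence timing, and the share of timing estimates within 6 months), here is a minimal sketch on hypothetical per-patient data; it is not the RECUR Algorithm itself, and all variable names and values are invented.

```python
# Minimal sketch of the reported performance measures, computed on invented data:
# a gold-standard recurrence flag, an algorithm score, and actual vs. predicted
# recurrence timing in months. Not the RECUR Algorithm itself.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
recurred = rng.integers(0, 2, n)                                  # gold-standard recurrence (0/1)
score = np.clip(0.5 * recurred + 0.5 * rng.random(n), 0, 1)       # algorithm-predicted probability
actual_month = rng.uniform(6, 60, n)                              # gold-standard recurrence month
predicted_month = actual_month + rng.normal(0, 4, n)              # algorithm-estimated month

auc = roc_auc_score(recurred, score)                              # area under the ROC curve
timing_error = np.abs(predicted_month - actual_month)[recurred == 1]
print(f"AUC: {auc:.1%}")
print(f"Average absolute prediction error: {timing_error.mean():.1f} months")
print(f"Timing estimates within 6 months: {(timing_error <= 6).mean():.0%}")
```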
Project description: Background: The Functional Comorbidity Index (FCI) was developed for community-based adult populations, with function as the outcome. The original FCI was a survey tool, but several International Classification of Diseases (ICD) code lists for calculating the FCI using administrative data have been published. However, compatible International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) and ICD-10-CM versions have not been available. Objective: We developed ICD-9-CM and ICD-10-CM diagnosis code lists to optimize FCI concordance across ICD lexicons. Research design: We assessed concordance and frequency distributions across ICD lexicons for the FCI and individual comorbidities. We used length of stay and discharge disposition to assess continuity of FCI criterion validity across lexicons. Subjects: State Inpatient Databases from Arizona, Colorado, Michigan, New Jersey, New York, Utah, and Washington State (calendar year 2015) were obtained from the Healthcare Cost and Utilization Project. State Inpatient Databases contained ICD-9-CM diagnoses for the first 3 calendar quarters of 2015 and ICD-10-CM diagnoses for the fourth quarter of 2015. Inpatients under 18 years old were excluded. Measures: Length of stay and discharge disposition outcomes were assessed in separate regression models. Covariates included age, sex, state, ICD lexicon, and the FCI/lexicon interaction. Results: The FCI demonstrated stability across lexicons, despite small discrepancies in prevalence for individual comorbidities. Under ICD-9-CM, each additional comorbidity was associated with an 8.9% increase in mean length of stay and an 18.5% decrease in the odds of a routine discharge, compared with an 8.4% increase and a 17.4% decrease, respectively, under ICD-10-CM. Conclusion: This study provides compatible ICD-9-CM and ICD-10-CM diagnosis code lists for the FCI.
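For orientation, the FCI is scored as a count of comorbidities flagged from diagnosis codes. The sketch below shows that counting step with a handful of placeholder ICD-9-CM/ICD-10-CM prefixes; these prefixes are illustrative only and are not the compatible code lists the study provides.

```python
# Illustrative FCI-style comorbidity count from mixed ICD-9-CM/ICD-10-CM codes.
# The prefix sets below are placeholders, not the study's validated code lists.
FCI_PREFIXES = {
    "diabetes":       {"250", "E10", "E11"},
    "copd":           {"496", "J44"},
    "depression":     {"311", "F32", "F33"},
    "osteoarthritis": {"715", "M15", "M16", "M17", "M19"},
}

def fci_score(diagnosis_codes):
    """Count distinct comorbidities with at least one matching diagnosis code."""
    codes = [c.replace(".", "").upper() for c in diagnosis_codes]
    return sum(
        any(code.startswith(prefix) for code in codes for prefix in prefixes)
        for prefixes in FCI_PREFIXES.values()
    )

print(fci_score(["E11.9", "J44.1", "I10"]))  # 2 comorbidities flagged (diabetes, COPD)
```

If the length-of-stay model used a log link, the reported 8.9% increase per additional comorbidity would correspond to exp(β) ≈ 1.089.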
Project description: Purpose: An International Classification of Diseases, 10th Revision (ICD-10) Charlson Comorbidity Index (CCI) adaptation had not previously been developed and validated for United States (US) healthcare claims data. Many researchers use the Canadian adaptation by Quan et al (2005), which was not validated in US data. We sought to evaluate the predictive validity of a US ICD-10 CCI adaptation in US claims and compare it with the Canadian standard. Methods: Diverse patient cohorts (rheumatoid arthritis, hip/knee replacement, lumbar spine surgery, acute myocardial infarction [AMI], stroke, pneumonia) in the IBM® MarketScan® Research Databases were linked with the IBM MarketScan Mortality file. Predictive performance was measured using c-statistics for binary outcomes (1-year and postoperative mortality, in-hospital complications) and root mean square prediction error (RMSE) for continuous outcomes (1-year all-cause medical costs, index hospitalization costs, length of stay [LOS]), after adjusting for age and sex. C-statistics were compared by the method of DeLong and colleagues (1988); RMSEs, by resampling. Results: C-statistics were generally high (approximately ≥0.8) for mortality but lower for in-hospital complications (approximately 0.6-0.7). RMSEs for costs and hospitalization LOS were relatively large and comparable to standard deviations. Results were similar overall between the US and Canadian adaptations, with relative differences typically <1%. Conclusions: This US-based coding adaptation and a previously published Canadian adaptation resulted in similar predictive ability for all outcomes evaluated but may have different construct validity (not evaluated in our study). We recommend using adaptations specific to the country of data origin based on good research practice.
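The study above compares c-statistics between the US and Canadian adaptations using the DeLong test; as a simpler stand-in, the sketch below bootstraps the paired difference in AUC between two correlated risk scores on simulated data (all values invented, not the MarketScan cohorts).

```python
# Sketch of comparing two Charlson adaptations' discrimination for 1-year mortality.
# The study used the DeLong test; here a paired bootstrap of the AUC difference
# stands in, on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
died = rng.integers(0, 2, n)                     # simulated 1-year mortality (0/1)
score_us = 1.0 * died + rng.normal(0, 1, n)      # hypothetical US-adaptation risk score
score_ca = score_us + rng.normal(0, 0.1, n)      # nearly identical Canadian-adaptation score

diffs = []
for _ in range(200):
    idx = rng.integers(0, n, n)                  # paired bootstrap resample
    diffs.append(roc_auc_score(died[idx], score_us[idx])
                 - roc_auc_score(died[idx], score_ca[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: ({lo:.4f}, {hi:.4f})")
```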
Project description: Administrative databases are increasingly used in research studies to capture clinical outcomes such as sepsis. This systematic review and meta-analysis examines the accuracy of International Classification of Diseases, 10th revision (ICD-10), codes for identifying sepsis in adult and pediatric patients. Data sources: We searched MEDLINE, EMBASE, Web of Science, CENTRAL, Epistemonikos, and McMaster Superfilters from inception to September 7, 2021. Study selection: We included studies that validated the accuracy of sepsis ICD-10 codes against any reference standard. Data extraction: Three authors, working in duplicate, independently extracted data. We conducted meta-analysis using a random effects model to pool sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). We evaluated individual study risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool and assessed certainty in pooled diagnostic effect measures using the Grading of Recommendations Assessment, Development, and Evaluation framework. Data synthesis: Thirteen eligible studies were included in the qualitative synthesis and the meta-analysis. Eleven studies used manual chart review as the reference standard, and four studies used registry databases. Only one study evaluated pediatric patients exclusively. Compared with the reference standard of detailed chart review and/or registry databases, the pooled sensitivity for sepsis ICD-10 codes was 35% (95% CI, 22-48; low certainty), whereas the pooled specificity was 98% (95% CI, 98-99; low certainty). The PPV for ICD-10 codes ranged from 9.8% to 100% (median, 72.0%; interquartile range [IQR], 50.0-84.7%), and the NPV ranged from 54.7% to 99.1% (median, 95.9%; IQR, 85.5-98.3%). Conclusions: Sepsis is undercoded in administrative databases. Future research is needed to explore whether greater consistency in ICD-10 code definitions and enhanced quality measures for ICD-10 coders can improve the coding accuracy of sepsis in large databases.
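For context on the pooling step described above, the sketch below applies a generic DerSimonian-Laird random-effects model to logit-transformed study sensitivities; the per-study counts are invented, and the review's actual meta-analytic model may differ (for example, a bivariate model of sensitivity and specificity).

```python
# Generic DerSimonian-Laird random-effects pooling of study sensitivities
# on the logit scale. Counts are invented for illustration.
import numpy as np

tp = np.array([30, 12, 50, 8])    # true positives per study (hypothetical)
fn = np.array([55, 40, 70, 20])   # false negatives per study (hypothetical)

sens = tp / (tp + fn)
y = np.log(sens / (1 - sens))                # logit sensitivity per study
v = 1 / tp + 1 / fn                          # approximate variance of each logit
w = 1 / v                                    # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
pooled = 1 / (1 + np.exp(-y_re))
ci = [1 / (1 + np.exp(-(y_re + z * se))) for z in (-1.96, 1.96)]
print(f"Pooled sensitivity: {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```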
Project description: Objective: To examine the validity of the International Classification of Diseases, 10th revision (ICD-10) codes for hyponatraemia in the nationwide population-based Danish National Registry of Patients (DNRP) among inpatients of all ages. Design: Population-based validation study. Setting: All somatic hospitals in the North and Central Denmark Regions from 2006 through 2011. Participants: Patients of all ages admitted to hospital (n=819 701 individual patients) during the study period. A patient could be included in the study more than once, and the study did not restrict to patients with serum sodium measurements (total of n=2 186 642 hospitalisations). Methods: We validated ICD-10 discharge diagnoses of hyponatraemia recorded in the DNRP, using serum sodium measurements obtained from the laboratory information systems (LABKA) research database as the gold standard. One sodium value <135 mmol/L measured at any time during hospitalisation confirmed the diagnosis. We estimated sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for ICD-10 codes for hyponatraemia overall and for cut-off points of increasing hyponatraemia severity. Results: An ICD-10 code for hyponatraemia was recorded in the DNRP in 5850 of the 2 186 642 hospitalisations identified. According to laboratory measurements, however, hyponatraemia was present in 306 418 (14%) hospitalisations. Sensitivity of hyponatraemia diagnoses was 1.8% (95% CI 1.7% to 1.8%). For sodium values <115 mmol/L, sensitivity was 34.3% (95% CI 32.6% to 35.9%). The overall PPV was 92.5% (95% CI 91.8% to 93.1%) and decreased with increasing hyponatraemia severity. Specificity and NPV were high for all cut-off points (≥99.8% and ≥86.2%, respectively). Patients with hyponatraemia but no corresponding ICD-10 discharge diagnosis were younger and had higher Charlson Comorbidity Index scores than patients with hyponatraemia who had a hyponatraemia code in the DNRP. Conclusions: ICD-10 codes for hyponatraemia in the DNRP have high specificity but very low sensitivity. Laboratory test results, not discharge diagnoses, should be used to ascertain hyponatraemia.
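The validation logic in this entry reduces to a per-hospitalisation 2x2 comparison between the recorded ICD-10 hyponatraemia code and the laboratory gold standard (any sodium <135 mmol/L). A minimal sketch with invented hospitalisations:

```python
# 2x2 accuracy measures for an ICD-10 hyponatraemia code against a laboratory
# gold standard (any sodium < 135 mmol/L). The example rows are invented.
def two_by_two(coded, gold):
    tp = sum(c and g for c, g in zip(coded, gold))
    fp = sum(c and not g for c, g in zip(coded, gold))
    fn = sum(g and not c for c, g in zip(coded, gold))
    tn = sum(not c and not g for c, g in zip(coded, gold))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

min_sodium = [128, 140, 133, 137, 112, 141]            # lowest sodium per hospitalisation (mmol/L)
has_code   = [True, False, False, False, True, False]  # ICD-10 hyponatraemia code recorded
gold       = [s < 135 for s in min_sodium]             # laboratory gold standard
print(two_by_two(has_code, gold))
```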
Project description: Adverse drug events (ADEs) during hospital stays are a significant problem for healthcare systems. Established monitoring systems lack completeness or are cost intensive. Routinely assigned International Statistical Classification of Diseases and Related Health Problems (ICD) codes could complement existing systems for ADE identification. To analyze the potential of using routine data for ADE detection, the validity of a set of ICD codes was determined, focusing on hospital-acquired events. The study utilized routine data from four German hospitals covering the years 2014 and 2015. A set of ICD, 10th Revision, German Modification (ICD-10-GM) diagnoses coded most frequently in the routine data and identified as codes indicating ADEs was analyzed. Data from psychiatric and psychotherapeutic departments were excluded. Retrospective chart review was performed to calculate positive predictive values (PPV) and sensitivity. Of 807 reviewed ADE codes, 91.2% (95% confidence interval: 89.0, 93.1) were identified as a disease in the medical records and 65.1% (61.7, 68.3) were confirmed as ADEs. For code groups that are predominantly hospital-acquired, 78.5% (73.7, 82.9) were confirmed as ADEs, ranging from 68.5% to 94.4% depending on the ICD code. However, sensitivity for inpatient ADEs was relatively low: 49.7% (45.2, 54.2) of 495 identified hospital-acquired ADEs were coded as a disease in the routine data, of which a subgroup of 12.1% (9.4, 15.3) was coded as a drug-associated disease. ICD codes from routine data can provide an important contribution to the development and improvement of ADE monitoring systems. Documentation quality is crucial to further increase the PPV, and actions against under-reporting of ADEs in routine data need to be taken.
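The confidence intervals quoted for these chart-review estimates (e.g., 65.1% with interval 61.7 to 68.3) are intervals for a binomial proportion. A minimal sketch using statsmodels' Wilson interval is below; the counts are illustrative and the study's exact interval method is not stated in the abstract.

```python
# Confidence interval for a chart-review proportion (e.g., confirmed ADEs among
# reviewed ADE codes). Counts are illustrative, chosen to be close to the
# reported 65.1% of 807 reviewed codes.
from statsmodels.stats.proportion import proportion_confint

confirmed, reviewed = 525, 807
lo, hi = proportion_confint(confirmed, reviewed, alpha=0.05, method="wilson")
print(f"PPV {confirmed / reviewed:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```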
Project description: Background: International Classification of Diseases, 10th Revision (ICD-10) codes for autosomal dominant polycystic kidney disease (ADPKD) are used within several administrative health care databases. It is unknown whether these codes identify patients who meet strict clinical criteria for ADPKD. Objective: The objectives of this study were (1) to determine whether different ICD-10 coding algorithms identify adult patients who meet strict clinical criteria for ADPKD as assessed through medical chart review and (2) to assess the number of patients identified with different ADPKD coding algorithms in Ontario. Design: Validation study of health care database codes, and prevalence study. Setting: Ontario, Canada. Patients: For the chart review, 201 adult patients with hospital encounters between April 1, 2002, and March 31, 2014, assigned either ICD-10 code Q61.2 or Q61.3. Measurements: This study measured the positive predictive value of the ICD-10 coding algorithms and the number of Ontarians identified with different coding algorithms. Methods: We manually reviewed a random sample of medical charts in London, Ontario, Canada, and determined whether or not ADPKD was present according to strict clinical criteria. Results: The presence of either ICD-10 code Q61.2 or Q61.3 in a hospital encounter had a positive predictive value of 85% (95% confidence interval [CI], 79%-89%) and identified 2981 Ontarians (0.02% of the Ontario adult population). The presence of ICD-10 code Q61.2 in a hospital encounter had a positive predictive value of 97% (95% CI, 86%-100%) and identified 394 adults in Ontario (0.003% of the Ontario adult population). Limitations: (1) We could not calculate other measures of validity; (2) the coding algorithms do not identify patients without hospital encounters; and (3) coding practices may differ between hospitals. Conclusions: Most patients with ICD-10 code Q61.2 or Q61.3 assigned during their hospital encounters have ADPKD according to the clinical criteria. These codes can be used to assemble cohorts of adult patients with ADPKD and hospital encounters.
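Operationally, the two coding algorithms in this entry amount to selecting patients with at least one qualifying hospital-encounter code. A minimal sketch on invented encounter rows (the DataFrame contents are hypothetical):

```python
# Assemble ADPKD cohorts from hospital encounter diagnoses using two coding
# algorithms: broad (Q61.2 or Q61.3) and narrow (Q61.2 only). Rows are invented.
import pandas as pd

encounters = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 4],
    "icd10_code": ["Q61.2", "I10", "Q61.3", "N18.5", "Q61.3"],
})

broad = encounters.loc[encounters["icd10_code"].isin(["Q61.2", "Q61.3"]), "patient_id"].unique()
narrow = encounters.loc[encounters["icd10_code"] == "Q61.2", "patient_id"].unique()
print("Broad algorithm cohort:", sorted(broad))    # [1, 2, 4]
print("Narrow algorithm cohort:", sorted(narrow))  # [1]
```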
Project description: Background: Clinical research requires that diagnostic codes captured from routinely collected health administrative data accurately identify individuals with a disease. Objective: In this study, we validated the International Classification of Diseases, 10th Revision (ICD-10) definition for kidney transplant rejection (T86.100) and for kidney transplant failure (T86.101). Design: Retrospective cohort study. Setting: A large, regional transplantation center in Ontario, Canada. Patients: All adult kidney transplant recipients from 2002 to 2018. Measurements: Chart review was undertaken to identify the first occurrence of biopsy-confirmed rejection and graft loss for all participants. For each observation, we determined the first date a single ICD-10 code T86.100 or T86.101 was recorded as a hospital encounter discharge diagnosis. Methods: Using chart review as the gold standard, we determined the sensitivity, specificity, and positive predictive value (PPV) for the ICD-10 codes T86.100 and T86.101. Results: Our study population comprised 1,258 kidney transplant recipients. The prevalences of rejection and death-censored graft loss were 15.6% and 9.1%, respectively. For the ICD-10 rejection code (T86.100), sensitivity was 72.9% (95% confidence interval [CI], 66.6-79.2), specificity 97.5% (96.5-98.4), and PPV 83.8% (78.3-89.4). For the ICD-10 graft loss code (T86.101), sensitivity was 21.2% (95% CI, 13.2-29.3), specificity 86.3% (84.3-88.3), and PPV 11.7% (7.0-16.4). Limitations: Single-center study, which may limit the generalizability of our findings. Conclusions: The ICD-10 code for kidney transplant rejection (T86.100) identified a true rejection in 84% of coded hospital encounters (PPV) and is an accurate way of identifying kidney transplant recipients with rejection using administrative health data. The ICD-10 code for graft failure (T86.101) performed poorly and should not be used for administrative health research.
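One way to see why the graft-failure code performs poorly is that PPV depends on prevalence as well as on sensitivity and specificity. The sketch below plugs the rounded summary figures into the standard Bayes relation; because the inputs are rounded and the denominators differ slightly, it reproduces the reported PPVs only approximately.

```python
# PPV as a function of sensitivity, specificity, and prevalence.
# Inputs are the rounded summary figures reported above, so the outputs only
# approximate the directly observed PPVs.
def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(f"T86.100 (rejection): PPV ~ {ppv(0.729, 0.975, 0.156):.1%}")   # reported 83.8%
print(f"T86.101 (graft loss): PPV ~ {ppv(0.212, 0.863, 0.091):.1%}")  # reported 11.7%
```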
Project description: Objective: Routinely collected health administrative data can be used to efficiently assess disease burden in large populations, but it is important to evaluate the validity of these data. The objective of this study was to develop and validate International Classification of Diseases, 10th revision (ICD-10) algorithms that identify laboratory-confirmed influenza or laboratory-confirmed respiratory syncytial virus (RSV) hospitalizations using population-based health administrative data from Ontario, Canada. Study design and setting: Influenza and RSV laboratory data from the 2014-15, 2015-16, 2016-17 and 2017-18 respiratory virus seasons were obtained from the Ontario Laboratories Information System (OLIS) and were linked to hospital discharge abstract data to generate influenza and RSV reference cohorts. These reference cohorts were used to assess the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the ICD-10 algorithms. To minimize misclassification in future studies, we prioritized specificity and PPV in selecting top-performing algorithms. Results: 83,638 and 61,117 hospitalized patients were included in the influenza and RSV reference cohorts, respectively. The best influenza algorithm had a sensitivity of 73% (95% CI 72% to 74%), specificity of 99% (95% CI 99% to 99%), PPV of 94% (95% CI 94% to 95%), and NPV of 94% (95% CI 94% to 95%). The best RSV algorithm had a sensitivity of 69% (95% CI 68% to 70%), specificity of 99% (95% CI 99% to 99%), PPV of 91% (95% CI 90% to 91%) and NPV of 97% (95% CI 97% to 97%). Conclusion: We identified two highly specific algorithms that best ascertain patients hospitalized with influenza or RSV. These algorithms may be applied to hospitalized patients when data on laboratory tests are not available, thereby improving the power of future epidemiologic studies of influenza, RSV, and potentially other severe acute respiratory infections.
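The selection rule described above (prioritizing specificity and PPV to minimize misclassification) can be expressed as a simple ranking over candidate algorithms. A minimal sketch with invented candidate definitions and performance values:

```python
# Rank candidate ICD-10 coding algorithms by specificity, then PPV, then sensitivity.
# Candidate names and metrics are invented for illustration.
candidates = [
    {"name": "candidate A", "sens": 0.80, "spec": 0.97, "ppv": 0.88},
    {"name": "candidate B", "sens": 0.73, "spec": 0.99, "ppv": 0.94},
    {"name": "candidate C", "sens": 0.84, "spec": 0.95, "ppv": 0.82},
]

best = max(candidates, key=lambda c: (c["spec"], c["ppv"], c["sens"]))
print("Top-performing algorithm:", best["name"])  # candidate B
```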