Project description: Objective: This work aims to systematically identify, describe, and appraise all prognostic models for cervical cancer and to provide a reference for clinical practice and future research. Methods: We systematically searched the PubMed, EMBASE, and Cochrane Library databases up to December 2020 and included studies developing, validating, or updating a prognostic model for cervical cancer. Two reviewers extracted information based on the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modeling Studies checklist and assessed the risk of bias using the Prediction model Risk Of Bias ASsessment Tool. Results: Fifty-six eligible articles were identified, describing the development of 77 prognostic models and 27 external validation efforts. The 77 prognostic models focused on three groups of cervical cancer patients at different stages: patients with early-stage cervical cancer (n = 29; 38%), patients with locally advanced cervical cancer (n = 27; 35%), and cervical cancer patients of any stage (n = 21; 27%). Among the 77 models, the most frequently used predictors were lymph node status (n = 57; 74%), International Federation of Gynecology and Obstetrics stage (n = 42; 55%), histological type (n = 38; 49%), and tumor size (n = 37; 48%). The numbers of models that applied internal validation, presented a full equation, and assessed model calibration were 52 (68%), 16 (21%), and 45 (58%), respectively. Twenty-four models were externally validated, among which three were validated twice. None of the models was rated as having an overall low risk of bias. The Prediction Model of Failure in Locally Advanced Cervical Cancer model was externally validated twice with acceptable performance and seemed to be the most reliable. Conclusions: Methodological details, including internal validation, sample size, and handling of missing data, need to be emphasized, and external validation is needed to facilitate the application and generalization of models for cervical cancer.
Project description: Background: People presenting with first-episode psychosis (FEP) have heterogeneous outcomes. More than 40% fail to achieve symptomatic remission. Accurate prediction of individual outcome in FEP could facilitate early intervention to change the clinical trajectory and improve prognosis. Aims: We aim to systematically review the evidence for prediction models developed for predicting poor outcome in FEP. Method: A protocol for this study was published on the International Prospective Register of Systematic Reviews (PROSPERO), registration number CRD42019156897. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance, we systematically searched six databases from inception to 28 January 2021. We used the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies and the Prediction Model Risk of Bias Assessment Tool to extract and appraise the outcome prediction models. We considered study characteristics, methodology, and model performance. Results: Thirteen studies reporting 31 prediction models across a range of clinical outcomes met the criteria for inclusion. Eleven studies used logistic regression with clinical and sociodemographic predictor variables. Just two studies were found to be at low risk of bias. Methodological limitations identified included a lack of appropriate validation, small sample sizes, poor handling of missing data, and inadequate reporting of calibration and discrimination measures. To date, no model has been applied in clinical practice. Conclusions: Future prediction studies in psychosis should prioritise methodological rigour and external validation in larger samples. The potential for prediction modelling in FEP is yet to be realised.
Project description: Background and aims: Esophageal cancer risk prediction models allow for risk-stratified endoscopic screening. We aimed to assess the quality of these models developed in the general population. Methods: A systematic search of the PubMed and Embase databases from January 2000 through May 2021 was performed. Studies that developed or validated a risk prediction model for esophageal cancer in the general population were included. Screening, data extraction, and risk of bias (ROB) assessment with the Prediction model Risk Of Bias Assessment Tool (PROBAST) were performed independently by two reviewers. Results: Of the 13 models included in the qualitative analysis, 8 were developed for esophageal squamous cell carcinoma (ESCC) and the other 5 for esophageal adenocarcinoma (EAC). Only two models underwent external validation. In the ESCC models, cigarette smoking was included in every model, followed by age, sex, and alcohol consumption. In the EAC models, cigarette smoking and body mass index were included in every model, while gastroesophageal reflux disease, use of acid-suppressant medication, and use of nonsteroidal anti-inflammatory drugs were included exclusively in these models. Discriminative performance was reported in all studies, with C statistics ranging from 0.71 to 0.88, whereas only six models reported calibration. Regarding ROB, all models were at low risk in the participant and outcome domains, but all showed high risk in the analysis domain and 60% showed high risk in the predictor domain, so all models were classified as having an overall high ROB. For model applicability, about 60% of the models were rated at overall low risk, 30% at high risk, and 10% at unclear risk regarding the assessment of participants, predictors, and outcomes. Conclusions: Most current risk prediction models for esophageal cancer have a high ROB. Prediction models need further improvement in quality and applicability to benefit esophageal cancer screening.
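These reviews repeatedly judge models on two performance measures: discrimination (the C statistic) and calibration. Purely as an illustrative sketch, not drawn from any of the included studies, the two measures can be computed for a binary-outcome risk model roughly as follows; the simulated data and miscalibration coefficients are arbitrary placeholders.

```python
# Illustrative sketch: discrimination (C statistic / AUC) and calibration
# (intercept and slope) of a risk model on simulated data. Not from any
# reviewed study; all numbers are invented.
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulated "true" linear predictor and observed binary outcomes
lp = rng.normal(-2.0, 1.0, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Predicted risks from the model being evaluated (deliberately miscalibrated here)
pred_risk = 1 / (1 + np.exp(-(0.5 + 1.3 * lp)))

# Discrimination: for binary outcomes the C statistic equals the area under the ROC curve
c_statistic = roc_auc_score(y, pred_risk)

# Calibration: regress the outcome on the logit of predicted risk.
# A slope near 1 and an intercept near 0 indicate good calibration.
# (Simplification: intercept and slope are estimated jointly; calibration-in-the-large
# is often estimated separately with the linear predictor as an offset.)
logit_pred = np.log(pred_risk / (1 - pred_risk))
fit = sm.Logit(y, sm.add_constant(logit_pred)).fit(disp=0)
cal_intercept, cal_slope = fit.params

print(f"C statistic: {c_statistic:.3f}")
print(f"Calibration intercept: {cal_intercept:.3f}, slope: {cal_slope:.3f}")
```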
Project description: Osteoporotic fractures (OF) are currently a global public health problem. Many risk prediction models for OF have been developed, but their performance and methodological quality are unclear. We conducted this systematic review to summarize and critically appraise OF risk prediction models. Three databases were searched up to April 2021. Studies developing or validating multivariable models for OF risk prediction were considered eligible. We used the Prediction model Risk Of Bias ASsessment Tool to appraise the risk of bias and applicability of the included models. All results were narratively summarized and described. A total of 68 studies describing 70 newly developed prediction models and 138 external validations were included. Models were often developed (n=31, 44%) and validated (n=76, 55%) explicitly for women only. Only 22 developed models (31%) were externally validated. The most frequently validated tool was the Fracture Risk Assessment Tool (FRAX). Overall, only a few models showed outstanding (n=3, 1%) or excellent (n=32, 15%) discrimination. Calibration was rarely assessed, whether for developed models (n=25, 36%) or for external validations (n=33, 24%). No model was rated as being at low risk of bias, mostly because of an insufficient number of cases and inappropriate assessment of calibration. In summary, a considerable number of OF risk prediction models exist. However, few have been thoroughly internally or externally validated (with calibration unassessed for most models), and all showed methodological shortcomings. Instead of developing entirely new models, future research should validate, improve, and analyze the impact of existing models.
Project description: More than a year has passed since the report of the first case of coronavirus disease 2019 (COVID-19), and deaths continue to rise. Minimizing the time required for resource allocation and clinical decision making, such as triage, choice of ventilation mode, and admission to the intensive care unit, is important. Machine learning techniques are acquiring an increasingly sought-after role in predicting the outcomes of COVID-19 patients. In particular, the use of baseline machine learning techniques for COVID-19 mortality prediction is developing rapidly, since a mortality prediction model could quickly and effectively support clinical decision making for COVID-19 patients at imminent risk of death. Recent studies have reviewed predictive models for SARS-CoV-2 diagnosis, severity, length of hospital stay, intensive care unit admission, and mechanical ventilation outcomes; however, systematic reviews focused on predicting COVID-19 mortality with machine learning methods are lacking in the literature. The present review examined studies that applied machine learning, including deep learning, methods to COVID-19 mortality prediction, aiming to present the existing published literature and to provide possible explanations for the best results the studies obtained. The review also discussed challenging aspects of current studies and provided suggestions for future developments.
Project description: Background: Accurate and timely diagnosis and effective prognosis of the disease are important to provide the best possible care for patients with COVID-19 and reduce the burden on the health care system. Machine learning methods can play a vital role in the diagnosis of COVID-19 by processing chest x-ray images. Objective: The aim of this study is to summarize information on the use of intelligent models for the diagnosis and prognosis of COVID-19 to help with early and timely diagnosis, minimize prolonged diagnosis, and improve overall health care. Methods: A systematic search of databases, including PubMed, Web of Science, IEEE, ProQuest, Scopus, bioRxiv, and medRxiv, was performed for COVID-19-related studies published up to May 24, 2020. This study was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. All original research articles describing the application of image processing for the prediction and diagnosis of COVID-19 were considered in the analysis. Two reviewers independently assessed the published papers to determine eligibility for inclusion in the analysis. Risk of bias was evaluated using the Prediction Model Risk of Bias Assessment Tool. Results: Of the 629 articles retrieved, 44 were included. We identified 4 prognosis models for predicting disease severity and estimating confinement time for individual patients, and 40 diagnostic models for detecting COVID-19 versus normal findings or other pneumonias. Most included studies used deep learning methods based on convolutional neural networks, which have been widely used as classification algorithms. The most frequently reported predictors of prognosis in patients with COVID-19 included age, computed tomography data, gender, comorbidities, symptoms, and laboratory findings. Deep convolutional neural networks obtained better results compared with non-neural network-based methods. Moreover, all of the models were found to be at high risk of bias due to the lack of information about the study population and intended groups, and inappropriate reporting. Conclusions: Machine learning models used for the diagnosis and prognosis of COVID-19 showed excellent discriminative performance. However, these models were at high risk of bias for various reasons, such as inadequate information about study participants and the randomization process, and the lack of external validation, which may have resulted in optimistic reporting of these models. Hence, our findings do not support recommending any of the current models for use in practice for the diagnosis and prognosis of COVID-19.
Project description: Objectives: An improved ability to predict impairments after critical illness could guide clinical decision making, inform trial enrollment, and facilitate comprehensive patient recovery. A systematic review of the literature was conducted to investigate whether physical, cognitive, and mental health impairments could be predicted in adult survivors of critical illness. Data sources: A systematic search of PubMed and the Cochrane Library (Prospective Register of Systematic Reviews ID: CRD42018117255) was undertaken on December 8, 2018, and the final searches were updated on January 20, 2019. Study selection: Four independent reviewers assessed titles and abstracts against the study eligibility criteria. Studies were eligible if a prediction model was developed, validated, or updated for impairments after critical illness in adult patients. Discrepancies were resolved by consensus or an independent adjudicator. Data extraction: Data on study characteristics, timing of outcome measurement, candidate predictors, and analytic strategies used were extracted. Risk of bias was assessed using the Prediction model Risk Of Bias Assessment Tool. Data synthesis: Of 8,549 screened studies, three met the inclusion criteria. The three studies each developed a prediction model, predicting (1) a mental health composite outcome at 3 months post discharge, (2) return to pre-ICU functioning and residence at 6 months post discharge, and (3) physical function at 2 months post discharge. Only one model had been externally validated. All studies had a high risk of bias, primarily due to the sample size and the statistical methods used to develop the published prediction model and to select its predictors. Conclusions: We found only three studies that developed a prediction model for any post-ICU impairment. There are several opportunities for improvement in future prediction model development, including the use of standardized outcomes and time horizons, and improved study design and statistical methodology.
Project description: Objective: To map and assess prognostic models for outcome prediction in patients with chronic obstructive pulmonary disease (COPD). Design: Systematic review. Data sources: PubMed until November 2018 and hand-searched references from eligible articles. Eligibility criteria for study selection: Studies developing, validating, or updating a prediction model in COPD patients and focusing on any potential clinical outcome. Results: The systematic search yielded 228 eligible articles, describing the development of 408 prognostic models, the external validation of 38 models, and the validation of 20 prognostic models derived for diseases other than COPD. The 408 prognostic models were developed in three clinical settings: outpatients (n=239; 59%), patients admitted to hospital (n=155; 38%), and patients attending the emergency department (n=14; 3%). Among the 408 prognostic models, the most prevalent endpoints were mortality (n=209; 51%), risk of acute exacerbation of COPD (n=42; 10%), and risk of readmission after the index hospital admission (n=36; 9%). Overall, the most commonly used predictors were age (n=166; 41%), forced expiratory volume in one second (n=85; 21%), sex (n=74; 18%), body mass index (n=66; 16%), and smoking (n=65; 16%). Of the 408 prognostic models, 100 (25%) were internally validated and 91 (23%) examined the calibration of the developed model. For 286 (70%) models a model presentation was not available, and only 56 (14%) models were presented through the full equation. Model discrimination using the C statistic was available for 311 (76%) models. Thirty-eight models were externally validated, but in only 12 of these was the validation performed by a fully independent team. Only seven prognostic models with an overall low risk of bias according to PROBAST were identified: ADO, B-AE-D, B-AE-D-C, extended ADO, updated ADO, updated BODE, and a model developed by Bertens et al. A meta-analysis of C statistics was performed for 12 prognostic models, with summary estimates ranging from 0.611 to 0.769. Conclusions: This study constitutes a detailed mapping and assessment of prognostic models for outcome prediction in COPD patients. The findings indicate several methodological pitfalls in their development and a low rate of external validation. Future research should focus on improving existing models through updating and external validation, as well as on assessing the safety, clinical effectiveness, and cost effectiveness of applying these prognostic models in clinical practice through impact studies. Systematic review registration: PROSPERO CRD42017069247.
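Several of these reviews note how rarely a model is "presented through the full equation". Purely as a generic textbook illustration (not an equation from any reviewed model), a logistic prognostic model is reported in full when the intercept and every coefficient are given, and a Cox model when the baseline survival at the prediction horizon is given alongside the linear predictor:

\[
\hat{p} = \frac{1}{1 + \exp\!\bigl[-(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k)\bigr]},
\qquad
\hat{S}(t \mid \mathbf{x}) = S_0(t)^{\exp(\beta_1 x_1 + \dots + \beta_k x_k)}.
\]

Without the intercept \(\beta_0\) or the baseline survival \(S_0(t)\), a published model cannot produce absolute risk estimates for new patients, which is presumably why these reviews track whether the full equation is reported.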
Project description: Introduction: Acute kidney injury (AKI) has high morbidity and mortality in intensive care units and can lead to chronic kidney disease, higher costs, and longer hospital stays. Early identification of AKI is crucial for clinical intervention. Although various risk prediction models have been developed to identify AKI, the overall predictive performance varies widely across studies. Owing to the different disease scenarios and the small number of externally validated cohorts for different prediction models, the stability and applicability of these models for AKI in critically ill patients remain controversial. Moreover, there are currently no standardised risk-classification tools for predicting AKI in critically ill patients. The purpose of this systematic review is to map and assess prediction models for AKI in critically ill patients based on a comprehensive literature review. Methods and analysis: A systematic review with meta-analysis is designed and will be conducted according to the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS). Three databases, PubMed, Cochrane Library, and EMBASE, will be searched from inception through October 2020 to identify all studies describing the development and/or external validation of original multivariable models for predicting AKI in critically ill patients. Random-effects meta-analyses of external validation studies will be performed to estimate the performance of each model. Restricted maximum likelihood estimation and the Hartung-Knapp-Sidik-Jonkman method under a random-effects model will be applied to estimate the summary C statistic and its 95% CI. A 95% prediction interval incorporating heterogeneity will also be calculated to indicate the possible range of C statistics in future validation studies. Two investigators will extract data independently using the CHARMS checklist. Study quality and risk of bias will be assessed using the Prediction Model Risk of Bias Assessment Tool. Ethics and dissemination: Ethical approval and patient informed consent are not required because all information will be abstracted from the published literature. We plan to share our results with clinicians and publish them in a general or critical care medicine peer-reviewed journal. We also plan to present our results at international critical care conferences. OSF registration number: 10.17605/OSF.IO/X25AT.
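The protocol names the estimators but not an implementation. As a rough sketch only, under the common simplification of pooling C statistics on the logit scale with within-study variances approximated from the reported confidence intervals (all study values below are invented placeholders), the described REML and Hartung-Knapp-Sidik-Jonkman analysis with a 95% prediction interval could look like this:

```python
# Rough sketch of a random-effects meta-analysis of C statistics: REML estimate
# of between-study variance, HKSJ confidence interval, and a 95% prediction
# interval. Pooling is done on the logit scale; data are invented.
import numpy as np
from scipy import optimize, stats

# Hypothetical external-validation results: C statistic with 95% CI per study
c = np.array([0.72, 0.68, 0.80, 0.75, 0.71])
ci_low = np.array([0.66, 0.61, 0.74, 0.70, 0.63])
ci_high = np.array([0.78, 0.75, 0.86, 0.80, 0.79])

logit = lambda p: np.log(p / (1 - p))
y = logit(c)                                          # effect sizes on the logit scale
v = ((logit(ci_high) - logit(ci_low)) / (2 * 1.96)) ** 2   # within-study variances
k = len(y)

def neg_reml(tau2):
    """Negative restricted log-likelihood of the random-effects model (up to constants)."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return 0.5 * (np.sum(np.log(v + tau2)) + np.log(np.sum(w))
                  + np.sum(w * (y - mu) ** 2))

tau2 = optimize.minimize_scalar(neg_reml, bounds=(0.0, 1.0), method="bounded").x
w = 1.0 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)                        # pooled logit C statistic

# HKSJ variance estimator and t-based 95% confidence interval (k - 1 df)
var_hk = np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w))
ci = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * np.sqrt(var_hk)

# 95% prediction interval for a future validation study (k - 2 df)
pi = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 2) * np.sqrt(tau2 + var_hk)

expit = lambda x: 1 / (1 + np.exp(-x))
print(f"Summary C statistic: {expit(mu):.3f} "
      f"(95% CI {expit(ci[0]):.3f} to {expit(ci[1]):.3f}; "
      f"95% PI {expit(pi[0]):.3f} to {expit(pi[1]):.3f})")
```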
Project description: Background and objective: Bladder cancer is common among current and former smokers. High bladder cancer mortality may be decreased through early diagnosis and screening. The aim of this study was to appraise decision models used for the economic evaluation of bladder cancer screening and diagnosis, and to summarise the main outcomes of these models. Methods: The MEDLINE (via PubMed), Embase, EconLit, and Web of Science databases were systematically searched from January 2006 to May 2022 for modelling studies that assessed the cost effectiveness of bladder cancer screening and diagnostic interventions. Articles were appraised according to Patient, Intervention, Comparator and Outcome (PICO) characteristics, modelling methods, model structures, and data sources. The quality of the studies was also appraised by two independent reviewers using the Philips checklist. Results: The searches identified 3082 potentially relevant studies, of which 18 articles met our inclusion criteria. Four of these articles were on bladder cancer screening, and the remaining 14 were on diagnostic or surveillance interventions. Two of the four screening models were individual-level simulations. All screening models (n = 4, with three on a high-risk population and one on a general population) concluded that screening is either cost saving or cost effective, with cost-effectiveness ratios below $53,000 per life-year saved. Disease prevalence was a strong determinant of cost effectiveness. Diagnostic models (n = 14) assessed multiple interventions; white light cystoscopy was the most common intervention and was considered cost effective in all studies (n = 4). Screening models relied largely on published evidence generalised from other countries and did not report validation of their predictions against external data. Almost all diagnostic models (n = 13 of 14) had a time horizon of 5 years or less, and most of the models (n = 11) did not incorporate health-related utilities. In both screening and diagnostic models, epidemiological inputs were based on expert elicitation, assumptions, or international evidence of uncertain generalisability. In modelling the disease, seven models did not use a standard classification system to define cancer states; others used a risk-based, numerical, or Tumour, Node, Metastasis (TNM) classification. Despite including certain components of disease onset or progression, no model included a complete and coherent model of the natural history of bladder cancer (i.e., simulating the progression of asymptomatic primary bladder cancer from onset in the absence of treatment). Conclusions: The variation in natural history model structures and the lack of data for model parameterisation suggest that research in bladder cancer early detection and screening is at an early stage of development. Appropriate characterisation and analysis of uncertainty in bladder cancer models should be considered a priority.
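None of the following code comes from the included studies; it is only a toy illustration of the kind of state-transition (cohort Markov) structure, time horizon, discounting, and health-state utilities that this review appraises. All states, transition probabilities, costs, and utilities are invented placeholders.

```python
# Toy cohort Markov model: a cohort moves between health states each yearly cycle,
# accumulating discounted costs and quality-adjusted life-years (QALYs).
# All numbers are hypothetical and for illustration only.
import numpy as np

states = ["no_cancer", "localized", "advanced", "dead"]

# Annual transition probability matrix (rows sum to 1); hypothetical values
P = np.array([
    [0.990, 0.008, 0.000, 0.002],   # no_cancer
    [0.000, 0.850, 0.100, 0.050],   # localized
    [0.000, 0.000, 0.700, 0.300],   # advanced
    [0.000, 0.000, 0.000, 1.000],   # dead (absorbing)
])

annual_cost = np.array([100.0, 8000.0, 25000.0, 0.0])   # per state, hypothetical
utility = np.array([0.95, 0.80, 0.55, 0.0])             # per state, hypothetical
discount = 0.035                                         # annual discount rate

cohort = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts cancer-free
total_cost, total_qaly = 0.0, 0.0

for year in range(30):                     # 30-year time horizon
    d = 1.0 / (1.0 + discount) ** year
    total_cost += d * cohort @ annual_cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ P                    # advance the cohort one cycle

print(f"Discounted cost per person:  {total_cost:,.0f}")
print(f"Discounted QALYs per person: {total_qaly:.2f}")
```

Running such a model once per strategy (for example, screening versus no screening) and comparing the discounted totals yields the kind of incremental cost-effectiveness ratios the included studies report; the review's point is that most published models lack a complete natural history component underlying transitions like the ones sketched here.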