Project description:
Objectives: Machine learning algorithms are being increasingly used to predict hospital readmissions. This meta-analysis evaluated the performance of logistic regression (LR) and machine learning (ML) models for the prediction of 30-day hospital readmission among patients in the US.
Methods: Electronic databases (Medline, PubMed, and Embase) were searched from January 2015 to December 2019. Only studies in the English language were included. Two reviewers performed study screening, quality appraisal, and data collection. The quality of the studies was assessed using the Quality in Prognosis Studies (QUIPS) tool. Model performance was evaluated using the Area Under the Curve (AUC). A random-effects meta-analysis was performed using STATA 16.
Results: Nine studies were included based on the selection criteria. The most common ML techniques were tree-based methods such as boosting and random forest. Most of the studies (8/9) had a low risk of bias. The AUC for predicting 30-day all-cause hospital readmission was greater with ML than with LR [Mean Difference (MD): 0.03; 95% Confidence Interval (CI): 0.01-0.05]. Subgroup analyses found that deep-learning methods performed better than LR (MD: 0.06; 95% CI: 0.04-0.09), followed by neural networks (MD: 0.03; 95% CI: 0.03-0.03), while the AUCs of the tree-based (MD: 0.02; 95% CI: -0.00 to 0.04) and kernel-based (MD: 0.02; 95% CI: -0.13 to 0.16) methods did not differ from LR. More than half of the studies (N = 5) evaluated heart failure-related rehospitalization. For readmission prediction among heart failure patients, ML performed better than LR, with a mean difference in AUC of 0.04 (95% CI: 0.01-0.07). A leave-one-out sensitivity analysis confirmed the robustness of the findings.
Conclusion: Multiple ML methods were used to predict 30-day all-cause hospital readmission. Performance varied across the ML methods, with deep-learning methods showing the best performance over LR.
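As a rough illustration of the random-effects pooling reported above (the authors used STATA 16), the sketch below implements a DerSimonian-Laird estimator in Python; the per-study AUC differences and CIs are hypothetical, not values from the nine included studies.

```python
# Illustrative DerSimonian-Laird random-effects pooling of AUC mean
# differences (ML minus LR). All study values below are made up.
import numpy as np

def dersimonian_laird(effects, ci_lower, ci_upper):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects
    model, assuming the CIs are 95% Wald intervals."""
    d = np.asarray(effects, dtype=float)
    # Back out each study's variance from its 95% CI half-width.
    se = (np.asarray(ci_upper) - np.asarray(ci_lower)) / (2 * 1.96)
    v = se ** 2
    w = 1.0 / v                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q
    k = len(d)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled, tau2

# Hypothetical per-study AUC differences with 95% CIs.
md, lo, hi, tau2 = dersimonian_laird(
    effects=[0.05, 0.02, 0.06, 0.01],
    ci_lower=[0.01, -0.01, 0.03, -0.02],
    ci_upper=[0.09, 0.05, 0.09, 0.04])
print(f"pooled MD {md:.3f} (95% CI {lo:.3f} to {hi:.3f}), tau^2 = {tau2:.4f}")
```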
Project description:
Background: There is much interest in the use of prognostic and diagnostic prediction models in all areas of clinical medicine. The use of machine learning to improve prognostic and diagnostic accuracy in this area has been increasing at the expense of classic statistical models. Previous studies have compared performance between these two approaches, but their findings are inconsistent and many have limitations. We aimed to compare the discrimination and calibration of seven models built using logistic regression and optimised machine learning algorithms in a clinical setting, where the number of potential predictors is often limited, and to externally validate the models.
Methods: We trained models using logistic regression and six commonly used machine learning algorithms to predict whether a patient diagnosed with diabetes has type 1 diabetes (versus type 2 diabetes). We used seven predictor variables (age, BMI, GADA islet-autoantibodies, sex, total cholesterol, HDL cholesterol, and triglyceride) in a UK cohort of adult participants (aged 18-50 years) with clinically diagnosed diabetes recruited from primary and secondary care (n = 960, 14% with type 1 diabetes). Discrimination performance (ROC AUC), calibration, and decision curve analysis of each approach were compared in a separate external validation dataset (n = 504, 21% with type 1 diabetes).
Results: Average performance obtained in internal validation was similar in all models (ROC AUC ≥ 0.94). In external validation, there were very modest reductions in discrimination, with ROC AUC remaining ≥ 0.93 for all methods. Logistic regression had the numerically highest value in external validation (ROC AUC 0.95) and good performance in terms of calibration and decision curve analysis. Neural network and gradient boosting machine had the best calibration performance. Both logistic regression and support vector machine showed good decision curve analysis for clinically useful threshold probabilities.
Conclusion: Logistic regression performed as well as optimised machine learning algorithms in classifying patients with type 1 and type 2 diabetes. This study highlights the utility of comparing traditional regression modelling to machine learning, particularly when using a small number of well understood, strong predictor variables.
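For readers who want to reproduce this kind of head-to-head comparison, the minimal sketch below trains logistic regression and one boosted-tree model on synthetic data shaped like the cohort (seven predictors, 960 development and 504 validation patients) and reports discrimination and calibration; it omits the optimisation the authors performed, and none of the data or tuning choices come from the study.

```python
# Minimal LR-vs-ML comparison on synthetic data: discrimination via ROC AUC,
# calibration via the Brier score, both on a held-out "external" set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss

X, y = make_classification(n_samples=1464, n_features=7, n_informative=5,
                           weights=[0.85], random_state=0)
# Hold out 504 patients, mimicking the external validation cohort.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=504, stratify=y,
                                          random_state=0)
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y_va, p):.3f}, "
          f"Brier={brier_score_loss(y_va, p):.3f}")
```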
Project description: Acute kidney injury (AKI) after liver transplantation has been reported to be associated with increased mortality. Recently, machine learning approaches have been reported to have better predictive ability than classic statistical analysis. We compared the performance of machine learning approaches with that of logistic regression analysis for predicting AKI after liver transplantation. We reviewed 1211 patients, and preoperative and intraoperative anesthesia- and surgery-related variables were obtained. The primary outcome was postoperative AKI defined by Acute Kidney Injury Network criteria. The following machine learning techniques were used: decision tree, random forest, gradient boosting machine, support vector machine, naïve Bayes, multilayer perceptron, and deep belief networks. These techniques were compared with logistic regression analysis with respect to the area under the receiver-operating characteristic curve (AUROC). AKI developed in 365 patients (30.1%). Performance in terms of AUROC was best for the gradient boosting machine among all analyses for predicting AKI of all stages (0.90, 95% confidence interval [CI] 0.86-0.93) or stage 2 or 3 AKI. The AUROC of logistic regression analysis was 0.61 (95% CI 0.56-0.66). Decision tree and random forest techniques showed moderate performance (AUROC 0.86 and 0.85, respectively). The AUROCs of the support vector machine, naïve Bayes, neural network, and deep belief network were smaller than those of the other models. In our comparison of seven machine learning approaches with logistic regression analysis, the gradient boosting machine showed the best performance, with the highest AUROC. An internet-based risk estimator was developed based on our gradient boosting model. However, prospective studies are required to validate our results.
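A common way to attach 95% CIs like those quoted above to an AUROC is to bootstrap the test set. The sketch below does this for logistic regression and a gradient boosting machine; the cohort size and event rate are borrowed from the abstract, and everything else (features, split, number of resamples) is an assumption.

```python
# Bootstrap 95% CIs for the AUROC of two models on synthetic stand-in data
# (1211 "patients", roughly 30% events).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1211, n_features=20, weights=[0.7],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=1)

def bootstrap_auc(y_true, y_prob, n_boot=2000, seed=1):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # need both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    return np.percentile(aucs, [2.5, 97.5])

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("GBM", GradientBoostingClassifier(random_state=1))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    lo, hi = bootstrap_auc(np.asarray(y_te), p)
    print(f"{name}: AUROC={roc_auc_score(y_te, p):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```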
Project description: In ventricular tachyarrhythmia, electrical instability features including action potential duration, dominant frequency, phase singularity, and filaments are associated with mechanical contractility. However, few studies have estimated mechanical contractility from electrical features during ventricular tachyarrhythmia using a stochastic model. In this study, we predicted cardiac mechanical performance from features of electrical instability during a ventricular tachyarrhythmia simulation using machine learning algorithms, including support vector regression (SVR) and artificial neural network (ANN) models. We performed an electromechanical tachyarrhythmia simulation and extracted 12 electrical instability features and two mechanical properties: stroke volume and the amplitude of myocardial tension (ampTens). We compared predictive performance across kernel types of the SVR model and the number of hidden layers of the ANN model. In the SVR model, the prediction accuracies for stroke volume and ampTens were highest when using the polynomial kernel and linear kernel, respectively. The predictive performance of the ANN model was better than that of the SVR model, and prediction accuracy was highest when the ANN model consisted of three hidden layers. Accordingly, we propose the ANN model with three hidden layers as an optimal model for predicting cardiac mechanical contractility in ventricular tachyarrhythmia. The results of this study are expected to be used to indirectly estimate the hemodynamic response from the electrical cardiac map measured by an optical mapping system during cardiac surgery, as well as cardiac contractility under normal sinus rhythm conditions.
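The comparison described here (SVR kernels versus ANN depth) maps naturally onto a cross-validated grid. The sketch below runs it with scikit-learn on synthetic regression data standing in for the 12 electrical features and stroke volume; the layer widths, sample size, and scoring metric are assumptions, not the authors' setup.

```python
# Compare SVR kernels and ANN depths with 5-fold cross-validated R^2 on
# synthetic data (12 "electrical" features, one continuous target).
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=12, noise=10.0, random_state=2)
models = {
    "SVR (linear)": SVR(kernel="linear"),
    "SVR (poly)": SVR(kernel="poly"),
    "SVR (rbf)": SVR(kernel="rbf"),
    "ANN (1 hidden layer)": MLPRegressor((64,), max_iter=2000, random_state=2),
    "ANN (3 hidden layers)": MLPRegressor((64, 64, 64), max_iter=2000,
                                          random_state=2),
}
for name, model in models.items():
    # Scale features (important for both SVR and MLP), then score by CV R^2.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```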
Project description:
Background: Predictions in pregnancy care are complex because of interactions among multiple factors. Hence, pregnancy outcomes are not easily predicted by a single predictor using only one algorithm or modeling method.
Objective: This study aims to review and compare the predictive performances between logistic regression (LR) and other machine learning algorithms for developing or validating a multivariable prognostic prediction model for pregnancy care to inform clinicians' decision making.
Methods: Research articles from MEDLINE, Scopus, Web of Science, and Google Scholar were reviewed following several guidelines for a prognostic prediction study, including a risk of bias (ROB) assessment. We report the results based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Studies were primarily framed as PICOTS (population, index, comparator, outcomes, timing, and setting): Population: men or women in procreative management, pregnant women, and fetuses or newborns; Index: multivariable prognostic prediction models using non-LR algorithms for risk classification to inform clinicians' decision making; Comparator: the models applying an LR; Outcomes: pregnancy-related outcomes of procreation or pregnancy outcomes for pregnant women and fetuses or newborns; Timing: pre-, inter-, and peripregnancy periods (predictors), at the pregnancy, delivery, and either puerperal or neonatal period (outcome), and either short- or long-term prognoses (time interval); and Setting: primary care or hospital. The results were synthesized by reporting study characteristics and ROBs and by random-effects modeling of the difference in the logit area under the receiver operating characteristic curve (AUROC) of each non-LR model compared with the LR model for the same pregnancy outcomes. We also reported between-study heterogeneity using τ² and I².
Results: Of the 2093 records, we included 142 studies in the systematic review and 62 studies in a meta-analysis. Most prediction models used LR (92/142, 64.8%); artificial neural networks (20/142, 14.1%) were the most common non-LR algorithm. Only 16.9% (24/142) of studies had a low ROB. Two non-LR algorithms from low-ROB studies significantly outperformed LR. The first was random forest, for preterm delivery (logit AUROC 2.51, 95% CI 1.49-3.53; I²=86%; τ²=0.77) and pre-eclampsia (logit AUROC 1.2, 95% CI 0.72-1.67; I²=75%; τ²=0.09). The second was gradient boosting, for cesarean section (logit AUROC 2.26, 95% CI 1.39-3.13; I²=75%; τ²=0.43) and gestational diabetes (logit AUROC 1.03, 95% CI 0.69-1.37; I²=83%; τ²=0.07).
Conclusions: The prediction models with the best performance across studies were not necessarily those that used LR; random forest and gradient boosting also performed well. We recommend a reanalysis of existing LR models for several pregnancy outcomes, comparing them with these algorithms while applying standard guidelines.
Trial registration: PROSPERO (International Prospective Register of Systematic Reviews) CRD42019136106; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=136106.
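The effect size used above, the difference of logit-transformed AUROCs pooled with random effects, can be computed directly. The sketch below shows the logit transform and the τ² and I² heterogeneity summaries on hypothetical study values; none of the numbers correspond to the included studies.

```python
# Difference of logit AUROCs pooled with a DerSimonian-Laird random-effects
# model, plus tau^2 and I^2. All study values are hypothetical.
import numpy as np

def logit(auc):
    return np.log(auc / (1.0 - auc))

# Hypothetical paired AUROCs (non-LR model vs LR) and assumed variances.
auc_ml = np.array([0.92, 0.88, 0.95])
auc_lr = np.array([0.85, 0.86, 0.80])
d = logit(auc_ml) - logit(auc_lr)        # per-study logit AUROC differences
v = np.array([0.10, 0.15, 0.08])         # assumed per-study variances

w = 1.0 / v                               # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fixed) ** 2)        # Cochran's Q
k = len(d)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
w_re = 1.0 / (v + tau2)                   # random-effects weights
pooled = np.sum(w_re * d) / np.sum(w_re)
print(f"pooled logit AUROC diff = {pooled:.2f}, tau^2 = {tau2:.2f}, I^2 = {i2:.0f}%")
```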
Project description: The goals of this study were to examine whether machine-learning algorithms outperform multivariable logistic regression in the prediction of insufficient response to methotrexate (MTX); secondly, to examine which features are essential for correct prediction; and finally, to investigate whether the best performing model specifically identifies insufficient responders to MTX (combination) therapy. The prediction of insufficient response (3-month Disease Activity Score 28-Erythrocyte Sedimentation Rate (DAS28-ESR) > 3.2) was assessed using logistic regression, least absolute shrinkage and selection operator (LASSO), random forest, and extreme gradient boosting (XGBoost). The baseline features of 355 rheumatoid arthritis (RA) patients from the "treatment in the Rotterdam Early Arthritis CoHort" (tREACH) and the U-Act-Early trial were combined for the analyses. Model performances were compared using the area under the curve (AUC) of receiver operating characteristic (ROC) curves, 95% confidence intervals (95% CI), and sensitivity and specificity. Finally, the best performing model following feature selection was tested on 101 RA patients starting tocilizumab (TCZ) monotherapy. Logistic regression (AUC = 0.77, 95% CI: 0.68-0.86) performed as well as LASSO (AUC = 0.76, 95% CI: 0.67-0.85), random forest (AUC = 0.71, 95% CI: 0.61-0.81), and XGBoost (AUC = 0.70, 95% CI: 0.61-0.81), yet logistic regression reached the highest sensitivity (81%). The most important features were the baseline DAS28 (components). For all algorithms, models with six features performed similarly to those with 16. When applied to the TCZ-monotherapy group, the sensitivity of logistic regression dropped significantly, from 83% to 69% (p = 0.03). In the current dataset, logistic regression performed as well as machine-learning algorithms in the prediction of insufficient response to MTX. Models could be reduced to six features, which is more conducive to clinical implementation. Interestingly, the prediction model was specific to MTX (combination) therapy response.
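To make the four-model comparison and the 16-to-6 feature reduction concrete, the sketch below runs an analogous pipeline on synthetic data. Note the substitutions: scikit-learn's GradientBoostingClassifier stands in for XGBoost, L1-penalised logistic regression for LASSO, and the univariate feature filter is an assumption rather than the authors' selection procedure.

```python
# Four classifiers evaluated by cross-validated ROC AUC, with the full
# 16-feature set and a reduced 6-feature set. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=355, n_features=16, n_informative=6,
                           random_state=3)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "LASSO (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear"),
    "random forest": RandomForestClassifier(random_state=3),
    "gradient boosting": GradientBoostingClassifier(random_state=3),
}
for k in (16, 6):   # full feature set vs the reduced six-feature set
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=k), model)
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{k} features, {name}: AUC = {auc:.2f}")
```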
Project description:
Objective: Computerized decision-support tools may improve the diagnosis of acute myocardial infarction (AMI) among patients presenting with chest pain at the emergency department (ED). The primary aim was to assess the predictive accuracy of machine learning algorithms based on paired high-sensitivity cardiac troponin T (hs-cTnT) concentrations with varying sampling times, age, and sex in order to rule in or rule out AMI.
Methods: In this register-based, cross-sectional diagnostic study, conducted retrospectively on 5695 chest pain patients at 2 hospitals in Sweden in 2013-2014, we used 5-fold cross-validation, repeated 200 times, to compare the performance of an artificial neural network (ANN) with the European guideline-recommended 0/1- and 0/3-hour algorithms for hs-cTnT and with logistic regression without interaction terms. The primary outcome was the size of the intermediate-risk group, in which AMI could be neither ruled in nor ruled out, while holding the sensitivity (rule-out) and specificity (rule-in) constant across models.
Results: The ANN and logistic regression had similar (95%) areas under the receiver operating characteristic curve. In patients (n = 4171) for whom the timing requirements (0/1 or 0/3 hour) for sampling were met, using the ANN led to a relative decrease of 9.2% (95% confidence interval 4.4% to 13.8%; from 24.5% to 22.2% of all tested patients) in the size of the intermediate group compared with the recommended algorithms. By contrast, using logistic regression did not substantially decrease the size of the intermediate group.
Conclusion: Machine learning algorithms allow for flexibility in sampling and have the potential to improve risk assessment among chest pain patients at the ED.
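The primary outcome above, the size of the intermediate group at fixed rule-out sensitivity and rule-in specificity, can be illustrated as follows. The network architecture, the threshold targets (99% sensitivity, 95% specificity), and the data are all assumptions, as the abstract does not state them.

```python
# Pick a rule-out cutoff that fixes sensitivity and a rule-in cutoff that
# fixes specificity, then count the patients left in between. The model and
# data are synthetic stand-ins for the troponin-based ANN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5695, n_features=4, weights=[0.85],
                           random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=4)
model = MLPClassifier((32, 32), max_iter=1000, random_state=4).fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

# Rule-out: highest cutoff that keeps sensitivity >= 99%, i.e. at most 1% of
# true positives fall below it.
lo = np.quantile(p[y_te == 1], 0.01)
# Rule-in: lowest cutoff that keeps specificity >= 95%, i.e. at least 95% of
# true negatives fall below it.
hi = np.quantile(p[y_te == 0], 0.95)
intermediate = np.mean((p >= lo) & (p < hi))
print(f"rule-out < {lo:.3f}, rule-in >= {hi:.3f}, "
      f"intermediate group = {intermediate:.1%} of patients")
```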
Project description:
Objective: To predict preterm birth in nulliparous women using logistic regression and machine learning.
Design: Population-based retrospective cohort.
Participants: Nulliparous women (N = 112,963) with a singleton gestation who gave birth between 20 and 42 weeks gestation in Ontario hospitals from April 1, 2012 to March 31, 2014.
Methods: We used data from the first and second trimesters to build logistic regression and machine learning models in a "training" sample to predict overall and spontaneous preterm birth. We assessed model performance using various measures of accuracy, including sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC), in an independent "validation" sample.
Results: During the first trimester, logistic regression identified 13 variables associated with preterm birth, of which the strongest predictors were diabetes (Type I: adjusted odds ratio (AOR): 4.21; 95% confidence interval (CI): 3.23-5.42; Type II: AOR: 2.68; 95% CI: 2.05-3.46) and abnormal pregnancy-associated plasma protein A concentration (AOR: 2.04; 95% CI: 1.80-2.30). For the first trimester, the maximum AUC was 60% (95% CI: 58-62%), obtained with artificial neural networks in the validation sample. During the second trimester, 17 variables were significantly associated with preterm birth, among which complications during pregnancy had the highest AOR (13.03; 95% CI: 12.21-13.90). For the second trimester, the AUC increased to 65% (95% CI: 63-66%) with artificial neural networks in the validation sample; including complications during the pregnancy yielded an AUC of 80% (95% CI: 79-81%). All models yielded 94-97% negative predictive values for spontaneous preterm birth during the first and second trimesters.
Conclusion: Although artificial neural networks provided slightly higher AUCs than logistic regression, prediction of preterm birth in the first trimester remained elusive. However, including data from the second trimester improved prediction to a moderate level with both logistic regression and machine learning approaches.
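The adjusted odds ratios quoted above come from a multivariable logistic regression. The sketch below shows a standard way to obtain AORs with 95% CIs via statsmodels; the predictor names and the simulated outcome are invented for illustration and have nothing to do with the Ontario cohort.

```python
# Adjusted odds ratios with 95% CIs from a multivariable logistic regression
# on simulated data. Predictor names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "diabetes_t1": rng.binomial(1, 0.02, n),
    "abnormal_papp_a": rng.binomial(1, 0.10, n),
    "maternal_age": rng.normal(30, 5, n),
})
# Simulate an outcome whose log-odds depend on two of the predictors.
logits = -3.0 + 1.4 * df["diabetes_t1"] + 0.7 * df["abnormal_papp_a"]
df["preterm"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(df[["diabetes_t1", "abnormal_papp_a", "maternal_age"]])
fit = sm.Logit(df["preterm"], X).fit(disp=False)
# Exponentiate coefficients and CI bounds to get adjusted ORs with 95% CIs.
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["AOR", "2.5%", "97.5%"]
print(ors.round(2))
```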
Project description: Shallow landslides damage buildings and other infrastructure, disrupt agricultural practices, and can cause social upheaval and loss of life. As a result, many scientists study the phenomenon, and some have focused on producing landslide susceptibility maps that land-use managers can use to reduce injury and damage. This paper contributes to this effort by comparing the power and effectiveness of five benchmark machine learning algorithms (Logistic Model Tree, Logistic Regression, Naïve Bayes Tree, Artificial Neural Network, and Support Vector Machine) in creating a reliable shallow landslide susceptibility map for Bijar City in Kurdistan province, Iran. Twenty conditioning factors were applied to 111 shallow landslides and tested using the One-R attribute evaluation (ORAE) technique for the modeling and validation processes. The performance of the models was assessed with statistical indexes including sensitivity, specificity, accuracy, mean absolute error (MAE), root mean square error (RMSE), and area under the receiver operating characteristic curve (AUC). Results indicate that all five machine learning models performed well for shallow landslide susceptibility assessment, but the Logistic Model Tree model (AUC = 0.932) had the highest goodness-of-fit and prediction accuracy, followed by the Logistic Regression (AUC = 0.932), Naïve Bayes Tree (AUC = 0.864), Artificial Neural Network (AUC = 0.860), and Support Vector Machine (AUC = 0.834) models. Therefore, we recommend the use of the Logistic Model Tree model in shallow landslide mapping programs in semi-arid regions to help decision makers, planners, land-use managers, and government agencies mitigate the hazard and risk.
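The sketch below mimics this five-model benchmark with scikit-learn. Logistic Model Tree and Naïve Bayes Tree are Weka algorithms without direct scikit-learn equivalents, so a plain decision tree and Gaussian naïve Bayes stand in for them, and the conditioning factors and landslide labels are synthetic (the balanced sample size assumes one non-landslide point per mapped landslide).

```python
# Five classifiers evaluated on a held-out split by AUC, sensitivity,
# specificity, and RMSE of predicted probabilities. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, confusion_matrix, mean_squared_error

X, y = make_classification(n_samples=222, n_features=20, random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=6)
models = {
    "decision tree (LMT stand-in)": DecisionTreeClassifier(random_state=6),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes (NBTree stand-in)": GaussianNB(),
    "ANN": MLPClassifier((32,), max_iter=2000, random_state=6),
    "SVM": SVC(probability=True, random_state=6),
}
for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, p >= 0.5).ravel()
    rmse = np.sqrt(mean_squared_error(y_te, p))
    print(f"{name}: AUC={roc_auc_score(y_te, p):.3f}, "
          f"sens={tp/(tp+fn):.2f}, spec={tn/(tn+fp):.2f}, RMSE={rmse:.3f}")
```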
Project description:
Objective: To present new machine learning-based methods for classifying knee osteoarthritis (KOA) and to compare their performance with conventional statistical methods, as machine learning classification techniques have recently been developed.
Methods: A total of 84 KOA patients and 97 normal participants were recruited. KOA patients were clustered into three groups according to the Kellgren-Lawrence (K-L) grading system. All subjects completed gait trials under the same experimental conditions. Machine learning-based classification using a support vector machine (SVM) classifier was performed to classify KOA patients and the severity of KOA. Logistic regression analysis was also performed, to compare its results in classifying KOA patients with those of the machine learning method.
Results: In the classification of KOA patients versus normal subjects, classification accuracy was higher with the machine learning method than with logistic regression analysis. In the classification of KOA severity, accuracy was enhanced through the feature selection process in the machine learning method. The most significant gait feature for classification was knee flexion and extension in the swing phase.
Conclusion: The machine learning method is thought to be a new approach that complements conventional logistic regression analysis in the classification of KOA patients. It can be used clinically for the diagnosis and gait correction of KOA patients.
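A minimal sketch of SVM classification with feature selection of the kind described above follows; the recursive-feature-elimination step and the synthetic gait features are assumptions, since the abstract does not specify the selection method, and the sample size simply matches the 84 + 97 participants.

```python
# SVM with recursive feature elimination vs plain logistic regression,
# compared by cross-validated accuracy on synthetic "gait feature" data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=181, n_features=30, n_informative=8,
                           random_state=7)
# RFE needs a linear kernel (which exposes coef_) to rank features; the final
# classifier can then use an RBF kernel on the selected subset.
svm = make_pipeline(StandardScaler(),
                    RFE(SVC(kernel="linear"), n_features_to_select=8),
                    SVC(kernel="rbf"))
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, model in [("SVM + feature selection", svm),
                    ("logistic regression", lr)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```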