Project description: Background: The management of acne requires consideration of its severity; however, a universally adopted evaluation system for clinical practice is lacking. Artificial intelligence (AI) evaluation systems offer the potential to enhance the efficiency and reproducibility of assessments in this domain. While the identification of skin lesions is a crucial component of acne evaluation, existing AI systems often overlook lesion identification or fail to integrate it with severity assessment. This study aimed to develop an AI-powered acne grading system and compare its performance with physicians' image-based scoring. Methods: A total of 1,501 acne patients were included in the study, and standardized photographs were obtained using the VISIA system. In the initial evaluation, 40 frontal photographs selected by stratified sampling were assessed by seven dermatologists. The three dermatologists with the highest inter-rater agreement then annotated the remaining 1,461 images, which served as the dataset for developing the AI system. The dataset was randomly divided into two groups: 276 images were allocated for training the acne lesion identification platform, and 1,185 images were used to assess acne severity. Results: The average precision of our model for skin lesion identification was 0.507, and the average recall was 0.775. The AI severity grading system achieved good agreement with the true labels (linear weighted kappa = 0.652). After integrating the lesion identification results into the severity assessment with fixed weights and learnable weights, the kappa rose to 0.737 and 0.696, respectively, and the entire evaluation on a Linux workstation with a Tesla K40m GPU took less than 0.1 s per image. Conclusion: This study developed a system that detects various types of acne lesions and correlates them well with acne severity grading; its accuracy and efficiency make this approach a potentially effective clinical decision support tool.
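The linear weighted kappa used above to quantify agreement between AI-predicted and dermatologist-assigned severity grades can be computed with scikit-learn; the sketch below is purely illustrative, and the grade values are hypothetical placeholders rather than study data.

```python
# Illustrative sketch: linearly weighted Cohen's kappa between AI-predicted
# acne severity grades and dermatologist consensus labels.
# The grade values below are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

dermatologist_grades = [1, 2, 2, 3, 4, 1, 3, 2]   # consensus "true" labels
ai_grades            = [1, 2, 3, 3, 4, 1, 2, 2]   # AI severity predictions

kappa = cohen_kappa_score(dermatologist_grades, ai_grades, weights="linear")
print(f"Linear weighted kappa: {kappa:.3f}")
```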
Project description: Cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) plays a pivotal role in preventing autoimmunity and fostering anticancer immunity by interacting with the B7 proteins CD80 and CD86. CTLA-4 was the first immune checkpoint targeted with a monoclonal antibody inhibitor. Checkpoint inhibitors have generated durable responses in many cancer patients, representing a revolutionary milestone in cancer immunotherapy. However, therapeutic efficacy is limited to a small proportion of patients, and immune-related adverse events are notable, especially for monoclonal antibodies directed against CTLA-4. Small molecules have previously been developed to impair the CTLA-4/CD80 interaction; however, they directly targeted CD80 rather than CTLA-4. In this study, we performed artificial intelligence (AI)-powered virtual screening of approximately ten million compounds to target CTLA-4. We validated the primary hits with biochemical, biophysical, immunological, and experimental animal assays. We then optimized lead compounds and obtained inhibitors with an inhibitory concentration of 1 micromolar for disrupting the interaction between CTLA-4 and CD80. Unlike ipilimumab, these small molecules did not degrade CTLA-4. Several compounds inhibited tumor development both prophylactically and therapeutically in syngeneic and CTLA-4-humanized mice. This project supports an AI-based framework for designing small molecules targeting immune checkpoints for cancer therapy.
Project description: Background: Gleason grading of prostate cancer is an important prognostic factor but suffers from poor reproducibility, particularly among non-subspecialist pathologists. Although artificial intelligence (A.I.) tools have demonstrated Gleason grading on par with expert pathologists, it remains an open question whether, and to what extent, A.I. grading translates into better prognostication. Methods: In this study, we developed a system to predict prostate cancer-specific mortality via A.I.-based Gleason grading and subsequently evaluated its ability to risk-stratify patients on an independent retrospective cohort of 2,807 prostatectomy cases from a single European center with 5-25 years of follow-up (median: 13, interquartile range: 9-17). Results: Here, we show that the A.I.'s risk scores produced a C-index of 0.84 (95% CI 0.80-0.87) for prostate cancer-specific mortality. Upon discretizing these risk scores into risk groups analogous to pathologist Grade Groups (GG), the A.I. achieved a C-index of 0.82 (95% CI 0.78-0.85). On the subset of cases with a GG provided in the original pathology report (n = 1,517), the A.I.'s C-indices were 0.87 and 0.85 for continuous and discrete grading, respectively, compared to 0.79 (95% CI 0.71-0.86) for GG obtained from the reports. These represent improvements of 0.08 (95% CI 0.01-0.15) and 0.07 (95% CI 0.00-0.14), respectively. Conclusions: Our results suggest that A.I.-based Gleason grading can lead to effective risk stratification and warrants further evaluation for improving disease management.
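A concordance index (C-index) like the one reported above can be computed, for example, with the lifelines package; the sketch below uses made-up follow-up times, events, and risk scores, and negates the risk score because lifelines expects higher predicted values to indicate longer survival.

```python
# Illustrative sketch: Harrell's C-index for a continuous risk score against
# cancer-specific mortality. All values below are hypothetical placeholders.
from lifelines.utils import concordance_index

followup_years = [12.0, 5.5, 20.0, 9.0, 15.5]   # time to death or censoring
event_observed = [0, 1, 0, 1, 0]                # 1 = cancer-specific death
risk_score     = [0.2, 0.9, 0.1, 0.7, 0.3]      # model output (higher = riskier)

# Negate the risk score so that higher values correspond to longer survival,
# as lifelines' concordance_index assumes.
c_index = concordance_index(followup_years, [-r for r in risk_score], event_observed)
print(f"C-index: {c_index:.2f}")
```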
Project description: Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to accelerate medical imaging innovations, but their impact is hindered by a lack of reproducibility and independent validation. With this in mind, we organized the PANDA challenge, the largest histopathology competition to date with 1,290 participating developers, to catalyze the development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted κ; 95% confidence interval (CI) 0.840-0.884) and 0.868 (95% CI 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories, and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
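Agreement with uropathologists is reported above as a quadratically weighted kappa with a 95% CI; a common way to obtain such an interval is bootstrapping, sketched below with scikit-learn and NumPy on placeholder grade labels (the resampling scheme and sample size are assumptions, not the challenge's protocol).

```python
# Illustrative sketch: quadratically weighted kappa with a bootstrap 95% CI.
# The grade labels below are hypothetical placeholders, not PANDA data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reference = np.array([0, 1, 2, 3, 4, 5, 2, 1, 3, 0])   # expert uropathologist grades
predicted = np.array([0, 1, 2, 4, 4, 5, 2, 2, 3, 0])   # algorithm grades

kappas = []
for _ in range(2000):
    idx = rng.integers(0, len(reference), len(reference))  # resample with replacement
    kappas.append(cohen_kappa_score(reference[idx], predicted[idx], weights="quadratic"))

point = cohen_kappa_score(reference, predicted, weights="quadratic")
lo, hi = np.percentile(kappas, [2.5, 97.5])
print(f"Quadratic weighted kappa: {point:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```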
Project description: Background: Colposcopy diagnosis and directed biopsy are key components of cervical cancer screening programs. However, their performance is limited by the requirement for experienced colposcopists. This study aimed to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for grading colposcopic impressions and guiding biopsies. Methods: Anonymized digital records of 19,435 patients were obtained from six hospitals across China. These records included colposcopic images, clinical information, and pathological results (gold standard). The data were randomly assigned (7:1:2) to a training set and a tuning set for developing CAIADS and to a validation set for evaluating performance. Results: The agreement between CAIADS-graded colposcopic impressions and pathology findings was higher than that between colposcopist-interpreted impressions and pathology findings (82.2% versus 65.9%; kappa 0.750 versus 0.516; p < 0.001). For detecting pathological high-grade squamous intraepithelial lesions or worse (HSIL+), CAIADS showed higher sensitivity than colposcopist interpretation at either biopsy threshold (low-grade or worse: 90.5%, 95% CI 88.9-91.4% versus 83.5%, 81.5-85.3%; high-grade or worse: 71.9%, 69.5-74.2% versus 60.4%, 57.9-62.9%; all p < 0.001), whereas the specificities were similar (low-grade or worse: 51.8%, 49.8-53.8% versus 52.0%, 50.0-54.1%; high-grade or worse: 93.9%, 92.9-94.9% versus 94.9%, 93.9-95.7%; all p > 0.05). CAIADS also demonstrated a superior ability to predict biopsy sites, with a median mean intersection-over-union (mIoU) of 0.758. Conclusions: CAIADS has potential to assist beginners and to improve the diagnostic quality of colposcopy and biopsy in the detection of cervical precancer and cancer.
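The mIoU used above to score predicted biopsy sites measures overlap between predicted and annotated regions; a minimal sketch of the per-image IoU for binary masks is shown below (the mask contents are placeholders, and averaging over images and taking the median across readers would follow the study's own protocol).

```python
# Illustrative sketch: intersection-over-union between a predicted biopsy-site
# mask and an annotated mask; both arrays below are hypothetical placeholders.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU of two boolean masks of identical shape."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection) / float(union) if union else 0.0

pred = np.zeros((100, 100), dtype=bool); pred[20:60, 30:70] = True
true = np.zeros((100, 100), dtype=bool); true[25:65, 35:75] = True
print(f"IoU: {iou(pred, true):.3f}")
```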
Project description: Background: Assessment of spine alignment is crucial in the management of scoliosis, but current automated analysis of spine alignment suffers from low accuracy. We aimed to develop and validate a hybrid model named SpineHRNet+, which integrates artificial intelligence (AI) and rule-based methods to improve the reliability and interpretability of automated alignment analysis. Methods: From December 2019 to November 2020, 1,542 consecutive patients with scoliosis attending two local scoliosis clinics (The Duchess of Kent Children's Hospital at Sandy Bay in Hong Kong; Queen Mary Hospital in Pok Fu Lam on Hong Kong Island) were recruited. Biplanar radiographs of each patient were collected with the EOS™ imaging system. The collected radiographs were recaptured using smartphones or screenshots, and the deidentified images were securely stored. Landmarks and alignment parameters manually labelled by a spine surgeon were considered the ground truth (GT). The data were split 8:2 to train and internally test SpineHRNet+, respectively. This was followed by prospective validation on another 337 patients. Quantitative analyses of landmark predictions were conducted, and the reliability of the automated alignment analysis was assessed using linear regression and Bland-Altman plots. Deformity severity and sagittal abnormality classifications were evaluated with confusion matrices. Findings: SpineHRNet+ achieved accurate landmark detection, with mean Euclidean distance errors of 2·78 and 5·52 pixels on posteroanterior and lateral radiographs, respectively. The mean angle errors between predictions and GT were 3·18° coronally and 6·32° sagittally. All predicted alignments were strongly correlated with GT (p < 0·001, R2 > 0·97), with minimal overall differences visualized via Bland-Altman plots. For curve detection, 95·7% sensitivity and 88·1% specificity were achieved, and for severity classification, 88·6-90·8% sensitivity was obtained. For sagittal abnormalities, specificity and sensitivity greater than 85·2-88·9% were achieved. Interpretation: The automated analysis provided by SpineHRNet+ was reliable and continuous, and it may assist clinical work and facilitate large-scale clinical studies. Funding: RGC Research Impact Fund (R5017-18F), Innovation and Technology Fund (ITS/404/18), and the AOSpine East Asia Fund (AOSEA(R)2019-06).
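Landmark accuracy above is summarized as a mean Euclidean distance in pixels between predicted and manually labelled points; a minimal sketch of that computation on placeholder coordinates (not the study's landmark definitions) follows.

```python
# Illustrative sketch: mean Euclidean distance error (in pixels) between
# predicted and ground-truth vertebral landmarks. Coordinates are placeholders.
import numpy as np

predicted_landmarks = np.array([[120.0, 240.0], [130.5, 310.2], [141.0, 378.9]])
ground_truth        = np.array([[118.0, 242.5], [133.0, 308.0], [139.5, 381.0]])

errors = np.linalg.norm(predicted_landmarks - ground_truth, axis=1)
print(f"Mean Euclidean distance error: {errors.mean():.2f} px")
```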
Project description: Prostate cancer treatment strategies are guided by risk stratification, which can be difficult in some patients with known comorbidities. New models are needed to guide strategies and to determine which patients are at risk of prostate cancer mortality. This article presents a gradient-boosting model that predicts the risk of prostate cancer mortality within 10 years of a cancer diagnosis and provides an interpretable prediction. This work used prospective data from the PLCO Cancer Screening Trial, selecting patients who were diagnosed with prostate cancer; during follow-up, 8,776 patients received such a diagnosis. The dataset was randomly split into a training set (n = 7,021) and a testing set (n = 1,755). Accuracy was 0.98 (±0.01), and the area under the receiver operating characteristic curve was 0.80 (±0.04). This model can be used to support informed decision-making in prostate cancer treatment, and its interpretability provides users with a novel understanding of the predictions.
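A gradient-boosting classifier of this kind can be set up with scikit-learn; the sketch below mirrors the described train/test split and the accuracy and AUC evaluation, but uses synthetic data, default hyperparameters, and an assumed class imbalance rather than the article's actual features or settings.

```python
# Illustrative sketch: gradient-boosting classifier for 10-year mortality risk,
# evaluated with accuracy and ROC AUC on a held-out split. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=8776, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]   # predicted probability of mortality

print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
print(f"ROC AUC:  {roc_auc_score(y_test, proba):.2f}")
```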
Project description: Aim: In countries where access to mammography equipment and skilled personnel is limited, most breast cancer (BC) cases are detected at locally advanced stages. Infrared breast thermography is recognized as an adjunctive technique for the detection of BC owing to advantages such as safety (it neither emits ionizing radiation nor applies any stress to the breast), portability, and low cost. Enhanced by advanced computational analysis techniques, infrared thermography could be a valuable complementary screening technique for detecting BC at early stages. In this work, infrared-artificial intelligence (AI) software was developed and evaluated to help physicians identify potential BC cases. Methods: Several AI algorithms were developed and evaluated, trained on a proprietary database of 2,700 patients with BC cases confirmed by mammography, ultrasound, and biopsy. Following evaluation of the algorithms, the best-performing one (the infrared-AI software) underwent clinical validation, in which its ability to detect BC was compared with mammography evaluations in a double-blind test. Results: The infrared-AI software achieved 94.87% sensitivity, 72.26% specificity, a 30.08% positive predictive value (PPV), and a 99.12% negative predictive value (NPV), whereas the reference mammography evaluation reached 100% sensitivity, 97.10% specificity, 81.25% PPV, and 100% NPV. Conclusions: The infrared-AI software developed here shows high BC sensitivity (94.87%) and a high NPV (99.12%). It is therefore proposed as a complementary screening tool for BC.
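The four performance values reported above follow directly from a 2x2 confusion matrix; a minimal sketch is given below, with placeholder counts rather than the study's actual results.

```python
# Illustrative sketch: sensitivity, specificity, PPV and NPV from a 2x2
# confusion matrix. The counts below are hypothetical placeholders.
tp, fn = 37, 2      # biopsy-confirmed cancers flagged / missed by the software
tn, fp = 620, 238   # non-cancer cases correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"Sensitivity {sensitivity:.2%}, Specificity {specificity:.2%}, "
      f"PPV {ppv:.2%}, NPV {npv:.2%}")
```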
Project description: Background/objectives: Checkpoint inhibitors, which generate durable responses in many cancer patients, have revolutionized cancer immunotherapy. However, their therapeutic efficacy is limited, and immune-related adverse events can be severe, especially for monoclonal antibody treatment directed against cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), which plays a pivotal role in preventing autoimmunity and fostering anticancer immunity by interacting with the B7 proteins CD80 and CD86. Small molecules impairing the CTLA-4/CD80 interaction have been developed; however, they directly target CD80, not CTLA-4. Subjects/methods: In this study, we performed artificial intelligence (AI)-powered virtual screening of approximately ten million compounds to identify those targeting CTLA-4. We validated the hit molecules with biochemical, biophysical, immunological, and experimental animal assays. Results: The primary hits obtained from the virtual screening were successfully validated in vitro and in vivo. We then optimized lead compounds and obtained inhibitors (inhibitory concentration of 1 micromolar) that disrupted the CTLA-4/CD80 interaction without degrading CTLA-4. Conclusions: Several compounds inhibited tumor development both prophylactically and therapeutically in syngeneic and CTLA-4-humanized mice. Our findings support the use of AI-based frameworks to design small molecules targeting immune checkpoints for cancer therapy.