Project description: Cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) plays a pivotal role in preventing autoimmunity and fostering anticancer immunity by interacting with the B7 proteins CD80 and CD86. CTLA-4 was the first immune checkpoint to be targeted with a monoclonal antibody inhibitor. Checkpoint inhibitors have generated durable responses in many cancer patients, representing a revolutionary milestone in cancer immunotherapy. However, therapeutic efficacy is limited to a small proportion of patients, and immune-related adverse events are noteworthy, especially for monoclonal antibodies directed against CTLA-4. Small molecules have previously been developed to impair the CTLA-4:CD80 interaction; however, they directly targeted CD80 rather than CTLA-4. In this study, we performed artificial intelligence (AI)-powered virtual screening of approximately ten million compounds to target CTLA-4. We validated primary hits with biochemical, biophysical, immunological, and experimental animal assays. We then optimized lead compounds and obtained inhibitors with an inhibitory concentration of approximately 1 micromolar for disrupting the interaction between CTLA-4 and CD80. Unlike ipilimumab, these small molecules did not degrade CTLA-4. Several compounds inhibited tumor development both prophylactically and therapeutically in syngeneic and CTLA-4-humanized mice. This project supports an AI-based framework for designing small molecules targeting immune checkpoints for cancer therapy.
Project description: Background: Colposcopy diagnosis and directed biopsy are key components of cervical cancer screening programs, but their performance is limited by the requirement for experienced colposcopists. This study aimed to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for grading colposcopic impressions and guiding biopsies. Methods: Anonymized digital records of 19,435 patients were obtained from six hospitals across China. These records included colposcopic images, clinical information, and pathological results (the gold standard). The data were randomly assigned (7:1:2) to a training set and a tuning set for developing CAIADS, and to a validation set for evaluating performance. Results: The agreement between CAIADS-graded colposcopic impressions and pathology findings was higher than that of colposcopies interpreted by colposcopists (82.2% versus 65.9%; kappa 0.750 versus 0.516; p < 0.001). For detecting pathological high-grade squamous intraepithelial lesion or worse (HSIL+), CAIADS showed higher sensitivity than colposcopists at either biopsy threshold (low-grade or worse: 90.5%, 95% CI 88.9-91.4% versus 83.5%, 81.5-85.3%; high-grade or worse: 71.9%, 69.5-74.2% versus 60.4%, 57.9-62.9%; all p < 0.001), whereas the specificities were similar (low-grade or worse: 51.8%, 49.8-53.8% versus 52.0%, 50.0-54.1%; high-grade or worse: 93.9%, 92.9-94.9% versus 94.9%, 93.9-95.7%; all p > 0.05). CAIADS also demonstrated a superior ability to predict biopsy sites, with a median mean intersection-over-union (mIoU) of 0.758. Conclusions: CAIADS has potential for assisting beginners and for improving the diagnostic quality of colposcopy and biopsy in the detection of cervical precancer and cancer.
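The headline agreement statistics above (raw agreement plus Cohen's kappa, which corrects for chance agreement) can be reproduced from paired grade labels. A minimal pure-Python sketch, using hypothetical labels rather than the study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters give the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters' label marginals
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: AI-graded impressions vs pathology (hypothetical labels)
ai    = ["HSIL", "LSIL", "HSIL", "normal", "LSIL", "HSIL"]
truth = ["HSIL", "LSIL", "LSIL", "normal", "LSIL", "HSIL"]
kappa = cohens_kappa(ai, truth)
```

By convention, kappa near 0.75 (as reported for CAIADS) indicates substantial agreement, whereas 0.5 is only moderate.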
Project description: Background: Assessment of spine alignment is crucial in the management of scoliosis, but current automated analysis of spine alignment suffers from low accuracy. We aimed to develop and validate a hybrid model named SpineHRNet+, which integrates artificial intelligence (AI) and rule-based methods to improve the reliability and interpretability of automated alignment analysis. Methods: From December 2019 to November 2020, 1,542 consecutive patients with scoliosis attending two local scoliosis clinics (The Duchess of Kent Children's Hospital at Sandy Bay in Hong Kong; Queen Mary Hospital in Pok Fu Lam on Hong Kong Island) were recruited. Biplanar radiographs of each patient were collected with the EOS™ imaging system. The collected radiographs were recaptured using smartphones or screenshots, and the deidentified images were securely stored. Landmarks and alignment parameters manually labelled by a spine surgeon were considered the ground truth (GT). The data were split 8:2 to train and internally test SpineHRNet+, respectively, followed by prospective validation on another 337 patients. Quantitative analyses of landmark predictions were conducted, and the reliability of the automated alignment analysis was assessed using linear regression and Bland-Altman plots. Deformity severity and sagittal abnormality classifications were evaluated with confusion matrices. Findings: SpineHRNet+ achieved accurate landmark detection, with mean Euclidean distance errors of 2·78 and 5·52 pixels on posteroanterior and lateral radiographs, respectively. The mean angle errors between predictions and GT were 3·18° coronally and 6·32° sagittally. All predicted alignments were strongly correlated with GT (p < 0·001, R² > 0·97), with minimal overall difference visualised via Bland-Altman plots. For curve detection, 95·7% sensitivity and 88·1% specificity were achieved, and for severity classification, 88·6-90·8% sensitivity was obtained.
For sagittal abnormalities, specificities and sensitivities greater than 85·2-88·9% were achieved. Interpretation: The automated analysis provided by SpineHRNet+ was reliable and continuous, and it may assist clinical work and facilitate large-scale clinical studies. Funding: RGC Research Impact Fund (R5017-18F), Innovation and Technology Fund (ITS/404/18), and the AOSpine East Asia Fund (AOSEA(R)2019-06).
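The two evaluation measures above, mean Euclidean landmark error and Bland-Altman agreement, are straightforward to compute. A minimal sketch with the Python standard library (the landmark coordinates here are illustrative, not the study's):

```python
import math
import statistics

def mean_landmark_error(pred, truth):
    """Mean Euclidean distance (e.g. pixels) between paired (x, y) landmarks."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.

    Returns (mean difference, (lower limit, upper limit)); the limits are
    bias ± 1.96 × SD of the paired differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical predicted vs ground-truth landmarks on one radiograph
err = mean_landmark_error([(10, 12), (33, 44)], [(10, 12), (30, 40)])
```

In a Bland-Altman plot, each patient's predicted-minus-GT angle is plotted against the pair's mean; "minimal overall difference" means the bias is near zero and the limits of agreement are narrow.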
Project description: Prostate cancer treatment strategies are guided by risk stratification, which can be difficult in some patients with known comorbidities. New models are needed to guide strategies and to determine which patients are at risk of prostate cancer mortality. This article presents a gradient-boosting model that predicts the risk of prostate cancer mortality within 10 years of diagnosis and provides an interpretable prediction. This work uses prospective data from the PLCO Cancer Screening Trial, from which patients diagnosed with prostate cancer were selected. During follow-up, 8,776 patients were diagnosed with prostate cancer. The dataset was randomly split into a training set (n = 7,021) and a testing set (n = 1,755). Accuracy was 0.98 (±0.01), and the area under the receiver operating characteristic curve was 0.80 (±0.04). This model can be used to support informed decision-making in prostate cancer treatment, and its interpretability provides users with a novel understanding of the predictions.
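The overall design, an ~80/20 random split, a gradient-boosting classifier, and AUC evaluation, can be sketched with scikit-learn. Everything below is illustrative: the synthetic features and labels stand in for the PLCO cohort, and the model settings are assumptions, not the published configuration:

```python
# Illustrative sketch only: synthetic data stands in for the PLCO cohort.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))       # stand-ins for clinical features
risk = X[:, 0] + 0.5 * X[:, 1]       # synthetic risk signal
y = (risk + rng.normal(size=2000) > 1.0).astype(int)  # 10-year mortality label

# Random train/test split, mirroring the paper's ~80/20 design
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

The interpretability the abstract refers to would sit on top of such a model (e.g. per-feature attribution of each prediction); the sketch covers only the risk-prediction core.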
Project description: Importance: For prostate cancer, Gleason grading of the biopsy specimen plays a pivotal role in determining case management. However, Gleason grading is associated with substantial interobserver variability, creating a need for decision-support tools to improve its reproducibility in routine clinical practice. Objective: To evaluate the ability of a deep learning system (DLS) to grade diagnostic prostate biopsy specimens. Design, setting, and participants: The DLS was evaluated using 752 deidentified digitized images of formalin-fixed, paraffin-embedded prostate needle core biopsy specimens obtained from 3 institutions in the United States, including 1 institution not used for DLS development. To obtain the Gleason grade group (GG), each specimen was first reviewed by 2 expert urologic subspecialists from a multi-institutional panel of 6 individuals (years of experience: mean, 25 years; range, 18-34 years). A third subspecialist reviewed discordant cases to arrive at a majority opinion. To reduce diagnostic uncertainty, all subspecialists had access to an immunohistochemically stained section and 3 histologic sections for every biopsy specimen. Their review was conducted from December 2018 to June 2019. Main outcomes and measures: The frequency of exact agreement of the DLS with the majority opinion of the subspecialists in categorizing each tumor-containing specimen as 1 of 5 categories: nontumor, GG1, GG2, GG3, or GG4-5. For comparison, the rate of agreement of 19 general pathologists' opinions with the subspecialists' majority opinions was also evaluated. Results: For grading tumor-containing biopsy specimens in the validation set (n = 498), the rate of agreement with subspecialists was significantly higher for the DLS (71.7%; 95% CI, 67.9%-75.3%) than for general pathologists (58.0%; 95% CI, 54.5%-61.4%) (P < .001).
In subanalyses of biopsy specimens from an external validation set (n = 322), the Gleason grading performance of the DLS remained similar. For distinguishing nontumor from tumor-containing biopsy specimens (n = 752), the rate of agreement with subspecialists was 94.3% (95% CI, 92.4%-95.9%) for the DLS, similar to the 94.7% (95% CI, 92.8%-96.3%) for general pathologists (P = .58). Conclusions and relevance: In this study, the DLS showed higher proficiency than general pathologists at Gleason grading prostate needle core biopsy specimens, and its performance generalized to an independent institution. Future research is necessary to evaluate the potential utility of the DLS as a decision-support tool in clinical workflows and to improve the quality of prostate cancer grading for therapy decisions.
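The 95% confidence intervals quoted alongside each agreement rate are binomial proportion intervals. The paper does not state which interval method was used, so this is an illustrative sketch using the Wilson score interval, a common choice:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion,
    e.g. the rate of exact agreement with the subspecialist majority."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

For example, 357 agreements out of 498 graded specimens (≈71.7%) yields an interval close to the 67.9%-75.3% reported above, though exact (Clopper-Pearson) intervals would differ slightly.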
Project description: Background: Wearables and artificial intelligence (AI)-powered digital health platforms that utilize machine learning algorithms can autonomously measure changes in a senior's activity and behavior and may be useful tools for proactive interventions that target modifiable risk factors. Objective: The goal of this study was to analyze how a wearable device and an AI-powered digital health platform could improve health outcomes for older adults in assisted living communities. Methods: Data from 490 residents of six assisted living communities were analyzed retrospectively over 24 months. The intervention group (+CP) consisted of 3 communities that utilized CarePredict (n=256), and the control group (-CP) consisted of 3 communities that did not (n=234). The following outcomes were measured and compared with baseline: hospitalization rate, fall rate, length of stay (LOS), and staff response time. Results: Residents of the +CP and -CP communities exhibited no statistical difference in age (P=.64), sex (P=.63), or staff service hours per resident (P=.94). The +CP communities exhibited a 39% lower hospitalization rate (P=.02), a 69% lower fall rate (P=.01), and a 67% greater LOS (P=.03) than the -CP communities. Staff alert-acknowledgement and reach-resident times also improved in the +CP communities, by 37% (P=.02) and 40% (P=.02), respectively. Conclusions: The AI-powered digital health platform provides community staff with actionable information about each resident's activities and behavior, which can be used to identify older adults at increased risk of health decline. Staff can use these data to intervene much earlier, protecting seniors from conditions that, left untreated, could result in hospitalization. In summary, wearables and an AI-powered digital health platform can contribute to improved health outcomes for seniors in assisted living communities.
The accuracy of the system will be further validated in a larger trial.
Project description: Precision medicine is one of the most recent and powerful developments in medical care. It has the potential to improve the traditional symptom-driven practice of medicine, allowing earlier interventions through advanced diagnostics and better, more economical personalized treatments. Identifying the best pathway to personalized and population medicine requires the ability to analyze comprehensive patient information together with broader contextual data, to monitor and distinguish between sick and relatively healthy people, and thereby to better understand the biological indicators that signal shifts in health. While the complexity of disease at the individual level has made it difficult to use healthcare information in clinical decision-making, technological advances have greatly reduced some of the existing constraints. To implement effective precision medicine that can positively affect patient outcomes and provide real-time decision support, it is important to harness the power of electronic health records by integrating disparate data sources and discovering patient-specific patterns of disease progression. Suitable analytic tools, technologies, databases, and approaches are required to improve the networking and interoperability of clinical, laboratory, and public health systems, and to address, with an effective balance, the ethical and social issues related to the privacy and protection of healthcare data. Developing multifunctional machine learning platforms for clinical data extraction, aggregation, management, and analysis can support clinicians by efficiently stratifying subjects to understand specific scenarios and optimize decision-making. Implementing artificial intelligence in healthcare is a compelling vision with the potential to deliver significant improvements toward the goals of real-time, better personalized and population medicine at lower cost. In this study, we analyzed and discussed various published artificial intelligence and machine learning solutions, approaches, and perspectives, aiming to advance academic work that paves the way for a new data-centric era of discovery in healthcare.
Project description: PURPOSE: To establish and validate a universal artificial intelligence (AI) platform for the collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage. METHODS: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract, or postoperative eye; and (3) detection of referable cataracts with respect to aetiology and severity. Moreover, we integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare, and specialised hospital services. RESULTS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in the three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%-99.71%); (2) cataract diagnosis (normal lens, cataract, or postoperative eye, with AUCs of 99.82%, 99.96%, and 99.93% for mydriatic slit-lamp mode and AUCs >99% for the other capture modes); and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be 'referred', substantially increasing the ophthalmologist-to-population service ratio by 10.2-fold compared with the traditional pattern. CONCLUSIONS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. Our AI-based medical referral pattern will be extended to other common disease conditions and resource-intensive situations.
Project description: In recent years, widespread use of the prostate-specific antigen (PSA) blood test to triage patients entering the diagnostic/therapeutic pathway for prostate cancer (PCa) has almost halved PCa-specific mortality. As a counterpart, millions of men with clinically insignificant cancer not destined to cause death are treated, with no beneficial impact on overall survival. There is therefore a compelling need for tools that can help stratify patients according to their risk, to support physicians in selecting the most appropriate treatment option for each individual patient. The aim of this study was to develop and validate, on multivendor data, a fully automated computer-aided diagnosis (CAD) system to detect and characterize PCa according to aggressiveness. We propose a CAD system based on artificial intelligence algorithms that (a) registers all images from the different MRI sequences, (b) identifies candidate regions suspicious for tumor, and (c) assigns each candidate an aggressiveness score based on the output of a support vector machine classifier fed with radiomics features. The dataset comprised 131 patients (149 tumors) from two institutions and was divided into a training set, a narrow validation set, and an external validation set. The algorithm reached an area under the receiver operating characteristic (ROC) curve of 0.96 and 0.81 in distinguishing low- from high-aggressiveness tumors on the training and validation sets, respectively. Moreover, when the output of the classifier was divided into three risk classes, i.e., indolent, indeterminate, and aggressive, our method did not classify any aggressive tumor as indolent, meaning that, according to our score, all aggressive tumors would undergo treatment or further investigation.
Our CAD system's performance is superior to that of previous studies and overcomes some of their limitations, such as the need for manual segmentation of the tumor or analyses limited to single-center datasets. The results of this study are promising and could pave the way toward a prediction tool for personalized decision-making in patients harboring PCa.
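The final stage described above, an SVM on radiomics features whose continuous score is split into three risk bands, can be sketched as follows. The features, labels, and band thresholds here are synthetic illustrations and are not the published pipeline:

```python
# Sketch of the classification stage only; data and thresholds are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                 # stand-in radiomics feature vectors
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # 1 = high aggressiveness

# Standardize features, then fit an SVM with probability outputs
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
clf.fit(X, y)

def risk_band(features, lo=0.3, hi=0.7):
    """Map the SVM's P(aggressive) into indolent / indeterminate / aggressive
    using hypothetical probability cut-offs lo and hi."""
    p = clf.predict_proba(np.asarray(features, dtype=float).reshape(1, -1))[0, 1]
    if p < lo:
        return "indolent"
    return "aggressive" if p > hi else "indeterminate"
```

The intermediate "indeterminate" band is what allows the system to avoid the worst error, calling an aggressive tumor indolent: borderline scores are routed to further investigation instead of being forced into either extreme.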