Project description: Antibiotic resistance is a worldwide public health problem because of the costs and mortality it generates. However, large pharmaceutical companies have stopped searching for new antibiotics because of their low profitability, given the rapid replacement rates imposed by the increasingly common resistance acquired by microorganisms. Alternatively, antimicrobial peptides (AMPs) have emerged as potent molecules with a much lower rate of resistance generation. These peptides are discovered through extensive in vitro screenings of either rational or non-rational libraries. Such processes are tedious and expensive and yield only a few AMP candidates, most of which fail to show the activity and physicochemical properties required for practical applications. This work proposes an artificial intelligence algorithm to reduce the required experimentation and increase the efficiency of high-activity AMP discovery. Our deep learning (DL) model, AMPs-Net, outperforms the state-of-the-art method by 8.8% in average precision and accurately predicts the antibacterial and antiviral capacity of a large number of AMPs. Our search identified two unreported antimicrobial motifs and two novel antimicrobial peptides related to them. Moreover, by coupling DL with molecular dynamics (MD) simulations, we found a multifunctional peptide with promising therapeutic effects. Our work validates our previously proposed pipeline for a more efficient rational discovery of novel AMPs.
Project description: Calls for "ethical Artificial Intelligence" are legion, with a recent proliferation of government and industry guidelines attempting to establish ethical rules and boundaries for this new technology. With few exceptions, they interpret Artificial Intelligence (AI) ethics narrowly in a liberal political framework of privacy concerns, transparency, governance and non-discrimination. One of the main hurdles to establishing "ethical AI" remains how to operationalize high-level principles such that they translate to technology design, development and use in the labor process. This is because organizations can end up interpreting ethics in an ad-hoc way with no oversight, treating ethics as simply another technological problem with technological solutions, and regulations have been largely detached from the issues AI presents for workers. There is a distinct lack of supra-national standards for fair, decent, or just AI in contexts where people depend on and work in tandem with it. Topics such as discrimination and bias in job allocation, surveillance and control in the labor process, and quantification of work have received significant attention, yet questions around AI and job quality and working conditions have not. This has left workers exposed to potential risks and harms of AI. In this paper, we provide a critique of relevant academic literature and policies related to AI ethics. We then identify a set of principles that could facilitate fairer working conditions with AI. As part of a broader research initiative with the Global Partnership on Artificial Intelligence, we propose a set of accountability mechanisms to ensure AI systems foster fairer working conditions. Such processes are aimed at reshaping the social impact of technology from the point of inception to set a research agenda for the future.
As such, the key contribution of the paper is how to bridge from abstract ethical principles to operationalizable processes in the vast field of AI and new technology at work.
Project description: Background: Atherosclerotic cardiovascular disease (ASCVD) is the leading cause of death worldwide, driven primarily by coronary artery disease (CAD). ASCVD risk estimators such as the pooled cohort equations (PCE) facilitate risk stratification and primary prevention of ASCVD, but their accuracy is still suboptimal. Methods: Using deep electronic health record (EHR) data from 7,116,209 patients seen at 70+ hospitals and clinics across 5 states in the USA, we developed an artificial intelligence-based electrocardiogram analysis tool (ECG-AI) to detect CAD and assessed the additive value of ECG-AI-based ASCVD risk stratification to the PCE. We created independent ECG-AI models using separate neural networks, trained on subjects without a known history of ASCVD, to identify a coronary artery calcium (CAC) score ≥300 Agatston units by computed tomography, obstructive CAD by angiography or procedural intervention, and regional left ventricular akinesis in ≥1 segment by echocardiogram, as a reflection of possible prior myocardial infarction (MI). These models were used to assess the utility of ECG-AI-based ASCVD risk stratification in a retrospective observational study of patients with PCE scores and no prior ASCVD. The study period covered all available digitized EHR data, with the first available ECG in 1987 and the last in February 2023. Findings: ECG-AI for identifying CAC ≥300, obstructive CAD, and regional akinesis achieved area under the receiver operating characteristic curve (AUROC) values of 0.88, 0.85, and 0.94, respectively. An ensembled ECG-AI identified 3-, 5-, and 10-year risk for acute coronary events and mortality independently of and additively to the PCE. Hazard ratios for acute coronary events over 3 years in patients without ASCVD who tested positive on 1, 2, or 3 versus 0 disease-specific ECG-AI models at cohort entry were 2.41 (2.14-2.71), 4.23 (3.74-4.78), and 11.75 (10.2-13.52), respectively.
Similar stratification was observed in cohorts stratified by PCE or age. Interpretation: ECG-AI has the potential to address the unmet need for accessible risk stratification in patients in whom the PCE under-, over-, or insufficiently estimates ASCVD risk, and in whom risk assessment over time periods shorter than 10 years is desired. Funding: Anumana.
Project description: Wilson's disease (WD) is caused by ATP7B variants disrupting copper efflux, resulting in excessive copper accumulation, mainly in the liver and brain. The diagnosis of WD is challenged by its variable clinical course, onset, morbidity, and ATP7B variant type. Currently, it is diagnosed by a combination of clinical symptoms/signs, aberrant copper metabolism parameters (e.g., low serum ceruloplasmin levels and high urinary and hepatic copper concentrations), and genetic evidence of ATP7B mutations when available. As early diagnosis and treatment are key to favorable outcomes, it is critical to identify affected subjects before the onset of overtly detrimental clinical manifestations. To this end, we sought to improve WD diagnosis using artificial neural network algorithms (a branch of artificial intelligence) that integrate available clinical and molecular parameters. Surprisingly, the resulting WD diagnosis was based on plasma levels of glutamate, asparagine, taurine, and Fischer's ratio. As these amino acids are linked to the urea and Krebs cycles, our study not only underscores the central role of hepatic mitochondria in WD pathology but also indicates that most WD patients have underlying hepatic dysfunction. Our study provides novel evidence that artificial intelligence applied to integrated analysis of WD may enable earlier diagnosis and mechanistically relevant treatments for patients with WD.
Project description: The aim was to systematically synthesize current research on the influence of artificial intelligence (AI) models on temporomandibular joint (TMJ) osteoarthritis (OA) diagnosis using cone-beam computed tomography (CBCT) or panoramic radiography. Seven databases (PubMed, Embase, Scopus, Web of Science, LILACS, ProQuest, and SpringerLink) were searched for TMJ OA and AI articles. We used QUADAS-2 to assess the risk of bias, while MI-CLAIM was used to check the minimum information about clinical artificial intelligence modeling. Two hundred and three records were identified, of which seven were included, amounting to 10,077 TMJ images. Three studies focused on the diagnosis of TMJ OA using panoramic radiography with various transfer learning models (ResNet), and on these the meta-analysis was performed. The pooled sensitivity was 0.76 (95% CI 0.35–0.95) and the specificity was 0.79 (95% CI 0.75–0.83). The other studies investigated the 3D shape of the condyle and disease classification observed on CBCT images, as well as numerous radiomics features that can be combined with clinical and proteomic data to identify the most effective models and promising features for the diagnosis of TMJ OA. The accuracy of the methods was nearly equivalent; it was higher when indeterminate diagnoses were excluded or when fine-tuning was used.
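The pooled sensitivity and specificity above come from a diagnostic meta-analysis. As a rough illustration of the underlying idea only (not the authors' actual model, which may well have used a bivariate random-effects approach), a fixed-effect inverse-variance pooling of logit-transformed per-study sensitivities can be sketched as follows; the per-study counts are hypothetical:

```python
import math

def pooled_logit(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    events[i] / totals[i] is the per-study proportion (e.g. sensitivity =
    TP / (TP + FN)). Returns the back-transformed pooled proportion.
    """
    num = den = 0.0
    for x, n in zip(events, totals):
        p = x / n
        logit = math.log(p / (1 - p))
        var = 1 / x + 1 / (n - x)    # approximate variance of the logit
        w = 1 / var                  # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study true positives and diseased-case counts
tp = [40, 22, 55]
diseased = [50, 30, 70]
print(round(pooled_logit(tp, diseased), 3))
```

The pooled estimate always lands between the smallest and largest per-study proportions, with precise (large) studies pulling it hardest; random-effects variants add a between-study variance term to each weight.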
Project description: Aims: An artificial intelligence-augmented electrocardiogram (AI-ECG) algorithm can identify left ventricular systolic dysfunction (LVSD). We sought to determine whether this AI-ECG algorithm could stratify mortality risk in cardiac intensive care unit (CICU) patients, independent of the presence of LVSD on transthoracic echocardiography (TTE). Methods and results: We included 11,266 unique Mayo Clinic CICU patients admitted from 2007 to 2018 who underwent AI-ECG after CICU admission. Left ventricular ejection fraction (LVEF) data were extracted for patients with a TTE during hospitalization. Hospital mortality was analysed using multivariable logistic regression. Mean age was 68 ± 15 years, and 37% of patients were female. A higher AI-ECG probability of LVSD remained associated with higher hospital mortality [adjusted odds ratio (OR) 1.05 per 0.1 higher, 95% confidence interval (CI) 1.02-1.08, P = 0.003] after adjustment for LVEF, which itself was inversely related to the risk of hospital mortality (adjusted OR 0.96 per 5% higher, 95% CI 0.93-0.99, P = 0.02). Patients with available LVEF data (n = 8,242) were divided based on the presence of predicted (by AI-ECG) vs. observed (by TTE) LVSD (defined as LVEF ≤ 35%), using TTE as the gold standard. A stepwise increase in hospital mortality was observed for patients with a true-negative, false-positive, false-negative, and true-positive AI-ECG. Conclusion: The AI-ECG prediction of LVSD is associated with hospital mortality in CICU patients, affording risk stratification in addition to that provided by echocardiographic LVEF. Our results emphasize the prognostic value of electrocardiographic patterns reflecting underlying myocardial disease that are recognized by the AI-ECG.
Project description: Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. Objectives: To examine an artificial intelligence (AI)-enhanced electrocardiographic (AI-ECG) surrogate for imaging risk biomarkers, and its association with CTRCD. Methods: Across a five-hospital U.S.-based health system (2013-2023), we identified patients with breast cancer or non-Hodgkin lymphoma (NHL) who received anthracyclines (AC) and/or trastuzumab (TZM), and a control cohort receiving immune checkpoint inhibitors (ICI). We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to ECG images (≥0.1, positive screen) and explored its association with i) global longitudinal strain (GLS) measured within 15 days (n=7,271 pairs); ii) future CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF]<50%); and iii) LVEF<40%. In the ICI cohort, we correlated baseline AI-ECG LVSD predictions with downstream myocarditis. Results: Higher AI-ECG LVSD predictions were associated with worse GLS (from -18% [IQR: -20 to -17%] for predictions <0.1 to -12% [IQR: -15 to -9%] for predictions ≥0.5; p<0.001). In 1,308 patients receiving AC/TZM (age 59 [IQR: 49-67] years, 999 [76.4%] women, 80 [IQR: 42-115] months of follow-up), a positive baseline AI-ECG LVSD screen was associated with approximately 2-fold and 4.8-fold increases in the incidence of the composite CTRCD endpoint (adjusted HR 2.22 [95% CI: 1.63-3.02]) and of LVEF<40% (adjusted HR 4.76 [95% CI: 2.62-8.66]), respectively. Among 2,056 patients receiving ICI (age 65 [IQR: 57-73] years, 913 [44.4%] women, follow-up 63 [IQR: 28-99] months), AI-ECG predictions were not associated with ICI myocarditis (adjusted HR 1.36 [95% CI: 0.47-3.93]). Conclusion: AI applied to baseline ECG images can stratify the risk of CTRCD associated with anthracycline or trastuzumab exposure.
Project description: Artificial intelligence algorithms could be used to risk-stratify thyroid nodules and may reduce the subjectivity of ultrasonography. One such algorithm is AIBx, which has shown good performance; however, external validation is crucial prior to clinical implementation. Patients harboring thyroid nodules 1-4 cm in size who underwent thyroid surgery from 2014 to 2016 at a single institution were included. A histological diagnosis was obtained in all cases. Medullary thyroid cancer, metastases from other cancers, thyroid lymphomas, and purely cystic nodules were excluded. Retrospectively, transverse ultrasound images of the nodules were analyzed by AIBx, and the results were compared with histopathology and with Thyroid Imaging Reporting and Data System (TIRADS) scores assigned by experienced physicians. Of 329 patients, 257 nodules from 209 individuals met the eligibility criteria. Fifty-one nodules (20%) were malignant. AIBx had a negative predictive value (NPV) of 89.2%. Sensitivity, specificity, and positive predictive value (PPV) were 78.4%, 44.2%, and 25.8%, respectively. Considering both TIRADS 4 and TIRADS 5 nodules as malignant lesions resulted in an NPV of 93.0%, while the PPV and specificity were only 22.4% and 19.4%, respectively. By combining AIBx with TIRADS, no malignant nodules were overlooked. When applied to ultrasound images obtained in a different setting than that used for training, AIBx had an NPV comparable to TIRADS. AIBx performed even better when combined with TIRADS, thus reducing false-negative assessments. These data support the concept of AIBx for thyroid nodules, and this tool may help less experienced operators by reducing the subjectivity inherent to thyroid ultrasound interpretation.
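The reported performance figures all follow from a standard 2×2 confusion matrix. As a sanity check, the counts below (our reconstruction, chosen to be approximately consistent with the reported 51 malignant and 206 benign nodules; they are not published data) reproduce the quoted metrics:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics for a binary diagnostic test."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of malignant nodules flagged
        "specificity": tn / (tn + fp),  # fraction of benign nodules cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts reconstructed from the abstract: 51 malignant, 206 benign nodules
m = diagnostic_metrics(tp=40, fp=115, fn=11, tn=91)
for name, value in m.items():
    print(f"{name}: {100 * value:.1f}%")
```

Note how a low prevalence (20% malignant) lets the NPV stay high even though specificity is poor, which is exactly the property exploited when the test is used to rule out malignancy.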
Project description: The feasibility of automated volume-derived cardiac functional evaluation has been successfully demonstrated using cardiovascular magnetic resonance (CMR) imaging. Nevertheless, strain assessment has proven to be of incremental value for cardiovascular risk stratification. Since the introduction of deformation imaging into clinical practice has been complicated by time-consuming post-processing, we sought to investigate its automation. CMR data (n = 1,095 patients) from two prospectively recruited acute myocardial infarction (AMI) populations with ST-elevation (STEMI) (AIDA STEMI, n = 759) and non-STEMI (TATORT-NSTEMI, n = 336) were analysed fully automatically and manually on conventional cine sequences. LV function assessment included global longitudinal, circumferential, and radial strains (GLS/GCS/GRS). Agreement between automated and manual strain assessments was evaluated, and the automated assessments were tested for prediction of major adverse cardiac events (MACE) within 12 months following AMI. Manually and automatically derived GLS showed the best, and excellent, agreement, with an intraclass correlation coefficient (ICC) of 0.81. Agreement was good for GCS and poor for GRS. Amongst the automated analyses, GLS (HR 1.12, 95% CI 1.08-1.16, p < 0.001) and GCS (HR 1.07, 95% CI 1.05-1.10, p < 0.001) best predicted MACE, with diagnostic accuracy similar to the manual analyses; area under the curve (AUC) for GLS (automated 0.691 vs. manual 0.693, p = 0.801) and GCS (automated 0.668 vs. manual 0.686, p = 0.425). Amongst the automated functional analyses, GLS was the only independent predictor of MACE in multivariate analyses (HR 1.10, 95% CI 1.04-1.15, p < 0.001). Given the high agreement of automated GLS with manual measurement and its equally high accuracy for risk prediction compared to the reference standard of manual analysis, automation may improve efficiency and aid implementation in clinical routine. Trial registration: ClinicalTrials.gov, NCT00712101 and NCT01612312.
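Agreement in this abstract is quantified with an intraclass correlation coefficient. A minimal two-way random-effects, absolute-agreement, single-measure ICC(2,1) for two measurement methods can be sketched in plain Python; this is illustrative only, as the abstract does not state which ICC variant was used, and the GLS values below are hypothetical:

```python
def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    data: list of per-subject measurement rows, one column per rater/method.
    """
    n = len(data)      # subjects
    k = len(data[0])   # raters (e.g. manual vs. automated)
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]

    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
    sse = sum((x - row_means[i] - col_means[j] + grand) ** 2
              for i, row in enumerate(data) for j, x in enumerate(row))
    mse = sse / ((n - 1) * (k - 1))                               # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical manual vs. automated GLS values (%) for five subjects
gls = [[-18.0, -17.5], [-12.0, -12.5], [-20.0, -19.0],
       [-9.0, -10.0], [-15.0, -15.5]]
print(round(icc_2_1(gls), 2))
```

Because ICC(2,1) penalizes systematic offsets between methods as well as random scatter, it is a stricter agreement measure than a plain Pearson correlation between the two columns.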
Project description: We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation.
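The random walk through the clip network can be made concrete with a deliberately stripped-down sketch: a two-layer clip network (percept clips connected directly to action clips) with hopping probabilities proportional to edge weights, and reward-driven strengthening of traversed edges. This is an illustrative reduction of projective simulation, not the full model, which adds clip composition, multi-hop walks, and damping; all names and the toy task are our own:

```python
import random

class ProjectiveSimulationAgent:
    """Minimal sketch of a projective-simulation agent (illustrative only).

    Episodic memory is a network of 'clips'; deliberation is a random walk
    from a percept clip to an action clip, with hopping probabilities
    proportional to edge weights (h-values). Rewarded walks strengthen
    the traversed edges.
    """

    def __init__(self, percepts, actions, rng=None):
        self.actions = list(actions)
        # h-values: one weight per (percept clip -> action clip) edge
        self.h = {(p, a): 1.0 for p in percepts for a in actions}
        self.rng = rng or random.Random()

    def deliberate(self, percept):
        """One random-walk hop from a percept clip to an action clip."""
        weights = [self.h[(percept, a)] for a in self.actions]
        return self.rng.choices(self.actions, weights=weights)[0]

    def learn(self, percept, action, reward):
        """Strengthen the traversed edge when the walk led to reward."""
        self.h[(percept, action)] += reward

# Toy task: reward 'go' for percept 'green' and 'stop' for 'red'
agent = ProjectiveSimulationAgent(["green", "red"], ["go", "stop"],
                                  rng=random.Random(0))
for _ in range(200):
    percept = agent.rng.choice(["green", "red"])
    action = agent.deliberate(percept)
    correct = (percept == "green") == (action == "go")
    agent.learn(percept, action, 1.0 if correct else 0.0)
print(agent.h[("green", "go")] > agent.h[("green", "stop")])
```

Because rewarded edges accumulate weight while unrewarded ones stay at their initial value, the walk becomes increasingly biased toward the rewarded actions; quantum generalizations replace this classical walk with a quantum walk over the same clip network.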