Project description: Human-in-the-loop (HITL) AI may enable an ideal symbiosis of human experts and AI models, harnessing the advantages of both while overcoming their respective limitations. The purpose of this study was to investigate a novel collective intelligence technology designed to amplify the diagnostic accuracy of networked human groups by forming real-time systems modeled on biological swarms. Using small groups of radiologists, the swarm-based technology was applied to the diagnosis of pneumonia on chest radiographs and compared against human experts alone, as well as against two state-of-the-art deep learning AI models. Our work demonstrates that both the swarm-based technology and the deep learning technology achieved higher diagnostic accuracy than the human experts alone. It further demonstrates that, when used in combination, the swarm-based technology and deep learning technology outperformed either method alone. The superior diagnostic accuracy of the combined HITL AI solution compared to radiologists and AI alone has broad implications for the surge in clinical AI deployment and for implementation strategies in future practice.
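The abstract does not describe how the swarm and deep learning outputs were fused, so the following is only a minimal sketch of one plausible combination scheme, assuming each source yields a per-case pneumonia probability that is fused by a weighted average and thresholded into a diagnosis. The function name, weight, and threshold are all hypothetical, not the study's method.

```python
import numpy as np

def combine_swarm_and_model(p_swarm, p_model, w_swarm=0.5, threshold=0.5):
    """Fuse a swarm-derived group probability with a deep learning model's
    probability for each case via a weighted average (illustrative only).

    p_swarm, p_model: per-case pneumonia probabilities in [0, 1].
    w_swarm: weight given to the swarm estimate (hypothetical choice).
    Returns the combined probabilities and binary diagnoses.
    """
    p_swarm = np.asarray(p_swarm, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    p_combined = w_swarm * p_swarm + (1.0 - w_swarm) * p_model
    return p_combined, (p_combined >= threshold).astype(int)

# Example: three cases scored by a radiologist swarm and a CNN.
p_combined, diagnosis = combine_swarm_and_model(
    p_swarm=[0.80, 0.30, 0.55],
    p_model=[0.90, 0.20, 0.40],
)
print(p_combined)  # [0.85  0.25  0.475]
print(diagnosis)   # [1 0 0]
```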
Project description: Most artificial intelligence (AI) studies have focused primarily on adult imaging, with less attention to the unique aspects of pediatric imaging. The objectives of this study were to (1) identify all publicly available pediatric datasets and determine their potential utility and limitations for pediatric AI studies and (2) systematically review the literature to assess the current state of AI in pediatric chest radiograph interpretation. We searched PubMed, Web of Science, and Embase to retrieve all studies from 1990 to 2021 that assessed AI for pediatric chest radiograph interpretation, and abstracted the datasets used to train and test the AI algorithms, the modeling approaches, and the performance metrics. Of 29 publicly available chest radiograph datasets, 2 included solely pediatric chest radiographs, and 7 included both pediatric and adult patients. We identified 55 articles that implemented an AI model to interpret pediatric chest radiographs, or pediatric and adult chest radiographs. Classification of chest radiographs as pneumonia was the most common application of AI, evaluated in 65% of the studies. Although many studies report high diagnostic accuracy, most algorithms were not validated on external datasets. Most AI studies of pediatric chest radiograph interpretation have focused on a limited number of diseases, and progress is hindered by the lack of large-scale pediatric chest radiograph datasets.
Project description: Interest in artificial intelligence (AI) has ballooned within radiology in the past few years, primarily due to the notable successes of deep learning. With the advances brought by deep learning, AI has the potential to recognize and localize complex patterns across different radiological imaging modalities, in many cases achieving performance comparable to human decision-making in recent applications. In this chapter, we review several AI applications in radiology for different anatomies: chest, abdomen, and pelvis, as well as general lesion detection/identification that is not limited to a specific anatomy. For each anatomic site, we focus on the tasks of detection, segmentation, and classification, with an emphasis on describing the technology development pathway, so as to give the reader an understanding of what AI can already do in radiology and what still needs to be done for AI to fit better into radiology. Drawing on our own research experience with AI in medicine, we elaborate on how AI can enrich knowledge discovery, understanding, and decision-making in radiology, rather than replacing the radiologist.
Project description: Background: In settings without access to rapid expert radiographic interpretation, artificial intelligence (AI)-based chest radiograph (CXR) analysis can triage persons presenting with possible tuberculosis (TB) symptoms to identify those who require additional microbiological testing. However, there is limited evidence of the cost-effectiveness of this technology as a triage tool. Methods: A decision analysis model was developed to evaluate the cost-effectiveness of triage strategies with AI-based CXR analysis for patients presenting with symptoms suggestive of pulmonary TB in Karachi, Pakistan. These strategies were compared to the current standard of care: microbiological testing with smear microscopy or GeneXpert, without prior triage. Positive triage CXRs were considered to improve referral success for microbiologic testing from 91% to 100% for eligible persons. Software diagnostic accuracy was based on a prospective field study in Karachi. Other inputs were obtained from the Pakistan TB Program. The analysis was conducted from the healthcare provider perspective, and costs were expressed in 2020 US dollars. Results: Compared to upfront smear microscopy for all persons with presumptive TB, triage strategies with AI-based CXR analysis were projected to lower costs by 19%, from $23,233 per 1,000 persons, and to avert 3%-4% of disability-adjusted life-years (DALYs), from a baseline of 372 DALYs. Compared to upfront GeneXpert, AI-based triage strategies lowered projected costs by 37%, from $34,346, and averted an additional 4% of DALYs, from a baseline of 369 DALYs. Reinforced follow-up for persons with positive triage CXRs but negative microbiologic tests was particularly cost-effective. Conclusions: In lower-resource settings, adding AI-based CXR triage before microbiologic testing for persons with possible TB symptoms can reduce costs, avert additional DALYs, and improve TB detection.
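To make the relative figures above concrete, the short sketch below works through the arithmetic implied by the reported percentages and baselines (19% of $23,233; 37% of $34,346; 3%-4% of 372 DALYs; 4% of 369 DALYs). It is a reader-side illustration, not the study's decision analysis model.

```python
# Worked arithmetic from the figures reported above (costs per 1,000
# persons with presumptive TB, 2020 US dollars).
scenarios = {
    "upfront smear microscopy": dict(cost=23_233, dalys=372,
                                     cost_cut=0.19, daly_cut=(0.03, 0.04)),
    "upfront GeneXpert":        dict(cost=34_346, dalys=369,
                                     cost_cut=0.37, daly_cut=(0.04, 0.04)),
}

for name, s in scenarios.items():
    saved = s["cost"] * s["cost_cut"]
    lo = s["dalys"] * s["daly_cut"][0]
    hi = s["dalys"] * s["daly_cut"][1]
    rng_txt = f"{lo:.0f}" if round(lo) == round(hi) else f"{lo:.0f}-{hi:.0f}"
    print(f"vs {name}: ~${saved:,.0f} saved per 1,000 persons, "
          f"~{rng_txt} DALYs averted")

# vs upfront smear microscopy: ~$4,414 saved per 1,000 persons, ~11-15 DALYs averted
# vs upfront GeneXpert: ~$12,708 saved per 1,000 persons, ~15 DALYs averted
```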
Project description: BACKGROUND: Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and a lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance in medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study was to investigate the performance of a deep learning algorithm in the detection of pathologies in chest radiographs compared with practicing radiologists. METHODS AND FINDINGS: We developed CheXNeXt, a convolutional neural network that concurrently detects the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules, in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set of 420 images sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as the reference standard. We compared CheXNeXt's discriminative performance on the validation set to that of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4-28 years) and 3 senior radiology residents from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies. The radiologists achieved statistically significantly higher AUCs on cardiomegaly, emphysema, and hiatal hernia, with AUCs of 0.888 (95% confidence interval [CI] 0.863-0.910), 0.911 (95% CI 0.866-0.947), and 0.985 (95% CI 0.974-0.991), respectively, whereas CheXNeXt's AUCs were 0.831 (95% CI 0.790-0.870), 0.704 (95% CI 0.567-0.833), and 0.851 (95% CI 0.785-0.909), respectively. CheXNeXt performed better than the radiologists in detecting atelectasis, with an AUC of 0.862 (95% CI 0.825-0.895), statistically significantly higher than the radiologists' AUC of 0.808 (95% CI 0.777-0.838); there were no statistically significant differences in AUCs for the other 10 pathologies. The average time to interpret the 420 images in the validation set was substantially longer for the radiologists (240 minutes) than for CheXNeXt (1.5 minutes). The main limitations of our study are that neither CheXNeXt nor the radiologists were permitted to use patient history or review prior examinations, and that evaluation was limited to a dataset from a single institution. CONCLUSIONS: In this study, we developed and validated a deep learning algorithm that classified clinically important abnormalities in chest radiographs at a performance level comparable to that of practicing radiologists. Once tested prospectively in clinical settings, the algorithm could have the potential to expand patient access to chest radiograph diagnostics.
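As a generic illustration of the study's headline metric, the sketch below computes an AUC with a percentile-bootstrap 95% confidence interval from per-image binary labels and model scores. It is not the authors' analysis code (their CI method is not stated here), and the data are random placeholders standing in for the 420 validation images.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC point estimate with a percentile bootstrap confidence interval."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample lacks both classes; AUC is undefined
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc, (lo, hi)

# Placeholder data: 420 labels and scores loosely correlated with them.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, 420)
scores = labels * 0.3 + rng.random(420) * 0.7
auc, (lo, hi) = auc_with_bootstrap_ci(labels, scores)
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```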
Project description: Artificial intelligence (AI) is here to stay and will change health care as we know it. The availability of big data and the increasing number of AI algorithms approved by the US Food and Drug Administration will together help improve the quality of care for patients and overcome human fatigue barriers. In oncology practice, patients and providers rely on the interpretation of radiologists when making clinical decisions; however, there is considerable variability among readers, particularly for prostate imaging. AI represents an emerging solution to this problem, for which it can provide a much-needed form of standardization. The diagnostic performance of AI alone in comparison to a combination of an AI framework and radiologist assessment for evaluation of prostate imaging has yet to be explored. Here, we compare the performance of radiologists alone versus radiologists aided by a modern computer-aided diagnosis (CAD) AI system. We show that the radiologist-CAD combination demonstrates superior sensitivity and specificity in comparison to both radiologists alone and AI alone. Our findings demonstrate that a radiologist + AI combination could perform best for detection of prostate cancer lesions. A hybrid technology-human system could leverage the benefits of AI in improving radiologist performance while also reducing physician workload, minimizing burnout, and enhancing the quality of patient care. Patient summary: Our report demonstrates the potential of artificial intelligence (AI) for improving the interpretation of prostate scans. A combination of AI and evaluation by a radiologist has the best performance in determining the severity of prostate cancer. A hybrid system that uses both AI and radiologists could maximize the quality of care for patients while reducing physician workload and burnout.
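For readers unfamiliar with the metrics compared in this study, sensitivity and specificity are simple functions of the confusion matrix. The sketch below computes both for hypothetical reader (or reader + CAD) calls against ground truth; the example data are placeholders, not the study's results.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)   # lesions correctly called positive
    fn = np.sum(y_true & ~y_pred)  # lesions missed
    tn = np.sum(~y_true & ~y_pred) # negatives correctly called negative
    fp = np.sum(~y_true & y_pred)  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Placeholder: ground truth vs. radiologist + CAD calls for six cases.
truth = [1, 1, 1, 0, 0, 0]
calls = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, calls)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")  # 0.67, 0.67
```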
Project description: Objectives: Why is there a major gap between the promises of AI and its applications in the domain of diagnostic radiology? To answer this question, we systematically review and critically analyze AI applications in the radiology domain. Methods: We systematically analyzed these applications based on their focal modality and anatomic region, as well as their stage of development, technical infrastructure, and approval status. Results: We identified 269 AI applications in the diagnostic radiology domain, offered by 99 companies. We show that AI applications are primarily narrow in terms of tasks, modality, and anatomic region. The majority of the available AI functionalities focus on supporting "perception" and "reasoning" in the radiology workflow. Conclusions: We thereby contribute by (1) offering a systematic framework for analyzing and mapping the technological developments in the diagnostic radiology domain, (2) providing empirical evidence regarding the landscape of AI applications, and (3) offering insights into the current state of AI applications. Accordingly, we discuss the potential impacts of AI applications on radiology work and highlight future possibilities for developing these applications. Key points: • Many AI applications have been introduced to the radiology domain, and their number and diversity are growing very fast. • Most of the AI applications are narrow in terms of modality, body part, and pathology. • Many applications focus on supporting "perception" and "reasoning" tasks.
Project description: Objectives: The aim is to offer an overview of existing training programs, critically examine them, and suggest avenues for the further development of AI training programs for radiologists. Methods: Deductive thematic analysis of 100 training programs offered in 2019 and 2020 (until June 30). We analyzed the public data about the training programs based on their "contents," "target audience," "instructors and offering agents," and "legitimization strategies." Results: There are many AI training programs offered to radiologists, yet most of them (80%) are short, stand-alone sessions that are not part of a longer-term learning trajectory. The training programs mainly (around 85%) focus on the basic concepts of AI and are offered in a passive mode. Professional institutions and commercial companies are active in offering the programs (91%), whereas academic institutes are only marginally involved. Conclusions: There is a need to further develop systematic training programs that are pedagogically integrated into the radiology curriculum. Future training programs need to focus more on learning how to work with AI in practice, and to be further specialized and customized to the contexts of radiology work. Key points: • Most AI training programs are short, stand-alone sessions that focus on the basics of AI. • The content of training programs focuses on medical and technical topics; managerial, legal, and ethical topics are only marginally addressed. • Professional institutions and commercial companies are active in offering AI training; academic institutes are only marginally involved.