Project description:Automated tooth segmentation and identification on dental radiographs are crucial steps in establishing digital dental workflows. While deep learning networks have been developed for these tasks, their performance has been inferior in partially edentulous individuals. This study proposes a novel semi-supervised Transformer-based framework (SemiTNet), specifically designed to improve tooth segmentation and identification performance on panoramic radiographs, particularly in partially edentulous cases, and establishes an open-source dataset to serve as a unified benchmark. A total of 16,317 panoramic radiographs (1,589 labeled and 14,728 unlabeled images) were collected from various datasets to create a large-scale dataset (TSI15k). The labeled images were divided into training and test sets at a 7:1 ratio, while the unlabeled images were used for semi-supervised learning. SemiTNet was developed using a semi-supervised learning method with a label-guided teacher-student knowledge distillation strategy, incorporating a Transformer-based architecture. Its performance was evaluated on the test set using the intersection over union (IoU), Dice coefficient, precision, recall, and F1 score, and compared with five state-of-the-art networks; paired t-tests were performed to compare the evaluation metrics between SemiTNet and the other networks. SemiTNet outperformed the other networks, achieving the highest accuracy for tooth segmentation and identification while requiring a minimal model size. Its performance was near-perfect for fully dentate individuals (all metrics over 99.69%) and excellent for partially edentulous individuals (all metrics over 93%). In partially edentulous cases, SemiTNet obtained statistically significantly higher tooth identification performance than all other networks. The proposed SemiTNet outperformed previous high-complexity, state-of-the-art networks, particularly in partially edentulous cases.
The established open-source TSI15k dataset could serve as a unified benchmark for future studies.
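The evaluation metrics named above (IoU, Dice, precision, recall, F1) can all be derived from the true/false positive and false negative counts of a binary mask comparison. A minimal sketch in plain Python (function and variable names are illustrative, not from the study):

```python
def seg_metrics(pred, target):
    """Compute IoU, Dice, precision, recall, and F1 for two flat
    binary masks given as sequences of 0/1 labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, target))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, target))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, target))
    denom = tp + fp + fn
    iou = tp / denom if denom else 1.0
    dice = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"iou": iou, "dice": dice, "precision": precision,
            "recall": recall, "f1": f1}
```

For multi-class tooth identification, these would be computed per tooth label and then averaged across classes.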
Project description:Convolutional Neural Networks (CNNs) such as U-Net have been widely used for medical image segmentation. Dental restorations are prominent features of dental radiographs, but applying U-Net to the panoramic image is challenging, as the shape, size, and frequency of different restoration types vary. We hypothesized that models trained on smaller, equally spaced rectangular image crops (tiles) of the panoramic radiograph would outperform models trained on the full image. A total of 1,781 panoramic radiographs were annotated pixelwise for fillings, crowns, and root canal fillings by dental experts. We used different numbers of tiles for our experiments. Five-times-repeated three-fold cross-validation was used for model evaluation. Training with more tiles improved model performance and accelerated convergence. The F1-score for the full panoramic image was 0.7, compared to 0.83, 0.92, and 0.95 for 6, 10, and 20 tiles, respectively. For root canal fillings, which are small, cone-shaped features that appear less frequently on the radiographs, the performance improvement was even higher (+294%). Training on tiles and pooling the results thereafter improved pixelwise classification performance and reduced the time to model convergence for segmenting dental restorations. Segmentation of panoramic radiographs is biased towards more frequent and extended classes; tiling may help to overcome this bias and increase accuracy.
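The tiling scheme above can be pictured as splitting the panoramic image into equally spaced crops, running the segmentation model per tile, and pooling the per-tile masks back together. A minimal NumPy sketch, assuming simple vertical strips (the study's exact crop geometry is not specified here):

```python
import numpy as np

def split_into_tiles(image, n_tiles):
    """Split a 2-D radiograph array into n_tiles equally spaced
    vertical strips (one illustrative tiling scheme)."""
    h, w = image.shape
    bounds = np.linspace(0, w, n_tiles + 1, dtype=int)
    return [image[:, bounds[i]:bounds[i + 1]] for i in range(n_tiles)]

def stitch_tiles(tiles):
    """Pool per-tile predictions back into a full-width mask."""
    return np.concatenate(tiles, axis=1)
```

In practice each tile would be passed through the U-Net before stitching; splitting and re-stitching are lossless, so the pooled mask covers exactly the original image area.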
Project description:When dentists see pediatric patients, whose tooth development during tooth replacement is more complex than that of adults, they need to manually determine the patient's disease with the help of preoperative dental panoramic radiographs. To the best of our knowledge, there is no international public dataset for children's teeth and only a few datasets for adults' teeth, which limits the development of deep learning algorithms for segmenting teeth and automatically analyzing diseases. Therefore, we collected dental panoramic radiographs and cases from 106 pediatric patients aged 2 to 13 years and annotated them with the help of the efficient, intelligent interactive segmentation annotation software EISeg (Efficient Interactive Segmentation) and the image annotation software LabelMe. Based on these segmentation and detection annotations, we propose the world's first dataset of children's dental panoramic radiographs for caries segmentation and dental disease detection. In addition, another 93 dental panoramic radiographs of pediatric patients, together with our three internationally published adult dental datasets (2,692 images in total), were collected and made into a segmentation dataset suitable for deep learning.
Project description:Ischemic stroke, a leading global cause of death and disability, is commonly caused by atherosclerosis of the carotid arteries. Carotid artery calcification (CAC) is a well-known marker of atherosclerosis, classically detected by ultrasound screening. In recent years it was shown that these calcifications can also be inferred from routine panoramic dental radiographs. In this work, we focused on panoramic dental radiographs taken from 500 patients, manually labelling each patient's sides (each radiograph was treated as two sides), and used these labels to develop an artificial intelligence (AI)-based algorithm to automatically detect carotid calcifications. The algorithm uses deep learning convolutional neural networks (CNNs) with a transfer learning (TL) approach; evaluated against the manual labels, it reached a sensitivity (recall) of 0.82 and a specificity of 0.97 for individual arteries, and a recall of 0.87 and a specificity of 0.97 for individual patients. Applying and integrating the algorithm in healthcare units and dental clinics has the potential of reducing stroke events and their mortality and morbidity consequences.
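Since each radiograph contributes two sides, per-side predictions must be aggregated to report patient-level performance. One plausible rule, consistent with patient-level recall exceeding artery-level recall, is OR-pooling: a patient is called positive if either side is predicted positive. This is an illustrative assumption; the study does not spell out its aggregation:

```python
def patient_level_prediction(side_predictions):
    """Pool per-side carotid calcification predictions into a
    patient-level call: positive if either side is positive.
    side_predictions is a list of (left, right) 0/1 pairs."""
    return [int(left or right) for left, right in side_predictions]
```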
Project description:Osteoporosis is becoming a global health issue due to increased life expectancy, yet it is difficult to detect in its early stages owing to a lack of discernible symptoms. Hence, screening for osteoporosis with widely used dental panoramic radiographs would be very cost-effective and useful. In this study, we investigate the use of deep learning to classify osteoporosis from dental panoramic radiographs and assess the effect of adding clinical covariate data to the radiographic images on identification performance. For objective labeling, a dataset of 778 images was collected from patients who underwent both skeletal bone mineral density measurement and dental panoramic radiography at a single general hospital between 2014 and 2020. Osteoporosis was assessed from the dental panoramic radiographs using convolutional neural network (CNN) models, including EfficientNet-b0, -b3, and -b7 and ResNet-18, -50, and -152. An ensemble model was also constructed with clinical covariates added to each CNN. The ensemble model exhibited improved performance on all metrics for all CNNs, especially accuracy and AUC. The results show that deep learning with CNNs can accurately classify osteoporosis from dental panoramic radiographs, and that accuracy can be further improved using an ensemble model with patient covariates.
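One common way to build such an image-plus-covariates ensemble is to blend the CNN's image-based probability with a logistic model over the clinical covariates. A minimal sketch; the blending weight, covariate weights, and function names are illustrative assumptions, not the study's actual fusion method:

```python
import math

def ensemble_probability(cnn_prob, covariates, weights, bias, alpha=0.5):
    """Blend a CNN's osteoporosis probability with a logistic model
    over clinical covariates (e.g., age, BMI). alpha controls how
    much weight the image model receives."""
    z = bias + sum(w * x for w, x in zip(weights, covariates))
    cov_prob = 1.0 / (1.0 + math.exp(-z))  # sigmoid over covariates
    return alpha * cnn_prob + (1 - alpha) * cov_prob
```

With alpha=1.0 the ensemble reduces to the CNN alone, which makes it easy to check whether the covariates actually add signal.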
Project description:Objectives: We aimed to analyse age-related anatomical changes in teeth and mandibular structures using panoramic radiographs. Materials and methods: We included 471 subjects aged 13-70 years (mean, 35.12 ± 18.72 years). Panoramic radiographs were used to record intraoral condition and radiomorphometric parameters. After grouping the subjects by age decade, descriptive statistics and analysis of variance were performed to assess age-related patterns. Results: The number of missing teeth, endodontically treated teeth, full veneer crowns, and implant prostheses increased with age (all p < .05). The prevalence of periodontitis increased significantly after the 40s and was highest in the 60s (57.1%). The maxillary canine root was longest in the 10s and 20s (p < .001). With age, the mandibular canal and mental foramen moved towards the alveolar bone crest and away from the mandibular inferior border. The pulp area and pulp-to-tooth ratio of the maxillary/mandibular first molars were significantly higher in the 10s and 20s than in other age groups (all p < .05). Conclusions: We provide comprehensive information on age-related anatomical changes in teeth and mandibular structures based on panoramic radiographs. Various radiographic parameters showed specific changes with increasing age. Assessing these age-related changes can be useful in determining an individual's age and may aid in medico-legal and forensic judgments.
Project description:Background: Recently, deep learning has been increasingly applied in the field of dentistry. The aim of this study is to develop a model for the automatic segmentation, numbering, and state assessment of teeth on panoramic radiographs. Methods: We created a dual-labeled dataset of panoramic radiographs for training, incorporating both numbering and state labels. We then developed a fusion model that combines a YOLOv9-e instance segmentation model with an EfficientNetv2-l classification model. The instance segmentation model is used for tooth segmentation and numbering, whereas the classification model is used for state evaluation. The final prediction integrates tooth position, numbering, and state information, and the model's output includes result visualization and automatic report generation. Results: Precision, Recall, mAP50 (mean Average Precision), and mAP50-95 for the tooth instance segmentation task are 0.989, 0.955, 0.975, and 0.840, respectively. Precision, Recall, Specificity, and F1 score for the tooth classification task are 0.943, 0.933, 0.985, and 0.936, respectively. Conclusions: This fusion model is the first to integrate automatic dental segmentation, numbering, and state assessment. It provides highly accurate results, including detailed visualizations and automated report generation.
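The fusion step described above can be pictured as merging, per detected tooth, the segmentation/numbering output with the state classifier's output into a single report entry. A minimal sketch with illustrative field names and an assumed confidence-combining rule (the model's actual report format is not specified here):

```python
def build_report(detections):
    """Merge per-tooth outputs from a segmentation/numbering model
    and a state classifier into a simple report keyed by tooth
    number. Combining confidences by product is an illustrative
    choice, not necessarily the model's rule."""
    report = {}
    for det in detections:
        report[det["fdi_number"]] = {
            "bbox": det["bbox"],          # from the instance segmentation model
            "state": det["state"],        # from the classification model
            "confidence": round(det["seg_conf"] * det["cls_conf"], 3),
        }
    return report
```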
Project description:A wide range of deep learning (DL) architectures with varying depths is available, and developers usually choose one or a few of them for a specific task in a nonsystematic way. Benchmarking (i.e., the systematic comparison of state-of-the-art architectures on a specific task) may provide guidance in the model development process and allow developers to make better decisions. However, comprehensive benchmarking has not yet been performed in dentistry. We aimed to benchmark a range of architecture designs for 1 specific, exemplary case: tooth structure segmentation on dental bitewing radiographs. We built 72 models for tooth structure (enamel, dentin, pulp, fillings, crowns) segmentation by combining 6 different DL network architectures (U-Net, U-Net++, Feature Pyramid Networks, LinkNet, Pyramid Scene Parsing Network, Mask Attention Network) with 12 encoders from 3 encoder families (ResNet, VGG, DenseNet) of varying depth (e.g., VGG13, VGG16, VGG19). To each model design, 3 initialization strategies (ImageNet, CheXpert, random initialization) were applied, resulting in 216 trained models overall, which were trained for up to 200 epochs with the Adam optimizer (learning rate = 0.0001) and a batch size of 32. Our data set consisted of 1,625 human-annotated dental bitewing radiographs. We used a 5-fold cross-validation scheme and quantified model performance primarily by the F1-score. Initialization with ImageNet or CheXpert weights significantly outperformed random initialization (P < 0.05). Deeper and more complex models did not necessarily perform better than less complex alternatives. VGG-based models were more robust across model configurations, while more complex models (e.g., from the ResNet family) achieved peak performances. In conclusion, initializing models with pretrained weights may be recommended when training models for dental radiographic analysis.
Less complex model architectures may be competitive alternatives if computational resources and training time are restricting factors. Models developed and found superior on nondental data sets may not show this behavior for dental domain-specific tasks.
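The benchmark's 216 trained models arise from a full grid over 6 architectures × 12 encoders × 3 initialization strategies. A sketch of enumerating that grid; the specific encoder depths per family are assumptions for illustration, since the abstract lists only the VGG examples:

```python
from itertools import product

ARCHITECTURES = ["U-Net", "U-Net++", "FPN", "LinkNet", "PSPNet", "MAnet"]
# 12 encoders from 3 families; depths beyond VGG13/16/19 are illustrative.
ENCODER_FAMILIES = {
    "ResNet": [18, 34, 50, 101],
    "VGG": [11, 13, 16, 19],
    "DenseNet": [121, 161, 169, 201],
}
ENCODERS = [f"{fam}{d}"
            for fam, depths in ENCODER_FAMILIES.items() for d in depths]
INITS = ["ImageNet", "CheXpert", "random"]

# One entry per trained model in the benchmark grid.
configs = [{"arch": a, "encoder": e, "init": i}
           for a, e, i in product(ARCHITECTURES, ENCODERS, INITS)]
```

Enumerating the grid up front makes the study's counts explicit: 6 × 12 = 72 model designs, and 72 × 3 initializations = 216 trained models.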
Project description:Background: Osteoporosis is a complex condition that drives research into its causes, diagnosis, treatment, and prevention, significantly affecting patients and healthcare providers in many aspects of life. Research is exploring orthopantomogram (OPG) radiography for osteoporosis screening instead of bone mineral density (BMD) assessment. Although this method uses various indicators, manual analysis can be challenging, and machine learning and deep learning techniques have been developed to address this. This systematic review and meta-analysis is the first to evaluate the accuracy of deep learning models in predicting osteoporosis from OPG radiographs, providing evidence on their performance and clinical use. Methods: A literature search was conducted in MEDLINE, Scopus, and Web of Science up to February 10, 2025, using keywords related to deep learning, osteoporosis, and panoramic radiography. We conducted title, abstract, and full-text screening based on inclusion/exclusion criteria. Meta-analysis was performed using a bivariate random-effects model to pool diagnostic accuracy measures, and subgroup analyses explored sources of heterogeneity. Results: We found 204 articles, removed 189 duplicates and irrelevant studies, assessed 15 articles in full, and ultimately selected seven studies. The DL models showed AUC values of 66.8-99.8%, with sensitivity and specificity ranging from 59 to 97% and from 64.9 to 100%, respectively. No significant differences in diagnostic accuracy were found among subgroups. AlexNet had the highest performance, achieving a sensitivity of 0.89 and a specificity of 0.99. Sensitivity analysis revealed that excluding outliers had little impact on the results. Deeks' funnel plot indicated no significant publication bias (P = 0.54). Conclusions: This systematic review indicates that deep learning models for osteoporosis diagnosis achieved pooled estimates of 80% sensitivity, 92% specificity, and 93% AUC.
Models like AlexNet and ResNet demonstrate effectiveness. These findings suggest that DL models are promising for noninvasive early detection, but more extensive multicenter studies are necessary to validate their efficacy in at-risk groups.
Project description:In this study, a deep learning-based method for an automated diagnostic support system that detects periodontal bone loss in panoramic dental radiographs is proposed. The presented method, called DeNTNet, not only detects lesions but also provides the corresponding tooth numbers of each lesion according to dental federation notation. DeNTNet applies deep convolutional neural networks (CNNs) with transfer learning and clinical prior knowledge to overcome the morphological variation of the lesions and the imbalanced training dataset. With 12,179 panoramic dental radiographs annotated by experienced dental clinicians, DeNTNet was trained, validated, and tested using 11,189, 190, and 800 radiographs, respectively. Each experimental model was subjected to a comparative study to demonstrate the validity of each phase of the proposed method. When compared to dental clinicians, DeNTNet achieved an F1 score of 0.75 on the test set, whereas the average performance of the dental clinicians was 0.69.