Project description: Background: Brain extraction is an essential prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion recognition, localization, and segmentation. Segmentation with a fully convolutional neural network (FCN) yields high accuracy but a relatively slow extraction speed. Methods: This paper proposes an integrated algorithm, FABEM, to address these issues. The method first uses threshold segmentation, a morphological closing operation, a convolutional neural network (CNN), and image filling to generate a candidate mask. It then counts the connected regions of the mask. If there is exactly one connected region, extraction is completed by multiplying the mask directly with the original image. Otherwise, for original images with a single-region brain distribution, the mask is further refined by region growing; for images with a multi-region brain distribution, Deeplabv3+ is used to adjust the mask. Finally, the mask is multiplied with the original image to complete the extraction. Results: The algorithm and 5 FCN models were tested on 24 datasets containing different lesions. The algorithm achieved MPA = 0.9968, MIoU = 0.9936, and MBF = 0.9963, comparable to Deeplabv3+, while its extraction speed is much faster: it completes brain extraction of a head CT image in about 0.43 s, roughly 3.8 times the speed of Deeplabv3+. Conclusion: This method can thus achieve accurate brain extraction from head CT images more quickly, providing a good basis for subsequent brain volume measurement and feature extraction of intracranial lesions.
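As a rough illustration of the routing logic described above, the following Python sketch counts the connected regions of a candidate mask and picks a refinement path; the refinement functions, the single-region flag, and the scipy-based implementation are assumptions for illustration, not the authors' code.

```python
"""Minimal sketch of FABEM-style mask routing (illustrative, not the authors' code).
A candidate mask is assumed to already exist from thresholding, closing,
CNN checking, and filling; the refinement functions below are placeholders."""
import numpy as np
from scipy import ndimage


def refine_by_region_growing(ct_slice, mask):
    # Placeholder: a real implementation would grow from seed points inside the mask.
    return mask


def refine_by_deeplab(ct_slice):
    # Placeholder: a real implementation would run a trained Deeplabv3+ model.
    return ct_slice > ct_slice.mean()


def fabem_extract(ct_slice, mask, brain_is_single_region=True):
    """Multiply the original image with the mask, refining the mask first if needed."""
    _, n_regions = ndimage.label(mask > 0)   # count connected regions of the mask
    if n_regions == 1:
        final_mask = mask > 0
    elif brain_is_single_region:
        final_mask = refine_by_region_growing(ct_slice, mask) > 0
    else:
        final_mask = refine_by_deeplab(ct_slice)
    return ct_slice * final_mask
```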
Project description: Objective: To develop and validate a method for detecting ureteral stent encrustations in medical CT images based on Mask-RCNN and 3D morphological analysis. Method: All 222 cases of ureteral stent data were obtained from the Fifth Affiliated Hospital of Sun Yat-sen University. First, a neural network was used to detect the ureteral stent region; the coarse detections were then completed and filtered by connected-domain analysis, exploiting the continuity of the ureteral stent in 3D space, to obtain a 3D segmentation result. Second, the segmentation result was analyzed based on its 3D morphology: the centerline was obtained by thinning the 3D image and fitting the ureteral stent, from which radial cross-sections were derived. Finally, abnormal areas of the radial cross-sections were detected via polar-coordinate transformation to locate the encrustation regions of the ureteral stent. Results: For the detection of ureteral stent encrustations, the algorithm achieved a confusion-matrix accuracy of 79.6% in the validation of residual stones/ureteral stent encrustations at 186 locations. Ultimately, the algorithm was validated on 222 cases, achieving a ureteral stent segmentation accuracy of 94.4% and a positive/negative judgment accuracy of 87.3%. The average detection time per case was 12 s. Conclusion: The proposed method for detecting ureteral stent encrustations in medical CT images, based on Mask-RCNN and 3D morphological analysis, can effectively assist clinicians in diagnosing ureteral stent encrustations.
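A hedged sketch of the 3D post-processing steps mentioned above (connected-domain filtering of the coarse detections and thinning to a centerline) is given below; the minimum-size threshold and the scipy/scikit-image functions are illustrative assumptions, not the published implementation.

```python
"""Illustrative 3D post-processing: keep only connected components large enough
to be a stent, then thin the result to a one-voxel-wide centerline."""
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize  # older scikit-image releases expose skeletonize_3d for 3D


def filter_stent_components(coarse_mask_3d: np.ndarray, min_voxels: int = 500) -> np.ndarray:
    """Drop small spurious detections using 3D connectivity (threshold is assumed)."""
    labeled, n = ndimage.label(coarse_mask_3d > 0)
    sizes = ndimage.sum(coarse_mask_3d > 0, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labeled, keep)


def stent_centerline(stent_mask_3d: np.ndarray) -> np.ndarray:
    """Thin the 3D segmentation to a centerline for later fitting and radial sectioning."""
    return skeletonize(stent_mask_3d)
```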
Project description: Tissue segmentation of histology whole-slide images (WSI) remains a critical task in automated digital pathology workflows, both for accurate disease diagnosis and for deep phenotyping in research. It is especially challenging when the tissue structure of biospecimens is relatively porous and heterogeneous, as in atherosclerotic plaques. In this study, we developed an approach called 'EntropyMasker', based on image entropy, to tackle the fore- and background segmentation (masking) task in histology WSI. We evaluated our method on 97 high-resolution WSI of human carotid atherosclerotic plaques from the Athero-Express Biobank Study, comprising hematoxylin and eosin and 8 other staining types. Using multiple benchmarking metrics, we compared our method with four widely used segmentation methods: Otsu's method, Adaptive mean, Adaptive Gaussian, and slideMask, and observed that our method had the highest sensitivity and Jaccard similarity index. We envision EntropyMasker filling an important gap in WSI preprocessing and machine learning image analysis pipelines, enabling disease phenotyping beyond the field of atherosclerosis.
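The sketch below shows one way an entropy-based tissue mask could be computed in the spirit of EntropyMasker, using local image entropy followed by Otsu thresholding; the disk radius and the use of a downsampled RGB thumbnail are assumptions, and this is not the published implementation.

```python
"""Illustrative entropy-based tissue masking: local entropy is high over textured
tissue and low over blank slide background."""
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte


def entropy_mask(rgb_thumbnail: np.ndarray, radius: int = 5) -> np.ndarray:
    """Return a boolean foreground mask for a downsampled WSI thumbnail."""
    gray = img_as_ubyte(rgb2gray(rgb_thumbnail))     # rank filters require integer images
    ent = entropy(gray, disk(radius))                # local entropy in a disk neighborhood
    return ent > threshold_otsu(ent)                 # threshold the entropy map, not the intensities
```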
Project description: Purpose: Manual delineation of head and neck (H&N) organ-at-risk (OAR) structures for radiation therapy planning is time consuming and highly variable. We therefore developed a dynamic multiatlas selection-based approach for fast and reproducible segmentation. Methods: Our approach dynamically selects and weights the appropriate number of atlases for weighted label fusion and generates segmentations along with consensus maps indicating voxel-wise agreement between atlases. Atlases were selected for a target as those exceeding an alignment weight, termed the dynamic atlas attention index. Alignment weights were computed either at the image level (global weighted voting, GWV) or at the structure level (structure weighted voting, SWV), using a normalized metric defined as the sum of squared distances of computed tomography (CT) radiodensity and modality-independent neighborhood descriptors (which extract edge information). Performance was compared on 77 H&N CT images from an internal Memorial Sloan-Kettering Cancer Center dataset (N = 45) and an external dataset (N = 32) using the Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of HD, median of the maximum surface distance, and volume ratio error against expert delineations. Pairwise DSC accuracy comparisons of the proposed methods (GWV, SWV) versus the single best atlas (BA) and majority voting (MV) methods were performed using Wilcoxon rank-sum tests. Results: Both SWV and GWV produced significantly better segmentation accuracy than BA (P < 0.001) and MV (P < 0.001) for all OARs in both datasets. SWV generated the most accurate segmentations, with DSC of 0.88 for oral cavity, 0.85 for mandible, 0.84 for cord, 0.76 for brainstem and parotids, 0.71 for larynx, and 0.60 for submandibular glands. SWV's accuracy exceeded GWV's for submandibular glands (DSC = 0.60 vs 0.52, P = 0.019). Conclusions: The contributed SWV and GWV methods generated more accurate automated segmentations than the other two multiatlas-based segmentation techniques. The consensus maps can be combined with segmentations to visualize voxel-wise consensus between atlases within OARs during manual review.
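The following sketch illustrates weighted label fusion in a generic form: each selected atlas contributes votes scaled by its alignment weight, and the fused label is the per-voxel argmax. The GWV/SWV weight computation from CT radiodensity and modality-independent neighborhood descriptors is omitted, so the weights here are assumed inputs rather than the paper's metric.

```python
"""Generic weighted label fusion over propagated atlas label maps (illustrative)."""
import numpy as np


def weighted_label_fusion(atlas_labels, atlas_weights, n_labels):
    """atlas_labels: list of integer label volumes (same shape); atlas_weights: per-atlas weights."""
    votes = np.zeros((n_labels,) + atlas_labels[0].shape, dtype=float)
    for labels, w in zip(atlas_labels, atlas_weights):
        for lab in range(n_labels):
            votes[lab] += w * (labels == lab)   # accumulate weighted votes per label
    # A voxel-wise consensus map could be derived from the normalized vote volume.
    return votes.argmax(axis=0)
```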
Project description: Background: High-throughput population screening for the novel coronavirus disease (COVID-19) is critical to controlling disease transmission. Convolutional neural networks (CNNs) are a cutting-edge technology in the field of computer vision and may prove more effective than humans in medical diagnosis based on computed tomography (CT) images. Chest CT images can show pulmonary abnormalities in patients with COVID-19. Methods: In this study, CT images were first preprocessed using the fuzzy c-means (FCM) algorithm to extract the pulmonary parenchyma region. The preprocessed images then underwent multiscale transformation and RGB (red, green, blue) space construction. The performance of GoogLeNet and ResNet, two state-of-the-art CNN architectures, was then compared for COVID-19 detection. In addition, transfer learning (TL) was employed to mitigate overfitting caused by the limited number of CT samples. Finally, the models were evaluated and compared using accuracy, recall, and F1 score. Results: ResNet-50 with TL (ResNet-50-TL) obtained the highest diagnostic accuracy, at 82.7%, with a recall of 79.1% for COVID-19. These results show that applying deep learning to COVID-19 screening based on chest CT images is a very promising approach, and they motivate the development of an automatic diagnostic system that can quickly and accurately screen large numbers of people for COVID-19. Conclusions: We tested a deep learning algorithm to detect COVID-19 and differentiate between healthy control samples, COVID-19 samples, and common pneumonia samples. We found that TL can significantly increase accuracy when the sample size is limited.
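A minimal sketch of a transfer-learning setup like the one described above, assuming a frozen ImageNet-pretrained ResNet-50 backbone with a retrained 3-class head (COVID-19, common pneumonia, healthy); hyperparameters are illustrative and the FCM lung-parenchyma preprocessing is not shown.

```python
"""Transfer-learning sketch with torchvision: freeze the backbone, retrain the classifier head."""
import torch
import torch.nn as nn
from torchvision import models


def build_resnet50_tl(num_classes: int = 3) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model


model = build_resnet50_tl()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # only the head is optimized
criterion = nn.CrossEntropyLoss()
```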
Project description: With the development of hybrid imaging scanners, micro-CT is widely used for locating abnormalities, studying drug metabolism, and providing structural priors to aid image reconstruction in functional imaging. Owing to the low contrast of soft tissues, segmentation of soft tissue organs from mouse micro-CT images is a challenging problem. In this paper, we propose a mouse segmentation scheme based on dynamic contrast-enhanced micro-CT images. With an in-house fast-scanning micro-CT scanner, dynamic contrast-enhanced images were acquired before and after injection of a non-ionic iodinated contrast agent (iohexol). The feature vector of each voxel was then extracted from the signal intensities at the different time points. Based on these features, the heart, liver, spleen, lung, and kidney were classified into different categories and extracted from the separate categories by morphological processing. The bone structure was segmented using a thresholding method. Our method was validated on seven BALB/c mice using two different classifiers: a support vector machine with a radial basis function kernel and a random forest. The results were compared to manual segmentation, and performance was assessed using the Dice similarity coefficient, false positive ratio, and false negative ratio. The results showed high accuracy, with the Dice similarity coefficient ranging from 0.709 ± 0.078 for the spleen to 0.929 ± 0.006 for the kidney.
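The voxel-wise classification step could look roughly like the sketch below, where each voxel's feature vector is its intensity across the dynamic contrast-enhanced time points and either an RBF-kernel SVM or a random forest is fitted; array shapes and hyperparameters are assumptions for illustration.

```python
"""Voxel-wise organ classification from dynamic contrast-enhanced time series (illustrative)."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC


def fit_voxel_classifier(dce_volumes: np.ndarray, labels: np.ndarray, use_svm: bool = False):
    """dce_volumes: (T, Z, Y, X) intensities over time points; labels: (Z, Y, X) organ IDs."""
    n_timepoints = dce_volumes.shape[0]
    X = dce_volumes.reshape(n_timepoints, -1).T   # one feature vector per voxel
    y = labels.ravel()
    clf = SVC(kernel="rbf") if use_svm else RandomForestClassifier(n_estimators=100)
    return clf.fit(X, y)
```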
Project description: Rodent models are increasingly important in translational neuroimaging research. In rodent neuroimaging, particularly magnetic resonance imaging (MRI) studies, brain extraction is a critical data preprocessing component. Current brain extraction methods for rodent MRI usually require manual adjustment of input parameters due to widely varying image qualities and/or contrasts. Here we propose a novel method, termed SHape descriptor selected Extremal Regions after Morphologically filtering (SHERM), which requires only a brain template mask as input and is capable of automatically and reliably extracting the brain tissue in both rat and mouse MRI images. The method identifies a set of brain mask candidates, extracted from MRI images morphologically opened and closed sequentially with multiple kernel sizes, that match the shape of the brain template. These candidates are then merged to generate the brain mask. This method, along with four other state-of-the-art rodent brain extraction methods, was benchmarked on four separate datasets including both rat and mouse MRI images. Without any parameter tuning, our method performed comparably to the other four methods on all datasets, and its performance was robust, with consistently high true positive rates and low false positive rates. Taken together, this study provides a reliable automatic brain extraction method that can contribute to the establishment of automatic pipelines for rodent neuroimaging data analysis.
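The sketch below conveys the flavor of this candidate-generation-and-selection idea under simplifying assumptions: masks opened and closed with several kernel sizes are scored against the template by Dice overlap (standing in for the shape descriptors) and the best-scoring candidates are merged. It is not the published SHERM implementation.

```python
"""Simplified SHERM-flavored candidate generation and merging (illustrative)."""
import numpy as np
from scipy import ndimage


def dice(a: np.ndarray, b: np.ndarray) -> float:
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def template_guided_mask(head_mask: np.ndarray, template_mask: np.ndarray,
                         kernel_sizes=(3, 5, 7, 9)) -> np.ndarray:
    candidates = []
    for k in kernel_sizes:
        structure = np.ones((k,) * head_mask.ndim, dtype=bool)
        opened = ndimage.binary_opening(head_mask, structure=structure)
        closed = ndimage.binary_closing(opened, structure=structure)
        candidates.append(closed)
    scores = [dice(c, template_mask) for c in candidates]   # shape match against the template
    best = np.argsort(scores)[-2:]                          # keep the two best candidates
    return np.logical_or.reduce([candidates[i] for i in best])  # merge into the final mask
```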
Project description: Background: Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance and the paucity of automation tools for the critical early processing step of brain extraction, which prepares brain images for alignment and voxel-wise statistics. New method: This novel, timesaving automation of template-based brain extraction ("skull-stripping") quickly and reliably extracts the brain from large numbers of whole-head images in a single step. The method is simple to install and requires minimal user interaction. Results: The method is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation in brain volumes is preserved. Comparison with existing methods: A downloadable software package, not otherwise available, for extracting brains from whole-head images is included here. This software tool increases speed, can be used with an atlas or with a template drawn from within the dataset, and produces masks that need little further refinement. Conclusions: Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole-head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI.
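For reference, the evaluation metrics mentioned above (Dice and Jaccard similarity between an extracted brain mask and a reference mask) can be computed as in this small sketch, which is illustrative and not part of the released package.

```python
"""Dice and Jaccard similarity between an automatic mask and a reference mask."""
import numpy as np


def dice_jaccard(auto_mask: np.ndarray, ref_mask: np.ndarray):
    a, b = auto_mask.astype(bool), ref_mask.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())   # overlap relative to mean mask size
    jaccard = inter / union                    # overlap relative to the union
    return dice, jaccard
```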
Project description: In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, the ideal midline is first estimated based on skull symmetry and anatomical features in the brain CT scan. Then the ventricles are segmented from the CT scan and used as a guide to identify the actual midline through shape matching. These steps mimic the measurement process used by physicians and have shown promising results in evaluation. In the second component, additional ICP-related features are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed in RapidMiner to build the prediction model. Evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may serve as a fast pre-screening step to help physicians decide for or against invasive ICP monitoring.
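An illustrative scikit-learn pipeline for the ICP pre-screening classifier is sketched below, combining feature selection with an RBF-kernel SVM; the feature count and parameters are assumptions, and the original model was built in RapidMiner rather than with this code.

```python
"""Feature selection + SVM classification pipeline for ICP pre-screening (illustrative)."""
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

icp_model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=10),   # keep the 10 most informative features (assumed k)
    SVC(kernel="rbf", probability=True),
)
# icp_model.fit(features, icp_labels)
# features per patient: midline shift, texture descriptors, blood amount, age, injury severity score, ...
```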