Project description: White matter hyperintensities (WMHs) have been associated with various cerebrovascular and neurodegenerative diseases. Reliable quantification of WMHs is essential for understanding their clinical impact in normal and pathological populations. Automated segmentation of WMHs is highly challenging due to heterogeneity in WMH characteristics between deep and periventricular white matter, the presence of artefacts, and differences in the pathology and demographics of populations. In this work, we propose an ensemble triplanar network that combines the predictions from three different planes of brain MR images to provide an accurate WMH segmentation. The network uses anatomical information regarding WMH spatial distribution in its loss functions, to improve the efficiency of segmentation and to overcome the contrast variations between deep and periventricular WMHs. We evaluated our method on 5 datasets, of which 3 are part of a publicly available dataset (training data for the MICCAI WMH Segmentation Challenge 2017 - MWSC 2017) consisting of subjects from three different cohorts, and we also submitted our method to MWSC 2017 to be evaluated on the unseen test datasets. On evaluating our method separately in deep and periventricular regions, we observed robust and comparable performance in both regions. Our method performed better than most of the existing methods, including FSL BIANCA, and on par with the top-ranking deep learning methods of MWSC 2017.
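The triplanar ensemble idea described above can be sketched as a simple average of per-plane probability maps. This is an illustrative sketch, not the authors' implementation: the per-plane predictions and the 0.5 threshold are assumptions, and a real pipeline would first resample each plane's slice-wise predictions back into a common volume grid.

```python
import numpy as np

def ensemble_triplanar(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Average per-plane WMH probability volumes and binarize.

    All three inputs are assumed to be probability volumes already
    aligned on the same voxel grid (an assumption for this sketch).
    """
    stacked = np.stack([p_axial, p_sagittal, p_coronal], axis=0)
    mean_prob = stacked.mean(axis=0)
    return (mean_prob >= threshold).astype(np.uint8)
```

Averaging lets a confident detection in one plane outvote weak responses in the other two, which is one plausible reason triplanar ensembles are robust to slice-direction artefacts.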
Project description: An accurate determination of the Gleason Score (GS) or Gleason Pattern (GP) is crucial in the diagnosis of prostate cancer (PCa) because it is one of the criteria used to guide treatment decisions for prognostic-risk groups. However, the manual designation of GP by a pathologist using a microscope is prone to error and subject to significant inter-observer variability. Deep learning has been used to automatically differentiate GP on digitized slides, aiding pathologists and reducing inter-observer variability, especially for early GPs of cancer. This article presents a binary semantic segmentation for the GP of prostate adenocarcinoma. The segmentation separates benign and malignant tissues, with the malignant class consisting of adenocarcinoma GP3 and GP4 tissues annotated from 50 unique digitized whole slide images (WSIs) of prostate needle core biopsy specimens stained with hematoxylin and eosin. The pyramidal digitized WSIs were extracted into image patches with a size of 256 × 256 pixels at a magnification of 20×. An ensemble approach is proposed combining U-Net-based architectures, including the traditional U-Net, attention-based U-Net, and residual attention-based U-Net. This work initially considers PCa tissue analysis using a combination of attention gate units with residual convolution units. The performance evaluation revealed a mean Intersection-over-Union of 0.79 for the two classes, 0.88 for the benign class, and 0.70 for the malignant class. The proposed method was then used to produce pixel-level segmentation maps of PCa adenocarcinoma tissue slides in the testing set. We developed a screening tool to discriminate between benign and malignant prostate tissue in digitized images of needle biopsy samples using an AI approach. We aimed to identify malignant adenocarcinoma tissues from our own collected, annotated, and organized dataset. Our approach achieved performance that was accepted by the pathologists.
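The per-class Intersection-over-Union figures reported above can be computed directly from a predicted and a reference label map. A minimal sketch, assuming binary labels (0 = benign, 1 = malignant; the function name is ours):

```python
import numpy as np

def iou_per_class(pred, target, n_classes=2):
    """Intersection-over-Union for each class label in two label maps."""
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious
```

The mean IoU quoted in the abstract is then simply the average of the per-class values, which is why the 0.79 figure sits between the benign (0.88) and malignant (0.70) scores.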
Project description: Semantic segmentation is an important imaging analysis method enabling the identification of tissue structures. Histological image segmentation is particularly challenging, as the images carry rich structural information while only limited training data are available. Additionally, labeling these structures to generate training data is time consuming. Here, we demonstrate the feasibility of semantic segmentation using U-Net with a novel sparse labeling technique. The basic U-Net architecture was extended by attention gates, residual and recurrent links, and dropout regularization. To overcome the high class imbalance, which is intrinsic to histological data, under- and oversampling and data augmentation were used. In an ablation study, various architectures were evaluated, and the best performing model was identified. This model contains attention gates, residual links, and a dropout regularization of 0.125. The segmented images show accurate delineations of the vascular structures (with a precision of 0.9088 and an AUC-ROC score of 0.9717), and the segmentation algorithm is robust to images containing staining variations and damaged tissue. These results demonstrate the feasibility of sparse labeling in combination with the modified U-Net architecture.
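The oversampling step used against class imbalance can be sketched as simple replication of minority-class examples until every class is equally frequent. This is a generic illustration, not the study's exact sampler; all names are ours:

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Replicate minority-class samples until all classes are equally frequent."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        # Keep every original sample, then draw random repeats to fill up.
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(picks)
        out_y.extend([y] * target)
    return out_s, out_y
```

In practice such replication is combined with augmentation (as the study does), so repeated patches are not pixel-identical at training time.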
Project description: Background and objectives: Intravascular ultrasound (IVUS) evaluation of coronary artery morphology is based on lumen and vessel segmentation. This study aimed to develop an automatic segmentation algorithm and validate its performance for measuring quantitative IVUS parameters. Methods: A total of 1,063 patients were randomly assigned, with a ratio of 4:1, to the training and test sets. An independent data set of 111 IVUS pullbacks was obtained to assess the vessel-level performance. The lumen and external elastic membrane (EEM) boundaries were labeled manually in every IVUS frame at a 0.2-mm interval. The Efficient-UNet was utilized for the automatic segmentation of IVUS images. Results: At the frame level, Efficient-UNet showed a high Dice similarity coefficient (DSC, 0.93±0.05) and Jaccard index (JI, 0.87±0.08) for lumen segmentation, and demonstrated a high DSC (0.97±0.03) and JI (0.94±0.04) for EEM segmentation. At the vessel level, there were close correlations between model-derived and expert-measured IVUS parameters: minimal lumen image area (r=0.92), EEM area (r=0.88), lumen volume (r=0.99), and plaque volume (r=0.95). The agreement between model-derived and expert-measured minimal lumen area was excellent, comparable to the agreement between experts. Model-based lumen and EEM segmentation for a 20-mm lesion segment required 13.2 seconds, whereas manual segmentation at a 0.2-mm interval by an expert took 187.5 minutes on average. Conclusions: The deep learning models can accurately and quickly delineate vascular geometry. The artificial intelligence-based methodology may support clinicians' decision-making through real-time application in the catheterization laboratory.
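The DSC and JI metrics quoted above have compact definitions for binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(a, b):
    """DSC = 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard_index(a, b):
    """JI = |A∩B| / |A∪B| for two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

The two are monotonically related (JI = DSC / (2 - DSC)), which is why the paper's DSC and JI values rank the lumen and EEM results identically.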
Project description: Background: The detection of coronary artery disease (CAD) from X-ray coronary angiography is a crucial process which is hindered by various issues, such as the presence of noise, insufficient contrast in the input images, and uncertainties caused by respiratory motion and variation in vessel angles. Methods: In this article, an Automated Segmentation and Diagnosis of Coronary Artery Disease (ASCARIS) model is proposed in order to overcome the prevailing challenges in the detection of CAD from X-ray images. Initially, preprocessing of the input images was carried out using a modified Wiener filter to remove both internal and external noise pixels from the images. Then, contrast enhancement was carried out by utilizing the optimized maximum principal curvature to preserve edge information, thereby contributing to increased segmentation accuracy. Further, binarization of the enhanced images was executed by means of Otsu thresholding. Segmentation of the coronary arteries was performed by implementing the Attention-based Nested U-Net, in which an attention estimator was incorporated to overcome the difficulties caused by intersections and overlapping arteries. Increased segmentation accuracy was achieved by performing angle estimation.
Finally, a VGG-16 based architecture was implemented to extract threefold features from the segmented image to classify the X-ray images into normal and abnormal classes. Results: The experimentation of the proposed ASCARIS model was carried out in the MATLAB R2020a simulation tool, and the proposed model was compared with several existing approaches in terms of accuracy, sensitivity, specificity, revised contrast-to-noise ratio, mean square error, Dice coefficient, Jaccard similarity, Hausdorff distance, peak signal-to-noise ratio (PSNR), segmentation accuracy, and ROC curve. Discussion: The results obtained conclude that the proposed model outperforms the existing approaches in all the evaluation metrics, thereby achieving optimized classification of CAD. The proposed method removes a large number of background artifacts and obtains a better vascular structure.
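Of the pipeline steps above, the Otsu binarization has a compact closed form: pick the threshold that maximizes the between-class variance of the intensity histogram. A sketch for 8-bit images (the ASCARIS-specific preprocessing around it is not reproduced here):

```python
import numpy as np

def otsu_threshold(image):
    """Return the 8-bit intensity threshold maximizing between-class variance."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]            # pixels at or below t (background weight)
        if w0 == 0:
            continue
        w1 = total - w0          # pixels above t (foreground weight)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a strongly bimodal vessel/background histogram, the maximizing threshold falls between the two modes, which is what makes Otsu a reasonable default for binarizing enhanced angiograms.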
Project description: Blood flow measurements in the ascending aorta and pulmonary artery from phase-contrast magnetic resonance images require accurate time-resolved vessel segmentation over the cardiac cycle. Current semi-automatic segmentation methods often involve time-consuming manual correction, relying on user experience for accurate results. The purpose of this study was to develop a semi-automatic vessel segmentation algorithm with shape constraints based on manual vessel delineations for robust segmentation of the ascending aorta and pulmonary artery, to evaluate the proposed method in healthy volunteers and patients with heart failure and congenital heart disease, to validate the method in a pulsatile flow phantom experiment, and to make the method freely available for research purposes. Algorithm shape constraints were extracted from manual reference delineations of the ascending aorta (n = 20) and pulmonary artery (n = 20) and were included in a semi-automatic segmentation method only requiring manual delineation in one image. Bias and variability (bias ± SD) for flow volume of the proposed algorithm versus manual reference delineations were 0.0 ± 1.9 ml in the ascending aorta (n = 151; seven healthy volunteers; 144 heart failure patients) and -1.7 ± 2.9 ml in the pulmonary artery (n = 40; 25 healthy volunteers; 15 patients with atrial septal defect). Interobserver bias and variability were lower (P = 0.008) for the proposed semi-automatic method (-0.1 ± 0.9 ml) compared to manual reference delineations (1.5 ± 5.1 ml). Phantom validation showed good agreement between the proposed method and timer-and-beaker flow volumes (0.4 ± 2.7 ml). In conclusion, the proposed semi-automatic vessel segmentation algorithm can be used for efficient analysis of flow and shunt volumes in the aorta and pulmonary artery.
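The bias ± SD figures above are simply the mean and sample standard deviation of the paired differences between automatic and reference flow volumes. A minimal sketch (variable names are ours):

```python
import numpy as np

def bias_and_sd(auto_ml, reference_ml):
    """Bias (mean paired difference) and variability (sample SD of differences)
    between automatic and reference flow volumes, both in ml."""
    diff = np.asarray(auto_ml, float) - np.asarray(reference_ml, float)
    return diff.mean(), diff.std(ddof=1)
```

This is the same pair of statistics plotted as the center line and limits of agreement in a Bland-Altman analysis, the standard way to compare a new measurement method against a reference.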
Project description: Breast ultrasound medical images often have low imaging quality along with unclear target boundaries. These issues make it challenging for physicians to accurately identify and outline tumors when diagnosing patients. Since precise segmentation is crucial for diagnosis, there is a strong need for an automated method to enhance the segmentation accuracy, which can serve as a technical aid in diagnosis. Recently, the U-Net and its variants have shown great success in medical image segmentation. In this study, drawing inspiration from the U-Net concept, we propose a new variant of the U-Net architecture, called DBU-Net, for tumor segmentation in breast ultrasound images. To enhance the feature extraction capabilities of the encoder, we introduce a novel approach involving the utilization of two distinct encoding paths. In the first path, the original image is employed, while in the second path, we use an image created using the Roberts edge filter, in which edges are highlighted. This dual-branch encoding strategy helps to extract semantically rich information through a mutually informative learning process. At each level of the encoder, both branches independently undergo two convolutional layers followed by a pooling layer. To facilitate cross learning between the branches, a weighted addition scheme is implemented. These weights are dynamically learned by considering the gradient with respect to the loss function. We evaluate the performance of our proposed DBU-Net model on two datasets, namely BUSI and UDIAT, and our experimental results demonstrate superior performance compared to state-of-the-art models.
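The Roberts edge filter feeding the second encoder branch is a standard operator: two 2×2 diagonal-difference kernels whose responses are combined into a gradient magnitude. A sketch (DBU-Net's exact handling of image borders and scaling is not detailed here):

```python
import numpy as np

def roberts_edges(img):
    """Roberts cross gradient magnitude of a 2D grayscale image."""
    img = np.asarray(img, float)
    # Diagonal differences over each 2x2 neighborhood:
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    out = np.zeros_like(img)
    out[:-1, :-1] = np.sqrt(gx ** 2 + gy ** 2)
    return out
```

Because the kernels span only 2×2 pixels, the filter responds sharply to fine boundaries, which is presumably why it is useful for highlighting faint tumor edges in low-contrast ultrasound.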
Project description: Quantitative ultrasound (QUS) aims to reveal information about the tissue microstructure using backscattered echo signals from clinical scanners. Among different QUS parameters, scatterer number density is an important property that can affect the estimation of other QUS parameters. Scatterer number density can be classified into high or low scatterer densities. If there are more than ten scatterers inside the resolution cell, the envelope data are considered as fully developed speckle (FDS) and, otherwise, as underdeveloped speckle (UDS). In conventional methods, the envelope data are divided into small overlapping windows (a strategy we refer to here as patching), and statistical parameters, such as SNR and skewness, are employed to classify each patch of envelope data. However, these parameters are system-dependent, meaning that their distribution can change with the imaging settings and patch size. Therefore, reference phantoms that have known scatterer number density are imaged with the same imaging settings to mitigate system dependency. In this article, we aim to segment regions of ultrasound data without any patching. A large dataset is generated, which has different shapes of scatterer number density and mean scatterer amplitude using a fast simulation method. We employ a convolutional neural network (CNN) for the segmentation task and investigate the effect of domain shift when the network is tested on different datasets with different imaging settings. A Nakagami parametric image is employed for multitask learning to improve performance. Furthermore, inspired by the reference phantom methods in QUS, a domain adaptation stage is proposed, which requires only two frames of data from FDS and UDS classes. We evaluate our method for different experimental phantoms and in vivo data.
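The conventional patch statistics mentioned above (envelope SNR and skewness) can be sketched directly. For fully developed speckle the envelope is Rayleigh-distributed, for which the SNR (mean over standard deviation) is about 1.91; departures from that value signal UDS. The function name is ours:

```python
import numpy as np

def patch_stats(envelope):
    """SNR (mean/std) and skewness of an envelope-data patch."""
    env = np.asarray(envelope, float)
    mu, sd = env.mean(), env.std()
    snr = mu / sd
    skew = ((env - mu) ** 3).mean() / sd ** 3
    return snr, skew
```

The system dependence the article points out is visible here: both statistics are computed over a finite window, so their sampling distribution shifts with patch size and imaging settings, motivating reference phantoms or, as proposed, patch-free segmentation.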
Project description: Purpose: Three-dimensional reconstruction of a vessel centerline from paired planar coronary angiographic images is critical to reconstruct the complex three-dimensional structure of the coronary artery lumen and the relative positioning of implanted devices. In this study, a new vessel centerline reconstruction method that can utilize non-isocentric and non-orthogonal pairs of angiographic images was developed and tested. Methods: Our new method was developed in in vitro phantom models of a bifurcated coronary artery with and without a stent, and then tested in in vivo swine models (twelve coronary arteries). The method was also validated using data from six patients. Results: Our new method demonstrated high accuracy (root mean square error = 0.27 mm or 0.76 pixel) and high reproducibility across a broad imaging angle (20°-130°) and between different cardiac cycles in vitro and in vivo. Use of this method demonstrated that the vessel centerline in the stented segment did not deform significantly over a cardiac cycle in vivo. In addition, the total movement of the isocenter in each image could be accurately estimated in vitro and in vivo. The performance of this new method on patient data was similar to that for the in vitro phantom models and in vivo animal models. Conclusions: We developed a vessel centerline reconstruction method for non-isocentric and non-orthogonal angiographic images. It demonstrated high accuracy and good reproducibility in vitro, in vivo, and in a clinical setting, suggesting that our new method is clinically applicable despite the small sample size of clinical data.
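In the idealized pinhole case, recovering a 3D centerline point from a pair of views reduces to linear (DLT) triangulation. The paper's contribution is handling non-isocentric, non-orthogonal geometry and isocenter motion, which this textbook sketch does not cover:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Least-squares 3D point from two 3x4 projection matrices and the
    corresponding image points (u, v) observed in each view."""
    # Each image coordinate contributes one homogeneous linear constraint.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The null-space direction of A is the homogeneous 3D point.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Triangulating every matched pair of centerline samples along the two projections yields the 3D centerline; the quoted RMSE compares such reconstructed points against ground truth.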
Project description: Bone metastasis, emerging oncological therapies, and osteoporosis represent some of the distinct clinical contexts which can result in morphological alterations in bone structure. The visual assessment of these changes through anatomical images is considered suboptimal, emphasizing the importance of precise skeletal segmentation as a valuable aid for their evaluation. In the present study, a neural network model for automatic skeleton segmentation from two-dimensional computed tomography (CT) slices is proposed. A total of 77 CT images and their semimanual skeleton segmentations from two acquisition protocols (whole-body and femur-to-head) are used to form a training group and a testing group. Preprocessing of the images includes four main steps: stretcher removal, thresholding, image clipping, and normalization (with two different techniques: interpatient and intrapatient). Subsequently, five different sets are created and arranged in a randomized order for the training phase. A neural network model based on the U-Net architecture is implemented with different values for the number of channels in each feature map and the number of epochs. The model with the best performance obtains a Jaccard index (IoU) of 0.959 and a Dice index of 0.979. The resulting model demonstrates the potential of deep learning applied to medical images and proves its utility in bone segmentation.
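The two normalization variants mentioned (intrapatient vs. interpatient) differ only in where the scaling statistics come from: the individual scan or the whole cohort. A minimal min-max sketch under that assumption; the study's exact normalization formula is not specified here:

```python
import numpy as np

def normalize_intrapatient(volume):
    """Min-max scale a scan using its own intensity range."""
    v = np.asarray(volume, float)
    return (v - v.min()) / (v.max() - v.min())

def normalize_interpatient(volume, cohort_min, cohort_max):
    """Min-max scale using global statistics shared across all patients."""
    v = np.asarray(volume, float)
    return np.clip((v - cohort_min) / (cohort_max - cohort_min), 0.0, 1.0)
```

Intrapatient scaling maximizes per-scan contrast but maps the same tissue to different values across patients; interpatient scaling keeps intensities comparable across the cohort, which can matter when the network must learn a single bone-intensity threshold.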