Project description: Background: Deterministic deep learning models have achieved state-of-the-art performance in various medical image analysis tasks, including nuclei segmentation from histopathology images. These models focus on improving prediction accuracy without assessing confidence in the predictions. Methods: We propose a semantic segmentation model using a Bayesian representation to segment nuclei from histopathology images and to quantify the epistemic uncertainty. We employ Bayesian approximation with Monte-Carlo (MC) dropout at inference time to estimate the model's prediction uncertainty. Results: We evaluate the performance of the proposed approach on the PanNuke dataset, which consists of 312 visual fields from 19 organ types. We compare the nuclei segmentation accuracy of our approach with that of a fully convolutional neural network, U-Net, SegNet, and the state-of-the-art Hover-net, using F1-score and intersection over union (IoU) as evaluation metrics. The proposed approach achieves a mean F1-score of 0.893 ± 0.008 and an IoU of 0.868 ± 0.003 on the PanNuke test set, outperforming Hover-net (mean F1-score 0.871 ± 0.010, IoU 0.840 ± 0.032). Conclusions: The proposed approach, which incorporates a Bayesian representation and Monte-Carlo dropout, segments nuclei from histopathology images more accurately than existing models such as U-Net, SegNet, and Hover-net. By modeling epistemic uncertainty, it also provides a more reliable estimate of prediction confidence. These findings highlight the potential of Bayesian deep learning for medical image analysis and can contribute to more accurate and reliable computer-aided diagnostic systems.
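The MC-dropout estimate described above can be sketched in a few lines: dropout stays active at inference, and the spread of repeated stochastic forward passes approximates the epistemic uncertainty. The toy single-layer sigmoid model, weight scaling, and parameter names below are illustrative assumptions, not the study's actual architecture.

```python
import numpy as np

def mc_dropout_predict(x, weights, T=50, p=0.5, rng=None):
    """Monte-Carlo dropout: keep dropout active at inference and run T
    stochastic forward passes. The per-output mean is the prediction;
    the standard deviation approximates epistemic uncertainty."""
    rng = np.random.default_rng(rng)
    samples = []
    for _ in range(T):
        mask = rng.random(weights.shape) > p           # random dropout mask
        logits = x @ (weights * mask) / (1.0 - p)      # scaled forward pass
        samples.append(1.0 / (1.0 + np.exp(-logits)))  # sigmoid probabilities
    samples = np.stack(samples)
    return samples.mean(axis=0), samples.std(axis=0)   # prediction, uncertainty
```

In practice the same idea applies per pixel of a segmentation map: pixels where the T stochastic predictions disagree receive a high uncertainty value.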
Project description: Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach.
Project description: Purpose: To automatically segment and measure the levator hiatus with a deep learning approach and to evaluate performance across algorithms, sonographers, and devices. Methods: Three deep learning models (UNet-ResNet34, HR-Net, and SegNet) were trained with 360 images and validated with 42 images. The trained models were tested with two test sets. The first set, of 138 images, evaluated performance against sonographers; an independent set of 679 images assessed performance across different ultrasound devices. Four metrics were used for evaluation: the Dice similarity coefficient (DSC), the Hausdorff distance (HDD), and the relative and absolute errors of the segmentation area. Results: The UNet model outperformed HR-Net and SegNet, achieving a mean DSC of 0.964 on the first test set and 0.952 on the independent test set. In a noninferiority test, UNet was noninferior to three senior sonographers on the first test set and performed equivalently on the two test sets collected with different devices. On average, it took 2 s to process one case with a GPU and 2.4 s with a CPU. Conclusions: The deep learning approach performs well for levator hiatus segmentation and generalizes well to independent test sets. This automatic levator hiatus segmentation approach could help shorten clinical examination time and improve consistency.
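Of the four evaluation metrics, the DSC and the two segmentation-area errors reduce to simple binary-mask arithmetic, sketched below (the HDD is omitted, and the function names are illustrative, not from the study's code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks:
    twice the overlap divided by the total foreground area."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def area_errors(pred, gt):
    """Absolute and relative error of the segmented area (in pixels)."""
    abs_err = abs(int(pred.sum()) - int(gt.sum()))
    return abs_err, abs_err / gt.sum()
```

For example, a prediction of two foreground pixels against a ground truth of one overlapping pixel yields a DSC of 2·1/(2+1) ≈ 0.667.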
Project description: We present DeepMIB, a new software package that is capable of training convolutional neural networks for segmentation of multidimensional microscopy datasets on any workstation. We demonstrate its successful application for segmentation of 2D and 3D electron and multicolor light microscopy datasets with isotropic and anisotropic voxels. We distribute DeepMIB as both open-source multi-platform MATLAB code and as a compiled standalone application for Windows, macOS, and Linux. It comes in a single package that is simple to install and use and does not require programming knowledge. DeepMIB is suitable for everyone interested in bringing the power of deep learning into their own image segmentation workflows.
Project description: A method for nuclei segmentation in fluorescence in situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, a matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting the red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases evaluated by a skilled pathologist. Test results show that the new method has advantages compared with previously reported methods.
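The two computational steps of this pipeline, hard thresholding of the Hölder-exponent matrix and signal counting within the resulting mask, can be sketched as follows. Which side of the threshold corresponds to nuclei is an assumption here, as are the dot-mask inputs; the source describes only the overall procedure.

```python
import numpy as np

def segment_nuclei(holder, threshold):
    """Initial hard-threshold segmentation of the Hölder-exponent matrix.
    (Treating low exponents as nuclei is an illustrative assumption; the
    user refines `threshold` interactively if the result is poor.)"""
    return holder <= threshold

def her2_score(red_dots, green_dots, nuclei_mask):
    """Ratio of red (HER2) to green signals counted inside segmented nuclei."""
    red = np.logical_and(red_dots, nuclei_mask).sum()
    green = np.logical_and(green_dots, nuclei_mask).sum()
    return red / green if green else float("inf")
```

Dots falling outside the segmented nuclei are excluded from the count, which is why segmentation quality directly affects the resulting score.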
Project description: Cell death experiments are routinely done in many labs around the world, and they form the backbone of many assays for drug development. Cell death detection is usually performed in several ways, each requiring time and reagents. However, cell death is preceded by slight morphological changes in cell shape and texture. In this paper, we trained a neural network to classify cells undergoing cell death. We found that the network could accurately predict cell death after one hour of exposure to camptothecin, a prediction that largely outperforms human ability. Finally, we provide a simple Python tool that can broadly be used to detect cell death.
Project description: The study objective was to investigate the performance of a dedicated convolutional neural network (CNN) optimized for wrist cartilage segmentation from 2D MR images. The CNN used a planar architecture and a patch-based (PB) training approach that ensured optimal performance in the presence of a limited amount of training data. The CNN was trained and validated on 20 multi-slice MRI datasets acquired with two different coils in 11 subjects (healthy volunteers and patients). The validation included a comparison with alternative state-of-the-art CNN methods for segmenting joints from MR images and with the ground-truth manual segmentation. When trained on the limited training data, the CNN significantly outperformed image-based and PB-U-Net networks. Our PB-CNN also demonstrated good agreement with manual segmentation (Sørensen-Dice similarity coefficient [DSC] = 0.81) in the representative (central coronal) slices, which contain a large amount of cartilage tissue. The network's reduced performance on slices with very little cartilage tissue suggests the need for fully 3D convolutional networks to provide uniform performance across the joint. The study also assessed inter- and intra-observer variability of manual wrist cartilage segmentation (DSC = 0.78-0.88 and 0.9, respectively). The proposed deep learning-based segmentation of the wrist cartilage from MRI could facilitate research into novel imaging markers of wrist osteoarthritis to characterize its progression and response to therapy.
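The patch-based (PB) training idea mentioned above can be illustrated with a simple sliding-window extractor: overlapping patches multiply the number of training samples drawn from a small dataset. The patch size and stride below are arbitrary illustrative values, not those used in the study.

```python
import numpy as np

def extract_patches(image, patch=32, stride=16):
    """Slide a window over a 2D slice and stack the resulting patches.
    With stride < patch, patches overlap and the effective number of
    training samples grows well beyond the number of slices."""
    H, W = image.shape
    return np.stack([image[i:i + patch, j:j + patch]
                     for i in range(0, H - patch + 1, stride)
                     for j in range(0, W - patch + 1, stride)])
```

A 64×64 slice with a 32-pixel patch and a 16-pixel stride, for instance, yields 3×3 = 9 training patches.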
Project description: Recent advancements in deep learning have revolutionized the way microscopy images of cells are processed. Deep learning network architectures have a large number of parameters, so reaching high accuracy requires a massive amount of annotated data. A common way of improving accuracy builds on artificially increasing the training set with different augmentation techniques. A less common way relies on test-time augmentation (TTA), which yields transformed versions of the image for prediction and merges the results. In this paper we describe how we incorporated the test-time augmentation prediction method into two major segmentation approaches used in single-cell analysis of microscopy images: semantic segmentation based on the U-Net model and instance segmentation based on the Mask R-CNN model. Our findings show that even simple test-time augmentations (such as rotation or flipping, with proper merging methods) can significantly improve prediction accuracy. We used images of tissue and cell cultures from the Data Science Bowl (DSB) 2018 nuclei segmentation competition and other sources. Additionally, by boosting the highest-scoring DSB method with TTA, we further improved prediction accuracy, and our method reached the best score to date on the DSB.
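The rotation-and-flip TTA scheme with a mean-merging step can be sketched generically as below. Here `predict` stands in for any segmentation model returning a probability map; averaging as the merge rule is one of several options, and the eight-transform dihedral group used here is an illustrative choice.

```python
import numpy as np

def tta_predict(predict, image):
    """Test-time augmentation: predict on rotated/flipped copies of the
    image, undo each transform on the probability map, and average."""
    outs = []
    for k in range(4):                                     # 0/90/180/270 deg rotations
        rot = np.rot90(image, k)
        outs.append(np.rot90(predict(rot), -k))            # invert the rotation
        flip = np.fliplr(rot)
        outs.append(np.rot90(np.fliplr(predict(flip)), -k))  # invert flip, then rotation
    return np.mean(outs, axis=0)                           # merge by averaging
```

Because each prediction is mapped back to the original orientation before merging, a model that is perfectly equivariant to these transforms would be unchanged by TTA; real models are not, and the averaging smooths out their orientation-dependent errors.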
Project description: Magnetic resonance imaging (MRI) is widely used for ischemic stroke lesion detection in mice. A challenge is that lesion segmentation often relies on manual tracing by trained experts, which is labor-intensive, time-consuming, and prone to inter- and intra-rater variability. Here, we present a fully automated ischemic stroke lesion segmentation method for mouse T2-weighted MRI data. As an end-to-end deep learning approach, the automated lesion segmentation requires very little preprocessing and works directly on the raw MRI scans. We randomly split a large dataset of 382 MRI scans into a subset (n = 293) to train the automated lesion segmentation and a subset (n = 89) to evaluate its performance. We compared Dice coefficients and the accuracy of lesion volume against manual segmentation, as well as performance on an independent dataset from an open repository with different imaging characteristics. The automated lesion segmentation produced segmentation masks with a smooth, compact, and realistic appearance that are in high agreement with manual segmentation. We report Dice scores higher than the agreement between two human raters reported in previous studies, highlighting the ability to remove individual human bias and standardize the process across research studies and centers.
Project description: Since cone-beam computed tomography (CBCT) technology has been widely adopted in orthodontics, multiple attempts have been made to devise techniques for mandibular segmentation and 3D superimposition. Unfortunately, as the software utilized in these methods is not specifically designed for orthodontics, complex procedures are often necessary to analyze each case. Thus, this study aimed to establish an orthodontist-friendly protocol for segmenting the mandible from CBCT images that maintains access to the internal anatomic structures. The "sculpting tool" in the Dolphin 3D Imaging software was used for segmentation. The segmented mandible images were saved as STL files for volume matching in 3D Slicer to validate the repeatability of the current protocol and were exported as DICOM files for internal structure analysis and voxel-based superimposition. The mandibles of all tested CBCT datasets were successfully segmented. The volume matching analysis showed high consistency between two independent segmentations for each mandible. The intraclass correlation coefficient (ICC) analysis on 20 additional CBCT mandibular segmentations further demonstrated the high consistency of the current protocol. Moreover, all of the anatomical structures for superimposition identified by the American Board of Orthodontics were found in the voxel-based superimposition, demonstrating the ability to conduct precise internal structure analyses with the segmented images. An efficient and precise protocol to segment the mandible while retaining access to the internal structures was developed on the basis of CBCT images.