
Dataset Information


Unified cross-modality feature disentangler for unsupervised multi-domain MRI abdomen organs segmentation.


ABSTRACT: Our contribution is a unified cross-modality feature disentangling approach for multi-domain image translation and multiple organ segmentation. Using CT as the labeled source domain, our approach learns to segment multi-modal (T1-weighted and T2-weighted) MRI with no labeled data. Our approach uses a variational auto-encoder (VAE) to disentangle the image content from style. The VAE constrains the style feature encoding to match a universal prior (Gaussian) that is assumed to span the styles of all the source and target modalities. The extracted image style is converted into a latent style scaling code, which modulates the generator to produce multi-modality images from the image content features according to the target domain code. Finally, we introduce a joint distribution matching discriminator that combines the translated images with task-relevant segmentation probability maps to further constrain and regularize image-to-image (I2I) translations. We performed extensive comparisons to multiple state-of-the-art I2I translation and segmentation methods. Our approach resulted in the lowest average multi-domain image reconstruction error of 1.34±0.04. Our approach produced an average Dice similarity coefficient (DSC) of 0.85 for T1w and 0.90 for T2w MRI for multi-organ segmentation, which was highly comparable to a fully supervised MRI multi-organ segmentation network (DSC of 0.86 for T1w and 0.90 for T2w MRI).
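The abstract's core mechanism can be illustrated with a minimal numerical sketch: a style code is sampled via the VAE reparameterization trick, a KL term pulls its distribution toward the shared Gaussian prior spanning all modalities, and the sampled style is mapped to per-channel scale factors that modulate the content features. The shapes and the linear style-to-scale mapping below are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Sample a style code z ~ N(mu, sigma^2) via the reparameterization trick
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL(N(mu, sigma^2) || N(0, I)): pushes every modality's style
    # encoding toward the single shared Gaussian prior
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def style_modulate(content, z, W, b):
    # Hypothetical linear map from style sample to per-channel scale
    # factors; the scales then modulate the content feature map
    scale = 1.0 + W @ z + b              # shape: (channels,)
    return content * scale[:, None, None]

# Toy shapes: 8-dim style code, 4-channel 16x16 content feature map
mu = rng.standard_normal(8)
logvar = rng.standard_normal(8) * 0.1
content = rng.standard_normal((4, 16, 16))
W = rng.standard_normal((4, 8)) * 0.1
b = np.zeros(4)

z = reparameterize(mu, logvar, rng)
out = style_modulate(content, z, W, b)
print(out.shape)                          # (4, 16, 16): content shape preserved
print(kl_to_standard_normal(mu, logvar))  # always >= 0
```

In the paper's setting, a different style sample (e.g. drawn for a T1w versus T2w target) would yield different scale factors and hence different translated appearances from the same content features.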

SUBMITTER: Jiang J 

PROVIDER: S-EPMC7757792 | biostudies-literature | 2020 Oct

REPOSITORIES: biostudies-literature


Publications

Unified cross-modality feature disentangler for unsupervised multi-domain MRI abdomen organs segmentation.

Jue Jiang, Harini Veeraraghavan

Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention, 2020-09-29



Similar Datasets

| S-EPMC7075308 | biostudies-literature
| S-EPMC10415941 | biostudies-literature
| S-EPMC7757913 | biostudies-literature
| S-EPMC10442428 | biostudies-literature
| S-EPMC5709221 | biostudies-other
| S-EPMC10623055 | biostudies-literature
| S-EPMC8270218 | biostudies-literature
| S-EPMC9850278 | biostudies-literature
| S-EPMC6052096 | biostudies-literature
| S-EPMC7954354 | biostudies-literature