Semantic-Oriented Labeled-to-Unlabeled Distribution Translation for Image Segmentation


Abstract:

Automatic medical image segmentation plays a crucial role in many medical applications, such as disease diagnosis and treatment planning. Existing deep learning based models usually regard the segmentation task as pixel-wise classification and neglect the semantic correlations of pixels across different images, leading to vague feature distributions. Moreover, pixel-wise annotated data is rare in the medical domain, and the scarce annotated data usually exhibits a biased distribution relative to the desired one, hindering performance improvement under the supervised learning setting. In this paper, we propose a novel Labeled-to-unlabeled Distribution Translation (L2uDT) framework with Semantic-oriented Contrastive Learning (SoCL) to address these issues in medical image segmentation. In SoCL, a semantic grouping module is designed to cluster pixels into a set of semantically coherent groups, and a semantic-oriented contrastive loss is advanced to constrain the group-wise prototypes, so as to explicitly learn a feature space with intra-class compactness and inter-class separability. We then establish an L2uDT strategy to approximate the desired data distribution for unbiased optimization, translating the labeled data distribution under the guidance of extensive unlabeled data. In particular, a bias estimator is devised to measure the distribution bias, and a gradual-paced shift is derived to progressively translate the labeled data distribution toward the unlabeled one. Both labeled and translated data are leveraged to optimize the segmentation model simultaneously. We illustrate the effectiveness of the proposed method on two benchmark datasets, EndoScene and PROSTATEx, where it achieves state-of-the-art performance, clearly demonstrating its effectiveness for medical image segmentation. The source code is available at https://github.com/CityU-AIM-Group/L2uDT.
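The abstract describes two core mechanisms: group-wise prototypes constrained by a contrastive loss (SoCL), and a gradual-paced shift that translates labeled features toward the unlabeled distribution (L2uDT). A minimal NumPy sketch of both ideas follows. All function names are illustrative, the bias is estimated here simply as the difference of feature means, and the linear pacing schedule is an assumption; the paper's actual implementation (see the linked repository) may differ.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Average the embeddings of each semantic group (class) into a prototype.

    features: (N, D) pixel embeddings; labels: (N,) integer class ids.
    Returns a (num_classes, D) prototype matrix (zeros for absent classes).
    """
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_contrastive_loss(protos_a, protos_b, temperature=0.1):
    """InfoNCE-style loss over prototypes from two images: the class-c
    prototype of image A should attract class c of image B (diagonal)
    and repel the other classes (off-diagonal)."""
    a = protos_a / (np.linalg.norm(protos_a, axis=1, keepdims=True) + 1e-12)
    b = protos_b / (np.linalg.norm(protos_b, axis=1, keepdims=True) + 1e-12)
    logits = a @ b.T / temperature                   # (C, C) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal

def gradual_paced_shift(labeled_feats, unlabeled_feats, step, total_steps):
    """Translate labeled features toward the unlabeled distribution.

    The distribution bias is estimated as the gap between feature means,
    and the pace alpha grows linearly from 0 to 1 over training, so the
    translated data starts at the labeled distribution and ends matching
    the unlabeled mean.
    """
    alpha = step / total_steps
    bias = unlabeled_feats.mean(axis=0) - labeled_feats.mean(axis=0)
    return labeled_feats + alpha * bias
```

At `step == 0` the translated features equal the labeled ones; at `step == total_steps` their mean coincides with the unlabeled mean, which mirrors the "gradual-paced" translation the abstract describes.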
Published in: IEEE Transactions on Medical Imaging ( Volume: 41, Issue: 2, February 2022)
Page(s): 434 - 445
Date of Publication: 20 September 2021

PubMed ID: 34543194

Funding Agency:

Department of Electrical Engineering, City University of Hong Kong, Hong Kong, SAR, China

I. Introduction

Medical image segmentation is an essential step in a wide range of clinical applications. For instance, prostate zonal segmentation is beneficial for treatment planning [1], [2], and polyp segmentation in colonoscopy images can provide valuable boundary information for subsequent surgery [3], [4]. Currently, manual annotation is common in clinical practice; however, it is labor-intensive and prone to inter- and intra-observer variability. Hence, there is a high demand for accurate and reliable automatic segmentation methods that derive quantitative assessments for clinical applications.

