
MCC: Multi-Cluster Contrastive Semi-Supervised Segmentation Framework for Echocardiogram Videos


The proposed Multi-Cluster Contrastive (MCC) learning framework leverages multi-cluster contrastive loss and an anchor frame selection algorithm to be compatible with mos...


Abstract:

Variability in sonographer expertise often leads to low-quality ultrasound imaging, presenting significant challenges for accurate echocardiogram video segmentation. Current methods require extensive annotations, which are impractical given the large number of frames and artifacts in videos. To address this, we propose a Multi-Cluster Contrastive (MCC) learning framework, a semi-supervised approach that minimizes annotation requirements while maintaining high segmentation performance. Leveraging contrastive loss to enhance foreground feature extraction, our method incorporates multi-cluster contrastive loss to utilize multiple annotated ground-truths per batch and an anchor frame selection algorithm to improve segmentation performance. Experimental results on two public echocardiography datasets (MCE and EchoNet-Dynamic) demonstrate the effectiveness of our method, achieving state-of-the-art performance. The MCC framework enhances segmentation practicality by reducing annotation requirements, particularly for developing new datasets, and facilitates efficient segmentation of low-quality echocardiogram videos. Our implementation is available at https://github.com/windstormer/MCC.
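The abstract's core idea, a contrastive loss in which every annotated ground-truth in a batch contributes positives rather than a single positive pair, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a generic supervised multi-cluster contrastive loss in NumPy, where embeddings sharing a cluster label (e.g. drawn from the same annotated region) are positives and all others are negatives. The function name, temperature value, and input shapes are illustrative assumptions.

```python
# Hedged sketch: a supervised "multi-cluster" contrastive loss.
# Embeddings with the same label are treated as positives, so several
# annotated ground-truths per batch each supply positive pairs.
import numpy as np

def multi_cluster_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) float array; labels: (N,) int cluster ids."""
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                       # (N, N) similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                 # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & not_self

    # Log-softmax over all non-self pairs (numerically stable).
    sim = np.where(not_self, sim, -np.inf)
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - row_max - np.log(
        np.exp(sim - row_max).sum(axis=1, keepdims=True))

    # Average negative log-probability over each anchor's positives.
    has_pos = pos.sum(axis=1) > 0
    per_anchor = (np.where(pos, log_prob, 0.0).sum(axis=1)[has_pos]
                  / pos.sum(axis=1)[has_pos])
    return float(-per_anchor.mean())
```

As a sanity check, labels that agree with the geometry of the embeddings (tight clusters sharing a label) should yield a lower loss than the same embeddings with mismatched labels.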
Published in: IEEE Access (Volume 13)
Page(s): 30543 - 30554
Date of Publication: 13 February 2025
Electronic ISSN: 2169-3536
Figures are not available for this document.
