Abstract:
Segment Anything Model (SAM) is a foundational image segmentation model that shows superior performance on natural image segmentation tasks. Several SAM-based medical image segmentation methods have been proposed. However, these methods depend heavily on prior manual guidance in the form of points, boxes, and coarse-grained masks, which limits their adaptability and flexibility. Moreover, the inherent challenge of edge blurring in medical images is critical, as it directly affects segmentation quality. To address these challenges, we propose an uncertainty-driven edge prompt generation network for medical image segmentation, called UDEG-Net. Specifically, to better adapt to medical image segmentation, we fine-tune the encoder using the Low-Rank Adaptation (LoRA) technique to enhance its learning capability and capture enriched medical image features. Furthermore, to overcome the limitations of interactive prompts, we develop an auto edge prompt generator that produces edge prompt information and further enhances the structural representation. Finally, to focus on high-uncertainty edge areas, we introduce evidence-based uncertainty estimation and a progressive uncertainty-driven loss that drive the auto edge prompt generator to yield robust edge prompt information and reliable segmentation results. Experimental results on three public datasets and one private dataset show that our UDEG-Net outperforms state-of-the-art medical image segmentation methods.
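The LoRA fine-tuning mentioned above keeps the pretrained encoder weights frozen and trains only a low-rank residual update. As a minimal illustrative sketch (not the paper's implementation; all names and dimensions here are hypothetical), a linear layer with weight W is augmented with trainable factors B and A so the effective weight becomes W + BA, where the rank r is much smaller than the layer's dimensions:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a LoRA update.

    W is the frozen pretrained weight (d_out x d_in); only the
    low-rank factors A (r x d_in) and B (d_out x r) are trained,
    so the effective weight is W + alpha * (B @ A).
    """
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2              # rank r << min(d_in, d_out)
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))              # zero init: no change at start of training
x = rng.normal(size=(1, d_in))

# With B = 0 the LoRA branch contributes nothing, so the output
# matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

The appeal for adapting a large foundation model like SAM is the parameter count: here only r * (d_in + d_out) values are trained instead of d_in * d_out, and the frozen weights remain untouched.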
Published in: IEEE Transactions on Medical Imaging (Early Access)