Abstract:
Given the high annotation costs and ethical considerations associated with medical images, leveraging a limited number of annotated samples for Few-Shot Medical Image Segmentation (FSMIS) has become increasingly prevalent. However, existing models tend to focus on visible foreground support information, often overlooking extreme foreground-background imbalances. In addition, query images sometimes have a slightly different appearance compared to support images of the same category due to differences in size as well as slicing angle, so employing only support images to generate prototypes inevitably leads to matching bias. To address these challenges, we present an innovative approach through learning a Multiple Twin-support Prototypes Network (MTPNet). Our approach includes the design of the Scale Consistent Sampling (SCS) module, which adaptively adjusts the foreground and background points within the support set, thereby balancing the influence of various structural elements in the image. Additionally, the Twin-support Prototypes Extraction (TPE) module facilitates the critical interaction between query and support features to extract twin-support prototypes. This module incorporates a Backtrace Interaction Filter (BIF) to eliminate erroneous interaction prototypes. Extensive experimental validation on three widely used medical image datasets demonstrates that our method surpasses current state-of-the-art methods, showcasing its potential to address key limitations in FSMIS. The code is available at https://github.com/FeifanSong/MTPNet.
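To make the balanced-sampling idea concrete, the following is a minimal sketch of one plausible way to sample an equal number of foreground and background support points before averaging them into prototypes, in the spirit of the SCS module described above. All function names, tensor shapes, and the random-sampling strategy are illustrative assumptions rather than the authors' design; the official implementation is in the linked repository.

```python
# Hypothetical sketch (not the authors' code): balanced foreground/background
# prototype extraction followed by nearest-prototype query segmentation.
import torch
import torch.nn.functional as F


def balanced_prototypes(sup_feat, sup_mask, num_points=100):
    """Sample an equal number of foreground and background support points
    and average each set into a class prototype.

    sup_feat: (C, H, W) support feature map
    sup_mask: (H, W) binary mask (1 = foreground)
    """
    C, H, W = sup_feat.shape
    feat = sup_feat.view(C, -1).t()          # (H*W, C)
    mask = sup_mask.view(-1).bool()          # (H*W,)

    fg_idx = mask.nonzero(as_tuple=True)[0]
    bg_idx = (~mask).nonzero(as_tuple=True)[0]

    # Draw the same number of points from each class so the (usually much
    # larger) background does not dominate the prototypes.
    k = min(num_points, fg_idx.numel(), bg_idx.numel())
    fg_sel = fg_idx[torch.randperm(fg_idx.numel())[:k]]
    bg_sel = bg_idx[torch.randperm(bg_idx.numel())[:k]]

    fg_proto = feat[fg_sel].mean(dim=0)      # (C,)
    bg_proto = feat[bg_sel].mean(dim=0)      # (C,)
    return fg_proto, bg_proto


def segment_query(query_feat, fg_proto, bg_proto):
    """Label each query location by cosine similarity to the two prototypes."""
    C, H, W = query_feat.shape
    q = F.normalize(query_feat.view(C, -1), dim=0)                  # (C, H*W)
    protos = F.normalize(torch.stack([bg_proto, fg_proto]), dim=1)  # (2, C)
    logits = protos @ q                                             # (2, H*W)
    return logits.argmax(dim=0).view(H, W)                          # (H, W)


if __name__ == "__main__":
    sup_feat = torch.randn(64, 32, 32)
    sup_mask = (torch.rand(32, 32) > 0.9).float()  # small foreground region
    fg_p, bg_p = balanced_prototypes(sup_feat, sup_mask)
    pred = segment_query(torch.randn(64, 32, 32), fg_p, bg_p)
    print(pred.shape)  # torch.Size([32, 32])
```

In this sketch the two prototypes are obtained from the support image alone; the paper's twin-support prototypes additionally involve query-support feature interaction (TPE) and a filtering step (BIF), which are not modeled here.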
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025