Abstract:
Class-incremental learning (CIL) enables models to continuously learn new classes while addressing catastrophic forgetting. With the introduction of pre-trained models, new tuning paradigms have emerged for CIL. This paper revisits parameter-efficient fine-tuning (PEFT) methods in the context of incremental learning. Prior studies reveal that the extended parameters of PEFT methods do not directly contribute to semantic perception, limiting performance when category and domain gaps are significant. To address this, we propose semantic-oriented visual prompt learning (SVPL), which enhances semantic perception and improves task-specific knowledge extraction. SVPL assigns learnable prompts to each class and uses contrastive group alignment to align the prompts with task-specific semantic spaces, thereby preserving relationships between old and new knowledge. Additionally, hierarchical semantic delivery propagates the semantic transformation of prompt groups from shallow to deep layers, facilitating efficient knowledge mining and effective learning of new knowledge. Extensive experimental results on five benchmarks demonstrate the superior performance of our method.
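The abstract's contrastive group alignment, which pulls each class's learnable prompt toward its own semantic space while separating it from other classes, can be illustrated with a toy InfoNCE-style loss. This is a minimal sketch of the general idea only, not the paper's actual formulation: the function name, the use of class-feature centroids as alignment targets, and the temperature value are all assumptions.

```python
import numpy as np

def contrastive_group_alignment(prompts, centroids, tau=0.1):
    """Toy contrastive alignment loss (illustrative, not the paper's exact loss).

    Pulls each class prompt toward its own class centroid (positive pair)
    and pushes it away from the centroids of other classes (negatives).
    `prompts` and `centroids` are (num_classes, dim) arrays.
    """
    # L2-normalize so similarities are cosine similarities.
    p = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sim = (p @ c.T) / tau                         # (C, C) similarity logits
    # InfoNCE: the matching centroid on the diagonal is the positive.
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this sketch, prompts that coincide with their class centroids yield a near-zero loss, while misassigned prompts are penalized, which is the qualitative behavior the alignment objective aims for.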
Published in: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 06-11 April 2025
Date Added to IEEE Xplore: 07 March 2025