Abstract:
Transfer learning aims to leverage knowledge learned from a source domain to facilitate learning in an unlabeled target domain. Unsupervised domain adaptation (UDA), a significant challenge for transfer learning, typically requires alignment between the source and target domains without knowing any target labels. Deep-learning-based methods are generally trained end to end with intensive computational resources and are prone to overfitting, especially when target data are scarce. This paper aims to reduce computational complexity and address the overfitting problem. We present a new green-learning-based method named Green Image Label Transfer (GILT). Compared to DL-based methods, the proposed method is not trained end to end but modularized into three phases for better interpretability: (1) Joint Discriminant Subspace Learning, (2) Source-to-Target Label Transfer, and (3) Supervised Learning in the Target Domain. We evaluate GILT by transferring among MNIST, USPS, and SVHN. Experimental results indicate that GILT can transfer labels among multiple domains with a small model size, which makes it easier to deploy on edge and mobile devices without sacrificing performance.
Published in: 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR)
Date of Conference: 07-09 August 2024
Date Added to IEEE Xplore: 15 October 2024