Figure: Framework of UDA-PLC. Orange represents samples in the source domain, blue represents samples in the target domain, and different shapes denote different features.
Abstract:
Unsupervised domain adaptation aims to align the distributions of data in the source and target domains and to assign labels to the data in the target domain. In this paper, we propose a new method named Unsupervised Domain Adaptation based on Pseudo-Label Confidence (UDA-PLC). Concretely, UDA-PLC first learns a new feature representation by projecting the data of the source and target domains into a latent subspace. In this subspace, the distributions of the two domains are aligned and the discriminability of features in both domains is improved. Then, UDA-PLC applies Structured Prediction (SP) and Nearest Class Prototype (NCP) to predict pseudo-labels for the target-domain data, and it carries only the fraction of pseudo-labeled target samples with high confidence, rather than all of them, into the next learning iteration. Finally, experimental results validate that the proposed method outperforms several state-of-the-art methods on three benchmark data sets.
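As a rough illustration of the pseudo-label confidence idea described above, the Python sketch below assigns pseudo-labels to target samples by their nearest class prototype and keeps only the most confident fraction for the next iteration. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name ncp_pseudo_labels, the margin-based confidence score, and the keep ratio of 0.3 are illustrative choices, and the Structured Prediction step and the latent-subspace projection are omitted.

import numpy as np

def ncp_pseudo_labels(source_feats, source_labels, target_feats, keep_ratio=0.3):
    """Pseudo-label target samples by nearest class prototype (NCP) and
    return only the most confident fraction (illustrative sketch)."""
    classes = np.unique(source_labels)
    # Class prototypes: per-class mean of the (projected) source features.
    prototypes = np.stack([source_feats[source_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every target sample to every prototype.
    dists = np.linalg.norm(target_feats[:, None, :] - prototypes[None, :, :], axis=2)
    pseudo = classes[np.argmin(dists, axis=1)]
    # Confidence: margin between the two nearest prototypes (assumed measure).
    sorted_d = np.sort(dists, axis=1)
    confidence = sorted_d[:, 1] - sorted_d[:, 0]
    # Keep only the top keep_ratio fraction of high-confidence samples.
    n_keep = max(1, int(keep_ratio * len(target_feats)))
    keep_idx = np.argsort(-confidence)[:n_keep]
    return pseudo, keep_idx

# Usage with synthetic features (3 classes, 20-dimensional subspace):
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(300, 20)), rng.integers(0, 3, size=300)
Xt = rng.normal(size=(200, 20))
pseudo_labels, confident_idx = ncp_pseudo_labels(Xs, ys, Xt)

Selecting only the high-confidence fraction, rather than all pseudo-labeled target samples, limits the error accumulation that noisy pseudo-labels would otherwise introduce in later iterations.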
Published in: IEEE Access (Volume: 9)