An Iterative Framework for Self-Supervised Deep Speaker Representation Learning


Abstract:

In this paper, we propose an iterative framework for self-supervised speaker representation learning based on a deep neural network (DNN). The framework starts by training a self-supervised speaker embedding network that maximizes agreement between different segments within an utterance via a contrastive loss. Taking advantage of a DNN's ability to learn from data with label noise, we propose to cluster the speaker embeddings obtained from the previous speaker network and use the resulting class assignments as pseudo labels to train a new DNN. Moreover, we iteratively train the speaker network with pseudo labels generated in the previous step to bootstrap the discriminative power of the DNN. Speaker verification experiments are conducted on the VoxCeleb dataset. The results show that our proposed iterative self-supervised learning framework outperforms previous works using self-supervision. After 5 iterations, the speaker network obtains a 61% performance gain over the speaker embedding model trained with contrastive loss alone.
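The pipeline the abstract describes alternates two stages: a contrastive stage that treats two segments of the same utterance as a positive pair, and a pseudo-label stage that clusters the current embeddings and retrains a fresh network on the cluster assignments. Below is a minimal PyTorch sketch of that loop; the two-layer encoder, the 40-dimensional toy features, the k-means cluster count, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # z1[i] and z2[i] embed two segments cut from the same utterance (a
    # positive pair); the other utterances in the batch act as negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)             # match each row to its diagonal


@torch.no_grad()
def make_pseudo_labels(encoder: nn.Module, feats: torch.Tensor, k: int) -> torch.Tensor:
    # Embed all training utterances with the current network, cluster the
    # embeddings, and take the cluster assignments as (noisy) pseudo labels.
    emb = F.normalize(encoder(feats), dim=1).cpu().numpy()
    return torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(emb), dtype=torch.long)


def train_iteration(feats: torch.Tensor, labels: torch.Tensor, dim: int, k: int,
                    epochs: int = 10) -> nn.Module:
    # Train a fresh encoder plus classification head on the pseudo labels.
    encoder = nn.Sequential(nn.Linear(feats.size(1), 256), nn.ReLU(), nn.Linear(256, dim))
    head = nn.Linear(dim, k)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(head(encoder(feats)), labels)
        loss.backward()
        opt.step()
    return encoder


# Toy usage: random "utterance features" stand in for real acoustic inputs.
feats = torch.randn(64, 40)                             # 64 utterances, 40-dim features
encoder = nn.Sequential(nn.Linear(40, 256), nn.ReLU(), nn.Linear(256, 128))

# Stage 1 (sketch): the contrastive loss on two noisy "segment views".
v1 = encoder(feats + 0.1 * torch.randn_like(feats))
v2 = encoder(feats + 0.1 * torch.randn_like(feats))
print(f"initial contrastive loss: {contrastive_loss(v1, v2).item():.3f}")

# Stage 2, repeated: cluster, relabel, retrain (the paper iterates 5 times).
for it in range(5):
    labels = make_pseudo_labels(encoder, feats, k=8)
    encoder = train_iteration(feats, labels, dim=128, k=8)

The key design point the abstract leans on is that the classification network in each round is trained from scratch, so it can outgrow the noise in the previous round's cluster assignments rather than inherit it.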
Date of Conference: 06-11 June 2021
Date Added to IEEE Xplore: 13 May 2021
Conference Location: Toronto, ON, Canada

1. INTRODUCTION

Speaker recognition refers to identifying or verifying a claimed speaker by analyzing speech from that speaker. Over the past few years, supervised deep learning methods have greatly improved the performance of speaker recognition systems [1], [2], [3]. These methods require large-scale datasets to learn discriminative speaker representations. However, manually annotating speaker labels for a large-scale dataset can be expensive and problematic. On the other hand, vast amounts of unlabeled speech data are available for training DNNs. With self-supervision, deep learning can automate the labeling process and benefit from massive amounts of data. Self-supervised learning is a long-standing, active research area that has recently received growing attention in speech signal processing [4], [5], [6], [7], [8], natural language processing [9], and computer vision [10], [11], [12], [13], [14], [15].
