We investigate the symmetric Kullback-Leibler (KL2) distance in speaker clustering and its previously unreported effects for differently sized feature matrices. Speaker data are represented as Mel-frequency cepstral coefficient (MFCC) vectors, and features are compared using the KL2 metric to form clusters of speech segments for each speaker. We make two observations with respect to clustering based on KL2: 1) the accuracy of clustering depends strongly on the absolute lengths of the speech segments and their extracted feature vectors; 2) the accuracy of the similarity measure degrades strongly with the length of the shorter of the two speech segments. These length effects can be attributed to the covariance estimates used in KL2. We demonstrate an empirical correction of this sample-size effect that increases clustering accuracy. We draw parallels to two vector quantization (VQ) based similarity measures, one of which exhibits an equivalent sample-size effect, while the other is less influenced by it.
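To make the sample-size effect concrete, the following sketch computes the KL2 distance under the common single-Gaussian modeling of MFCC frames (closed-form symmetric KL between two multivariate Gaussians). The function name, the dimensions, and the synthetic data are illustrative assumptions, not the paper's implementation; the point is that the covariance estimate of a short segment is noisy, which inflates the distance even when both segments come from the same distribution.

```python
import numpy as np

def kl2_distance(X, Y):
    """Symmetric KL (KL2) distance between Gaussians fitted to two
    feature matrices (rows = frames, columns = MFCC dimensions).

    Closed form for Gaussians p = N(mu_x, C_x), q = N(mu_y, C_y):
      KL2 = 0.5 * [ tr(C_x C_y^-1 + C_y C_x^-1) - 2d
                    + (mu_x - mu_y)^T (C_x^-1 + C_y^-1) (mu_x - mu_y) ]
    """
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    cov_x = np.cov(X, rowvar=False)
    cov_y = np.cov(Y, rowvar=False)
    inv_x, inv_y = np.linalg.inv(cov_x), np.linalg.inv(cov_y)
    d = X.shape[1]
    diff = mu_x - mu_y
    return 0.5 * (np.trace(cov_x @ inv_y + cov_y @ inv_x) - 2 * d
                  + diff @ (inv_x + inv_y) @ diff)

# Synthetic illustration: all segments drawn from the SAME distribution,
# so an ideal measure would report (near-)zero distance in both cases.
rng = np.random.default_rng(0)
long_a = rng.standard_normal((2000, 13))   # long segment: stable covariance
long_b = rng.standard_normal((2000, 13))   # second long segment, same source
short_c = rng.standard_normal((30, 13))    # short segment: noisy covariance
print(kl2_distance(long_a, long_b))   # small: both covariances well estimated
print(kl2_distance(long_a, short_c))  # inflated purely by the short sample size
```

The second printed distance exceeds the first only because of the shorter segment's poorly estimated covariance, which is the bias the abstract's empirical correction targets.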