
Discriminative and Uncorrelated Feature Selection With Constrained Spectral Analysis in Unsupervised Learning


Abstract:

Existing unsupervised feature extraction methods frequently explore low-redundancy features through an uncorrelated constraint. However, such constrained models may yield trivial solutions because the scatter matrix becomes singular for high-dimensional data. In this paper, we propose a regularized regression model with a generalized uncorrelated constraint for feature selection, which offers three merits: 1) exploring low-redundancy and discriminative features; 2) avoiding trivial solutions; and 3) simplifying the optimization. In addition, the local cluster structure is captured via a novel constrained spectral analysis for unsupervised learning, in which Must-Links and Cannot-Links are transformed into an intrinsic graph and a penalty graph, respectively, rather than incorporated into a single mixed affinity graph. Accordingly, a discriminative and uncorrelated feature selection with constrained spectral analysis (DUCFS) method is proposed, adopting σ-norm regularization to interpolate between the F-norm and the ℓ2,1-norm. Owing to its flexible gradient and global differentiability, the model converges quickly. Extensive experiments on benchmark datasets against several state-of-the-art approaches verify the effectiveness of the proposed method.
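The most concrete technical claim in the abstract is that the σ-norm regularizer interpolates between the Frobenius norm and the ℓ2,1-norm. The paper's exact definition is not reproduced on this page, so the sketch below uses a σ-norm form that is common in the sparse feature-selection literature and only illustrates the interpolation behaviour; the function name sigma_norm, the particular formula, and the NumPy setup are our assumptions, not the authors' code.

```python
import numpy as np

def sigma_norm(W, sigma):
    """Row-wise sigma-norm regularizer (assumed form, common in the
    sparse feature-selection literature):
        ||W||_sigma = sum_i (1 + sigma) * ||w_i||^2 / (||w_i|| + sigma)
    As sigma -> 0 it approaches the l2,1-norm (row sparsity);
    as sigma -> inf it approaches the squared Frobenius norm."""
    row_norms = np.linalg.norm(W, axis=1)
    return np.sum((1.0 + sigma) * row_norms ** 2 / (row_norms + sigma))

# Numerical check of the two limiting cases on a random projection matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))          # d features projected to c dimensions
l21 = np.sum(np.linalg.norm(W, axis=1))  # l2,1-norm of W
fro2 = np.sum(W ** 2)                    # squared Frobenius norm of W

print(sigma_norm(W, 1e-8), "~", l21)     # small sigma: l2,1-like behaviour
print(sigma_norm(W, 1e8), "~", fro2)     # large sigma: ||W||_F^2-like behaviour
```

Under this assumed form, the regularizer is smooth for any σ > 0, which is consistent with the abstract's remark that a flexible gradient and global differentiability allow fast convergence.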
Published in: IEEE Transactions on Image Processing ( Volume: 29)
Page(s): 2139 - 2149
Date of Publication: 28 October 2019

PubMed ID: 31670668


I. Introduction

Due to the rapid development and ongoing improvement of information technology, huge amounts of high-dimensional data are generated. Under these circumstances, feature selection, which aims to pick out the most informative dimensions of high-dimensional data, becomes an urgent task and has found wide application in machine learning [1], data mining [2], bio-informatics [3], and other fields. Depending on whether labels are utilized, feature selection methods are categorized as supervised [4]–[6] or unsupervised [7]–[9]. Labels of observations reflect discriminative information and guide subspace learning directly. Unfortunately, labeling entire sample sets is laborious and expensive, so labels are often unavailable in practice. Hence, we focus on unsupervised feature selection in this paper.

