In this paper, we introduce an information-theoretic algorithm for constructing a low-dimensional representation of data sampled from a higher-dimensional space. The proposed minimum entropy linear embedding algorithm seeks a linear projection that minimizes the information uncertainty of the embedded data, measured by entropy. The entropy is estimated with a Gaussian mixture model probability density function, and an upper bound on the entropy is derived. As a result, the numerical integration in the objective function is replaced by a computationally efficient eigenfunction problem. The advantage of the proposed method is that it captures the intrinsic character of high-dimensional data, and it has the potential to reduce redundancy and improve classification accuracy. Numerical results on toy data, UCI machine learning data sets, and face recognition illustrate these advantages.
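For intuition, consider the single-Gaussian special case: the differential entropy of a Gaussian with covariance Σ is ½ ln((2πe)^d |Σ|), so the entropy of a linear projection W X is minimized by taking W to be the eigenvectors of the sample covariance with the smallest eigenvalues, an eigen-problem rather than a numerical integration. The sketch below illustrates only this special case, not the paper's full GMM-based algorithm; all function names are illustrative.

```python
import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of a Gaussian: 0.5 * ln((2*pi*e)^d * |Sigma|)
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def min_entropy_projection(X, d):
    # Single-Gaussian special case (illustrative, not the paper's GMM method):
    # the projected entropy 0.5*ln((2*pi*e)^d |W Sigma W^T|) is minimized by
    # the eigenvectors of the sample covariance with the d smallest eigenvalues.
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, :d].T               # d x D projection matrix

rng = np.random.default_rng(0)
# Toy data: 3-D Gaussian with very different variances per axis
X = rng.standard_normal((2000, 3)) @ np.diag([3.0, 1.0, 0.1])

W = min_entropy_projection(X, 2)
cov = np.cov(X, rowvar=False)
h_min = gaussian_entropy(W @ cov @ W.T)

# For comparison: the maximum-variance (PCA-style) projection
vals, vecs = np.linalg.eigh(cov)
W_max = vecs[:, -2:].T
h_max = gaussian_entropy(W_max @ cov @ W_max.T)
```

Here `h_min` is strictly smaller than `h_max`: the minimum-entropy directions carry the least uncertainty. The paper's contribution is to extend this tractability to multimodal densities via a GMM entropy upper bound.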