Current nonlinear dimensionality reduction (NLDR) algorithms have quadratic or cubic complexity in the number of data points, which limits their ability to process real-world large-scale datasets. Learning over a small set of landmark points can potentially make NLDR much more effective and scalable to large datasets. In this paper, we show that approximating an unobservable intrinsic manifold by a few latent points residing on the manifold can be cast as a novel dictionary learning (DL) problem over the observation space. This leads to the presented locality constrained dictionary learning (LCDL) algorithm, which effectively learns a compact set of atoms consisting of locality-preserving landmark points on a nonlinear manifold. Experiments comparing LCDL with state-of-the-art DL algorithms, including K-SVD, LCC, and LLC, show that LCDL improves the embedding quality and greatly reduces the complexity of NLDR algorithms.
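To make the locality-constrained coding idea concrete, the following is a minimal sketch of an LLC-style encoding step, in which each sample is reconstructed from only its k nearest dictionary atoms under a sum-to-one constraint. This is a generic illustration of locality-constrained coding as used in the LLC baseline, not the authors' LCDL algorithm; the function name, the regularization scheme, and all parameter values are illustrative assumptions.

```python
import numpy as np

def llc_encode(x, D, k=5, reg=1e-4):
    """Locality-constrained coding of sample x (shape (d,)) over
    dictionary D (shape (m, d)), using the k nearest atoms.
    Illustrative sketch; not the LCDL algorithm from the paper."""
    # Find the k atoms closest to x in Euclidean distance.
    dists = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = D[idx]                            # (k, d) local atoms

    # Shift local atoms to the origin and form the local covariance.
    z = B - x                             # (k, d)
    C = z @ z.T                           # (k, k)
    C += reg * np.trace(C) * np.eye(k)    # regularize for stability

    # Solve for weights and enforce the sum-to-one constraint,
    # which makes the code shift-invariant.
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()

    # Scatter the k local weights into a full-length sparse code.
    code = np.zeros(D.shape[0])
    code[idx] = w
    return code
```

A full DL algorithm would alternate such an encoding step with an update of the atoms; restricting each code to nearby atoms is what preserves locality on the manifold.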
Date of Publication: April 2013