Abstract:
Inspired by the concept of Principal Curves, in this paper we define Principal Polynomials as a nonlinear generalization of Principal Components that overcomes the conditional mean independence restriction of PCA. Principal Polynomials deform the straight Principal Components by minimizing the regression error (or variance) in the corresponding orthogonal subspaces. We propose projecting the data onto a series of these polynomials to obtain a new nonlinear representation: Principal Polynomial Analysis (PPA). We prove that the dimensionality reduction error in PPA is always lower than in PCA. Lower truncation error and increased independence suggest that unsupervised PPA features can be better suited to image classification than those identified by other unsupervised techniques. We analyze the performance of Linear Discriminant Analysis in the feature space after dimensionality reduction using the proposed PPA, the classical PCA, and locally linear embedding (LLE). Experiments on very high resolution data confirm the suitability of PPA for describing the nonlinear manifolds found in remote sensing data.
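To make the idea concrete, the following is a minimal Python sketch of a PPA-style forward transform, not the authors' implementation: it assumes a sequential deflation scheme in which each leading principal direction is "bent" by fitting a polynomial (in the projection onto that direction) that predicts the data in the orthogonal subspace, and only the unpredicted residual is passed to the next stage. The function name ppa_forward and the parameters n_components and degree are illustrative assumptions.

```python
import numpy as np

def ppa_forward(X, n_components=2, degree=2):
    """Illustrative PPA-style transform (sketch, not the paper's exact algorithm):
    project onto a leading direction, then remove the polynomial prediction
    of the orthogonal residual before extracting the next feature."""
    X = X - X.mean(axis=0)          # center the data, as in PCA
    residual = X.copy()
    scores = []
    for _ in range(n_components):
        # Leading direction of the current residual (the straight Principal Component).
        _, _, Vt = np.linalg.svd(residual, full_matrices=False)
        v = Vt[0]
        alpha = residual @ v                        # nonlinear feature for this stage
        ortho = residual - np.outer(alpha, v)       # part lying in the orthogonal subspace
        # Fit a polynomial in alpha that predicts the orthogonal part,
        # i.e. deform the straight component to reduce the residual variance.
        A = np.vander(alpha, degree + 1)            # columns: alpha^degree, ..., alpha, 1
        W, *_ = np.linalg.lstsq(A, ortho, rcond=None)
        residual = ortho - A @ W                    # keep only what the polynomial cannot explain
        scores.append(alpha)
    return np.column_stack(scores)

# Example usage on synthetic data (hypothetical sizes):
Z = ppa_forward(np.random.randn(500, 10), n_components=3, degree=2)
```

Because each stage subtracts a fitted polynomial from the orthogonal residual, the reconstruction error of such a scheme can be no larger than that of truncated PCA, which simply discards the orthogonal residual; this mirrors the truncation-error claim stated in the abstract.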
Date of Conference: 24-29 July 2011
Date Added to IEEE Xplore: 20 October 2011