Human action analysis has recently received growing interest from vision researchers. In this paper, we present a simple but effective approach to action recognition that combines multiple complementary features with Gaussian process classification. Since a single type of feature derived from action videos is often insufficient to characterize the variations among different motions, we propose combining two feature types for a richer action representation: the quantized vocabulary of spatio-temporal (ST) volumes and the quantized vocabulary of silhouette projection (SP) histograms, which together capture both the local and global motion as well as the appearance of the actions. The use of such features also elegantly converts the original temporal classification problem into a static one, enabling the use of any existing classifier. Besides the traditional K-nearest neighbor (KNN) classifier, we use the state-of-the-art Gaussian process (GP) model to achieve stable performance. We also investigate how the number of features can be reduced while maintaining classification accuracy. Our experimental results show that fusing multiple features improves recognition accuracy over using any single feature type, and that the redundancy of the fused features can be reduced by spectral feature analysis. Compared with KNN, the GP model significantly improves the stability of the algorithm's performance with respect to parameter settings.
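The fusion-and-classification pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn for the GP and KNN classifiers, and uses random stand-in arrays in place of the actual quantized ST-volume and SP-histogram vocabularies extracted from videos. Feature-level fusion is shown here as simple concatenation of the two per-video histogram vectors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one bag-of-words histogram per video for each channel.
n_videos, k_st, k_sp = 60, 20, 16
X_st = rng.random((n_videos, k_st))      # spatio-temporal (ST) volume vocabulary histograms
X_sp = rng.random((n_videos, k_sp))      # silhouette projection (SP) vocabulary histograms
y = rng.integers(0, 3, size=n_videos)    # action class labels (3 hypothetical actions)

# Feature-level fusion: concatenate the two histograms into one static feature vector,
# turning the temporal classification problem into a standard static one.
X = np.hstack([X_st, X_sp])

# Gaussian process classifier with an RBF kernel vs. a traditional KNN baseline.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

gp_pred = gp.predict(X[:5])
knn_pred = knn.predict(X[:5])
```

In practice the GP model's hyperparameters are fit by maximizing the marginal likelihood, which is one reason its accuracy tends to vary less with parameter settings than KNN, whose behavior depends directly on the chosen `n_neighbors`.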