In human action classification, many effective methods are based on the detection and representation of Space-Time Interest Points (STIPs). Two representations of STIP features are widely used: appearance description and motion description. The appearance representation, however, is limited by inter-class variation. In this paper, experiments are designed to demonstrate the effect of motion description in human action classification by comparing it with appearance description. The experiments use the KTH dataset, with HOG and HOF features as the typical appearance and motion representations, respectively. On this basis, the paper makes two comparisons of STIP descriptions for human action classification. The first directly trains an SVM classifier on STIP features under each description and uses the predicted STIP categories to vote for the action category. The second substitutes the motion description for the original description in a state-of-the-art method. All of the results show that the motion representation of STIPs outperforms the appearance representation in human action classification.
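To make the compared descriptors concrete, the following is a minimal numpy sketch of an HOF-style motion descriptor: flow orientations at each pixel are quantized into bins and weighted by flow magnitude, then normalized. This is an illustrative simplification (a single global histogram rather than the spatio-temporal grid of cells used in the actual HOF feature); the function name and bin count are the author's assumptions, not from the paper.

```python
import numpy as np

def hof_descriptor(flow_x, flow_y, n_bins=8):
    """Simplified Histogram of Optical Flow (HOF) sketch.

    Quantizes flow orientation into n_bins bins, weights each vote
    by flow magnitude, and L1-normalizes the histogram. A real HOF
    descriptor would compute this per cell of a spatio-temporal grid
    around each STIP and concatenate the cell histograms.
    """
    mag = np.hypot(flow_x, flow_y)                    # flow magnitude per pixel
    ang = np.mod(np.arctan2(flow_y, flow_x), 2 * np.pi)  # orientation in [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())        # magnitude-weighted votes
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy flow field: uniform rightward motion, so all mass lands in bin 0.
fx = np.ones((4, 4))
fy = np.zeros((4, 4))
h = hof_descriptor(fx, fy)
```

An HOG appearance descriptor has the same histogram structure but bins image gradient orientations instead of flow orientations, which is why the two representations are directly comparable in the paper's SVM experiments.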