This paper proposes a model for measuring similarity between videos whose content is Chinese Sign Language (CSL); the model considers both vision and sign language semantics. The vision component of the model is a distance based on Volume Local Binary Patterns (VLBP), which is robust to motion and illumination changes. The semantic component computes a semantic distance based on the definition of sign language semantics, namely hand shape, location, orientation, and movement. To quantize the sign language semantics, contours are used to measure shape and orientation, and trajectories are used to measure location and movement. Experimental results show that the proposed assessment model is effective and that the scores it produces are close to subjective scores.