This letter proposes a novel approach for the automated recognition of human-human interactions. First, the motion directions are obtained and the spatial relationships between persons (topological and directional relations) are modeled from the tracking results. Then, we propose a method to extract the spatial semantics between persons, including front, back, face to face, back to back, and left or right. Finally, we adopt a context-free grammar (CFG) to recognize the interactions, whose production rules are established based on the transformations of the spatial semantics. Extensive experiments validate the effectiveness of the proposed approach.
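The extraction of spatial semantics from motion directions can be illustrated by a minimal sketch. The 45-degree cone threshold, the function name, and the decision rules below are illustrative assumptions, not the actual method described in the letter:

```python
import math

def spatial_semantics(p1, d1, p2, d2, thresh=math.cos(math.radians(45))):
    """Classify the spatial semantics between two tracked persons.

    p1, p2 -- (x, y) positions; d1, d2 -- unit motion-direction vectors.
    Labels follow the letter's vocabulary; thresholds are assumptions.
    """
    # Unit vector pointing from person 1 toward person 2.
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(rx, ry) or 1.0
    r = (rx / norm, ry / norm)

    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    heading = dot(d1, d2)   # alignment of the two motion directions
    approach = dot(d1, r)   # is person 1 moving toward person 2?

    if heading <= -thresh:  # roughly opposite motion directions
        return "face to face" if approach > 0 else "back to back"
    if heading >= thresh:   # roughly the same motion direction
        return "front" if approach > 0 else "back"
    return "left or right"  # roughly perpendicular motion directions
```

For example, two persons walking toward each other along the same line are labeled "face to face", while two persons walking the same way are labeled "front" or "back" depending on which one is ahead.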
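The CFG-based recognition step can likewise be sketched. The grammar below is a toy example: the nonterminals ("Meet", "Approach", etc.), the terminal symbols ("F" for front, "FF" for face to face, "BB" for back to back), and the production rules are hypothetical stand-ins for the letter's actual rules, which encode transformations of spatial semantics over time:

```python
# Hypothetical productions: an interaction nonterminal derives a
# sequence of spatial-semantic terminal symbols observed over time.
RULES = {
    "Meet":     [["Approach", "FF"]],
    "Depart":   [["FF", "Leave"]],
    "Approach": [["F"], ["F", "Approach"]],    # one or more 'front' symbols
    "Leave":    [["BB"], ["BB", "Leave"]],     # one or more 'back to back'
}

def derives(symbol, seq, rules=RULES):
    """Return True if `symbol` derives the terminal sequence `seq`
    (naive top-down parse; fine for toy grammars)."""
    if symbol not in rules:  # terminal symbol
        return len(seq) == 1 and seq[0] == symbol
    return any(_match(rhs, seq, rules) for rhs in rules[symbol])

def _match(rhs, seq, rules):
    """Check whether the right-hand side `rhs` derives `seq`."""
    if not rhs:
        return not seq
    head, rest = rhs[0], rhs[1:]
    # Try every split point for the first right-hand-side symbol.
    return any(
        derives(head, seq[:cut], rules) and _match(rest, seq[cut:], rules)
        for cut in range(1, len(seq) - len(rest) + 1)
    )
```

Under this toy grammar, the observed semantic sequence `["F", "F", "FF"]` (approaching, then face to face) is recognized as a "Meet" interaction, while `["FF", "BB", "BB"]` is recognized as "Depart".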
Date of Publication: March 2012