In this paper, a motion retrieval system is investigated from a multiple-instance learning perspective. To retrieve similar motion data, the motion clip of each human joint is treated as a bag, and each of its segments as an instance. First, 3D spatio-temporal features and their key spaces are extracted for each human joint. Then, data-driven decision trees based on ensemble multiple-instance learning are constructed automatically to reflect the influence of each joint when comparing motion similarity. Finally, multiple-instance retrieval is applied to complete the motion retrieval. Experimental results show that our approach is effective for motion data retrieval.
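The bag/instance representation above can be illustrated with a minimal sketch. All names, the segment length, and the bag-level similarity (minimal pairwise instance distance, a common multiple-instance heuristic) are illustrative assumptions, not the paper's actual features or parameters:

```python
# Hypothetical sketch: a joint's motion clip (a sequence of 3D positions)
# becomes a "bag", and fixed-length segments of the clip become its
# "instances". The segment length and distance measures below are
# illustrative choices, not the paper's actual method.

import math

def make_bag(clip, seg_len=4):
    """Split a clip (list of (x, y, z) frames) into non-overlapping
    fixed-length segments; each segment is one instance in the bag."""
    return [clip[i:i + seg_len]
            for i in range(0, len(clip) - seg_len + 1, seg_len)]

def instance_distance(a, b):
    """Mean per-frame Euclidean distance between two equal-length segments."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def bag_distance(bag_a, bag_b):
    """Minimal pairwise instance distance: two bags are considered close
    if any pair of their instances is close."""
    return min(instance_distance(a, b) for a in bag_a for b in bag_b)

clip1 = [(t * 0.1, 0.0, 0.0) for t in range(8)]   # joint moving along x
clip2 = [(t * 0.1, 0.5, 0.0) for t in range(8)]   # same motion, offset in y
bag1, bag2 = make_bag(clip1), make_bag(clip2)
print(len(bag1), round(bag_distance(bag1, bag2), 2))  # → 2 0.5
```

In a retrieval setting, the query bag would be compared against every bag in the database and the nearest bags returned; the paper's decision trees additionally weight each joint's contribution to that comparison.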