Learning Style-directed Dynamics of Human Motion for Automatic Motion Synthesis

Authors:
Yi Wang (Department of Computer Science, Tsinghua University, 100084 Beijing, China); Zhi-Qiang Liu; Li-Zhu Zhou

This paper presents a new model, the HMM/Mix-SDTG, which describes Markov processes controlled by a global vector variable called the style variable. We present an EM algorithm that learns an HMM/Mix-SDTG from one or more 3D motion capture sequences labelled with their style values. Because each dimension of the style variable has an explicit physical meaning, the presented synthesis algorithm can generate arbitrary new motion with exactly the demanded style simply by specifying a style value. The output densities of the HMM/Mix-SDTG are represented by mixtures of stylized decomposable triangulated graphs (Mix-SDTG), which, in addition to parameterizing the Markov process with the style variable, also achieve greater numerical robustness and prevent common artifacts of 3D motion synthesis.
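The core idea, a hidden Markov model whose output densities are parameterized by a global style vector, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: it assumes simple Gaussian outputs whose means depend linearly on the style vector, whereas the paper uses mixtures of stylized decomposable triangulated graphs; all dimensions, parameter names, and the linear parameterization are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
n_states, obs_dim, style_dim = 3, 4, 2

# Standard HMM parameters: initial distribution and transition matrix.
pi = np.full(n_states, 1.0 / n_states)
A = np.full((n_states, n_states), 1.0 / n_states)

# Assumed style parameterization: each state's output mean depends
# linearly on the global style vector s, i.e. mu_k = W_k @ s + b_k.
W = rng.normal(size=(n_states, obs_dim, style_dim))
b = rng.normal(size=(n_states, obs_dim))

def synthesize(style, length):
    """Sample a motion sequence of `length` frames for a given style vector."""
    frames = np.empty((length, obs_dim))
    state = rng.choice(n_states, p=pi)
    for t in range(length):
        mean = W[state] @ style + b[state]       # style-conditioned output mean
        frames[t] = rng.normal(mean, 0.1)        # fixed isotropic output noise
        state = rng.choice(n_states, p=A[state]) # Markov state transition
    return frames

# Synthesis reduces to choosing a style value and sampling.
motion = synthesize(np.array([0.5, -1.0]), length=50)
```

Changing the style vector shifts every state's output density continuously, which is what lets a single learned model cover a whole family of stylistic variations of one motion.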

Published in:

2006 IEEE International Conference on Systems, Man and Cybernetics (Volume 5)

Date of Conference:

8-11 Oct. 2006