Learn2Dance: Learning Statistical Music-to-Dance Mappings for Choreography Synthesis

Authors:

F. Ofli (Electrical Engineering & Computer Science Dept., University of California at Berkeley, Berkeley, CA, USA); E. Erzin; Y. Yemez; A. M. Tekalp

Abstract:

We propose a novel framework for learning many-to-many statistical mappings from musical measures to dance figures, toward generating plausible music-driven dance choreographies. We obtain music-to-dance mappings through the use of four statistical models: 1) musical measure models, representing a many-to-one relation, each of which associates different melody patterns with a given dance figure via a hidden Markov model (HMM); 2) an exchangeable figures model, which captures the diversity in a dance performance through a one-to-many relation, extracted by unsupervised clustering of musical measure segments based on melodic similarity; 3) a figure transition model, which captures the intrinsic dependencies of dance figure sequences via an n-gram model; and 4) dance figure models, which capture the variations in the way particular dance figures are performed by modeling the motion trajectory of each dance figure via an HMM. Based on the first three of these statistical mappings, we define a discrete HMM and synthesize alternative dance figure sequences using a modified Viterbi algorithm. The motion parameters of the dance figures in the synthesized choreography are then computed using the dance figure models. Finally, the generated motion parameters are animated synchronously with the musical audio using a 3-D character model. Objective and subjective evaluation results demonstrate that the proposed framework is able to produce compelling music-driven choreographies.
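The abstract does not spell out the modified Viterbi algorithm, but the decoding setup it describes is a standard discrete HMM: hidden states are dance figures, with transitions supplied by the n-gram figure transition model, and observations are melodic cluster labels of musical measures, with emissions supplied by the measure and exchangeable-figure models. As rough orientation only, the following is a minimal sketch of plain Viterbi decoding over such an HMM; the function names, log-probability encoding, and toy parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def viterbi(obs, log_init, log_trans, log_emit):
    """Decode the most likely dance-figure sequence for a piece of music.

    obs       : (T,) int array, melodic cluster label of each musical measure
    log_init  : (S,) log-probabilities of starting in each dance figure
    log_trans : (S, S) log-probabilities, figure-to-figure (bigram) transitions
    log_emit  : (S, C) log-probabilities of each cluster label given a figure
    """
    n_states = log_init.shape[0]
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)   # best log-score ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers to the best predecessor

    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans        # scores[prev, cur]
        psi[t] = np.argmax(scores, axis=0)                # best predecessor per state
        delta[t] = scores[psi[t], np.arange(n_states)] + log_emit[:, obs[t]]

    # Backtrack the highest-scoring figure sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy example: 4 dance figures, 3 melodic clusters, 5 measures of music.
rng = np.random.default_rng(0)
log_init = np.log(np.full(4, 0.25))
log_trans = np.log(rng.dirichlet(np.ones(4), size=4))  # bigram figure transitions
log_emit = np.log(rng.dirichlet(np.ones(3), size=4))   # figure -> cluster emissions
measures = np.array([0, 2, 1, 1, 0])                   # cluster label per measure
print(viterbi(measures, log_init, log_trans, log_emit))
```

The paper's modification presumably alters this dynamic program so that it can emit alternative figure sequences rather than only the single best one, for example by choosing among near-optimal predecessors at each step; the abstract alone does not specify the mechanism, so the sketch keeps the textbook recursion.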

Published in:

IEEE Transactions on Multimedia (Volume 14, Issue 3)