Toward a Dancing Robot With Listening Capability: Keypose-Based Integration of Lower-, Middle-, and Upper-Body Motions for Varying Music Tempos

Figure 1. Learning from observation.

Figure 2. Lower-body task models.

Figure 3. Lower-body and middle-body skill parameters.

Figure 4. Keyposes in the Aizu-bandaisan dance. Top row: keyposes depicted by a dance teacher. Bottom row: brief stop motions of dancers corresponding to music beats, extracted from [42].

Figure 5. Maximum foot-tip speed and stride length: each marker represents the average maximum foot-tip speed and stride length of the STEP tasks at each musical tempo.

Figure 6. Variance of the start/end timings of each STEP task: red, green, and blue markers show the variances at the original tempo, at 1.5 times the original tempo, and at 2.0 times the original tempo, respectively.

Figure 7. Trajectories of a foot tip in the STEP task labeled R-STEP4 at each tempo. This STEP task has a distinctive kicking-up trajectory at the end of each cycle of the dance.

Figure 8. Comparison of mean joint-angle trajectories of the left shoulder in the logarithmic space of a quaternion. Line colors indicate tempos: the original musical tempo (red), 1.2 times faster (green), and 1.5 times faster (blue). (a) Mean motion using a single-layer B-spline, (b) mean motion using a three-layer hierarchical B-spline, and (c) mean motion using a five-layer hierarchical B-spline. The variation of the trajectory across tempos in (c) is greater than that in (a); higher-order motions are omitted preferentially as the music tempo increases.
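The layered decomposition behind panels (a)–(c) can be illustrated with a minimal numpy sketch. As an assumption for brevity, it uses linear (first-order) B-spline "hat" bases on a scalar joint angle rather than the paper's higher-order hierarchical B-splines on quaternion logarithms; each layer fits the residual left by the coarser layers, so summing only the coarse layers yields a smoothed motion, as in panel (a) versus (c). The toy trajectory `angle` is invented for illustration.

```python
import numpy as np

def hat_matrix(x, n_ctrl):
    """Design matrix of linear B-spline (hat) basis functions on [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_ctrl)
    h = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers[None, :]) / h)

def hierarchical_fit(x, y, n_layers):
    """Fit y(x) as a sum of layers, coarse to fine; each layer fits the residual."""
    layers, residual = [], y.copy()
    for l in range(n_layers):
        B = hat_matrix(x, 2 ** (l + 1) + 1)          # control points double per layer
        coeffs, *_ = np.linalg.lstsq(B, residual, rcond=None)
        layers.append(coeffs)
        residual = residual - B @ coeffs
    return layers

def reconstruct(x, layers, keep):
    """Sum only the first `keep` layers; dropping fine layers smooths the motion."""
    y = np.zeros_like(x)
    for l in range(keep):
        y += hat_matrix(x, 2 ** (l + 1) + 1) @ layers[l]
    return y

# Toy joint-angle trajectory: slow motion plus a small fast component.
x = np.linspace(0.0, 1.0, 200)
angle = np.sin(2 * np.pi * x) + 0.1 * np.sin(16 * np.pi * x)
layers = hierarchical_fit(x, angle, n_layers=5)
full = reconstruct(x, layers, keep=5)      # close to the original
coarse = reconstruct(x, layers, keep=2)    # fast component dropped
```

Dropping the fine layers here plays the role of the preferential omission of higher-order motions the caption describes.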

Figure 9. Comparison of variance sequences of the joint-angle trajectories of the left shoulder. Line colors indicate tempos: variance sequences at the original musical tempo (red), 1.2 times faster (green), and 1.5 times faster (blue). The sequences are temporally normalized for comparison. Postures corresponding to the common local minima of the variances are depicted in the top row. The variance sequence for each speed tends to reach a local minimum at keyposes.
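The variance-based view of keyposes in this figure can be sketched as follows. `quat_log` maps a unit quaternion into the 3-vector logarithmic space mentioned in the caption, and `keypose_frames` marks frames where the across-dancer variance of the log-space joint angle reaches a local minimum. The three-dancer data at the bottom is synthetic, constructed as an assumption so that the dancers agree most at two frames.

```python
import numpy as np

def quat_log(q):
    """Logarithm of a unit quaternion (w, x, y, z): half rotation angle times axis."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return (np.arccos(np.clip(w, -1.0, 1.0)) / n) * v

def keypose_frames(logs):
    """logs: (n_dancers, n_frames, 3) log-space joint angles.
    Return frames where the across-dancer variance is a local minimum."""
    mean = logs.mean(axis=0)                              # (n_frames, 3)
    var = ((logs - mean) ** 2).sum(axis=2).mean(axis=0)   # (n_frames,)
    return [t for t in range(1, len(var) - 1)
            if var[t] < var[t - 1] and var[t] < var[t + 1]]

# Synthetic data: three dancers whose spread shrinks at frames 10 and 30.
t = np.arange(41)
spread = 2.0 + np.cos(2 * np.pi * t / 20)   # minimal at t = 10 and t = 30
logs = np.zeros((3, 41, 3))
for d, off in enumerate([-1.0, 0.0, 1.0]):
    logs[d, :, 0] = np.sin(2 * np.pi * t / 40) + off * spread
```

With this construction, `keypose_frames(logs)` flags exactly the two frames where the dancers' poses converge, mirroring the local minima at keyposes in the figure.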

Figure 10. Maximum speed, maximum depth, and average timing of the SQUAT task with varying music tempos: red, green, and blue lines represent the three dancers.

Figure 11. STEP and STAND tasks grouped by keypose timings.

Figure 12. Sampling method that incorporates keypose information into the hierarchical motion decomposition. Vertical lines represent sampled time instants, and the dashed curve represents the ground-truth continuous joint-angle trajectory. The data sampled by our method (black dots) are used.
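One plausible reading of such a sampling scheme, offered purely as an assumption (the exact method is defined in the paper): augment uniformly spaced sample instants with the keypose instants, so the decomposition sees data exactly at the keyposes. The times below are hypothetical.

```python
import numpy as np

def keypose_aware_samples(t_start, t_end, n_uniform, keypose_times):
    """Uniform sample instants merged with keypose instants (sorted, deduplicated)."""
    ts = np.linspace(t_start, t_end, n_uniform)
    ts = np.union1d(ts, np.asarray(keypose_times, dtype=float))
    return ts[(ts >= t_start) & (ts <= t_end)]

# 9 uniform samples over 4 s, plus two hypothetical keypose instants.
samples = keypose_aware_samples(0.0, 4.0, 9, keypose_times=[1.25, 2.8])
```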

Figure 13. Skill-parameter adjustment for temporal scaling of upper-body motion. The adjustment gradually decreases the weighting factors, starting from the finest layer of the hierarchical B-spline. Speed limits and angular limits are considered simultaneously in this process.
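The adjustment loop the caption describes can be sketched in numpy. As simplifying assumptions, the sketch checks only a joint-velocity limit (the paper also enforces angular limits), and represents each hierarchical layer by its sampled contribution to the joint trajectory after temporal scaling; the layer data and limit below are invented for illustration.

```python
import numpy as np

def adjust_weights(layer_trajs, dt, vel_limit, step=0.05):
    """Shrink per-layer weighting factors, finest layer first, until the
    summed trajectory's joint velocity is within vel_limit."""
    w = np.ones(len(layer_trajs))
    for l in reversed(range(len(layer_trajs))):   # finest layer first
        while w[l] > 0.0:
            traj = sum(wl * yl for wl, yl in zip(w, layer_trajs))
            if np.max(np.abs(np.diff(traj))) / dt <= vel_limit:
                return w
            w[l] = max(0.0, w[l] - step)
    return w  # all fine layers removed; caller may need further scaling

# Toy layers: a slow base motion plus a fast fine-detail layer (assumed data).
dt = 0.01
t = np.arange(0.0, 1.0, dt)
coarse = np.sin(2 * np.pi * t)
fine = 0.3 * np.sin(20 * np.pi * t)
w = adjust_weights([coarse, fine], dt, vel_limit=7.5)
```

The fine layer's weight shrinks until the velocity limit is met while the coarse base motion is kept at full weight, which matches the finest-layer-first policy of the caption.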

Figure 14. Experiment of whole-body dance motion for the Aizu-bandaisan dance with the physical humanoid robot HRP-2. Top row: reference posture sequence for the original music tempo, generated using the Nakaoka system [6]. Middle and bottom rows: posture sequences for tempos 1.2 and 1.5 times faster than the original.

Figure 15. Velocity sequences of the right knee angle (upper row) and the left shoulder pitch angle (lower row) generated for tempos 1.2 (left column) and 1.5 (right column) times faster than the original. The green lines show the joint angular velocities of motion generated by simple temporal scaling: proportional temporal shrinkage is applied to the task sequences, and the motion is generated using the Nakaoka system [6]. The blue lines show the joint angular velocities of the motion generated by our proposed system. The gray lines show the upper/lower velocity limits. Motions generated by our method satisfy the limits and are feasible on the physical humanoid robot.

© 2014 IEEE.