In this work, we address the problem of learning arm gestures from imitation by humanoid robots when the training set contains missing data. We assume that multiple demonstrations of each gesture are available. The problem is challenging because there is no temporal alignment between the demonstrations. We propose two approaches to handle the missing-data problem. The first uses interpolation to fill the gaps in each observed trajectory, temporally aligns the trajectories, and then obtains a generalized representation by averaging. The second temporally aligns the fragmented trajectories first, and then performs averaging and interpolation to derive a generalized trajectory. We evaluate both approaches on a Nao humanoid robot platform.
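As a rough illustration of the first approach, the following sketch fills gaps by linear interpolation, aligns each demonstration to a reference with classic dynamic time warping (DTW), and averages the aligned trajectories. This is a minimal, hypothetical rendering: the function names, the choice of linear interpolation, plain DTW, and alignment against the first demonstration are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def fill_gaps(traj):
    # Linearly interpolate NaN entries (missing samples) in a 1-D trajectory.
    # Assumes the first and last samples are observed.
    traj = np.asarray(traj, dtype=float)
    idx = np.arange(len(traj))
    missing = np.isnan(traj)
    traj[missing] = np.interp(idx[missing], idx[~missing], traj[~missing])
    return traj

def dtw_path(a, b):
    # Classic DTW: accumulate cost, then backtrack to recover the warping path.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def generalize(demos):
    # Approach 1 (sketch): interpolate gaps, warp each demo onto the
    # timeline of the first one, then average sample-wise.
    filled = [fill_gaps(d) for d in demos]
    ref = filled[0]
    aligned = [ref]
    for d in filled[1:]:
        warped = np.full(len(ref), np.nan)
        for i, j in dtw_path(ref, d):
            warped[i] = d[j]  # keep the last match per reference index
        aligned.append(fill_gaps(warped))
    return np.mean(aligned, axis=0)
```

The second approach would reverse the order: run the alignment directly on the fragmented trajectories (skipping missing samples in the DTW cost), and only interpolate after averaging the aligned data.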