The ability to control the movements of an object or person in a video sequence has applications in the movie and animation industries, and in human–computer interaction (HCI). In this paper, we introduce a new algorithm for real-time motion control and demonstrate its application to pre-recorded video clips and HCI. First, a dataset of video frames is projected into a lower-dimensional space. A k-medoid clustering algorithm, with an associated distance metric, is used to determine groups of similar frames, which act as cut points segmenting the data into smaller subsequences. A multivariate probability distribution is learnt, and probability density estimation is used to determine transitions between the subsequences, generating novel motion. To facilitate real-time control, conditional probabilities are used to derive motion given user commands. The motion controller is extended to HCI by using speech Mel-Frequency Cepstral Coefficients (MFCCs) to trigger movement from an input speech signal. We demonstrate the flexibility of the model by presenting results on datasets composed of both vectorised images and 2D point representations. Results show plausible motion generation and lifelike blends between different types of movement.
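The first two stages of the pipeline described above, projecting the frames into a lower-dimensional space and clustering them with k-medoids to find cut points, can be sketched as follows. This is a minimal illustration only: the paper does not specify its projection method or distance metric, so this sketch assumes a plain PCA projection (via SVD) and Euclidean distances, with frames supplied as vectorised images in a NumPy array.

```python
import numpy as np

def pca_project(frames, n_components=2):
    """Project flattened video frames into a lower-dimensional space.

    Assumes a simple PCA via SVD; the paper's actual projection may differ.
    frames: array of shape (n_frames, n_pixels).
    """
    centered = frames - frames.mean(axis=0)
    # Right singular vectors of the centred data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def k_medoids(points, k, n_iter=50, seed=0):
    """Basic k-medoids clustering with Euclidean distances.

    The returned medoid frames play the role of cut points that segment
    the sequence into subsequences, as described in the abstract.
    """
    rng = np.random.default_rng(seed)
    # Pairwise distance matrix between all projected frames.
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(n_iter):
        # Assign each frame to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # New medoid: the member minimising total intra-cluster distance.
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels
```

In a real system, `frames` would hold the flattened pixel data (or 2D point coordinates) of each video frame, and the cluster labels would mark where one subsequence ends and another may begin.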