In computer animation, human motion capture from video is a widely used technique for acquiring motion parameters. The acquisition process typically requires intruding into the scene with optical markers, which are used to estimate both the motion parameters and the kinematic structure of the performer. Marker-free optical motion capture approaches exist, but because they depend on a specific type of a priori model, they can hardly be used to track other subjects, e.g. animals. To bridge the gap between the generality of marker-based methods and the applicability of marker-free methods, we present a flexible, non-intrusive approach that estimates both a kinematic model and its motion parameters from a sequence of voxel volumes. The volume sequences are reconstructed from multi-view video data by means of a shape-from-silhouette technique. The described method is well suited for, but not limited to, motion capture of human subjects.
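The shape-from-silhouette step can be illustrated with a minimal voxel-carving sketch: each candidate voxel is projected into every camera view and kept only if it falls inside the foreground silhouette in all views. The function name, the pinhole projection matrices, and the grid layout below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid_points):
    """Return an occupancy mask over voxel centres via silhouette carving.

    silhouettes : list of HxW boolean arrays (True = foreground)
    projections : list of 3x4 camera projection matrices (assumed pinhole)
    grid_points : Nx3 array of voxel-centre world coordinates
    """
    occupied = np.ones(len(grid_points), dtype=bool)
    # Homogeneous world coordinates, Nx4
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    for sil, P in zip(silhouettes, projections):
        h, w = sil.shape
        proj = homog @ P.T                         # Nx3 homogeneous image points
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
        # A voxel survives this view only if it projects inside the image
        # bounds AND onto a foreground silhouette pixel.
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        occupied &= hit
    return occupied
```

Intersecting the silhouette cones of all views in this way yields the visual hull, a conservative volumetric bound on the performer; a per-frame occupancy grid of this kind is the kind of voxel volume the abstract refers to.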