We consider the problem of modeling a scene containing multiple dynamic textures undergoing multiple rigid-body motions, e.g., a video sequence of water taken by a rigidly moving camera. We propose to model each moving dynamic texture with a time-varying linear dynamical system (LDS) plus a 2D translational motion model. We first consider a scene with a single moving dynamic texture and show how to simultaneously learn the parameters of the time-varying LDS as well as the optical flow of the scene using the so-called dynamic texture constancy constraint (DTCC). We then consider a scene with multiple non-moving dynamic textures and show that learning the parameters of each time-invariant LDS as well as its region of support is equivalent to clustering data living in multiple subspaces. We solve this problem with a combination of PCA and GPCA. Finally, we consider a scene with multiple moving dynamic textures, and show how to simultaneously learn the parameters of multiple time-varying LDSs and multiple 2D translational models by clustering data living in multiple dynamically evolving subspaces. We test our approach on sequences of flowers, water, grass, and a beating heart.
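To make the LDS model concrete, the sketch below illustrates the standard SVD-based identification of a (time-invariant) dynamic texture from stacked frames, in the spirit of the dynamic-texture literature: factor the data matrix as an output map times a state trajectory, then fit the state dynamics by least squares. This is a minimal, noise-free illustration on synthetic data, not the paper's time-varying algorithm; all variable names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "dynamic texture": p pixels per frame, n hidden states, F frames.
p, n, F = 100, 3, 60

# Ground-truth LDS: x_{t+1} = A x_t,  y_t = C x_t  (noise omitted for clarity).
A_true = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
C_true = np.linalg.qr(rng.standard_normal((p, n)))[0]         # orthonormal output map

X = np.zeros((n, F))
X[:, 0] = rng.standard_normal(n)
for t in range(F - 1):
    X[:, t + 1] = A_true @ X[:, t]
Y = C_true @ X  # p x F data matrix of vectorized frames

# SVD-based identification: factor Y ~ C_hat X_hat with C_hat orthonormal,
# then estimate the dynamics A_hat by least squares on the states.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
C_hat = U[:, :n]                      # estimated output map (pixel subspace)
X_hat = np.diag(s[:n]) @ Vt[:n, :]    # estimated state trajectory
A_hat = X_hat[:, 1:] @ np.linalg.pinv(X_hat[:, :-1])  # one-step dynamics
```

In the noise-free case, `C_hat` spans the same pixel subspace as the true output map, and the identified dynamics reproduce the sequence exactly up to a change of state coordinates.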
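The segmentation step reduces to clustering data living in multiple subspaces, which GPCA solves by fitting a polynomial that vanishes on the union of subspaces and then segmenting points via the polynomial's gradients. The sketch below runs this idea on the simplest possible instance, two lines through the origin in the plane, rather than on video data; the setup and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy subspace-clustering instance: points drawn from two 1-D subspaces
# (lines through the origin) in R^2, here the two coordinate axes.
t1 = rng.uniform(0.3, 1.0, 20) * rng.choice([-1.0, 1.0], 20)
t2 = rng.uniform(0.3, 1.0, 20) * rng.choice([-1.0, 1.0], 20)
pts = np.vstack([np.column_stack([t1, np.zeros(20)]),   # line y = 0
                 np.column_stack([np.zeros(20), t2])])  # line x = 0

# GPCA idea: points on a union of two lines satisfy a single quadratic
# c1*x^2 + c2*x*y + c3*y^2 = 0, whose coefficient vector lies in the
# null space of the degree-2 Veronese embedding of the data.
V = np.column_stack([pts[:, 0]**2, pts[:, 0] * pts[:, 1], pts[:, 1]**2])
c = np.linalg.svd(V)[2][-1]  # smallest right singular vector = coefficients

# The polynomial's gradient at a data point is normal to the subspace
# containing it, so clustering the (sign-normalized) gradient directions
# segments the data into its subspaces.
grads = np.column_stack([2 * c[0] * pts[:, 0] + c[1] * pts[:, 1],
                         c[1] * pts[:, 0] + 2 * c[2] * pts[:, 1]])
grads /= np.linalg.norm(grads, axis=1, keepdims=True)
labels = (np.abs(grads @ grads[0]) < 0.5).astype(int)
```

For the two axes, the fitted polynomial is (up to scale) `x*y`, and the gradient directions separate the two lines cleanly; the full method combines this machinery with PCA to handle high-dimensional video data.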