A new approach for motion characterization in image sequences is presented. It relies on the probabilistic modeling of temporal and scale co-occurrence distributions of local motion-related measurements directly computed over image sequences. Temporal multiscale Gibbs models allow us to handle both spatial and temporal aspects of image motion content within a unified statistical framework. Since this modeling mainly involves the scalar product between co-occurrence values and Gibbs potentials, we can formulate and address several fundamental issues: model estimation according to the ML criterion (hence, model training and learning) and motion classification. We have conducted motion recognition experiments over a large set of real image sequences comprising various motion types such as temporal texture samples, human motion examples, and rigid motion situations.
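The abstract notes that the modeling mainly involves a scalar product between co-occurrence values and Gibbs potentials, which makes classification by maximum likelihood straightforward. Below is a minimal illustrative sketch of that idea, not the authors' implementation: the function names, the toy quantized motion labels, and the hand-picked potential tables are all assumptions for demonstration. Scoring a sequence against a trained model reduces to the dot product between its temporal co-occurrence counts and the model's potentials (the normalizing constant can be ignored when comparing models fitted on the same lattice).

```python
import numpy as np

def temporal_cooccurrences(labels, n_levels):
    """Count co-occurrences of quantized motion measurements
    at successive time instants (toy 1D temporal version)."""
    counts = np.zeros((n_levels, n_levels))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    return counts

def gibbs_score(counts, potentials):
    """Log-likelihood up to an additive constant: the scalar product
    between co-occurrence values and Gibbs potentials."""
    return float(np.sum(counts * potentials))

def classify(labels, models, n_levels):
    """Assign the sequence to the model with the highest score (ML decision)."""
    counts = temporal_cooccurrences(labels, n_levels)
    scores = {name: gibbs_score(counts, pot) for name, pot in models.items()}
    return max(scores, key=scores.get)

# Hypothetical potentials: one model rewards temporally stable labels
# (diagonal transitions), the other rewards label changes.
models = {
    "stable": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "changing": np.array([[0.0, 1.0], [1.0, 0.0]]),
}
```

For instance, a mostly constant label sequence such as `[0, 0, 0, 0, 1, 1]` accumulates mass on the diagonal of the co-occurrence matrix and is assigned to the "stable" model, while an alternating sequence goes to "changing".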