This paper presents an improved warping background subtraction model for moving object detection, designed to detect moving targets effectively when the background undergoes complex motions with self-occlusions. Unlike many approaches that model each pixel independently, we assume that each pixel is correlated with its surrounding pixels, which enables us to compare a pixel with its neighbors in both intensity and distribution to distinguish foreground from background. The warping of pixel locations caused by background motion is explicitly modeled: the background is represented as a set of warping layers, and different layers may become visible as an occluding layer moves. Foreground regions are then defined as those that cannot be modeled by this set. From a training set, we estimate the warping range of each pixel and generate a reference background with an associated neighborhood table. Using this table, the canonical image is divided into dynamic and static blocks, which are processed with different strategies. Experimental results show that this approach compares favorably with the state of the art, achieving higher precision at lower computational cost.
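The core idea of tolerating background warping by comparing each pixel against a spatial neighborhood of the reference background can be illustrated with a minimal sketch. This is not the paper's exact method: the function name, the fixed search radius standing in for the learned per-pixel warping range, and the intensity threshold `tau` are all illustrative assumptions.

```python
import numpy as np

def neighborhood_background_subtraction(frame, background, radius=2, tau=20):
    """Label a pixel as foreground only if no background pixel within a
    (2*radius+1)^2 neighborhood matches its intensity within tau.

    A plain per-pixel difference would flag warped background as foreground;
    searching the neighborhood tolerates background displacement of up to
    `radius` pixels (a simplified stand-in for a learned warping range).
    """
    h, w = frame.shape
    frame = frame.astype(np.int16)
    # Pad so every pixel has a full neighborhood, replicating border values.
    pad = np.pad(background.astype(np.int16), radius, mode="edge")
    matched = np.zeros((h, w), dtype=bool)
    # Compare the frame against every shifted copy of the background.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = pad[dy:dy + h, dx:dx + w]
            matched |= np.abs(frame - shifted) <= tau
    return ~matched  # True where no neighborhood match exists: foreground
```

For example, a bright background feature that merely shifts by one pixel between the reference and the current frame is still matched within the neighborhood and is not reported as foreground, whereas a genuinely new bright region is.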