In this paper we discuss the analogy between the analysis of spatially distributed sensor networks and image processing. The analogy stems from the fact that in high-density sensor networks the sensor outputs are correlated both spatially and temporally: the output of a sensor is correlated with the outputs of its neighbours. This characteristic is very similar to that of pixel intensities in video signals. A video signal consists of multiple temporally correlated frames, and each frame consists of a large number of pixels with typically high spatial correlation between neighbouring pixels. By establishing this relation, one can apply well-known image processing techniques to sensor data compression, fusion, and analysis. As an example, we show how quadtree image decomposition can be used for spatial decomposition of a sensor field.
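To make the quadtree idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of how a sensor field could be decomposed: treating the sensor readings as a 2-D grid, a square region is split into four quadrants whenever its readings are not sufficiently homogeneous, exactly as in quadtree image decomposition. The function name, the spread-based homogeneity test, and the threshold values are illustrative assumptions.

```python
def quadtree_decompose(values, x, y, size, threshold, min_size=2):
    """Recursively split a square region of sensor readings into
    quadrants until each block is nearly homogeneous (small spread
    between the maximum and minimum reading) or minimally sized.
    Returns the leaf blocks as (x, y, size) tuples.
    Note: illustrative sketch, not the algorithm from the paper."""
    block = [row[x:x + size] for row in values[y:y + size]]
    flat = [v for row in block for v in row]
    # Stop splitting: block is homogeneous or already at minimum size.
    if size <= min_size or max(flat) - min(flat) <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_decompose(values, x + dx, y + dy,
                                         half, threshold, min_size)
    return leaves

# Example: an 8x8 sensor field, uniform except for a "hot spot"
# (e.g. elevated temperature) in the top-left 2x2 corner.
field = [[0.0] * 8 for _ in range(8)]
for r in range(2):
    for c in range(2):
        field[r][c] = 10.0

leaves = quadtree_decompose(field, 0, 0, 8, threshold=1.0)
# Decomposition: four 2x2 blocks around the hot spot plus three
# uniform 4x4 quadrants, i.e. 7 leaves covering all 64 cells.
print(len(leaves))
```

Homogeneous regions are represented by a few large blocks, while detail is retained only where the readings vary, which is precisely what makes the decomposition attractive for sensor data compression and fusion.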