Recently, much research has been conducted in visual sensor networks. Compared to traditional sensor networks, vision networks differ in several aspects, such as the amount of data to be processed and transmitted, the quality-of-service requirements, and the level of collaboration among the sensor nodes. This paper deals with sensor fusion in visual sensor networks. We focus on methods for fusing data from various distributed sensors and present a generic framework for fusion on embedded sensor nodes. This paper extends our previous work on distributed smart cameras and presents our approach toward transforming smart cameras into a distributed, embedded multisensor network. Our generic fusion model has been fully implemented on a distributed embedded system. It provides a middleware that supports automatic mapping of our fusion model onto the target hardware. This middleware features dynamic reconfiguration, allowing the fusion application to be modified at runtime without loss of sensor data. The feasibility and reusability of the I-SENSE concept are demonstrated with experimental results from two case studies: vehicle classification and bulk-goods separation. Qualitative and quantitative benefits of multilevel information fusion are outlined in this paper.
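To make the idea of runtime reconfiguration without sensor-data loss concrete, the following is a minimal sketch, not the paper's actual I-SENSE middleware API: a hypothetical `FusionNode` buffers incoming sensor readings and allows the fusion operator to be swapped at runtime while buffered data survives the swap. All names and the averaging/max operators are illustrative assumptions.

```python
import threading
from collections import deque

class FusionNode:
    """Illustrative fusion node (not the I-SENSE API): sensor readings
    are buffered in a queue, and the fusion operator can be swapped at
    runtime (dynamic reconfiguration) without dropping buffered data."""

    def __init__(self, fuse_fn):
        self._fuse_fn = fuse_fn        # current fusion operator
        self._buffer = deque()         # pending sensor readings
        self._lock = threading.Lock()  # guards swaps vs. processing

    def push(self, reading):
        """Enqueue a new sensor reading."""
        with self._lock:
            self._buffer.append(reading)

    def reconfigure(self, new_fn):
        """Swap the fusion operator; buffered readings are preserved."""
        with self._lock:
            self._fuse_fn = new_fn

    def process(self):
        """Drain the buffer and fuse all pending readings."""
        with self._lock:
            readings = list(self._buffer)
            self._buffer.clear()
            return self._fuse_fn(readings)

# Usage: readings pushed before reconfiguration are still fused afterward.
node = FusionNode(lambda xs: sum(xs) / len(xs))  # start with averaging
node.push(1.0)
node.push(3.0)
node.reconfigure(max)       # switch fusion operator at runtime
result = node.process()     # buffered readings were not lost
```

The lock serializes pushes, operator swaps, and processing, which is the essential property the abstract claims: reconfiguration can happen at any point without discarding data already received from the sensors.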