In this work, we consider an event-driven wireless visual sensor network (WVSN) in which, to conserve energy and bandwidth, each camera node transmits a frame to the cluster head only if an event of interest is captured in that frame. Specifically, we consider the scenario in which each camera node receives decision support from an independent but possibly attacked (and hence error-prone) scalar sensor regarding the presence or absence of an event. We study the overall detection performance achieved by various techniques that combine the scalar and image-based decisions. We conclude that for image sequences involving extraneous lighting and background changes (such as in outdoor surveillance), the combination techniques generally achieve a lower total probability of error.
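The fusion idea described above can be illustrated with a Monte-Carlo sketch. The event prior, the scalar sensor's error rate, and the image detector's miss and false-alarm rates below are purely illustrative assumptions (not values from this work), and the AND/OR combination rules are generic stand-ins for the paper's combination techniques:

```python
import random

def total_error(rule, p_event=0.3, p_scalar_err=0.2,
                p_img_miss=0.1, p_img_fa=0.15, n=200_000, seed=0):
    """Monte-Carlo estimate of the total probability of error for a fusion rule.

    All rate parameters are illustrative assumptions, not taken from the paper.
    `rule` maps (scalar_decision, image_decision) -> fused decision.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        event = rng.random() < p_event
        # Scalar sensor: flips the true state with probability p_scalar_err
        # (modeling an attacked, hence error-prone, sensor)
        scalar = event != (rng.random() < p_scalar_err)
        # Image-based detector: separate miss and false-alarm rates
        image = (rng.random() >= p_img_miss) if event else (rng.random() < p_img_fa)
        if rule(scalar, image) != event:
            errors += 1
    return errors / n

# Candidate combination rules (hypothetical examples)
and_rule = lambda s, i: s and i   # declare event only if both agree
or_rule = lambda s, i: s or i     # declare event if either fires
img_only = lambda s, i: i         # ignore the scalar sensor

for name, rule in [("AND", and_rule), ("OR", or_rule), ("image-only", img_only)]:
    print(f"{name:10s} P(error) ~ {total_error(rule):.3f}")
```

Under these particular assumed rates, AND fusion yields a lower total error than the image detector alone, matching the qualitative conclusion that combining the two decision sources can help; with different rates (e.g., a heavily attacked scalar sensor) the ordering can reverse.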