Over the past decade, large-scale camera networks have become increasingly prevalent in a wide range of applications, such as security and surveillance, disaster response, and environmental modeling. In many applications, bandwidth constraints, security concerns, and the difficulty of storing and analyzing large amounts of data at a single central location necessitate distributed camera network architectures. Thus, the development of distributed scene-analysis algorithms has received much attention lately. However, the performance of these algorithms often suffers from an inability to acquire the desired images effectively, especially when the targets are dispersed over a wide field of view (FOV). In this article, we show how to develop an end-to-end framework for integrated sensing and analysis in a distributed camera network so as to maximize various scene-understanding performance criteria (e.g., tracking accuracy, best shot, and image resolution).
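To make the idea of maximizing scene-understanding criteria concrete, the sketch below shows one minimal, hypothetical way a network might assign cameras to targets: each camera makes a local, greedy choice using a toy utility that rewards expected image resolution (modeled as inversely related to distance) and coverage of untracked targets. The utility form, function names, and data layout are illustrative assumptions, not the article's actual algorithm.

```python
import math

def utility(camera, target, tracked):
    """Toy utility of `camera` observing `target` (illustrative assumption)."""
    dist = math.hypot(camera["x"] - target["x"], camera["y"] - target["y"])
    resolution = 1.0 / (1.0 + dist)            # closer target => higher resolution
    coverage_bonus = 0.5 if target["id"] not in tracked else 0.0
    return resolution + coverage_bonus          # favor still-untracked targets

def assign(cameras, targets):
    """Greedy per-camera target selection: one local decision per camera."""
    tracked = set()
    assignment = {}
    for cam in cameras:
        best = max(targets, key=lambda t: utility(cam, t, tracked))
        assignment[cam["id"]] = best["id"]
        tracked.add(best["id"])
    return assignment

cameras = [{"id": "c1", "x": 0, "y": 0}, {"id": "c2", "x": 10, "y": 0}]
targets = [{"id": "t1", "x": 1, "y": 1}, {"id": "t2", "x": 9, "y": 1}]
print(assign(cameras, targets))  # each camera picks its nearby, untracked target
```

In a real distributed system, each camera would evaluate such a utility using only locally shared state rather than a global `tracked` set; the sketch simply illustrates the kind of criterion-driven sensing decision the article's framework optimizes.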
Date of Publication: May 2011