We propose a human activity classification algorithm that admits a distributed, lightweight implementation suitable for wireless camera networks. With input from multiple cameras, our algorithm achieves invariance to the orientation of the actor and to the camera viewpoint. We conceptually describe how the algorithm can be implemented on a distributed architecture, obviating the need for centralized processing of the entire multi-camera data stream. The lightweight implementation is made possible by the algorithm's modest memory and communication-bandwidth requirements. Notwithstanding its lightweight nature, the algorithm's performance is comparable to that of earlier multi-camera approaches based on computationally expensive techniques such as 3D human model construction and silhouette matching using reprojected 2D views. Our algorithm is based on multi-view spatio-temporal histogram features obtained directly from acquired images; no background subtraction is required. Results are analyzed for two publicly available multi-camera, multi-action datasets. The system's advantages relative to single-camera techniques are also discussed.
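To make the distributed-feature idea concrete, the following is a minimal sketch (not the paper's exact descriptor) of how each camera could compute a compact spatio-temporal histogram from raw frames, with only the resulting vectors being concatenated across views. All function names, the grid size, and the bin count are illustrative assumptions.

```python
import numpy as np

def spatiotemporal_histogram(frames, grid=(4, 4), n_bins=8):
    """Per-camera feature (illustrative): histogram the average motion
    energy from consecutive frame differences over a coarse spatial grid.
    No background subtraction is needed; raw frames suffice."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    motion = diffs.mean(axis=0)                                 # (H, W)
    gh, gw = grid
    H, W = motion.shape
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = motion[i * H // gh:(i + 1) * H // gh,
                          j * W // gw:(j + 1) * W // gw]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0.0, 255.0))
            feats.append(hist / max(cell.size, 1))  # normalize per cell
    return np.concatenate(feats)

def multiview_feature(views):
    """Concatenate per-camera histograms. Each camera computes its own
    part locally, so only short vectors (not images) are transmitted."""
    return np.concatenate([spatiotemporal_histogram(v) for v in views])
```

Because each camera transmits only a short histogram vector rather than images, the communication cost per decision stays small regardless of frame resolution, which is the property the lightweight implementation relies on.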