Linear Dynamic Data Fusion Techniques for Face Orientation Estimation in Smart Camera Networks

2 Author(s)
Chung-Ching Chang; Hamid Aghajan — Wireless Sensor Networks Lab, Department of Electrical Engineering, Stanford University, Stanford, CA 94305

Abstract:

Face orientation estimation problems arise in camera network applications such as human-computer interfaces (HCI) and person recognition and tracking. In this paper, we propose and compare two collaborative face orientation estimation techniques in smart camera networks based on fusion of coarse local estimates in a joint estimation model at the network level. The techniques employ low-complexity methods for in-node face orientation and angular motion estimation to accommodate the computational limitations of smart camera nodes. The local estimates are hence assumed coarse and prone to errors. In the joint refined estimation phase, the problem is modeled as a discrete-time linear dynamical system, and linear quadratic regulation (LQR) and Kalman filtering (KF) methods are applied. In the LQR-based analysis, the spatiotemporal consistency between cameras is measured by a cost function composed as a weighted quadratic sum of spatial inconsistency, input energy, and in-node estimation error. Minimizing this cost function through LQR yields a robust closed-loop feedback system that successfully estimates the face orientation, the angular motion, and the relative angular differences to the face between cameras. In the KF-based analysis, the confidence level of each local estimate is used as a weight in the measurement update. This model extends naturally to missing-data cases in which not all local estimates are collected in the network, offering flexibility in communication scheduling between the nodes. The proposed techniques do not require camera locations to be known a priori, and are hence applicable to vision networks deployed casually without localization.
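The KF-based fusion described above can be illustrated with a short sketch. This is not the paper's exact model: the state, noise parameters, and the function `fuse_orientation` are assumptions for illustration. It tracks a constant-angular-velocity state [orientation, angular rate], scales each camera's measurement noise inversely with a confidence weight, and simply skips the measurement update when a local estimate is missing — mirroring the missing-data flexibility mentioned in the abstract.

```python
import numpy as np

def fuse_orientation(measurements, confidences, dt=1.0,
                     sigma_q=0.5, sigma_r=10.0):
    """Kalman-filter fusion of coarse orientation estimates (sketch).

    measurements: per-step local orientation estimates in degrees
                  (None = estimate not collected this step).
    confidences:  per-step confidence weights in (0, 1].
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    Q = sigma_q ** 2 * np.eye(2)               # process noise covariance
    H = np.array([[1.0, 0.0]])                 # we observe orientation only
    x = np.array([[measurements[0]], [0.0]])   # init from first local estimate
    P = 100.0 * np.eye(2)                      # broad initial uncertainty
    estimates = []
    for z, c in zip(measurements, confidences):
        # Time update (predict).
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:
            # Measurement update: low confidence inflates the noise R,
            # so unreliable local estimates pull the state less.
            R = np.array([[sigma_r ** 2 / max(c, 1e-6)]])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Usage: a face rotating ~2 deg/step, with one dropped local estimate.
est = fuse_orientation([0.0, 2.5, None, 5.8, 8.1],
                       [0.9, 0.5, 0.0, 0.9, 0.9])
```

Extending this single-measurement update to a stacked vector of per-camera estimates (one row of H per camera, with per-camera confidences on the diagonal of R) gives the multi-camera fusion the paper targets.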

Published in:

2007 First ACM/IEEE International Conference on Distributed Smart Cameras

Date of Conference:

25-28 Sept. 2007