Continuous global evidence-based Bayesian modality fusion for simultaneous tracking of multiple objects

Authors: J. Sherrah (Dept. of Computer Science, Queen Mary University of London, UK); Shaogang Gong

Robust, real-time tracking of objects from visual data requires probabilistic fusion of multiple visual cues. Previous approaches have either been ad hoc or have relied on a Bayesian network with discrete spatial variables, which suffers from discretisation error and computational complexity. We present a new Bayesian modality-fusion network that uses continuous domain variables. The network architecture distinguishes between cues that are necessary and those that are unnecessary for the object's presence, and computationally expensive and inexpensive modalities are handled differently to minimise cost. The method provides a formal, tractable and robust probabilistic framework for simultaneously tracking multiple objects. While instantaneous inference is exact, approximation is required for propagation over time.
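The paper's fusion network is more elaborate than can be shown here, but the core idea of combining continuous-valued cue estimates in a Bayesian way can be sketched with precision-weighted Gaussian fusion. The function below is an illustrative assumption, not the authors' implementation: it assumes each visual cue (e.g. motion, colour) reports an independent Gaussian estimate of a 1-D object position, and fuses them under a flat prior.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Fuse independent Gaussian cue estimates by precision weighting.

    Each cue i reports a position estimate means[i] with uncertainty
    variances[i]. Under a flat prior, the posterior is Gaussian with
    precision equal to the sum of the cue precisions, and mean equal to
    the precision-weighted average of the cue means.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Example: a broad motion cue and a sharper colour cue for one position.
mean, var = fuse_gaussian_cues([10.0, 12.0], [4.0, 1.0])
# The fused estimate lies closer to the more confident (colour) cue,
# with lower variance than either cue alone.
```

Note how a sharp cue dominates a diffuse one automatically, which is the behaviour that ad hoc weighted averaging schemes must tune by hand.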

Published in:

Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Volume 2

Date of Conference: