Active lighting learning for 3D model based vehicle tracking

Authors: Tingbo Hou, Sen Wang, Hong Qin (Computer Science Department, Stony Brook University, USA)

Abstract:

Varying illumination is a challenging issue in many computer vision problems (e.g., tagging, matching, and tracking), while in inverse rendering one is interested in estimating illumination from rendered images or videos. Can these two techniques be combined into a unified framework for vehicle tracking and lighting learning? This paper offers perhaps the first treatment of this joint problem, presenting a framework that adaptively learns lighting from an image sequence while tracking the object (specifically, a vehicle) in it. We formulate the illumination model with both diffuse and specular components using a frequency-space representation, and design a nonlinear model to estimate lighting coefficients in a low-dimensional subspace. Lighting learning and vehicle tracking are integrated in a unified Markov network, which is solved by an iterative belief propagation (BP) method. The proposed framework can track a vehicle moving in a video and can transfer the learned lighting to other objects, which shows its potential in augmented reality.
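The abstract does not reproduce the paper's exact frequency-space formulation. As a hedged illustration of the general idea, one common frequency-space choice for the diffuse component is a truncated real spherical-harmonic basis, whose lighting coefficients can be recovered from observed surface intensities by least squares. The basis order, function names, and synthetic data below are assumptions made for this sketch, not the authors' implementation:

```python
import numpy as np

def sh_basis(n):
    # First four real spherical-harmonic basis values (orders 0 and 1)
    # evaluated at a unit surface normal n = (x, y, z).
    x, y, z = n
    return np.array([0.282095,            # Y_00 (constant term)
                     0.488603 * y,        # Y_1,-1
                     0.488603 * z,        # Y_1,0
                     0.488603 * x])       # Y_1,1

def estimate_lighting(normals, intensities):
    # Least-squares fit of lighting coefficients l in I ≈ B(n) · l,
    # where B stacks the SH basis evaluated at each observed normal.
    B = np.stack([sh_basis(n) for n in normals])
    l, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return l

# Synthetic check: render diffuse intensities with known coefficients,
# then recover them from the observations alone.
rng = np.random.default_rng(0)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_l = np.array([1.0, 0.3, -0.2, 0.5])
I = np.array([sh_basis(n) @ true_l for n in normals])
est_l = estimate_lighting(normals, I)
```

With noise-free synthetic observations the fit recovers the generating coefficients to numerical precision; the paper's actual method additionally handles specularity and couples this estimation with tracking in a Markov network.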

Published in:

2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops

Date of Conference:

13-18 June 2010