Varying illumination is a challenging issue in many computer vision problems (e.g., tagging, matching, and tracking), while in inverse rendering the goal is to estimate illumination from rendered images or videos. Can these two techniques be combined into a unified framework for vehicle tracking and lighting learning? This paper presents what is, to our knowledge, the first attempt at this joint problem: a framework that adaptively learns lighting from an image sequence while tracking the object (specifically, a vehicle) in it. We formulate the illumination model with both diffuse and specular components using a frequency-space representation, and design a nonlinear model to estimate the lighting coefficients in a low-dimensional subspace. Lighting learning and vehicle tracking are integrated in a unified Markov network, which is solved by an iterative belief propagation (BP) method. The proposed framework can track a vehicle moving through a video and transfer the learned lighting to other objects, which shows its potential for augmented reality.
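To make the frequency-space idea concrete, the sketch below fits lighting coefficients in a low-dimensional subspace using a second-order real spherical-harmonics basis, a common frequency-space representation for illumination. This is a simplified, hypothetical illustration, not the paper's method: it models only the diffuse component and solves a linear least-squares problem, whereas the paper uses a nonlinear model with both diffuse and specular terms; all function names and the synthetic data are assumptions.

```python
import numpy as np

def sh_basis(normals):
    """Real spherical-harmonics basis (order 2, 9 terms) at unit normals.

    normals: (N, 3) array of unit surface normals.
    Returns an (N, 9) design matrix."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),   # Y_0,0
        0.488603 * y,                 # Y_1,-1
        0.488603 * z,                 # Y_1,0
        0.488603 * x,                 # Y_1,1
        1.092548 * x * y,             # Y_2,-2
        1.092548 * y * z,             # Y_2,-1
        0.315392 * (3 * z**2 - 1),    # Y_2,0
        1.092548 * x * z,             # Y_2,1
        0.546274 * (x**2 - y**2),     # Y_2,2
    ], axis=1)

def estimate_lighting(normals, intensities):
    """Least-squares fit of 9 SH lighting coefficients to diffuse shading."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs

# Synthetic check: render shading under known coefficients, then recover them.
rng = np.random.default_rng(0)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)  # project to unit sphere
true_coeffs = rng.normal(size=9)
shading = sh_basis(n) @ true_coeffs
est = estimate_lighting(n, shading)
print(np.allclose(est, true_coeffs, atol=1e-6))
```

In this low-dimensional subspace, nine coefficients summarize the illumination environment, which is what makes per-frame lighting estimation tractable inside a tracking loop.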