Recovering fluid-type motions using Navier-Stokes potential flow

4 Author(s)
Feng Li (Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716, USA); Liwei Xu; Philippe Guyenne; Jingyi Yu

Classical optical flow assumes that a feature point maintains constant brightness across frames. For fluid-type motions such as smoke or clouds, the constant-brightness assumption does not hold, and accurately estimating the motion flow from their images is difficult. In this paper, we introduce a simple but effective Navier-Stokes (NS) potential flow model for recovering fluid-type motions. Our method treats the image as a wavefront surface and models the 3D potential flow beneath the surface. The gradient of the velocity potential describes the motion flow at every voxel. We first derive a general brightness constraint that explicitly models wavefront (brightness) variations in terms of the velocity potential. We then use a series of partial differential equations to separately model the dynamics of the potential flow. To solve for the potential flow, we use the Dirichlet-Neumann Operator (DNO) to reduce the 3D volumetric velocity potential to a 2D surface velocity potential. We approximate the DNO via Taylor expansions and develop a Fourier-domain method to efficiently estimate the Taylor coefficients. Finally, we show how to use the DNO to recover the velocity potential from images as well as to propagate the wavefront (image) over time. Experimental results on both synthetic and real images show that our technique is robust and reliable.
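The abstract's Fourier-domain treatment of the Dirichlet-Neumann Operator can be illustrated at leading order: for a flat surface over deep water, the zeroth term of the DNO Taylor expansion acts as a multiplier by |k| in Fourier space. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of that zeroth-order operator on a 1D periodic domain, with the function name `dno_order0` chosen here for illustration.

```python
import numpy as np

def dno_order0(phi, L):
    """Apply the zeroth-order Dirichlet-Neumann operator on a periodic
    domain of length L: G0[phi] = F^{-1}(|k| * F(phi)), i.e. a Fourier
    multiplier by |k| (flat surface, deep-water limit)."""
    n = phi.size
    # Wavenumbers for a grid of n points with spacing L/n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(np.abs(k) * np.fft.fft(phi)).real

# Usage: for phi(x) = cos(k0 x), the operator returns |k0| cos(k0 x),
# so a single Fourier mode is simply rescaled by its wavenumber.
L = 2.0 * np.pi
x = np.linspace(0.0, L, 256, endpoint=False)
phi = np.cos(3.0 * x)
g0 = dno_order0(phi, L)   # ~ 3 * cos(3x)
```

Higher-order terms of the Taylor expansion couple the surface elevation to the potential through further Fourier multiplications, which is what makes the FFT-based evaluation efficient.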

Published in:

2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Date of Conference:

13-18 June 2010