Visual conditioning for augmented-reality-assisted video conferencing

Authors (2):
Guleryuz, O. G. (Futurewei Technologies, Inc., Santa Clara, CA, USA); Kalker, A.

Typical video conferencing scenarios bring together individuals from disparate environments. Unless one commits to expensive telepresence rooms, conferences involving many individuals result in a cacophony of visuals and backgrounds. Ideally, one would like to separate participant visuals from their respective environments and render them over visually pleasing backgrounds that enhance immersion for all. Yet available image/video segmentation techniques are limited and result in significant artifacts, even with recently popular commodity depth sensors. In this paper we present a technique that accomplishes robust and visually pleasing rendering of segmented participants over adaptively designed virtual backgrounds. Our method works by determining virtual backgrounds that match and highlight participant visuals, and uses directional textures to hide segmentation artifacts due to noisy segmentation boundaries, missing regions, etc. Taking advantage of simple computations and look-up tables, our work leads to fast, real-time implementations that can run on mobile and other computationally limited platforms.
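
The abstract only summarizes the method, so the following is a minimal sketch, under stated assumptions, of the general kind of pipeline it describes: a noisy binary segmentation mask is feathered near its boundary and modulated with a simple directional (striped) texture before alpha-compositing the participant over a virtual background. All function names, parameters, and the stripe pattern are illustrative choices, not the authors' actual algorithm, which additionally designs the background adaptively and uses look-up tables for speed.

```python
# Illustrative sketch (not the authors' implementation): hide a noisy
# segmentation boundary with a directional texture blend while compositing
# a participant over a virtual background. All names/parameters are assumed.

import numpy as np


def directional_boundary_alpha(mask, width=8):
    """Soften a binary segmentation mask near its boundary.

    Distance to the boundary is approximated with iterative erosion so the
    sketch stays dependency-free; a real implementation would use a proper
    distance transform or precomputed look-up tables.
    """
    alpha = mask.astype(np.float32)
    soft = alpha.copy()
    current = alpha.copy()
    for i in range(1, width + 1):
        # Erode by one pixel: a pixel survives only if its 4-neighbours are set.
        eroded = np.minimum.reduce([
            current,
            np.roll(current, 1, axis=0), np.roll(current, -1, axis=0),
            np.roll(current, 1, axis=1), np.roll(current, -1, axis=1),
        ])
        ring = current - eroded            # pixels at depth i from the boundary
        soft = np.where(ring > 0, i / float(width + 1), soft)
        current = eroded
    return np.clip(soft, 0.0, 1.0)


def composite(frame, mask, background, stripe_period=12):
    """Blend the participant over the virtual background.

    Near the boundary, alpha is modulated with a directional (striped)
    texture so residual segmentation errors read as a deliberate pattern
    rather than as ragged edges.
    """
    h, w, _ = frame.shape
    alpha = directional_boundary_alpha(mask)

    # Directional texture: diagonal stripes, applied only where alpha is soft.
    yy, xx = np.mgrid[0:h, 0:w]
    stripes = 0.5 + 0.5 * np.sin(2.0 * np.pi * (xx + yy) / stripe_period)
    boundary = (alpha > 0.0) & (alpha < 1.0)
    alpha = np.where(boundary, alpha * stripes, alpha)

    alpha = alpha[..., None]
    return (alpha * frame + (1.0 - alpha) * background).astype(frame.dtype)


if __name__ == "__main__":
    # Toy example: random "frame", circular participant mask, flat background.
    h, w = 120, 160
    frame = (np.random.rand(h, w, 3) * 255).astype(np.uint8)
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2) < 40 ** 2
    background = np.full((h, w, 3), (30, 60, 120), dtype=np.uint8)
    out = composite(frame, mask, background)
    print(out.shape, out.dtype)
```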

Published in:

2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP)

Date of Conference:

17-19 Sept. 2012