Typical video conferencing scenarios bring together individuals from disparate environments. Unless one commits to expensive telepresence rooms, conferences involving many individuals result in a cacophony of visuals and backgrounds. Ideally one would like to separate participant visuals from their respective environments and render them over visually pleasing backgrounds that enhance immersion for all. Yet available image/video segmentation techniques are limited and produce significant artifacts even with recently popular commodity depth sensors. In this paper we present a technique that accomplishes robust and visually pleasing rendering of segmented participants over adaptively designed virtual backgrounds. Our method works by determining virtual backgrounds that match and highlight participant visuals, and uses directional textures to hide segmentation artifacts due to noisy segmentation boundaries, missing regions, etc. Taking advantage of simple computations and look-up tables, our work leads to fast, real-time implementations that can run on mobile and other computationally limited platforms.
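To make the boundary-artifact problem concrete, the sketch below shows the simplest baseline the paper improves upon: compositing a segmented participant over a virtual background through a feathered (blurred) segmentation mask, so that noisy mask boundaries fade into the background rather than showing hard, jagged edges. This is a hedged illustration only, using plain NumPy; the function names (`feather_mask`, `composite`) and the separable box blur are our own stand-ins, not the paper's directional-texture method, which shapes the blending along boundary orientation rather than isotropically.

```python
import numpy as np

def feather_mask(mask, radius=3):
    """Soften a binary segmentation mask (H x W, values 0/1) with a
    separable box blur of half-width `radius`, yielding a soft alpha
    matte. Illustrative stand-in for boundary-artifact hiding; not the
    paper's directional-texture technique."""
    alpha = mask.astype(np.float64)
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # Separable blur: convolve each row, then each column.
    alpha = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, alpha)
    alpha = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, alpha)
    return np.clip(alpha, 0.0, 1.0)

def composite(fg, bg, mask, radius=3):
    """Alpha-blend segmented participant pixels (fg, H x W x 3) over a
    virtual background (bg, same shape) using the feathered mask."""
    alpha = feather_mask(mask, radius)[..., None]
    return alpha * fg + (1.0 - alpha) * bg
```

For example, with a 16x16 frame whose mask is a centered square, interior pixels come entirely from the foreground, pixels far outside come entirely from the virtual background, and pixels on the noisy boundary receive an intermediate blend, which is what visually hides small segmentation errors.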