
IEE Proceedings - Vision, Image and Signal Processing

Issue 4 • 5 Aug. 2005

  • Visual media production

    Publication Year: 2005, Page(s): 385-386

    This editorial introduces a special issue presenting extended versions of selected papers from the 1st IEE European Conference on Visual Media Production (CVMP 2004), held 15-16 March 2004 in London. The scope of the conference was the convergence of image processing, computer vision and computer graphics, with application to the production of visual media. The organisers' intention was to create a forum that fills the gap in Europe between high-profile, academically oriented conferences and more commercially oriented industrial showcases.

  • Semi-automatic foreground/background segmentation of motion picture images and image sequences

    Publication Year: 2005, Page(s): 387-397
    Cited by: Papers (1) | Patents (1)

    Segmentation of images into foreground (an actor) and background is required for many motion picture special effects. To produce these shots, the unwanted background must be removed so that none of it appears in the final composite shot. The standard approach requires the background to be a blue screen. Systems that are capable of segmenting actors from more natural backgrounds have been proposed, but many of these are not readily adaptable to the resolution involved in motion picture imaging. An algorithm is presented that requires minimal human interaction to segment motion picture resolution images. Results from this algorithm are quantitatively compared with alternative approaches. Adaptations to the algorithm, which enable segmentation even when the foreground is lit from behind, are described. Segmentation of image sequences normally requires manual creation of a separate hint image for each frame of a sequence. An algorithm is presented that generates such hint images automatically, so that only a single input is required for an entire sequence. Results are presented that show that the algorithm successfully generates hint images where an alternative approach fails.
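The blue-screen "standard approach" the abstract contrasts against can be sketched as a per-pixel colour-difference key. A minimal illustration only; the threshold and test colours are assumptions, and this is not the paper's semi-automatic algorithm:

```python
import numpy as np

def blue_screen_matte(img, threshold=0.1):
    """Return a boolean mask, True = foreground.

    img: float array (H, W, 3), RGB in [0, 1]. Background pixels on a
    blue screen satisfy B > max(R, G) by some margin; everything else
    is treated as foreground.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    blueness = b - np.maximum(r, g)
    return blueness < threshold

# Tiny synthetic frame: left half blue screen, right half an actor.
frame = np.zeros((4, 8, 3))
frame[:, :4] = [0.1, 0.1, 0.9]   # blue background
frame[:, 4:] = [0.8, 0.6, 0.5]   # skin-toned foreground
mask = blue_screen_matte(frame)
```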

  • Shadow-aware object-based video processing

    Publication Year: 2005, Page(s): 398-406
    Cited by: Papers (9)

    Local illumination changes due to shadows often reduce the quality of object-based video composition and mislead object recognition. This problem makes shadow detection a desirable tool for a wide range of applications, such as video production and visual surveillance. An algorithm is presented for isolating video objects from the local illumination changes they generate in real-world sequences, when the camera, illumination and scene characteristics are not known. The algorithm combines a change detector and a shadow detector with a spatio-temporal verification stage. Colour information and spatio-temporal constraints are embedded to define the overall algorithm. Colour information is exploited in a selective way: first, the relevant areas to analyse are identified in each image; then, the colour components that carry most of the needed information are selected; finally, spatial and temporal constraints are used to verify the results of the colour analysis. The proposed algorithm is demonstrated on both indoor and outdoor video sequences, and performance comparisons show that it outperforms state-of-the-art methods.
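A common building block for such shadow detectors is the observation that a cast shadow lowers a pixel's brightness while roughly preserving its chromaticity. A minimal sketch of that criterion; the thresholds are assumptions, and the paper's spatio-temporal verification stage is not reproduced:

```python
import numpy as np

def shadow_candidates(frame, background, lum_lo=0.3, lum_hi=0.95, chroma_tol=0.05):
    """Flag changed pixels that look like cast shadows.

    frame, background: float (H, W, 3) RGB in [0, 1]. A pixel is a
    shadow candidate when its brightness drops by a bounded ratio
    relative to the background while its normalised chromaticity
    stays nearly unchanged.
    """
    eps = 1e-6
    lum_f = frame.sum(axis=-1) + eps
    lum_b = background.sum(axis=-1) + eps
    ratio = lum_f / lum_b                         # brightness attenuation
    chroma_f = frame / lum_f[..., None]           # normalised colour
    chroma_b = background / lum_b[..., None]
    chroma_diff = np.abs(chroma_f - chroma_b).sum(axis=-1)
    return (ratio > lum_lo) & (ratio < lum_hi) & (chroma_diff < chroma_tol)

bg = np.full((2, 4, 3), 0.5)
fr = bg.copy()
fr[:, :2] *= 0.6             # uniformly darkened: shadow-like
fr[:, 2:] = [0.9, 0.1, 0.1]  # colour change: a real object
mask = shadow_candidates(fr, bg)
```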

  • Automated rig removal with Bayesian motion interpolation

    Publication Year: 2005, Page(s): 407-414
    Cited by: Papers (4)

    Some of the most convincing film and video effects are created in digital post-production by removing apparatus that supports or manipulates actors and objects. Wires, cranes and other objects can be removed by digitally painting them out of the scene, provided that some 'clean plate' image is available for pasting into the missing regions. The case in which no such plate is available is addressed here. Provided that the undesired object (the rig) is moving, the motion throughout the sequence can be used to reconstruct automatically the image material that was obscured. The paper takes a novel approach that allows the estimation of the motion of the material beneath the rig and then the reconstruction of the missing image material. A Bayesian framework is used to solve the motion reconstruction problem, and a unique tool is developed for automated rig removal. This tool holds great potential for speeding up one of the major tasks performed in the effects industry and is currently being tested in that environment.
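The core idea, reconstructing occluded material from motion-compensated neighbouring frames, can be sketched for the simplest case of a known global translation. The paper estimates the motion in a Bayesian framework; here it is simply given:

```python
import numpy as np

def fill_from_neighbour(frame, mask, neighbour, dx, dy):
    """Fill masked (rig) pixels with motion-compensated neighbour pixels.

    mask: True where the rig obscures the frame.
    (dx, dy): integer motion of the background between the neighbour
    frame and this frame, assumed known here.
    """
    out = frame.copy()
    h, w = frame.shape[:2]
    ys, xs = np.nonzero(mask)
    src_y = np.clip(ys - dy, 0, h - 1)   # where the hidden material
    src_x = np.clip(xs - dx, 0, w - 1)   # was visible one frame earlier
    out[ys, xs] = neighbour[src_y, src_x]
    return out

# Background pattern shifts right by 1 px per frame; a rig hides one column.
prev = np.tile(np.arange(8, dtype=float), (4, 1))
curr = np.roll(prev, 1, axis=1)
rig = np.zeros_like(curr, dtype=bool)
rig[:, 3] = True
curr_damaged = curr.copy()
curr_damaged[rig] = -1.0
restored = fill_from_neighbour(curr_damaged, rig, prev, dx=1, dy=0)
```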

  • Affine motion compensation using a content-based mesh

    Publication Year: 2005, Page(s): 415-423
    Cited by: Papers (2)

    A content-based approach to the design of a triangular mesh is presented, and its application to affine motion compensation is investigated. An image is first segmented into moving objects, which are then approximated with polygons. Then, a triangular mesh is generated within each polygon, thus ensuring that no triangle straddles multiple regions. Translation and affine motion parameters are determined for each triangle, using bidirectional motion estimation. Results for three test sequences demonstrate the advantages offered by the proposed mesh design method, and by the use of affine motion compensation.
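The per-triangle affine model can be recovered in closed form from the three vertex correspondences of each mesh triangle; a minimal sketch:

```python
import numpy as np

def triangle_affine(src_pts, dst_pts):
    """Solve the 6 affine parameters mapping one triangle onto another.

    src_pts, dst_pts: (3, 2) arrays of triangle vertices.
    Returns A (2x2) and t (2,) with dst = A @ src + t, the per-triangle
    model used in mesh-based motion compensation.
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    # Linear system: rows [x, y, 1], solved per output coordinate.
    M = np.hstack([src, np.ones((3, 1))])
    params = np.linalg.solve(M, dst)   # (3, 2): linear part, then offset
    A = params[:2].T
    t = params[2]
    return A, t

src = np.array([[0, 0], [1, 0], [0, 1]])
dst = np.array([[1, 2], [3, 2], [1, 5]])   # scale 2x in x, 3x in y, shift (1, 2)
A, t = triangle_affine(src, dst)
```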

  • Techniques for automated reverse storyboarding

    Publication Year: 2005, Page(s): 425-436
    Cited by: Papers (7)

    Storyboarding is a standard method for visual summarisation of shots in film and video preproduction. Reverse storyboarding is the generation of similar visualisations from existing footage. The key attributes of preproduction storyboards are identified, then computational techniques that extract corresponding features from video, render them appropriately, and composite them into a single storyboard image are developed. The result succinctly represents background composition, foreground object appearance and motion, and camera motion. For a variety of shots, it is shown that the visual representation conveys all the essential elements of shot composition.
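One ingredient of such a composite, a motion trail pasted over a static background estimate, can be sketched with a per-pixel temporal median. A toy stand-in for the paper's storyboard rendering; the sampling step and threshold are assumptions:

```python
import numpy as np

def motion_trail(frames, step=2, diff_thresh=0.2):
    """Composite a storyboard-style motion-trail image.

    frames: (T, H, W) greyscale floats. The static background is the
    per-pixel temporal median; pixels of sampled frames that deviate
    strongly from it (the moving foreground) are pasted on top, so a
    single image summarises the trajectory.
    """
    frames = np.asarray(frames, float)
    background = np.median(frames, axis=0)
    board = background.copy()
    for f in frames[::step]:
        moving = np.abs(f - background) > diff_thresh
        board[moving] = f[moving]
    return board

# A bright blob moves left to right across a dark background.
frames = np.zeros((5, 3, 10))
for t in range(5):
    frames[t, 1, 2 * t] = 1.0
board = motion_trail(frames, step=2)
```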

  • Automated camerawork for capturing desktop presentations

    Publication Year: 2005, Page(s): 437-447
    Cited by: Papers (3)

    A novel automated camera control method for capturing desktop presentations is introduced. Typical features and the camerawork of shots that appear frequently in TV programmes are discussed. To realise those features in this automated video capturing system, the purpose of camerawork is classified from two points of view: target and aspect-of-target. The correspondence between this classification and typical shots and camerawork is then considered. A virtual-frame control algorithm based on this idea is proposed, together with its implementation in a video production system. Results from two kinds of experiments, virtual video capture using CG animations and real video capture of live presentations, verify the method.
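The gradual pan/zoom of TV-style camerawork can be approximated by moving a virtual crop frame a fixed fraction of the way towards its target each step; an illustrative controller, not the paper's classification-driven one, with all constants assumed:

```python
import numpy as np

def step_virtual_frame(frame, target, gain=0.3):
    """Move a virtual camera frame a fraction of the way to its target.

    frame, target: (cx, cy, width) of the current and desired crop.
    Exponential smoothing gives a gradual pan/zoom rather than an
    instant cut.
    """
    frame = np.asarray(frame, float)
    target = np.asarray(target, float)
    return frame + gain * (target - frame)

frame = np.array([0.0, 0.0, 100.0])      # wide establishing shot
target = np.array([50.0, 20.0, 60.0])    # zoom in on the speaker
for _ in range(30):
    frame = step_virtual_frame(frame, target)
```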

  • Real-time video effects on a PlayStation2

    Publication Year: 2005, Page(s): 448-453

    The Sony PlayStation2 (PS2), with its powerful rendering and vector processing capabilities, built-in MPEG decoder and large-capacity hard disc drive (HDD), possesses fundamental assets that make it an ideal platform for processing video. Research carried out at Sony BPRL, where we have extensive experience of video effects processing, has shown that this inexpensive games console can create, in real time, the type of video effects that normally require investment in dedicated hardware (or endless patience while a PC renders them slowly). The paper introduces the architecture of the PlayStation2 and shows how it can be used to process video efficiently. It also covers the main video effects developed and how they are implemented. These effects include wipes, 3D linear and nonlinear effects (such as pageturn and ripple), colour effects and 'old film' effects.
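A ripple effect of the kind mentioned reduces to a per-scanline address displacement; a CPU sketch of the idea (the PS2 implementation itself uses the console's vector units and is not shown):

```python
import numpy as np

def ripple(img, amplitude=2.0, wavelength=8.0, phase=0.0):
    """Apply a horizontal ripple by displacing each row sideways.

    Each row y is shifted by amplitude * sin(2*pi*y/wavelength + phase)
    pixels: the per-pixel address displacement behind 'ripple'-style
    effects.
    """
    h = img.shape[0]
    out = np.empty_like(img)
    for y in range(h):
        shift = int(round(amplitude * np.sin(2 * np.pi * y / wavelength + phase)))
        out[y] = np.roll(img[y], shift, axis=0)
    return out

img = np.tile(np.arange(16, dtype=float), (8, 1))
rippled = ripple(img, amplitude=2.0, wavelength=8.0)
```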

  • The ORIGAMI Project: Advanced tools for creating and mixing real and virtual content in film and TV production

    Publication Year: 2005, Page(s): 454-469
    Cited by: Papers (1) | Patents (3)

    The goal of the EC-funded IST project ORIGAMI was the development of advanced tools and new production techniques for high-quality and seamless integration of real and virtual content in film and TV productions. In particular, the project focused on pre-production tools for automatic virtual set creation through image-based 3-D modelling of environments and objects. One key goal of the project was to achieve real-time in-studio pre-visualisation of the virtual elements (objects and actors) of the extended set. In this contribution, the studio pre-visualisation system developed for the project is illustrated, and its usage within a high-end film and TV production environment is described. Furthermore, an overview of the developed solutions for automatic generation of 3-D models of environments, static objects and (moving) actors is given.

  • Image-based rendering of complex scenes from a multi-camera rig

    Publication Year: 2005, Page(s): 470-480
    Cited by: Papers (2)

    Image-based rendering is a method to synthesise novel views from a set of given real images. Two methods to extrapolate novel views of complex scenes with occlusions and large depth discontinuities from images of a moving uncalibrated multi-camera rig are described. The real camera viewpoints are calibrated from the image data and dense depth maps are estimated for each real view. Novel views are synthesised from this representation with view-dependent image-based rendering techniques at interactive rates. Since the 3D scene geometry is available in this approach, it is well suited for mixed reality applications where synthetic 3D objects are seamlessly embedded in the novel view.
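The underlying operation of depth-image-based rendering, forward-warping pixels by the disparity implied by their depth, can be sketched for a purely horizontal camera shift (nearest-pixel splatting, no hole filling; all parameters are illustrative):

```python
import numpy as np

def synthesise_view(img, depth, baseline, focal):
    """Forward-warp a greyscale view to a horizontally shifted camera.

    Disparity for a pure horizontal baseline is focal * baseline / depth;
    each pixel is splatted to its shifted column, with a z-buffer so
    nearer surfaces win at occlusions.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            d = focal * baseline / depth[y, x]
            nx = int(round(x - d))
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = img[y, x]
    return out

# Constant-depth plane: the whole image shifts by a uniform disparity of 2.
img = np.tile(np.arange(8, dtype=float), (3, 1))
depth = np.full((3, 8), 4.0)
novel = synthesise_view(img, depth, baseline=1.0, focal=8.0)
```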

  • 3D studio production of animated actor models

    Publication Year: 2005, Page(s): 481-490
    Cited by: Patents (2)

    A framework for constructing detailed animated models of an actor's shape and appearance from multiple view images is presented. Multiple views of an actor are captured in a studio with controlled illumination and background. An initial low-resolution approximation of the person's shape is reconstructed by deformation of a generic humanoid model to fit the visual hull using shape constrained optimisation to preserve the surface parameterisation for animation. Stereo reconstruction with multiple view constraints is then used to reconstruct the detailed surface shape. High-resolution shape detail from stereo is represented in a structured format for animation by displacement mapping from the low-resolution model surface. A novel integration algorithm using displacement maps is introduced to combine overlapping stereo surface measurements from multiple views into a single displacement map representation of the high-resolution surface detail. Results of 3D actor modelling in a 14 camera studio demonstrate improved representation of detailed surface shapes, such as creases in clothing, compared to previous model fitting approaches. Actor models can be animated and rendered from arbitrary views under different illumination to produce free-viewpoint video sequences. The proposed framework enables rapid transformation of captured multiple view images into a structured representation suitable for realistic animation.
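Displacement mapping itself is a simple operation: each low-resolution vertex is offset along its normal by a scalar sampled from the map. A minimal sketch with synthetic data:

```python
import numpy as np

def apply_displacement(vertices, normals, displacement):
    """Offset each base-mesh vertex along its (unit-normalised) normal.

    vertices, normals: (N, 3); displacement: (N,) scalar map sampled at
    the vertices. The low-resolution model carries the parameterisation,
    the map carries the high-resolution detail.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + displacement[:, None] * n

# A flat patch with unit-z normals, displaced into a step.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
norms = np.tile([0., 0., 1.], (4, 1))
disp = np.array([0.0, 0.0, 0.5, 0.5])
detailed = apply_displacement(verts, norms, disp)
```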

  • Realistic speech animation based on observed 3D face dynamics

    Publication Year: 2005, Page(s): 491-500
    Cited by: Papers (2)

    An efficient system for realistic speech animation is proposed. The system supports all steps of the animation pipeline, from the capture or design of 3D head models up to the synthesis and editing of the performance. This pipeline is fully 3D, which yields high flexibility in the use of the animated character. Real detailed 3D face dynamics, observed at video frame rate for thousands of points on the face of speaking actors, underpin the realism of the facial deformations. These are given a compact and intuitive representation via independent component analysis (ICA). Performances amount to trajectories through this 'viseme space'. When asked to animate a face, the system replicates the 'visemes' that it has learned, and adds the necessary coarticulation effects. Realism has been improved through comparisons with motion-captured ground truth. Faces for which no 3D dynamics have been observed can be animated nonetheless. Their visemes are adapted automatically to their physiognomy by localising the face in a 'face space'.
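The 'viseme space' construction amounts to a linear decomposition of stacked face-point trajectories. The paper uses ICA; plain PCA via the SVD is shown here as the analogous linear decomposition, on synthetic tracks:

```python
import numpy as np

def build_component_space(tracks, n_components):
    """Decompose face-point trajectories into a low-dimensional space.

    tracks: (T, 3N) array, each row the stacked 3D positions of N face
    points at one instant. Returns the mean shape, a basis
    (n_components, 3N), and the per-frame coefficients, i.e. the
    trajectory through the component space.
    """
    mean = tracks.mean(axis=0)
    centred = tracks - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]
    coeffs = centred @ basis.T
    return mean, basis, coeffs

# Synthetic mouth data: frames vary along two independent deformation modes.
rng = np.random.default_rng(0)
modes = rng.standard_normal((2, 30))
weights = rng.standard_normal((50, 2))
tracks = weights @ modes + 5.0
mean, basis, coeffs = build_component_space(tracks, n_components=2)
recon = mean + coeffs @ basis   # exact, since the data has rank 2
```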

  • Stochastic optimisation for high-dimensional tracking in dense range maps

    Publication Year: 2005, Page(s): 501-512
    Cited by: Papers (5)

    The main challenge of tracking articulated structures like hands is their many degrees of freedom (DOFs). A realistic 3-D model of the human hand has at least 26 DOFs. The arsenal of tracking approaches that can track such structures fast and reliably is still very small. This paper proposes a tracker based on stochastic meta-descent (SMD) for optimisations in such high-dimensional state spaces. This new algorithm is based on a gradient descent approach with adaptive and parameter-specific step sizes. The SMD tracker facilitates the integration of constraints, and combined with a stochastic sampling technique, can get out of spurious local minima. Furthermore, the integration of a deformable hand model based on linear blend skinning and anthropometrical measurements reinforces the robustness of the tracker. Experiments show the efficiency of the SMD algorithm in comparison with common optimisation methods.
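The heart of SMD is a per-parameter, multiplicatively adapted step size. A simplified sketch; the full method also uses Hessian-vector products and the stochastic sampling mentioned above, both omitted here, and all constants are illustrative:

```python
import numpy as np

def smd_minimise(grad_fn, x0, eta0=0.05, mu=0.02, lam=0.9, steps=300):
    """Gradient descent with per-parameter multiplicative step adaptation.

    Each parameter keeps its own step size eta_i, grown when successive
    gradients agree and shrunk (clamped at halving) when they conflict,
    as tracked by the auxiliary trace v of recent parameter updates.
    """
    x = np.asarray(x0, float).copy()
    eta = np.full_like(x, eta0)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        eta *= np.maximum(0.5, 1.0 - mu * g * v)  # meta-level gain update
        x -= eta * g                              # ordinary descent step
        v = lam * v - eta * g                     # decayed update trace
    return x

# Minimise a mildly anisotropic quadratic f(x) = 0.5*(x0^2 + 5*x1^2).
grad = lambda x: np.array([x[0], 5.0 * x[1]])
x_min = smd_minimise(grad, [3.0, 1.0])
```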
