Visualization and Computer Graphics, IEEE Transactions on

Issue 4 • Oct.-Dec. 2000

  • Author index

    Page(s): 381 - 382
  • Subject index

    Page(s): 382 - 384
  • Calibration-free augmented reality in perspective

    Page(s): 346 - 359

    This paper deals with video-based augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. A virtual camera, by which views of graphics objects are generated, is attached to a real camera by specifying image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera, recovered from image matches, transfers the virtual camera and makes it move according to the motion of the real camera. The virtual camera also follows changes in the internal parameters of the real camera. This paper presents theoretical and experimental results of this application of nonmetric vision to augmented reality.
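The transfer step described in the abstract can be sketched numerically: a virtual camera, built from a calibration component and a motion component, is mapped through the projective motion of the real camera. The matrices below are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

# Hypothetical sketch of transferring a virtual camera by a projective motion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # calibration component (made up)
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])  # motion component [R | t]
P0 = K @ Rt                                    # virtual camera = calibration * motion

# 4x4 projective motion of the real camera, as would be recovered from
# image matches (here just a small illustrative displacement).
H = np.eye(4)
H[0, 3] = 0.1

P1 = P0 @ H                                    # virtual camera follows the real one

X = np.array([0.0, 0.0, 5.0, 1.0])             # a world point (homogeneous)
x = P1 @ X
x /= x[2]                                      # pixel coordinates of the projection
print(x[:2])
```

Because the composition happens entirely in projective space, no metric calibration of the real camera is needed for the virtual camera to track it.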

  • A BRDF postprocess to integrate porosity on rendered surfaces

    Page(s): 306 - 318

    The behavior of light interacting with materials is a crucial factor in achieving a high degree of realism in image synthesis. Local illumination processes, describing the interactions between a point of the surface and a shading ray, are evaluated by bidirectional reflectance distribution functions (BRDFs). Current theoretical BRDFs use surface models restricted to roughness only, sometimes at different scales. We present a more complete surface micro-geometry description, suitable for common surface defects including porosity and micro-cracks; both are crucial surface features, since they strongly influence light reflection properties. These new features are modeled by holes inserted in the surface profile, depending on two parameters: the proportion of surface covered by the defects and the mean geometric characteristic of these defects. In order to preserve the advantages and characteristics of existing BRDFs, a postprocessing method is adopted: we integrate our technique into existing models instead of defining a completely new one. Beyond providing graphical results closely matching real behaviors, this method opens the way to various important new considerations in computer graphics (for example, changes of appearance due to the degree of humidity).
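The postprocessing idea, attenuating an existing BRDF by a two-parameter hole model rather than replacing it, can be sketched as below. The attenuation formula and parameter names are illustrative assumptions, not the paper's actual equations.

```python
import math

def base_brdf(theta_i, theta_o):
    # Placeholder Lambertian lobe standing in for any existing BRDF model.
    return 1.0 / math.pi

def porous_brdf(theta_i, theta_o, coverage=0.2, depth_ratio=0.5):
    """Illustrative postprocess (not the paper's formulas): attenuate the
    base BRDF by the fraction of light the hole field traps.
    coverage    -- proportion of surface covered by defects
    depth_ratio -- mean geometric characteristic of the holes (assumed form)
    """
    # Light entering a hole is partially trapped; deeper holes trap more.
    trapped = coverage * depth_ratio
    return (1.0 - trapped) * base_brdf(theta_i, theta_o)
```

Because the base BRDF is called unchanged, any existing model can be slotted in, which is the stated motivation for postprocessing rather than defining a new BRDF.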

  • Perception-based fast rendering and antialiasing of walkthrough sequences

    Page(s): 360 - 379

    We consider accelerated rendering of high-quality walkthrough animation sequences along predefined paths. To improve rendering performance, we use a combination of a hybrid ray tracing and image-based rendering (IBR) technique and a novel perception-based antialiasing technique. In our rendering solution, we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal animation quality metric (AQM) is used to automatically guide this hybrid rendering. The image flow (IF), obtained as a byproduct of the IBR computation, is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing which utilizes the IF to perform motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity.
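The metric-guided hybrid described above can be sketched as a per-pixel decision: reuse a cheap IBR-derived pixel wherever the quality metric predicts no visible error, and fall back to ray tracing elsewhere. All names below (`ibr_warp`, `ray_trace`, the threshold) are hypothetical stand-ins, not the paper's API.

```python
# Hypothetical sketch of AQM-guided hybrid rendering.
def ibr_warp(p):
    return ("ibr", p)       # stand-in for reprojecting a pixel from keyframes

def ray_trace(p):
    return ("rt", p)        # stand-in for full ray-traced shading

def render_frame(pixels, aqm_error, threshold=0.05):
    """Use cheap IBR pixels wherever the perceptual metric predicts the
    error stays below the visibility threshold; ray trace the rest."""
    return {p: ibr_warp(p) if aqm_error[p] <= threshold else ray_trace(p)
            for p in pixels}
```

The more pixels the metric clears for IBR reuse, the larger the speedup, which is why the perceptual metric rather than a purely numerical error drives the decision.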

  • An order of magnitude faster isosurface rendering in software on a PC than using dedicated, general purpose rendering hardware

    Page(s): 335 - 345

    The purpose of this work is to compare the speed of isosurface rendering in software with that of dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities (CT, MR, and MRA), with a variety of surface sizes (up to 1 million voxels/2 million triangles) and shapes. The software rendering technique is a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses surfaces constructed from our implementation of the Marching Cubes algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method (shell rendering) was 18 to 31 times faster than any of the hardware rendering configurations. This work demonstrates that a software implementation of a particular rendering algorithm can outperform dedicated hardware, and we conclude that, for medical surface visualization, expensive dedicated hardware engines are not required: shell rendering on a 300 MHz Pentium PC outperforms the hardware engines by a factor of 18 to 31.

  • Analysis of head pose accuracy in augmented reality

    Page(s): 319 - 334

    A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that sensor data from a combination of fixed and head-mounted sensors can be fused to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head-mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
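The fusion benefit the abstract describes can be illustrated with standard inverse-covariance (information) weighting of two Gaussian estimates; the covariances below are made-up numbers, not measurements from the paper.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Combine two Gaussian pose estimates. The fused covariance is never
    larger than either input's, which is why adding a second sensor
    improves registration accuracy."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)            # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)           # inverse-covariance-weighted mean
    return x, P

# Illustrative positional covariances: each sensor is weak along a
# different axis, so fusion helps everywhere.
x_fixed = np.array([1.0, 0.0, 0.0])
P_fixed = np.diag([4.0, 1.0, 1.0])        # fixed sensor: poor along x
x_head = np.array([0.0, 0.0, 0.0])
P_head = np.diag([1.0, 4.0, 4.0])         # head-mounted sensor: poor along y, z
x, P = fuse(x_fixed, P_fixed, x_head, P_head)
print(np.diag(P))                         # every fused variance beats both inputs
```

The eigen-decomposition of the fused `P` gives the axes of the 3D uncertainty ellipsoid mentioned in the abstract.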

  • Interactive virtual relighting of real scenes

    Page(s): 289 - 305

    Computer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools for common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of the real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of the real scene's light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures, which contained real light effects such as shadows from real lights. This texture is then modulated by the ratio of the radiosity (which can be changed) over a display factor, which corresponds to the radiosity for which occlusion has been ignored. Since our goal is a convincing relighting effect rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities.
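The modulation step described above reduces, per texel, to scaling the unoccluded illumination texture by a radiosity ratio; the function and values below are an illustrative reading of that step, not code from the paper.

```python
# Hypothetical sketch of the relighting modulation: an unoccluded
# illumination texture is scaled by the ratio of the current (editable)
# radiosity to the shadow-free "display factor" radiosity.
def relit_texel(unoccluded_texel, radiosity, display_radiosity):
    """New texel value = unoccluded texture * (B_current / B_display)."""
    return unoccluded_texel * (radiosity / display_radiosity)

# Halving the light's radiosity roughly halves the displayed texel.
print(relit_texel(200.0, 0.5, 1.0))   # -> 100.0
```

Because only the ratio changes when a light intensity is edited, the expensive texture creation happens once in preprocessing and relighting stays interactive.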


Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina