
IEEE Transactions on Visualization and Computer Graphics

Issue 1 • Jan-Mar 1998

Displaying Results 1 - 6 of 6
  • Modeling, animating, and rendering complex scenes using volumetric textures

    Publication Year: 1998, Page(s): 55-70
    Cited by: Papers (31) | Patents (15)
    PDF (2888 KB)

    Complex repetitive scenes containing forests, foliage, grass, hair, or fur are challenging for common modeling and rendering tools. The amount of data, the tediousness of modeling and animation tasks, and the cost of realistic rendering have limited the use of such scenes even in high-end productions. The author describes how volumetric textures are well suited to such scenes. These primitives can greatly simplify modeling and animation tasks. More importantly, they can be rendered very efficiently using ray tracing, with few aliasing artifacts. The main idea, initially introduced by Kajiya and Kay (1989), is to represent a pattern of 3D geometry in a reference volume that is tiled over an underlying surface much like a regular 2D texture. In our contribution, the mapping is independent of the mesh subdivision, the pattern can contain any kind of shape, and it is prefiltered at different scales, as in MIP-mapping. Although the model encoding is volumetric, the rendering method differs greatly from traditional volume rendering. A volumetric texture exists only in the neighborhood of a surface, and the repeated instances (called texels) of the reference volume are spatially deformed. Furthermore, each voxel of the reference volume contains a key feature that controls the reflectance function representing aggregate intravoxel geometry. This allows ray tracing of highly complex scenes with very few aliasing artifacts, using a single ray per pixel (for the part of the scene using the volumetric texture representation). The main technical considerations of our method lie in the ray-path determination and in the specification of the reflectance function.

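    The tiling idea is simple to state in code. Below is a minimal sketch, assuming a voxel grid (a NumPy array) as the reference volume and a (u, v)-parameterized surface; the function name, tiling counts, and nearest-voxel lookup are illustrative, not the paper's implementation, which also deforms texels over the surface and prefilters across scales:

        import numpy as np

        def texel_density(ref_volume, u, v, h, tiles_u=4, tiles_v=4):
            """Sample the reference volume tiled (tiles_u x tiles_v) over a
            surface: (u, v) are surface parameters in [0, 1), and h is the
            normalized height within the shell, also in [0, 1)."""
            nu, nv, nh = ref_volume.shape
            s = (u * tiles_u) % 1.0          # repeat the pattern over the surface,
            t = (v * tiles_v) % 1.0          # like a tiled 2D texture
            i = min(int(s * nu), nu - 1)
            j = min(int(t * nv), nv - 1)
            k = min(int(h * nh), nh - 1)
            return ref_volume[i, j, k]       # nearest-voxel lookup (no filtering)
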
  • Fast horizon computation at all points of a terrain with visibility and shading applications

    Publication Year: 1998, Page(s): 82-93
    Cited by: Papers (19) | Patents (2)
    PDF (4764 KB)

    A terrain is most often represented with a digital elevation map consisting of a set of sample points from the terrain surface. This paper presents a fast and practical algorithm to compute the horizon, or skyline, at all sample points of a terrain. The horizons are useful in a number of applications, including the rendering of self-shadowing displacement maps, visibility culling for faster flight simulation, and rendering of cartographic data. Experimental and theoretical results show that the algorithm is both more accurate and faster than previous algorithms on terrains of more than 100,000 sample points.

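    For reference, the quantity being computed can be defined with a brute-force scan; the paper's contribution is computing it quickly at all points, which this sketch does not attempt. It assumes a rectangular height grid and integer compass-step directions, and all names are illustrative:

        import math

        def horizon_angle(elev, x0, y0, dx, dy):
            """Maximum elevation angle seen from grid point (x0, y0) along
            the integer grid direction (dx, dy), e.g. one of 8 compass steps."""
            h0 = elev[x0][y0]
            best = -math.pi / 2                  # nothing seen yet: straight down
            step = math.hypot(dx, dy)
            x, y, dist = x0 + dx, y0 + dy, step
            while 0 <= x < len(elev) and 0 <= y < len(elev[0]):
                best = max(best, math.atan2(elev[x][y] - h0, dist))
                x, y, dist = x + dx, y + dy, dist + step
            return best
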
  • Line art illustrations of parametric and implicit forms

    Publication Year: 1998, Page(s): 71-81
    Cited by: Papers (16) | Patents (25)
    PDF (1044 KB)

    A technique is presented for line art rendering of scenes composed of freeform surfaces. The line art created for parametric surfaces is practically intrinsic and is globally invariant to changes in the surface parameterization. The method is equally applicable to line art rendering of implicit forms, yielding a unified line art rendering method for both parametric and implicit forms. This added flexibility exposes a new horizon of special, parameterization-independent line art effects. Moreover, the production of the line art illustrations can be combined with traditional rendering techniques such as transparency and texture mapping. Examples that demonstrate the capabilities of the proposed approach are presented for both parametric and implicit forms.

  • Efficient collision detection using bounding volume hierarchies of k-DOPs

    Publication Year: 1998, Page(s): 21-36
    Cited by: Papers (198) | Patents (15)
    PDF (1848 KB)

    Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to accurately determining the contacts that occur between pairs of objects, one also needs to do so at real-time rates. Applications such as haptic force feedback can require over 1,000 collision queries per second. We develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is a discrete orientation polytope (k-DOP), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies (BV-trees) of bounding k-DOPs. Further, we propose algorithms for maintaining an effective BV-tree of k-DOPs for moving objects as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.

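    The k-DOP itself is a small data structure: one min/max projection interval per fixed direction. A minimal sketch with illustrative names, showing only the 6-DOP axes (the coordinate axes, i.e. an axis-aligned box); adding diagonal directions gives the tighter 14-, 18-, and 26-DOPs:

        AXES_6 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # 6-DOP = axis-aligned box

        def build_dop(points, axes=AXES_6):
            """Return the (min, max) projection interval of the points
            along each fixed axis direction."""
            intervals = []
            for a in axes:
                proj = [p[0]*a[0] + p[1]*a[1] + p[2]*a[2] for p in points]
                intervals.append((min(proj), max(proj)))
            return intervals

        def dops_overlap(d1, d2):
            """Two k-DOPs are disjoint if any pair of slab intervals is
            disjoint; otherwise report (conservatively) an overlap."""
            return all(lo1 <= hi2 and lo2 <= hi1
                       for (lo1, hi1), (lo2, hi2) in zip(d1, d2))

    Larger k gives tighter bounds at a higher per-test cost, but the overlap test remains a simple interval comparison per direction.
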
  • Calibration-free augmented reality

    Publication Year: 1998, Page(s): 1-20
    Cited by: Papers (48) | Patents (2)
    PDF (3092 KB)

    Camera calibration and the acquisition of Euclidean 3D measurements have so far been considered necessary requirements for overlaying three-dimensional graphical objects with live video. We describe a new approach to video-based augmented reality that avoids both requirements: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track, across frames, at least four fiducial points that are specified by the user during system initialization and whose world coordinates are unknown. Our approach is based on the following observation: given a set of four or more noncoplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by tracking regions and color fiducial points at frame rate, and by representing virtual objects in a non-Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projections of the fiducial points. Experimental results on two augmented reality systems, one monitor-based and one head-mounted, demonstrate that the approach is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.

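    The stated observation has a direct reading in code: express a point in the affine basis of four noncoplanar points, then apply the same weights to the 2D projections of those points. This holds exactly for an affine camera model; the sketch below is illustrative, not the authors' system:

        import numpy as np

        def affine_coords(q, p0, p1, p2, p3):
            """Solve q - p0 = B @ [b1, b2, b3], where B's columns are the
            basis edge vectors; p0..p3 must be noncoplanar."""
            B = np.column_stack([p1 - p0, p2 - p0, p3 - p0])
            return np.linalg.solve(B, q - p0)

        def reproject(b, u0, u1, u2, u3):
            """Combine the 2D projections u0..u3 of the basis points with
            the same affine weights to get q's projection."""
            return u0 + b[0]*(u1 - u0) + b[1]*(u2 - u0) + b[2]*(u3 - u0)

    With four tracked fiducials, a virtual point's weights are computed once and reused every frame; only the fiducial projections change.
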
  • A high accuracy volume renderer for unstructured data

    Publication Year: 1998, Page(s): 37-54
    Cited by: Papers (30)
    PDF (8944 KB)

    This paper describes a volume rendering system for unstructured data, especially finite element data, that creates images with very high accuracy. The system currently handles meshes whose cells are either linear or quadratic tetrahedra. Compromises or approximations are not introduced for the sake of efficiency; whenever possible, exact mathematical solutions are used for the radiance integrals involved and for interpolation. The system also handles meshes with mixed cell types (tetrahedra, bricks, prisms, wedges, and pyramids), though not with high accuracy. Accurate semi-transparent shaded isosurfaces may be embedded in the volume rendering. For very small cells, subpixel accumulation by splatting is used to avoid sampling error. A revision to an existing accurate visibility ordering algorithm is described, including a correction and a method for dramatically increasing its efficiency. Finally, hardware-assisted projection and compositing are extended from tetrahedra to arbitrary convex polyhedra.

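    One ingredient of such exactness is easy to illustrate: inside a linear tetrahedron the scalar field, and hence a linearly mapped extinction coefficient, varies linearly along a ray segment, so the optical-depth integral has a closed form and needs no sampling. A minimal, hypothetical sketch (the paper's full radiance integral also accounts for emission and shading):

        import math

        def segment_transmittance(tau0, tau1, length):
            """Transmittance across a segment where extinction varies
            linearly from tau0 (entry) to tau1 (exit): the optical-depth
            integral of a linear function is exactly the trapezoid value."""
            optical_depth = length * (tau0 + tau1) / 2.0
            return math.exp(-optical_depth)
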

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu