
IEEE Transactions on Visualization and Computer Graphics

Issue 4 • Oct.-Dec. 2001


Contents: 9 items
  • Author index

    Publication Year: 2001, Page(s): 380-381
    Freely Available from IEEE
  • Subject index

    Publication Year: 2001, Page(s): 381-384
    Freely Available from IEEE
  • Minimally immersive flow visualization

    Publication Year: 2001, Page(s): 343-350
    Cited by: Papers (1)

    This paper describes a minimally immersive interactive system for flow visualization of multivariate volumetric data. The system, SFA, uses perceptually motivated rendering to increase the quantity and clarity of information perceived. Proprioception, stereopsis, perceptually motivated shape visualization, and three-dimensional interaction are combined in SFA to allow the three-dimensional volumetric visualization, manipulation, navigation, and analysis of multivariate, time-varying flow data.

  • High-quality texture reconstruction from multiple scans

    Publication Year: 2001, Page(s): 318-332
    Cited by: Papers (34) | Patents (6)

    The creation of three-dimensional digital content by scanning real objects has become common practice in graphics applications for which visual quality is paramount, such as animation, e-commerce, and virtual museums. While a lot of attention has been devoted recently to the problem of accurately capturing the geometry of scanned objects, the acquisition of high-quality textures is equally important, but not as widely studied. In this paper, we focus on methods to construct accurate digital models of scanned objects by integrating high-quality texture and normal maps with geometric data. These methods are designed for use with inexpensive, electronic camera-based systems in which low-resolution range images and high-resolution intensity images are acquired. The resulting models are well-suited for interactive rendering on the latest-generation graphics hardware with support for bump mapping. Our contributions include new techniques for processing range, reflectance, and surface normal data, for image-based registration of scans, and for reconstructing high-quality textures for the output digital object.

  • Extracting objects from range and radiance images

    Publication Year: 2001, Page(s): 351-364
    Cited by: Papers (12)

    In this paper, we present a pipeline and several key techniques necessary for editing a real scene captured with both cameras and laser range scanners. We develop automatic algorithms to segment the geometry from range images into distinct surfaces, register texture from radiance images with the geometry, and synthesize compact high-quality texture maps. The result is an object-level representation of the scene which can be rendered with modifications to structure via traditional rendering methods. The segmentation algorithm for geometry operates directly on the point cloud from multiple registered 3D range images instead of a reconstructed mesh. It is a top-down algorithm which recursively partitions a point set into two subsets using a pairwise similarity measure. The result is a binary tree with individual surfaces as leaves. Our image registration technique performs a very efficient search to automatically find the camera poses for arbitrary position and orientation relative to the geometry. Thus, we can take photographs from any location without precalibration between the scanner and the camera. The algorithms have been applied to large-scale real data. We demonstrate our ability to edit a captured scene by moving, inserting, and deleting objects.

  • Extended specifications and test data sets for data level comparisons of direct volume rendering algorithms

    Publication Year: 2001, Page(s): 299-317
    Cited by: Papers (4)

    Direct volume rendering (DVR) algorithms do not generate intermediate geometry to create a visualization, yet they produce countless variations in the resulting images. Therefore, comparative studies are essential for objective interpretation. Even though image and data level comparison metrics are available, it is still difficult to compare results because of the numerous rendering parameters and algorithm specifications involved. Most of the previous comparison methods use information from the final rendered images only. We overcome the limitations of image level comparisons with our data level approach using intermediate rendering information. We provide a list of rendering parameters and algorithm specifications to guide comparison studies. We extend Williams and Uselton's rendering parameter list with algorithm specification items and provide guidance on how to compare algorithms. Real data are often too complex to study algorithm variations with confidence, and most analytic test data sets reported to date exercise only a limited feature of DVR algorithms. We provide simple and easily reproducible test data sets, a checkerboard and a ramp, that can reveal clear differences across a wide range of algorithm variations. With data level metrics, our test data sets make it possible to perform detailed comparison studies. A number of examples illustrate how to use these tools.

  • Efficient conservative visibility culling using the prioritized-layered projection algorithm

    Publication Year: 2001, Page(s): 365-379
    Cited by: Papers (18) | Patents (2)

    We propose a novel conservative visibility culling technique based on the Prioritized-Layered Projection (PLP) algorithm. PLP is a time-critical rendering technique that computes, for a given viewpoint, a partially correct image by rendering only a subset of the geometric primitives, those that PLP determines to be most likely visible. Our new algorithm builds on PLP and provides an efficient way of finding the remaining visible primitives. We do this by adding a second phase to PLP which uses image-space techniques for determining the visibility status of the remaining geometry. Another contribution of our work is to show how to efficiently implement such image-space visibility queries using currently available OpenGL hardware and extensions. We report on the implementation of our techniques on several graphics architectures, analyze their complexity, and discuss a possible hardware extension that has the potential to further increase performance.

  • Preventing self-intersection under free-form deformation

    Publication Year: 2001, Page(s): 289-298
    Cited by: Papers (16) | Patents (1)

    Free-Form Deformation (FFD) is a versatile and efficient modeling technique which transforms an object by warping the surrounding space. The conventional user interface is a lattice of movable control points, but this tends to be cumbersome and counterintuitive. Directly Manipulated Free-Form Deformation (DMFFD) allows the user to drag object points directly and has proven useful in an interactive sculpting context. A serious shortcoming of both FFD and DMFFD is that some deformations cause self-intersection of the object. This is unrealistic and compromises the object's validity and suitability for later use. An in-built self-intersection test is thus required for FFD and its extensions to be truly robust. In this paper, we present the following novel results: a set of theoretical conditions for preventing self-intersection by ensuring the injectivity (one-to-one mapping) of FFD; an exact (necessary and sufficient) injectivity test, which is accurate but computationally costly; an efficient but approximate injectivity test, which is a sufficient condition only; and a new form of DMFFD which acts by composing many small injective deformations. The latter expands the range of possible deformations without sacrificing the speed of the approximate test.

  • Reliable path for virtual endoscopy: ensuring complete examination of human organs

    Publication Year: 2001, Page(s): 333-342
    Cited by: Papers (15) | Patents (1)

    Virtual endoscopy is a computerized, noninvasive procedure for detecting anomalies inside human organs. Several preliminary studies have demonstrated the benefits and effectiveness of this modality. Unfortunately, previous work cannot guarantee that an existing anomaly will be detected, especially for complex organs with multiple branches. In this paper, we introduce the concept of reliable navigation, which ensures the interior organ surface is fully examined by the physician performing the virtual endoscopy procedure. To achieve this, we propose computing a reliable fly-through path that ensures no blind areas during the navigation. Theoretically, we discuss the criteria for evaluating a reliable path and prove that the problem of generating an optimal reliable path for virtual endoscopy is NP-complete. In practice, we develop an efficient method for the calculation of an effective reliable path. First, a small set of center observation points is automatically located inside the hollow organ. For each observation point, there exists at least one patch of interior surface that is visible to it but cannot be seen from any of the other observation points. These chosen points are then linked with a path that stays in the center of the organ. Finally, new points inside the organ are recursively selected and connected into the path until the entire organ surface is visible from the path. We present encouraging results from experiments on several data sets. For a medium-size volumetric model with several hundred thousand inner voxels, an effective reliable path can be generated in several minutes.

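The segmentation step summarized in "Extracting objects from range and radiance images" (top-down recursive bipartition of a point cloud, yielding a binary tree with surfaces as leaves) can be sketched roughly as follows. The paper's pairwise similarity measure is not reproduced here; a simple spatial-spread proxy, its threshold, and the midpoint splitting rule are all assumptions for illustration:

```python
def split_points(points, spread_tol=1.0):
    """Recursively bipartition a list of (x, y, z) points into a binary tree
    whose leaves are clusters with spatial spread below spread_tol.
    A real implementation would split on a pairwise similarity measure."""
    spreads = []
    for axis in range(3):
        vals = [p[axis] for p in points]
        spreads.append(max(vals) - min(vals))
    axis = spreads.index(max(spreads))
    if spreads[axis] <= spread_tol or len(points) < 2:
        return points  # leaf: one candidate surface
    vals = sorted(p[axis] for p in points)
    mid = (vals[0] + vals[-1]) / 2.0  # split at the midpoint of the widest axis
    left = [p for p in points if p[axis] <= mid]
    right = [p for p in points if p[axis] > mid]
    if not left or not right:
        return points
    return (split_points(left, spread_tol), split_points(right, spread_tol))


def leaves(tree):
    """Collect the leaf clusters of the binary tree built above."""
    if isinstance(tree, tuple):
        return leaves(tree[0]) + leaves(tree[1])
    return [tree]
```

Two well-separated clusters come back as two leaves of the tree, mirroring the abstract's "binary tree with individual surfaces as leaves."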
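The two analytic test data sets named in "Extended specifications and test data sets for data level comparisons of direct volume rendering algorithms", a checkerboard and a ramp, are simple enough to sketch. The cell size, value range, and axis orientation below are assumptions, not the paper's exact specifications:

```python
def ramp_volume(n, axis=0):
    """n x n x n scalar volume increasing linearly from 0 to 1 along one axis."""
    vol = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                vol[x][y][z] = (x, y, z)[axis] / (n - 1)
    return vol


def checkerboard_volume(n, cell=4, lo=0.0, hi=1.0):
    """n x n x n scalar volume of alternating cubic cells of two values."""
    vol = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                parity = (x // cell + y // cell + z // cell) % 2
                vol[x][y][z] = hi if parity else lo
    return vol
```

The appeal of such data is that every voxel value is known analytically, so differences between two renderers' intermediate values can be attributed to the algorithms rather than to the data.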
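The DMFFD variant in "Preventing self-intersection under free-form deformation" acts by composing many small injective deformations, each of which passes a cheap sufficient-condition test. The shape of that test below (each control point moves less than a fixed fraction of the lattice spacing) is an assumed stand-in; the paper derives the precise condition:

```python
import math


def _norm(v):
    return math.sqrt(sum(c * c for c in v))


def passes_approx_test(displacements, spacing, factor=0.5):
    """Sufficient condition only (assumed form): if every control-point
    displacement is small relative to the lattice spacing, the deformation
    is injective. Failing the test does not prove self-intersection."""
    return all(_norm(d) < factor * spacing for d in displacements)


def split_into_injective_steps(displacements, spacing, factor=0.5):
    """Divide one large deformation into k identical sub-deformations, each
    small enough to pass the approximate test; applying them in sequence
    mirrors DMFFD-by-composition."""
    biggest = max(_norm(d) for d in displacements)
    k = int(biggest // (factor * spacing)) + 1
    step = [[c / k for c in d] for d in displacements]
    assert passes_approx_test(step, spacing, factor)
    return k, step
```

Composition trades one expensive exact test for several cheap approximate ones, which is how the abstract's new DMFFD form keeps interactive speed.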
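Since "Reliable path for virtual endoscopy" proves that an optimal reliable path is NP-complete, its incremental point selection is necessarily a heuristic. A generic greedy set-cover heuristic stands in for it below; the candidate names and visibility sets are illustrative, whereas the paper computes visibility from the volumetric organ model:

```python
def greedy_observation_points(visible, all_patches):
    """Greedily pick observation points until every surface patch is seen.
    visible: dict mapping a candidate point to the set of patches it sees."""
    uncovered = set(all_patches)
    chosen = []
    while uncovered:
        # pick the candidate that sees the most still-uncovered patches
        best = max(visible, key=lambda p: len(visible[p] & uncovered))
        gain = visible[best] & uncovered
        if not gain:
            raise ValueError("some patches are visible from no candidate")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Linking the chosen points with a centered path and recursively adding points until no blind areas remain, as the abstract describes, would build on a cover like this one.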

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu