By Topic

IEEE Transactions on Visualization and Computer Graphics

Issue 2 • Date March-April 2005

Displaying Results 1 - 19 of 19
  • [Front cover]

    Page(s): c1
    PDF (307 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (75 KB)
    Freely Available from IEEE
  • Accelerated unsteady flow line integral convolution

    Page(s): 113 - 125
    PDF (5441 KB)

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration computed for particle value scattering. This paper presents accelerated UFLIC (AUFLIC) for near-interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy distributes seeds such that only a few of them need pathline integration, while most seeds are placed along the pathlines advected at earlier times by other seeds upstream; the known pathlines can therefore be reused for fast value scattering. To maintain a dense scattering coverage that conveys high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller decides whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is nine times faster than UFLIC with comparable image quality.

  • Barycentric parameterizations for isotropic BRDFs

    Page(s): 126 - 138
    PDF (2780 KB)

    A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this work, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.

  • A multiresolution representation for massive meshes

    Page(s): 139 - 148
    PDF (1763 KB) | HTML

    We present a new external memory multiresolution surface representation for massive polygonal meshes. Previous methods for building such data structures have relied on resampled surface data or employed memory intensive construction algorithms that do not scale well. Our proposed representation combines efficient access to sampled surface data with access to the original surface. The construction algorithm for the surface representation exhibits memory requirements that are insensitive to the size of the input mesh, allowing it to process meshes containing hundreds of millions of polygons. The multiresolution nature of the surface representation has allowed us to develop efficient algorithms for view-dependent rendering, approximate collision detection, and adaptive simplification of massive meshes. The empirical performance of these algorithms demonstrates that the underlying data structure is a powerful and flexible tool for operating on massive geometric data.

  • Spatial domain wavelet design for feature preservation in computational data sets

    Page(s): 149 - 159
    PDF (1682 KB) | HTML

    High-fidelity wavelet transforms can facilitate visualization and analysis of large scientific data sets. However, it is important that salient characteristics of the original features be preserved under the transformation. We present a set of filter design axioms in the spatial domain which ensure that certain feature characteristics are preserved from scale to scale and that the resulting filters correspond to wavelet transforms admitting in-place implementation. We demonstrate how the axioms can be used to design linear feature-preserving filters that are optimal in the sense that they are closest in L2 to the ideal low-pass filter. We are particularly interested in linear wavelet transforms for large data sets generated by computational fluid dynamics simulations. Our effort differs from classical filter design approaches, which focus solely on performance in the frequency domain. Results are included that demonstrate the feature-preservation characteristics of our filters.

  • A statistical wisp model and pseudophysical approaches for interactive hairstyle generation

    Page(s): 160 - 170
    PDF (1741 KB) | HTML

    This work presents an interactive technique that produces static hairstyles by generating individual hair strands of the desired shape and color, subject to the presence of gravity and collisions. A variety of hairstyles can be generated by adjusting the wisp parameters, while the deformation is solved efficiently, accounting for the effects of gravity and collisions. Wisps are generated employing statistical approaches. As for hair deformation, we propose a method which is based on physical simulation concepts, but is simplified to efficiently solve the static shape of hair. On top of the statistical wisp model and the deformation solver, a constraint-based styler is proposed to model artificial features that oppose the natural flow of hair under gravity and hair elasticity, such as a hairpin. Our technique spans a wider range of human hairstyles than previously proposed methods and the styles generated by this technique are fairly realistic.

  • Geometry-aware bases for shape approximation

    Page(s): 171 - 180
    PDF (1399 KB) | HTML

    We introduce a new class of shape approximation techniques for irregular triangular meshes. Our method approximates the geometry of the mesh using a linear combination of a small number of basis vectors. The basis vectors are functions of the mesh connectivity and of the mesh indices of a number of anchor vertices. There is a fundamental difference between the bases generated by our method and those generated by geometry-oblivious methods, such as Laplacian-based spectral methods. In the latter methods, the basis vectors are functions of the connectivity alone. The basis vectors of our method, in contrast, are geometry-aware since they depend on both the connectivity and on a binary tagging of vertices that are "geometrically important" in the given mesh (e.g., extrema). We show that, by defining the basis vectors to be the solutions of certain least-squares problems, the reconstruction problem reduces to solving a single sparse linear least-squares problem. We also show that this problem can be solved quickly using a state-of-the-art sparse-matrix factorization algorithm. We show how to select the anchor vertices to define a compact, effective basis from which an approximated shape can be reconstructed. Furthermore, we develop an incremental update of the factorization of the least-squares system. This allows a progressive scheme where an initial approximation is incrementally refined by a stream of anchor points. We show that the incremental update and solving the factored system are fast enough to allow an online refinement of the mesh geometry.

  • Sharpen&Bend: recovering curved sharp edges in triangle meshes produced by feature-insensitive sampling

    Page(s): 181 - 192
    PDF (2819 KB) | HTML

    Various acquisition, analysis, visualization, and compression approaches sample surfaces of 3D shapes in a uniform fashion without any attempt to align the samples with sharp edges or to adapt the sampling density to the surface curvature. Consequently, triangle meshes that interpolate these samples usually chamfer sharp features and exhibit a relatively large error in their vicinity. We present two new filters that improve the quality of these resampled models. EdgeSharpener restores the sharp edges by splitting the chamfer edges and forcing the new vertices to lie on intersections of planes extending the smooth surfaces incident upon these chamfers. Bender refines the resulting triangle mesh using an interpolating subdivision scheme that preserves the sharpness of the recovered sharp edges while bending their polyline approximations into smooth curves. A combined Sharpen&Bend postprocessing significantly reduces the error produced by feature-insensitive sampling processes. For example, we have observed that the mean-squared distortion introduced by the SwingWrapper remeshing-based compressor can often be reduced by 80 percent by executing EdgeSharpener alone after decompression. For models with curved regions, this error may be further reduced by an additional 60 percent if we follow the EdgeSharpener phase with Bender.

  • Camera-based calibration techniques for seamless multiprojector displays

    Page(s): 193 - 206
    PDF (1979 KB) | HTML

    Multiprojector, large-scale displays are used in scientific visualization, virtual reality, and other visually intensive applications. In recent years, a number of camera-based computer vision techniques have been proposed to register the geometry and color of tiled projection-based displays. These automated techniques use cameras to "calibrate" display geometry and photometry, computing per-projector corrective warps and intensity corrections that are necessary to produce seamless imagery across projector mosaics. These techniques replace the traditional labor-intensive manual alignment and maintenance steps, making such displays cost-effective, flexible, and accessible. In this paper, we present a survey of different camera-based geometric and photometric registration techniques reported in the literature to date. We discuss several techniques that have been proposed and demonstrated, each addressing particular display configurations and modes of operation. We overview each of these approaches and discuss their advantages and disadvantages. We examine techniques that address registration on both planar (video walls) and arbitrary display surfaces and photometric correction for different kinds of display surfaces. We conclude with a discussion of the remaining challenges and research opportunities for multiprojector displays.

  • A practical approach to spectral volume rendering

    Page(s): 207 - 216
    PDF (829 KB) | HTML

    To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used to act as the carrier of spectral information. With that model, spectral light-material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of postillumination, which generates spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of postillumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial", for interactively changing the illumination and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with postillumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.

  • Warp sculpting

    Page(s): 217 - 227
    PDF (918 KB) | HTML

    The task of computer-based free-form shape design is fraught with practical and conceptual difficulties. Incorporating elements of traditional clay sculpting has long been recognized as a means of shielding the user from these complexities. We present warp sculpting, a variant of spatial deformation, which allows deformations to be initiated by the rigid body transformation or uniform scaling of volumetric tools. This is reminiscent of a tool imprinting, flexing, and molding clay. Unlike previous approaches, the deformation is truly interactive. Tools, encoded in a distance field, can have arbitrarily complex shapes. Although individual tools have a static shape, several tools can be applied simultaneously. We enhance the basic formulation of warp sculpting in two ways. First, deformation is toggled to automatically overcome the problem of "sticky" tools, where the object's surface clings to parts of a tool that are moving away. Second, unlike many other spatial deformations, we ensure that warp sculpting remains foldover-free and, hence, prevent self-intersecting objects.

  • Correcting interperspective aliasing in autostereoscopic displays

    Page(s): 228 - 236
    PDF (653 KB) | HTML

    An image presented on an autostereoscopic system should not contain discontinuities between adjacent views. A viewer should experience a continuous scene when moving from one view to the next. If corresponding points in two perspectives do not spatially abut, a viewer will experience jumps in the scene. This is known as interperspective aliasing. Interperspective aliasing is caused by object features far away from the stereoscopic screen being too small, which results in visual artifacts. By modeling a 3D point as a defocused image point, we can adapt Fourier analysis to devise a depth-dependent filter kernel that allows filtering of a stereoscopic 3D image. For synthetic 3D data, we use a simpler approach, which is to smear the data by a distance proportional to its depth.

  • [Advertisement]

    Page(s): 237
    PDF (336 KB)
    Freely Available from IEEE
  • [Advertisement]

    Page(s): 238
    PDF (373 KB)
    Freely Available from IEEE
  • [Advertisement]

    Page(s): 239
    PDF (663 KB)
    Freely Available from IEEE
  • [Advertisement]

    Page(s): 240
    PDF (513 KB)
    Freely Available from IEEE
  • TVCG Information for authors

    Page(s): c3
    PDF (345 KB)
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (307 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina