
IEEE Transactions on Visualization and Computer Graphics

Issue 5 • Sept.-Oct. 2010

  • [Front cover]

    Page(s): c1
    PDF (1681 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (147 KB)
    Freely Available from IEEE
  • Guest Editor's Introduction: Special Section on the Symposium on Interactive 3D Graphics and Games (I3D)

    Page(s): 705 - 706
    PDF (86 KB)
    Freely Available from IEEE
  • Two Fast Methods for High-Quality Line Visibility

    Page(s): 707 - 717
    PDF (2892 KB) | HTML

    Lines drawn over or in place of shaded 3D models can often provide greater comprehensibility and stylistic freedom than shading alone. A substantial challenge for making stylized line drawings from 3D models is the visibility computation. Current algorithms for computing line visibility in models of moderate complexity are either too slow for interactive rendering, or too brittle for coherent animation. We introduce two methods that exploit graphics hardware to provide fast and robust line visibility. First, we present a simple shader that performs a visibility test for high-quality, simple lines drawn with the conventional implementation. Next, we offer a fully optimized pipeline that supports line visibility and a broad range of stylization options.

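    The authors' visibility tests run as GPU shaders; purely as an illustration of the underlying idea (testing line samples against a depth buffer with a small bias), here is a CPU-side Python sketch. The array names, the bias value, and the toy data are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def visible_line_samples(sample_xy, sample_depth, depth_buffer, eps=1e-3):
        """Mark a line sample visible when its depth is not significantly behind
        the scene depth stored at its pixel. A rough CPU stand-in for a GPU
        depth-test shader; 'eps' absorbs rasterization/precision differences."""
        px = sample_xy[:, 0].astype(int)
        py = sample_xy[:, 1].astype(int)
        scene_depth = depth_buffer[py, px]        # depth of the shaded model
        return sample_depth <= scene_depth + eps  # True where the line is visible

    # Tiny example: a flat 4x4 depth buffer and three line samples.
    depth = np.full((4, 4), 0.5)
    samples_xy = np.array([[1, 1], [2, 2], [3, 0]])
    samples_z = np.array([0.4, 0.7, 0.5005])
    print(visible_line_samples(samples_xy, samples_z, depth))  # [ True False  True]
    ```
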
  • Parallel View-Dependent Level-of-Detail Control

    Page(s): 718 - 728
    PDF (3064 KB) | HTML

    We present a scheme for view-dependent level-of-detail control that is implemented entirely on programmable graphics hardware. Our scheme selectively refines and coarsens an arbitrary triangle mesh at the granularity of individual vertices to create meshes that are highly adapted to dynamic view parameters. Such fine-grain control has previously been demonstrated using sequential CPU algorithms. However, these algorithms involve pointer-based structures with intricate dependencies that cannot be handled efficiently within the restricted framework of GPU parallelism. We show that by introducing new data structures and dependency rules, one can realize fine-grain progressive mesh updates as a sequence of parallel streaming passes over the mesh elements. A major design challenge is that the GPU processes stream elements in isolation. The mesh update algorithm has time complexity proportional to the size of the selectively refined mesh, and moreover, can be amortized across several frames. The result is a single standard index buffer that can be used directly for rendering. The static data structure is remarkably compact, requiring only 57 percent more memory than an indexed triangle list. We demonstrate real-time exploration of complex models with normals and textures, as well as shadowing and semitransparent surface rendering applications that make direct use of the resulting dynamic index buffer.

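    The contribution above is the GPU-parallel data structure and dependency rules; as a reminder of what a per-vertex view-dependent refinement test looks like in general, here is a small vectorized sketch. The screen-space-error criterion and all constants are illustrative assumptions, not the paper's rules.

    ```python
    import numpy as np

    def refine_mask(object_space_error, vertex_pos, eye_pos, fov_scale, pixel_tol=1.0):
        """Generic view-dependent LOD test: refine a vertex when its object-space
        error, projected to the screen, exceeds a pixel tolerance. The paper's
        scheme additionally encodes vertex dependencies so that such decisions
        can be applied in parallel streaming passes on the GPU."""
        dist = np.linalg.norm(vertex_pos - eye_pos, axis=1)
        screen_error = fov_scale * object_space_error / np.maximum(dist, 1e-6)
        return screen_error > pixel_tol   # True -> refine (split), False -> may coarsen

    verts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 10.0]])
    errors = np.array([0.01, 0.01])
    print(refine_mask(errors, verts, np.zeros(3), fov_scale=500.0))  # [ True False]
    ```
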
  • Interactive Indirect Illumination Using Adaptive Multiresolution Splatting

    Page(s): 729 - 741
    PDF (3466 KB) | HTML

    Global illumination provides a visual richness not achievable with the direct illumination models used by most interactive applications. To generate global effects, numerous approximations attempt to reduce global illumination costs to levels feasible in interactive contexts. One such approximation, reflective shadow maps, samples a shadow map to identify secondary light sources whose contributions are splatted into eye space. This splatting introduces significant overdraw that is usually reduced by artificially shrinking each splat's radius of influence. This paper introduces a new multiresolution approach for interactively splatting indirect illumination. Instead of reducing GPU fill rate by reducing splat size, we reduce fill rate by rendering splats into a multiresolution buffer. This takes advantage of the low-frequency nature of diffuse and glossy indirect lighting, allowing rendering of indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities. Because this multiresolution rendering occurs on a per-splat basis, we can significantly reduce fill rate without arbitrarily clipping splat contributions below a given threshold; those regions are simply rendered at a coarse resolution.

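    The paper selects resolution per splat from illumination discontinuities; as a simplified stand-in for that level-selection step, the sketch below maps a splat's screen-space radius to a coarser buffer level. The mapping and constants are assumptions made only for illustration.

    ```python
    import math

    def splat_level(radius_pixels, base_radius=4.0, num_levels=5):
        """Map a splat's screen-space radius to a multiresolution buffer level:
        small splats render into the full-resolution buffer (level 0), large
        slowly-varying splats into coarser levels, which reduces fill rate
        without clipping their contribution."""
        if radius_pixels <= base_radius:
            return 0
        level = int(math.log2(radius_pixels / base_radius))
        return min(level, num_levels - 1)

    for r in (2, 8, 64, 512):
        print(r, "->", splat_level(r))   # 2 -> 0, 8 -> 1, 64 -> 4, 512 -> 4
    ```
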
  • Real-Time Creased Approximate Subdivision Surfaces with Displacements

    Page(s): 742 - 751
    PDF (2618 KB) | HTML

    We present an extension of Loop and Schaefer's approximation of Catmull-Clark surfaces (ACC) for surfaces with creases and corners. We discuss the integration of ACC into Valve's Source game engine and analyze the performance of our implementation.

  • Real-Time Volume-Based Ambient Occlusion

    Page(s): 752 - 762
    Multimedia
    PDF (2512 KB) | HTML

    Real-time rendering can benefit from global illumination methods to make 3D environments look more convincing and lifelike. On the other hand, conventional global illumination algorithms for the estimation of diffuse surface interreflection make heavy use of intra- and interobject visibility calculations, so they are time-consuming, and using them in real-time graphics applications can be prohibitive for complex scenes. Modern illumination approximations, such as ambient occlusion variants, use precalculated or frame-dependent data to reduce the problem to a local shading one. This paper presents a fast real-time method for visibility sampling using volumetric data in order to produce accurate inter- and intraobject ambient occlusion. The proposed volume sampling technique disassociates surface representation data from the visibility calculations, and therefore, makes the method suitable for both primitive-order and screen-order rendering, such as deferred rendering. The sampling mechanism can be used in any application that performs visibility queries or ray marching.

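    As a minimal CPU illustration of visibility sampling in a volume, the sketch below marches a handful of rays through a boolean occupancy grid and counts blocked directions. The direction set, step length, and absence of any falloff term are simplifications; this is not the paper's sampling mechanism.

    ```python
    import numpy as np

    def ambient_occlusion(occupancy, p, directions, max_steps=8, step=1.0):
        """Estimate ambient occlusion at position p by marching each direction
        through a boolean occupancy grid and counting rays that hit occupied
        voxels. Returns 1.0 for fully open, 0.0 for fully occluded."""
        blocked = 0
        for d in directions:
            d = d / np.linalg.norm(d)
            for s in range(1, max_steps + 1):
                q = np.round(p + s * step * d).astype(int)
                if np.any(q < 0) or np.any(q >= np.array(occupancy.shape)):
                    break                      # ray left the volume: unoccluded
                if occupancy[tuple(q)]:
                    blocked += 1
                    break
        return 1.0 - blocked / len(directions)

    vol = np.zeros((32, 32, 32), dtype=bool)
    vol[20:, :, :] = True                      # a solid slab on one side
    dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                     [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    print(ambient_occlusion(vol, np.array([16.0, 16.0, 16.0]), dirs))  # ~0.83
    ```
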
  • Sample-Based Surface Coloring

    Page(s): 763 - 776
    Multimedia
    PDF (2844 KB) | HTML

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB), an extension of the Layered Depth Cube, as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled.

  • The General Pinhole Camera: Effective and Efficient Nonuniform Sampling for Visualization

    Page(s): 777 - 790
    Multimedia
    PDF (2876 KB) | HTML

    We introduce the general pinhole camera (GPC), defined by a center of projection (i.e., the pinhole), an image plane, and a set of sampling locations in the image plane. We demonstrate the advantages of the GPC in the contexts of remote visualization, focus-plus-context visualization, and extreme antialiasing, which benefit from the GPC sampling flexibility. For remote visualization, we describe a GPC that allows zooming-in at the client without the need for transferring additional data from the server. For focus-plus-context visualization, we describe a GPC with multiple regions of interest with sampling rate continuity to the surrounding areas. For extreme antialiasing, we describe a GPC variant that allows supersampling locally with a very high number of color samples per output pixel (e.g., 1,024×), supersampling levels that are out of reach for conventional approaches that supersample the entire image. The GPC supports many types of data, including surface geometry, volumetric, and image data, as well as many rendering modes, including highly view-dependent effects such as volume rendering. Finally, GPC visualization is efficient: GPC images are rendered and resampled with the help of graphics hardware at interactive rates.

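    The defining ingredients named in the abstract (a center of projection, an image plane, and a set of sampling locations) translate directly into ray generation. The sketch below shows that construction with a hypothetical plane parameterization; the nonuniform sample layout is only a toy example of the focus-plus-context idea.

    ```python
    import numpy as np

    def gpc_rays(center, plane_origin, plane_u, plane_v, sample_uv):
        """Build one ray per sampling location of a general pinhole camera: each
        (u, v) sample on the image plane defines a ray from the center of
        projection through that point. Arbitrary 'sample_uv' layouts are what
        give the GPC its sampling flexibility."""
        points = plane_origin + sample_uv[:, :1] * plane_u + sample_uv[:, 1:] * plane_v
        dirs = points - center
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        return center, dirs

    # Denser samples near the plane center: a toy focus-plus-context layout.
    rng = np.random.default_rng(1)
    uv = np.concatenate([rng.uniform(-0.1, 0.1, (64, 2)),
                         rng.uniform(-1.0, 1.0, (16, 2))])
    eye, rays = gpc_rays(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                         np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), uv)
    print(rays.shape)   # (80, 3): one normalized direction per sample
    ```
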
  • Topology-Aware Evenly Spaced Streamline Placement

    Page(s): 791 - 801
    PDF (7944 KB) | HTML

    This paper presents a new streamline placement algorithm that produces evenly spaced long streamlines while preserving topological features of a flow field. Singularities and separatrices are extracted to decompose the flow field into topological regions. In each region, a seeding path is selected from a set of streamlines integrated in the orthogonal flow field. The uniform sample points on this path are then used as seeds to generate streamlines in the original flow field. Additional seeds are placed where a large gap between adjacent streamlines occurs. The number of short streamlines is significantly reduced as evenly spaced long streamlines spawned along the seeding paths can fill the topological regions very well. Several metrics for evaluating streamline placement quality are discussed and applied to our method as well as some other approaches. Compared to previous work in uniform streamline placement, our method is more effective in creating evenly spaced long streamlines and preserving topological features. It has the potential to provide both an intuitive perception of important flow characteristics and detailed reconstruction across visually pleasing streamlines.

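    The following sketch is not the topology-aware algorithm; it only illustrates the even-spacing rule such methods build on, namely accepting a new seed only if it lies farther than a separation distance from every point of the streamlines traced so far. The circular test field and all constants are invented for the example.

    ```python
    import numpy as np

    def trace(seed, velocity, step=0.05, n=200):
        """Integrate a streamline from 'seed' with simple normalized Euler steps."""
        pts = [np.array(seed, dtype=float)]
        for _ in range(n):
            v = velocity(pts[-1])
            norm = np.linalg.norm(v)
            if norm < 1e-6:
                break                      # stop at singularities
            pts.append(pts[-1] + step * v / norm)
        return np.array(pts)

    def place_streamlines(candidate_seeds, velocity, d_sep=0.3):
        """Evenly spaced placement: keep a candidate seed only if it is at least
        d_sep away from every point of the streamlines already accepted."""
        placed, points = [], np.empty((0, 2))
        for seed in candidate_seeds:
            if len(points) and np.min(np.linalg.norm(points - seed, axis=1)) < d_sep:
                continue
            line = trace(seed, velocity)
            placed.append(line)
            points = np.vstack([points, line])
        return placed

    circular = lambda p: np.array([-p[1], p[0]])           # toy rotational field
    seeds = [np.array([x, 0.0]) for x in np.linspace(0.1, 2.0, 40)]
    print(len(place_streamlines(seeds, circular)), "streamlines placed")
    ```
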
  • Smooth, Volume-Accurate Material Interface Reconstruction

    Page(s): 802 - 814
    PDF (3753 KB) | HTML

    A new material interface reconstruction method for volume fraction data is presented. Our method consists of two components: first, we generate initial interface topology; then, using a combination of smoothing and volumetric forces within an active interface model, we iteratively transform the initial material interfaces into high-quality surfaces that accurately approximate the problem's volume fractions. Unlike all previous work, our new method produces material interfaces that are smooth and continuous across cell boundaries and that segment cells into regions with proper volume. These properties are critical during visualization and analysis. Generating high-quality mesh representations of material interfaces is required for accurate calculations of interface statistics, and dramatically increases the utility of material boundary visualizations.

  • Binary Mesh Partitioning for Cache-Efficient Visualization

    Page(s): 815 - 828
    PDF (2888 KB) | HTML

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take the memory hierarchy into consideration to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance over previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^(1/d)) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.

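    The paper's O(N log N) construction comes with the cache-miss bound quoted above; purely to make the idea of a layout computation concrete, here is a generic recursive coordinate bisection that orders vertices so that spatially close vertices stay close in memory at every scale. It is a simplified stand-in, not the published algorithm, and the per-level sorting makes it O(N log^2 N).

    ```python
    import numpy as np

    def bisection_layout(vertices):
        """Order vertex indices by recursively splitting along the widest axis at
        the median. Nearby vertices end up nearby in the ordering, which is the
        property cache-oblivious mesh layouts exploit for every block size B."""
        def recurse(idx):
            if len(idx) <= 2:
                return list(idx)
            pts = vertices[idx]
            axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))
            order = idx[np.argsort(pts[:, axis], kind="stable")]
            mid = len(order) // 2
            return recurse(order[:mid]) + recurse(order[mid:])
        return recurse(np.arange(len(vertices)))

    verts = np.random.rand(1000, 3)
    layout = bisection_layout(verts)
    print(layout[:10])   # new vertex order; triangles would be reindexed to match
    ```
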
  • Comparative Visualization for Parameter Studies of Dataset Series

    Page(s): 829 - 840
    PDF (3759 KB) | HTML

    This paper proposes comparison and visualization techniques to carry out parameter studies for the special application area of dimensional measurement using 3D X-ray computed tomography (3DCT). A dataset series is generated by scanning a specimen multiple times by varying parameters of an industrial 3DCT device. A high-resolution series is explored using our planar-reformatting-based visualization system. We present a novel multi-image view and an edge explorer for comparing and visualizing gray values and edges of several datasets simultaneously. Visualization results and quantitative data are displayed side by side. Our technique is scalable and generic. It can be effective in various application areas like parameter studies of imaging modalities and dataset artifact detection. For fast data retrieval and convenient usability, we use bricking of the datasets and efficient data structures. We evaluate the applicability of the proposed techniques in collaboration with our company partners.

  • Using Cognitive Fit Theory to Evaluate the Effectiveness of Information Visualizations: An Example Using Quality Assurance Data

    Page(s): 841 - 853
    PDF (2511 KB) | HTML

    Cognitive fit theory, along with the proximity compatibility principle, is investigated as a basis to evaluate the effectiveness of information visualizations to support a decision-making task. The task used in this study manipulates varying levels of task complexity for quality control decisions in a high-volume discrete manufacturing environment. The volume of process monitoring and quality control data produced in this type of environment can be daunting. Today's managers need effective decision support tools to sort through the morass of data in a timely fashion to make critical decisions on product and process quality.

  • Active Shape Modeling with Electric Flows

    Page(s): 854 - 869
    PDF (3075 KB) | HTML

    Physics-based particle systems are an effective tool for shape modeling. Also, there has been much interest in the study of shape modeling using deformable contour approaches. In this paper, we describe a new deformable model with electric flows based upon computer simulations of a number of charged particles embedded in an electrostatic system. Making use of optimized numerical techniques, the electric potential associated with the electric field in the simulated system is rapidly calculated using the finite-size particle (FSP) method. The simulation of deformation evolves based upon the vector sum of two interacting forces: one from the electric fields and the other from the image gradients. Inspired by the concept of the signed distance function associated with the entropy condition in the level set framework, we efficiently handle topological changes at the interface. In addition to automatic splitting and merging, the evolving contours enable simultaneous detection of various objects with varying intensity gradients at both interior and exterior boundaries. This electric flows approach for shape modeling allows one to connect electric properties in electrostatic equilibrium with classical active contours based upon the theory of curve evolution. Our active contours can be applied to model arbitrarily complicated objects including shapes with sharp corners and cusps, and to situations where no a priori knowledge about the object's topology and geometry is assumed. We demonstrate the capabilities of this new algorithm in recovering a wide variety of structures on simulated and real images in both 2D and 3D.

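    As a cartoon of the driving idea (contour points advanced by the vector sum of a field force and an image-gradient force), the sketch below runs a few explicit Euler steps on a toy contour. Both force terms and all parameters are invented stand-ins, not the FSP-computed electric potential or real image gradients used in the paper.

    ```python
    import numpy as np

    def evolve_contour(points, field_force, image_force, dt=0.1, steps=100):
        """Move each contour point along the sum of two forces (explicit Euler).
        No topology handling here; the paper manages splitting and merging via a
        signed-distance / level-set style mechanism."""
        pts = points.copy()
        for _ in range(steps):
            pts += dt * (field_force(pts) + image_force(pts))
        return pts

    def outward(p):                              # fake "electric" repulsion
        return 0.2 * p / np.linalg.norm(p, axis=1, keepdims=True)

    def to_boundary(p):                          # fake image force toward radius 2
        r = np.linalg.norm(p, axis=1, keepdims=True)
        return (2.0 - r) * p / r

    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    final = evolve_contour(contour, outward, to_boundary)
    print(np.linalg.norm(final, axis=1).mean())  # settles near radius ~2.2
    ```
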
  • Example-Based Human Motion Denoising

    Page(s): 870 - 879
    Multimedia
    PDF (2023 KB) | HTML

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with those in state-of-the-art motion capture data processing software such as Vicon Blade.

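    The paper's formulation is a robust nonlinear optimization over learned filter bases; the heavily simplified sketch below only illustrates the overall structure of "learn bases from clean motion, then fit the noisy input with them", using plain PCA bases and a closed-form blend instead of the robust objective. Every detail here is an assumption made for illustration.

    ```python
    import numpy as np

    def learn_bases(clean_clips, k=8):
        """PCA-style bases from clean motion clips (one flattened clip per row);
        a stand-in for the paper's learned filter bases."""
        mean = clean_clips.mean(axis=0)
        _, _, vt = np.linalg.svd(clean_clips - mean, full_matrices=False)
        return mean, vt[:k]

    def denoise(noisy, mean, bases, lam=1.0):
        """Blend the noisy input with its reconstruction from the learned bases,
        i.e. minimize ||x - noisy||^2 + lam * ||x - recon||^2 with 'recon' held
        fixed. A crude, non-robust cousin of the abstract's objective."""
        recon = mean + (noisy - mean) @ bases.T @ bases
        return (noisy + lam * recon) / (1.0 + lam)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 100)
    clean = np.stack([np.sin(t + ph) for ph in rng.uniform(0, 2 * np.pi, 50)])
    mean, B = learn_bases(clean, k=4)
    noisy = np.sin(t) + 0.3 * rng.normal(size=t.size)
    # Error after denoising should be noticeably smaller than the raw noise norm.
    print(np.linalg.norm(noisy - np.sin(t)), np.linalg.norm(denoise(noisy, mean, B) - np.sin(t)))
    ```
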
  • Introducing OnlinePlus

    Page(s): 880
    PDF (516 KB)
    Freely Available from IEEE
  • TVCG Information for authors

    Page(s): c3
    PDF (147 KB)
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (1681 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina