
IEEE Transactions on Visualization and Computer Graphics

Issue 8 • August 2011


Displaying Results 1 - 20 of 20
  • [Front cover]

    Publication Year: 2011 , Page(s): c1
    PDF (117 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2011 , Page(s): c2
    PDF (96 KB)
    Freely Available from IEEE
  • Editor's Note

    Publication Year: 2011 , Page(s): 1033
    PDF (51 KB) | HTML
    Freely Available from IEEE
  • Guest Editor's Introduction: Special Section on the Symposium on Interactive 3D Graphics and Games (I3D)

    Publication Year: 2011 , Page(s): 1034 - 1035
    PDF (64 KB) | HTML
    Freely Available from IEEE
  • Stochastic Transparency

    Publication Year: 2011 , Page(s): 1036 - 1047
    Cited by:  Papers (1)
    PDF (678 KB) | HTML

    Stochastic transparency provides a unified approach to order-independent transparency, antialiasing, and deep shadow maps. It augments screen-door transparency using a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples of size proportional to alpha. This results in correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise by an alpha correction pass, and by an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops, other than traditional z-buffer blend operations. This makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement and supports all types of transparent geometry; without coding for special cases, it can mix hair, smoke, foliage, windows, and transparent cloth in a single scene.
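
    A minimal sketch (not the paper's GPU implementation; the layer data and sample count below are made up) showing, in Python, why random sub-pixel coverage proportional to alpha reproduces alpha blending in expectation: each fragment stochastically claims a fraction of a pixel's samples, the nearest claimant wins each sample, and averaging the samples recovers the blended color.

        import random

        def alpha_blend(layers, background):
            """Reference: back-to-front alpha blending of (color, alpha) layers."""
            color = background
            for c, a in reversed(layers):                # layers are given front-to-back
                color = a * c + (1.0 - a) * color
            return color

        def stochastic_transparency(layers, background, num_samples=64, rng=random):
            """Each fragment covers a random subset of samples of size ~ alpha * num_samples.
            Layers are already sorted front-to-back here, so "first claimant wins" stands in
            for the per-sample z-test of the real algorithm."""
            sample_color = [None] * num_samples
            for c, a in layers:                          # front-to-back
                for s in range(num_samples):
                    if sample_color[s] is None and rng.random() < a:
                        sample_color[s] = c              # sample now owned by this fragment
            resolved = [c if c is not None else background for c in sample_color]
            return sum(resolved) / num_samples           # resolve pass: average the samples

        layers = [(1.0, 0.3), (0.5, 0.5), (0.2, 0.7)]    # grey-scale colors for brevity
        print(alpha_blend(layers, background=0.0))                                # 0.524
        print(stochastic_transparency(layers, background=0.0, num_samples=4096))  # approx. 0.52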

  • Efficient Sparse Voxel Octrees

    Publication Year: 2011 , Page(s): 1048 - 1059
    PDF (1071 KB) | HTML

    In this paper, we examine the possibilities of using voxel representations as a generic way for expressing complex and feature-rich geometry on current and future GPUs. We present in detail a compact data structure for storing voxels and an efficient algorithm for performing ray casts using this structure. We augment the voxel data with novel contour information that increases geometric resolution, allows more compact encoding of smooth surfaces, and accelerates ray casts. We also employ a novel normal compression format for storing high-precision object-space normals. Finally, we present a variable-radius postprocess filtering technique for smoothing out blockiness caused by discrete sampling of shading attributes. Based on benchmark results, we show that our voxel representation is competitive with triangle-based representations in terms of ray casting performance, while allowing tremendously greater geometric detail and unique shading information for every voxel. Our voxel codebase is open sourced and available at http://code.google.com/p/efficient-sparse-voxel-octrees/.
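
    A minimal sketch (not the paper's compact data layout; contours and normal compression are omitted, and all names are illustrative) of the basic structure the abstract builds on: a sparse voxel octree whose nodes store an 8-bit child mask, built from occupied voxel coordinates on a 2^depth grid and queried top-down.

        class OctreeNode:
            __slots__ = ("child_mask", "children")
            def __init__(self):
                self.child_mask = 0          # bit i set => child octant i exists
                self.children = {}           # octant index -> OctreeNode (sparse)

        def child_index(x, y, z, level):
            """Octant index of voxel (x, y, z) at a given level (level 0 = finest)."""
            return ((x >> level) & 1) | (((y >> level) & 1) << 1) | (((z >> level) & 1) << 2)

        def insert(root, x, y, z, depth):
            node = root
            for level in range(depth - 1, -1, -1):       # walk from the root down
                i = child_index(x, y, z, level)
                if not (node.child_mask & (1 << i)):
                    node.child_mask |= (1 << i)
                    node.children[i] = OctreeNode()
                node = node.children[i]

        def occupied(root, x, y, z, depth):
            node = root
            for level in range(depth - 1, -1, -1):
                i = child_index(x, y, z, level)
                if not (node.child_mask & (1 << i)):
                    return False                          # empty space is simply absent
                node = node.children[i]
            return True

        root, depth = OctreeNode(), 4                     # a 16^3 voxel grid
        insert(root, 3, 7, 12, depth)
        print(occupied(root, 3, 7, 12, depth), occupied(root, 0, 0, 0, depth))   # True False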

  • Frankenrigs: Building Character Rigs from Multiple Sources

    Publication Year: 2011 , Page(s): 1060 - 1070
    PDF (2114 KB) | HTML

    We present a new rigging and skinning method which uses a database of partial rigs extracted from a set of source characters. Given a target mesh and a set of joint locations, our system can automatically scan through the database to find the best-fitting body parts, tailor them to match the target mesh, and transfer their skinning information onto the new character. For the cases where our automatic procedure fails, we provide an intuitive set of tools to fix the problems. When used fully automatically, the system can generate results of much higher quality than a standard smooth bind, and with some user interaction, it can create rigs approaching the quality of artist-created manual rigs in a small fraction of the time.
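
    A minimal sketch (illustrative only; the paper's fitting, tailoring, and skinning transfer are not reproduced, and the data and alignment metric are made up) of the retrieval step: pick the database body part whose joint layout best matches the target joints after removing translation and uniform scale.

        import numpy as np

        def alignment_error(part_joints, target_joints):
            """RMS joint distance after removing translation and uniform scale."""
            p = part_joints - part_joints.mean(axis=0)
            t = target_joints - target_joints.mean(axis=0)
            scale = np.linalg.norm(t) / max(np.linalg.norm(p), 1e-12)
            return np.sqrt(np.mean(np.sum((scale * p - t) ** 2, axis=1)))

        def best_fitting_part(database, target_joints):
            """database: dict name -> (J, 3) joint array with matching joint order."""
            return min(database, key=lambda name: alignment_error(database[name], target_joints))

        rng = np.random.default_rng(2)
        target = rng.random((4, 3))                        # e.g., 4 arm joints
        database = {"arm_a": target * 2.0 + 0.3,           # same layout, different scale/offset
                    "arm_b": rng.random((4, 3))}
        print(best_fitting_part(database, target))         # expected: 'arm_a'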

  • Improving Shape Depiction under Arbitrary Rendering

    Publication Year: 2011 , Page(s): 1071 - 1081
    Cited by:  Papers (2)
    PDF (1786 KB) | HTML

    Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape depiction functionalities. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface feature variations, enhancing concavities and convexities. The first advantage of such an approach is that it produces satisfying results with any kind of material for direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and interreflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as Ambient Occlusion, Prefiltered Environment Maps, or Lit Spheres. Finally, our approach works in real time on modern graphics hardware, making it suitable for any interactive 3D visualization.
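
    A minimal illustrative sketch (this is not the paper's scaling function; the formula, parameters, and behaviour below are invented for illustration only): a per-pixel scaling of reflected intensity driven by curvature, so that shading variations become correlated with surface features.

        def radiance_scale(intensity, curvature, gamma=0.5):
            """Return a scaled intensity. curvature in [-1, 1] (negative = concave),
            intensity in [0, 1], gamma controls enhancement strength.
            Hypothetical scaling: in bright regions convexities are brightened and
            concavities darkened; in dark regions the sign flips so both stay visible."""
            direction = curvature if intensity >= 0.5 else -curvature
            scale = 1.0 + gamma * direction
            return max(0.0, min(1.0, intensity * scale))

        for kappa in (-0.8, 0.0, 0.8):
            print(kappa, radiance_scale(0.7, kappa), radiance_scale(0.2, kappa))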

  • Fusing Multiview and Photometric Stereo for 3D Reconstruction under Uncalibrated Illumination

    Publication Year: 2011 , Page(s): 1082 - 1095
    Cited by:  Papers (10)
    PDF (1603 KB) | HTML

    We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction without requiring manual interaction or imposing any constraints on the illumination. Experimental results on both real-world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.
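
    A minimal sketch (background only, restating the standard result the abstract cites; the paper's ℓ1-robust alternating solver is not reproduced, and the synthetic data are made up): for a Lambertian surface under general illumination, intensity is well approximated by a second-order spherical harmonic expansion in the normal, so the nine lighting coefficients can be fit linearly from (normal, intensity) pairs.

        import numpy as np

        def sh_basis(n):
            """Second-order (9-term) real SH basis at unit normal n (normalization constants omitted)."""
            x, y, z = n
            return np.array([1.0,
                             y, z, x,
                             x * y, y * z, 3.0 * z * z - 1.0, x * z, x * x - y * y])

        def fit_lighting(normals, intensities):
            """Least-squares fit of SH lighting coefficients from (normal, intensity) pairs."""
            B = np.array([sh_basis(n) for n in normals])
            coeffs, *_ = np.linalg.lstsq(B, np.asarray(intensities), rcond=None)
            return coeffs

        rng = np.random.default_rng(0)
        normals = rng.normal(size=(200, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        true_coeffs = rng.normal(size=9)                       # synthetic lighting
        intensities = np.array([sh_basis(n) @ true_coeffs for n in normals])
        print(np.allclose(fit_lighting(normals, intensities), true_coeffs))   # True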

  • Improving Gabor Noise

    Publication Year: 2011 , Page(s): 1096 - 1107
    PDF (1728 KB) | HTML

    We have recently proposed a new procedural noise function, Gabor noise, which offers a combination of properties not found in existing noise functions. In this paper, we present three significant improvements to Gabor noise: 1) an isotropic kernel for Gabor noise, which speeds up isotropic Gabor noise by a factor of roughly two; 2) an error analysis of Gabor noise, which relates the kernel truncation radius to the relative error of the noise; and 3) spatially varying Gabor noise, which enables spatial variation of all noise parameters. These improvements make Gabor noise an even more attractive alternative to existing noise functions.
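
    A minimal sketch (the basic sparse-convolution Gabor noise that these improvements build on, not the paper's isotropic kernel, error analysis, or spatially varying variant; parameter values are arbitrary): a Gabor kernel is a Gaussian envelope times a cosine, and the noise sums truncated kernels over random impulses.

        import math, random

        def gabor_kernel(dx, dy, K=1.0, a=0.05, f0=0.0625, omega0=0.0):
            """Gabor kernel: K * exp(-pi a^2 r^2) * cos(2 pi f0 (dx cos w0 + dy sin w0))."""
            gaussian = K * math.exp(-math.pi * a * a * (dx * dx + dy * dy))
            u = dx * math.cos(omega0) + dy * math.sin(omega0)
            return gaussian * math.cos(2.0 * math.pi * f0 * u)

        def gabor_noise(x, y, impulses, radius=30.0):
            """Sparse convolution: sum truncated kernels over random impulses (x, y, weight)."""
            value = 0.0
            for ix, iy, w in impulses:
                dx, dy = x - ix, y - iy
                if dx * dx + dy * dy < radius * radius:   # kernel truncation radius
                    value += w * gabor_kernel(dx, dy)
            return value

        rng = random.Random(0)
        impulses = [(rng.uniform(0, 256), rng.uniform(0, 256), rng.choice((-1.0, 1.0)))
                    for _ in range(2000)]
        print(gabor_noise(128.0, 128.0, impulses))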

  • Representativity for Robust and Adaptive Multiple Importance Sampling

    Publication Year: 2011 , Page(s): 1108 - 1121
    Cited by:  Papers (1)
    PDF (960 KB) | HTML

    We present a general method for enhancing the robustness of estimators based on multiple importance sampling (MIS) in a numerical integration context. MIS minimizes the variance of estimators for a given sampling configuration, but when this configuration is poorly adapted to the integrand, the resulting estimator suffers from extra variance. We address this issue by introducing the notion of "representativity" of a sampling strategy, and demonstrate how it can be used to increase the robustness of estimators by adapting them to the integrand. We first show how to compute representativities using common rendering information such as BSDFs, photon maps, or caches in order to choose the best sampling strategy for MIS. We then outline how to generalize our method to any integration problem and demonstrate that it can be used successfully to enhance robustness in several common rendering algorithms.
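
    A minimal sketch (background only: plain multiple importance sampling with the balance heuristic on a toy 1D integral; the paper's representativity measure and its adaptation of the strategy choice are not implemented): samples from each strategy are weighted by the balance heuristic, so a strategy that is poorly adapted to the integrand is down-weighted wherever another strategy covers it better.

        import math, random

        def mis_estimate(f, pdfs, samplers, n_per_strategy, rng=random):
            """Balance-heuristic MIS estimate of the integral of f.
            pdfs[i](x) and samplers[i](rng) describe sampling strategy i."""
            total = 0.0
            counts = [n_per_strategy] * len(samplers)
            for i, sample in enumerate(samplers):
                for _ in range(counts[i]):
                    x = sample(rng)
                    denom = sum(counts[j] * pdfs[j](x) for j in range(len(pdfs)))
                    if denom > 0.0:
                        total += f(x) / denom        # balance-heuristic weight folded in
            return total

        # Integrand on [0, 1): a narrow peak plus a broad constant component.
        f = lambda x: (math.exp(-200.0 * (x - 0.5) ** 2) + 0.5) if 0.0 <= x < 1.0 else 0.0
        pdf_u = lambda x: 1.0 if 0.0 <= x < 1.0 else 0.0          # strategy 1: uniform
        samp_u = lambda rng: rng.random()
        pdf_g = lambda x: math.exp(-200.0 * (x - 0.5) ** 2) * math.sqrt(200.0 / math.pi)
        samp_g = lambda rng: rng.gauss(0.5, 0.05)                 # strategy 2: peak-focused
        print(mis_estimate(f, [pdf_u, pdf_g], [samp_u, samp_g], n_per_strategy=5000))  # approx. 0.625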

  • Fast Exact Nearest Patch Matching for Patch-Based Image Editing and Processing

    Publication Year: 2011 , Page(s): 1122 - 1134
    Cited by:  Papers (3)
    Multimedia
    PDF (1693 KB) | HTML

    This paper presents an efficient exact nearest patch matching algorithm which can accurately find the most similar patch pairs between a source and a target image. Traditional patch matching algorithms treat each pixel/patch as an independent sample and build a hierarchical data structure, such as a kd-tree, to accelerate nearest patch finding. However, most of these approaches can only find an approximate nearest patch, and they do not exploit the sequential overlap between patches. Hence, they are neither accurate in quality nor optimal in speed. By eliminating the redundant similarity computation over the sequential overlap between patches, our method finds the exact nearest patch in a brute-force style while reducing its running time complexity to linear in the patch size. Furthermore, relying on recent multicore graphics hardware, our method can be further accelerated by at least an order of magnitude (≥10×). This greatly improves performance and ensures that our method can be efficiently applied in an interactive editing framework for moderate-sized images and even video. To our knowledge, this approach is the fastest exact nearest patch matching method for high-dimensional patches, and its extra memory requirement is minimal. Comparisons with popular nearest patch matching methods in the experimental results demonstrate the merits of our algorithm.
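
    A minimal sketch (one standard way of exploiting the overlap between neighbouring patches, not necessarily the paper's formulation; images and patch size are toy values): for a fixed displacement between source and target, all overlapping windows share their per-pixel squared differences, so a summed-area table yields every window SSD without recomputing the overlap, and looping over displacements gives an exact nearest-patch search.

        import numpy as np

        def exact_nearest_patches(src, tgt, p):
            """For every p x p source patch, find the target patch with minimal SSD.
            Returns the best SSD and the best (dy, dx) displacement per patch position."""
            H, W = src.shape
            ph, pw = H - p + 1, W - p + 1                    # valid patch positions
            best = np.full((ph, pw), np.inf)
            best_disp = np.zeros((ph, pw, 2), dtype=int)
            for dy in range(-(ph - 1), ph):                  # brute force over displacements
                for dx in range(-(pw - 1), pw):
                    sy0, sx0 = max(0, -dy), max(0, -dx)      # overlap region under (dy, dx)
                    sy1, sx1 = min(H, H - dy), min(W, W - dx)
                    diff2 = (src[sy0:sy1, sx0:sx1]
                             - tgt[sy0 + dy:sy1 + dy, sx0 + dx:sx1 + dx]) ** 2
                    # Summed-area table -> SSD of every p x p window in one pass.
                    sat = np.pad(diff2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
                    ssd = sat[p:, p:] - sat[:-p, p:] - sat[p:, :-p] + sat[:-p, :-p]
                    window = best[sy0:sy0 + ssd.shape[0], sx0:sx0 + ssd.shape[1]]
                    ys, xs = np.nonzero(ssd < window)
                    best[sy0 + ys, sx0 + xs] = ssd[ys, xs]
                    best_disp[sy0 + ys, sx0 + xs] = (dy, dx)
            return best, best_disp

        rng = np.random.default_rng(1)
        src, tgt = rng.random((16, 16)), rng.random((16, 16))
        ssd, disp = exact_nearest_patches(src, tgt, p=5)
        print(ssd.min(), disp[0, 0])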

  • Efficient Edit Propagation Using Hierarchical Data Structure

    Publication Year: 2011 , Page(s): 1135 - 1147
    Cited by:  Papers (2)
    PDF (2188 KB) | HTML

    This paper presents a novel unified hierarchical structure for scalable edit propagation. Our method is based on the key observation that in edit propagation, appearance varies very smoothly in those regions whose appearance differs from the user-specified pixels. Uniformly sampling in these regions leads to redundant computation. We propose a quadtree-based adaptive subdivision method such that more samples are selected in regions similar to the user-specified regions and fewer in those that are different. As a result, both the computation and the memory requirement are significantly reduced. In edit propagation, an edge-preserving propagation function is first built, and the full solution for all pixels can then be computed by interpolating from the solution obtained on the adaptively subdivided domain. Furthermore, our approach can be easily extended to accelerate video edit propagation using an adaptive octree structure. To improve user interaction, we introduce several new Gaussian Mixture Model (GMM) brushes to find pixels that are similar to the user-specified regions. Compared with previous methods, our approach requires significantly less time and memory while achieving visually identical results. Experimental results demonstrate the efficiency and effectiveness of our approach on high-resolution photographs and videos.
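
    A minimal sketch (only the adaptive-sampling idea, not the paper's propagation solver or GMM brushes; the test image and thresholds are made up): a quadtree subdivides an image only where appearance varies, so smooth regions end up represented by a handful of samples instead of one sample per pixel.

        import numpy as np

        def build_quadtree(img, x, y, size, threshold, min_size, leaves):
            """Recursively split a square cell while its appearance variance is high."""
            block = img[y:y + size, x:x + size]
            if size <= min_size or block.std() < threshold:
                leaves.append((x, y, size, float(block.mean())))   # one sample per leaf
                return
            half = size // 2
            for cx, cy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
                build_quadtree(img, cx, cy, half, threshold, min_size, leaves)

        # A 64x64 test image: a smooth gradient with one high-contrast square.
        img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
        img[20:36, 20:36] = 1.0
        leaves = []
        build_quadtree(img, 0, 0, 64, threshold=0.05, min_size=4, leaves=leaves)
        print(len(leaves), "leaf samples instead of", 64 * 64, "pixels")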

  • Hierarchical Line Integration

    Publication Year: 2011 , Page(s): 1148 - 1163
    Cited by:  Papers (11)
    PDF (1564 KB) | HTML

    This paper presents an acceleration scheme for the numerical computation of sets of trajectories in vector fields or iterated solutions in maps, possibly with simultaneous evaluation of quantities along the curves such as integrals or extrema. It addresses cases with a dense evaluation on the domain, where straightforward approaches are subject to redundant calculations. These are avoided by first calculating short solutions for the whole domain. From these, longer solutions are then constructed in a hierarchical manner until the designated length is achieved. While the computational complexity of the straightforward approach depends linearly on the length of the solutions, the computational cost with the proposed scheme grows only logarithmically with increasing length. Due to independence of subtasks and memory locality, our algorithm is suitable for parallel execution on many-core architectures like GPUs. The trade-offs of the method (lower accuracy and increased memory consumption) are analyzed, including error order as well as numerical error for discrete computation grids. The usefulness and flexibility of the scheme are demonstrated with two example applications: line integral convolution and the computation of the finite-time Lyapunov exponent. Finally, results and performance measurements of our GPU implementation are presented for both synthetic and simulated vector fields from computational fluid dynamics.
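
    A minimal sketch (the hierarchical composition idea in 1D with a made-up velocity field and parameters, not the paper's GPU scheme): a short-time flow map is computed once for every grid point and then repeatedly composed with itself through interpolation, so the cost of reaching a long integration time grows with the number of doublings rather than with the time itself, at the price of interpolation error and storing the intermediate maps.

        import numpy as np

        v = lambda x: np.sin(2.0 * np.pi * x)            # toy 1D velocity field; [0, 1] is invariant

        def short_flow_map(grid, t, steps=16):
            """Integrate every grid point forward for time t with explicit Euler."""
            x, dt = grid.copy(), t / steps
            for _ in range(steps):
                x = x + dt * v(x)
            return x

        def hierarchical_flow_map(grid, phi_short, doublings):
            """Compose the sampled short-time map with itself: phi_{2t} = phi_t o phi_t."""
            current = phi_short
            for _ in range(doublings):
                current = np.interp(current, grid, current)   # evaluate the map at its own values
            return current

        grid = np.linspace(0.0, 1.0, 2049)
        phi_short = short_flow_map(grid, t=0.01)
        phi_long = hierarchical_flow_map(grid, phi_short, doublings=6)       # total time 0.01 * 2**6
        direct = short_flow_map(grid, t=0.01 * 2 ** 6, steps=16 * 2 ** 6)    # reference, 2**6 x the work
        print(np.max(np.abs(phi_long - direct)))          # error of the hierarchical result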

  • Sort-First Parallel Volume Rendering

    Publication Year: 2011 , Page(s): 1164 - 1177
    Cited by:  Papers (4)
    Multimedia
    PDF (2149 KB) | HTML

    Sort-first distributions have been studied and used far less than sort-last distributions for parallel volume rendering, especially when the data are too large to be replicated fully. We demonstrate that sort-first distributions are not only a viable method of performing data-scalable parallel volume rendering, but, more importantly, that they allow a range of rendering algorithms and techniques that are not efficient with sort-last distributions. Several of these algorithms are discussed and two of them are implemented in a parallel environment: a new, improved variant of early ray termination that speeds up rendering when volumetric occlusion occurs, and a volumetric shadowing technique based on half-angle slicing that produces more realistic and informative images. Improved methods for distributing the load-balancing computation and for loading portions of a subdivided data set are also presented. Our detailed test results for a typical GPU cluster with distributed memory show that our sort-first rendering algorithm outperforms sort-last rendering in many scenarios.
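
    A minimal sketch (standard front-to-back compositing with early ray termination, not the paper's improved variant or its shadowing technique; the sample data are made up): once a ray's accumulated opacity is nearly saturated, the remaining samples cannot change the pixel and are skipped, which an image-space (sort-first) subdivision lets every node exploit independently.

        def composite_ray(samples, opacity_threshold=0.99):
            """samples: iterable of (color, alpha) along the ray, ordered front to back."""
            color, alpha, steps = 0.0, 0.0, 0
            for c, a in samples:
                color += (1.0 - alpha) * a * c           # front-to-back compositing
                alpha += (1.0 - alpha) * a
                steps += 1
                if alpha >= opacity_threshold:           # early ray termination
                    break
            return color, alpha, steps

        # A ray that hits dense material early: most of the 200 samples are skipped.
        samples = [(1.0, 0.4)] * 200
        print(composite_ray(samples))                    # terminates after ~10 steps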

  • Template-Based 3D Model Fitting Using Dual-Domain Relaxation

    Publication Year: 2011 , Page(s): 1178 - 1190
    Cited by:  Papers (3)
    Multimedia
    PDF (2048 KB) | HTML

    We introduce a template fitting method for 3D surface meshes. A given template mesh is deformed to closely approximate the input 3D geometry. The connectivity of the deformed template model is automatically adjusted to facilitate the geometric fitting and to ensure high quality of the mesh elements. The template fitting process utilizes a specially tailored Laplacian processing framework: in the first, coarse fitting stage we approximate the input geometry with a linearized biharmonic surface (a variant of LS-mesh), and then the fine geometric detail is fitted further using iterative Laplacian editing with reliable correspondence constraints and a local surface flattening mechanism to avoid foldovers. The latter step is performed in the dual mesh domain, which is shown to encourage near-equilateral mesh elements and to significantly reduce the occurrence of triangle foldovers, a well-known problem in mesh fitting. To experimentally evaluate our approach, we compare our method with relevant state-of-the-art techniques and confirm significant improvements in the results. In addition, we demonstrate the usefulness of our approach for consistent surface parameterization (also known as cross-parameterization).
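
    A minimal sketch (a 1D analogue of the Laplacian least-squares idea behind LS-meshes, not the paper's template-fitting pipeline or its dual-domain relaxation; the graph, anchors, and weights are made up): minimizing the uniform graph Laplacian energy subject to soft positional constraints produces a smooth fit through the anchors.

        import numpy as np

        n = 50                                           # vertices of a path graph
        L = np.zeros((n, n))
        for i in range(1, n - 1):                        # uniform (umbrella) Laplacian rows
            L[i, i - 1], L[i, i], L[i, i + 1] = -0.5, 1.0, -0.5

        anchors = {0: 0.0, 24: 1.0, 49: 0.0}             # vertex index -> target height
        w = 10.0                                         # soft-constraint weight
        A_rows, b_rows = [L], [np.zeros(n)]
        for idx, target in anchors.items():              # append one weighted row per anchor
            row = np.zeros(n)
            row[idx] = w
            A_rows.append(row[None, :])
            b_rows.append(np.array([w * target]))
        A, b = np.vstack(A_rows), np.concatenate(b_rows)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)        # heights of the fitted smooth curve
        print(x[[0, 12, 24, 36, 49]])                    # anchors are approximately interpolated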

  • Stay Connected with the IEEE Computer Society [advertisement]

    Publication Year: 2011 , Page(s): 1191
    PDF (292 KB)
    Freely Available from IEEE
  • Distinguish yourself with the CSDP [advertisement]

    Publication Year: 2011 , Page(s): 1192
    PDF (426 KB)
    Freely Available from IEEE
  • TVCG Information for authors

    Publication Year: 2011 , Page(s): c3
    PDF (96 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2011 , Page(s): c4
    PDF (117 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu