
Visualization and Computer Graphics, IEEE Transactions on

Issue 2 • March-April 2006

  • [Front cover]

    Page(s): c1
    PDF (228 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (90 KB)
    Freely Available from IEEE
  • Guest Editors' Introduction: Special Section on ACM VRST

    Page(s): 129 - 130
    PDF (84 KB)
    Freely Available from IEEE
  • Real-time animation of complex hairstyles

    Page(s): 131 - 142
    PDF (3355 KB) | HTML

    True real-time animation of complex hairstyles on animated characters is the goal of this work. The challenge is to build a mechanical model of the hairstyle that is fast enough for real-time performance while preserving the particular behavior of the hair medium and remaining versatile enough to simulate any kind of complex hairstyle. Rather than building a complex mechanical model directly related to the structure of the hair strands, we take advantage of a volume free-form deformation scheme. We detail the construction of an efficient lattice mechanical deformation model which represents the volume behavior of the hair strands. The lattice is deformed as a particle system using state-of-the-art numerical methods and animates the hairs using quadratic B-spline interpolation. The hairstyle reacts to the body skin through collisions with a metaball-based approximation. The model is highly scalable and allows hairstyles of any complexity to be simulated in any rendering context with the appropriate trade-off between accuracy and computation speed, fitting the needs of level-of-detail optimization schemes.

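    The lattice-to-hair mapping rests on quadratic B-spline interpolation. Below is a minimal sketch of that ingredient, a uniform quadratic B-spline evaluated over deformed lattice nodes in one dimension (function names are hypothetical, not the authors' code):

```python
import numpy as np

def quadratic_bspline_weights(t):
    """Uniform quadratic B-spline weights for local parameter t in [0, 1):
    a point between lattice nodes is a blend of the three nearest nodes."""
    return np.array([
        0.5 * (1.0 - t) ** 2,   # weight of node i-1
        0.5 + t * (1.0 - t),    # weight of node i
        0.5 * t ** 2,           # weight of node i+1
    ])

def interpolate(lattice, x):
    """Evaluate a deformed 1D lattice (array of node positions) at x."""
    i = int(np.floor(x))
    w = quadratic_bspline_weights(x - i)
    return w @ lattice[i - 1 : i + 2]

# A hair vertex driven by its three neighboring lattice nodes:
nodes = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # deformed node positions
print(interpolate(nodes, 2.3))
```
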
  • Fast and reliable collision culling using graphics hardware

    Page(s): 143 - 154
    PDF (1623 KB) | HTML

    We present a reliable culling algorithm that enables fast and accurate collision detection between triangulated models in a complex environment. Our algorithm performs fast visibility queries on the GPUs for eliminating a subset of primitives that are not in close proximity. In order to overcome the accuracy problems caused by the limited viewport resolution, we compute the Minkowski sum of each primitive with a sphere and perform reliable 2.5D overlap tests between the primitives. We are able to achieve more effective collision culling as compared to prior object-space culling algorithms. We integrate our culling algorithm with CULLIDE and use it to perform reliable GPU-based collision queries at interactive rates on all types of models, including nonmanifold geometry, deformable models, and breaking objects.

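    The Minkowski-sum step can be pictured with a CPU stand-in: inflating each primitive's bounds by the error-bound radius makes the overlap test conservative, so no pair in close proximity is wrongly culled. A sketch under that assumption (the paper performs the reliable tests with GPU visibility queries, not with bounding boxes):

```python
import numpy as np

def inflated_aabb(vertices, radius):
    """Bounding box of a triangle's Minkowski sum with a sphere of the
    given radius; inflating makes the overlap test conservative."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0) - radius, v.max(axis=0) + radius

def overlap(a, b):
    """Axis-aligned overlap test between two (min, max) boxes."""
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

tri_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri_b = [(1.05, 0, 0.02), (2, 0, 0), (1, 1, 0)]
r = 0.05  # error-bound radius (an assumed value)
print(overlap(inflated_aabb(tri_a, r), inflated_aabb(tri_b, r)))  # True: keep pair
```
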
  • Scanning scene tunnel for city traversing

    Page(s): 155 - 167
    PDF (3441 KB) | HTML

    This paper proposes a visual representation named the scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and a unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, and scene scanning to image integration. The scene tunnels cover high buildings, the ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel benefits texture mapping onto urban models. We analyze the shape characteristics in the scene tunnels for designing visualization algorithms. After combining this with a global panorama and forward image caps, the capped scene tunnels can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversing on the Internet and mobile devices.

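    The scanning step amounts to stacking one pixel column (a slit) per frame as the vehicle moves. A toy sketch of this slit accumulation (array shapes and names are illustrative):

```python
import numpy as np

def scan_strip(frames, slit_x):
    """Stack one pixel column per video frame into a route image.
    frames: iterable of HxWx3 arrays from a camera on a moving vehicle;
    slit_x: the column sampled by the scanning plane. A scene tunnel
    combines several such strips from slits aimed in different directions."""
    return np.stack([f[:, slit_x, :] for f in frames], axis=1)

# 100 dummy 64x128 frames -> a 64x100 strip.
frames = [np.random.randint(0, 255, (64, 128, 3), dtype=np.uint8)
          for _ in range(100)]
print(scan_strip(frames, slit_x=64).shape)  # (64, 100, 3)
```
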
  • Electrostatic tactile display with thin film slider and its application to tactile telepresentation systems

    Page(s): 168 - 177
    PDF (2310 KB) | HTML

    A new electrostatic tactile display is proposed to realize compact tactile display devices that can be incorporated into virtual reality systems. The tactile display of this study consists of a thin conductive film slider with stator electrodes that excite electrostatic forces. Users of the device experience tactile texture sensations by moving the slider with their fingers. The display operates by applying two-phase cyclic voltage patterns to the electrodes. The display is incorporated into a tactile telepresentation system to realize exploration of remote surface textures with real-time tactile feedback. In the system, a PVDF tactile sensor and a DSP controller automatically generate voltage patterns to present surface texture sensations through the tactile display. The sensor, in synchronization with finger motion on the tactile display, scans a texture sample and outputs information about the sample surface. The information is processed by the DSP and fed back to the tactile display in real time. The tactile telepresentation system was evaluated in texture discrimination tests and demonstrated a 79 percent correct-answer ratio. A transparent electrostatic tactile display is also reported, in which the tactile display is combined with an LCD to realize a visual-tactile integrated display system.

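    The drive signal is a pair of cyclic voltage trains applied to alternating electrode groups. A sketch of a complementary two-phase pattern (amplitude and period are illustrative, not the device's actual waveform):

```python
import numpy as np

def two_phase_pattern(n_samples, period, amplitude):
    """Two cyclic voltage trains, 180 degrees out of phase, for the two
    electrode groups (values are illustrative only)."""
    t = np.arange(n_samples)
    phase_a = amplitude * (((t // (period // 2)) % 2) == 0)
    phase_b = amplitude - phase_a   # complementary phase
    return phase_a, phase_b

a, b = two_phase_pattern(n_samples=16, period=8, amplitude=120.0)
print(a)
print(b)
```
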
  • Transpost: a novel approach to the display and transmission of 360 degrees-viewable 3D solid images

    Page(s): 178 - 185
    PDF (870 KB) | HTML

    Three-dimensional displays are drawing attention as next-generation devices. Some techniques which can reproduce three-dimensional images prepared in advance have already been developed. However, technology for the transmission of 3D moving pictures in real time is yet to be achieved. In this paper, we present a novel method for 360-degree-viewable 3D displays and the Transpost system in which we implement the method. The basic concept of our system is to project multiple images of the object, taken from different angles, onto a spinning screen. The key to the method is projection of the images onto a directionally reflective screen with a limited viewing angle. The images are reconstructed to give the viewer a three-dimensional image of the object displayed on the screen. The display system can present computer-graphics pictures, live pictures, and movies. Furthermore, the reverse of the optical process used in the display system can be used to record images of the subject from multiple directions. The images can then be transmitted to the display in real time. We have developed prototypes of a 3D display and a 3D human-image transmission system. Our preliminary working prototypes demonstrate new possibilities of expression and forms of communication.

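    Because the directional screen is visible only near one azimuth at a time, the projector has to cycle through the per-direction images as the screen spins. A sketch of that view-selection logic (the number of views is an assumed figure, not the prototype's):

```python
def view_for_angle(screen_angle_deg, n_views=24):
    """Index of the precaptured view to project for the current screen
    angle; the projector cycles through n_views images per revolution."""
    step = 360.0 / n_views
    return int(round(screen_angle_deg / step)) % n_views

for angle in (0.0, 14.0, 16.0, 359.0):
    print(angle, "->", view_for_angle(angle))
```
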
  • Interactive display of isosurfaces with global illumination

    Page(s): 186 - 196
    PDF (3642 KB) | HTML

    In many applications, volumetric data sets are examined by displaying isosurfaces, surfaces where the data, or some function of the data, takes on a given value. Interactive applications typically use local lighting models to render such surfaces. This work introduces a method to precompute or lazily compute global illumination to improve interactive isosurface renderings. The precomputed illumination resides in a separate volume and includes direct light, shadows, and interreflections. Using this volume, interactive globally illuminated renderings of isosurfaces become feasible while still allowing dynamic manipulation of lighting, viewpoint, and isovalue.

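    At render time, the stored illumination is simply sampled at each shaded point. A sketch of a trilinear lookup into such an illumination volume (a simplified stand-in for the paper's renderer):

```python
import numpy as np

def sample_illumination(vol, p):
    """Trilinear lookup of a precomputed illumination volume at point p
    (voxel coordinates); the result scales the local surface shading."""
    i0 = np.floor(p).astype(int)
    f = p - i0
    acc = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                acc += w * vol[i0[2] + dz, i0[1] + dy, i0[0] + dx]
    return acc

illum = np.random.rand(32, 32, 32)   # stand-in for the precomputed volume
print(sample_illumination(illum, np.array([10.3, 4.7, 20.1])))
```
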
  • Geometry-dependent lighting

    Page(s): 197 - 207
    PDF (2212 KB) | HTML

    In this paper, we introduce geometry-dependent lighting that allows lighting parameters to be defined independently and possibly discrepantly over an object or scene based on the local geometry. We present and discuss light collages, a lighting design system with geometry-dependent lights for effective feature-enhanced visualization. Our algorithm segments the objects into local surface patches and places lights that are locally consistent but globally discrepant to enhance the perception of shape. We use spherical harmonics for efficiently storing and computing light placement and assignment. We also outline a method to find the minimal number of light sources sufficient to illuminate an object well with our globally discrepant lighting approach.

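    Storing light placement in spherical harmonics means evaluating a small real SH basis per direction. A sketch of the first two bands and a least-squares projection onto them (not the authors' pipeline):

```python
import numpy as np

def sh_basis_l1(d):
    """First two bands (four coefficients) of the real spherical
    harmonic basis for a unit direction d."""
    x, y, z = d
    return [0.282095,           # Y_0^0
            0.488603 * y,       # Y_1^-1
            0.488603 * z,       # Y_1^0
            0.488603 * x]       # Y_1^1

def project_light(directions, intensities):
    """Least-squares projection of sampled intensities onto the basis."""
    A = np.array([sh_basis_l1(d) for d in directions])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    return coeffs

dirs = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 0, -1)]
print(project_light(dirs, [1.0, 0.2, 0.2, 0.0]))
```
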
  • Visualization of boundaries in volumetric data sets using LH histograms

    Page(s): 208 - 218
    PDF (1847 KB) | HTML

    A crucial step in volume rendering is the design of transfer functions that highlight those aspects of the volume data that are of interest to the user. For many applications, boundaries carry most of the relevant information. Reliable detection of boundaries is often hampered by limitations of the imaging process, such as blurring and noise. We present a method to identify the materials that form the boundaries. These materials are then used in a new domain that facilitates interactive and semiautomatic design of appropriate transfer functions. We also show how the obtained boundary information can be used in region-growing-based segmentation.

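    The LH construction assigns each boundary voxel the low (L) and high (H) values of the two materials it separates, found by tracking along the gradient in both directions; the (L, H) pairs of all voxels form the histogram. A simplified first-order sketch (step size and the toy volume are illustrative):

```python
import numpy as np

def lh_values(volume, seed, step=0.5, eps=1e-3, max_steps=200):
    """Track from a voxel against the gradient to the low value L and
    along it to the high value H (first-order tracking)."""
    grad = np.stack(np.gradient(volume.astype(float)), axis=-1)
    hi = np.array(volume.shape) - 1

    def track(p, sign):
        for _ in range(max_steps):
            i = tuple(np.clip(np.round(p).astype(int), 0, hi))
            g = grad[i]
            n = np.linalg.norm(g)
            if n < eps:            # gradient vanished: inside a material
                break
            p = p + sign * step * g / n
        return volume[i]

    p0 = np.asarray(seed, dtype=float)
    return track(p0, -1.0), track(p0, +1.0)   # (L, H)

# Toy volume: materials 0 and 100 with a blurred boundary along z.
ramp = np.concatenate([np.zeros(10), np.linspace(0.0, 100.0, 12), np.full(10, 100.0)])
vol = np.tile(ramp, (32, 32, 1))
print(lh_values(vol, seed=(16, 16, 16)))   # approximately (0.0, 100.0)
```
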
  • Improving contact realism through event-based haptic feedback

    Page(s): 219 - 230
    PDF (4232 KB) | HTML

    Tapping on surfaces in a typical virtual environment feels like contact with soft foam rather than a hard object. The realism of such interactions can be dramatically improved by superimposing event-based, high-frequency transient forces over traditional position-based feedback. When scaled by impact velocity, hand-tuned pulses and decaying sinusoids produce haptic cues that resemble those experienced during real impacts. Our new method for generating appropriate transients inverts a dynamic model of the haptic device to determine the motor forces required to create prerecorded acceleration profiles at the user's fingertips. After development, the event-based haptic paradigm and the method of acceleration matching were evaluated in a carefully controlled user study. Sixteen individuals blindly tapped on nine virtual and three real samples, rating the degree to which each felt like real wood. Event-based feedback achieved significantly higher realism ratings than the traditional rendering method. The display of transient signals made virtual objects feel similar to a real sample of wood on a foam substrate, while position feedback alone received ratings similar to those of foam. This work provides an important new avenue for increasing the realism of contact in haptic interactions.

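    The rendering rule itself is compact: a conventional position-based spring force plus a velocity-scaled, exponentially decaying sinusoid fired at the moment of impact. A sketch with illustrative constants (not the paper's hand-tuned values):

```python
import math

def contact_force(penetration, impact_velocity, t_since_impact,
                  k=800.0, amp=0.12, decay=90.0, freq_hz=300.0):
    """Spring term plus an event-based transient; all constants here
    are illustrative placeholders."""
    spring = k * max(penetration, 0.0)                 # position-based term
    transient = (impact_velocity * amp *
                 math.exp(-decay * t_since_impact) *
                 math.sin(2.0 * math.pi * freq_hz * t_since_impact))
    return spring + transient

# Force over the first few milliseconds after a 0.3 m/s tap.
for ms in range(6):
    print(ms, "ms:", round(contact_force(0.001, 0.3, ms / 1000.0), 4))
```
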
  • Artifacts caused by simplicial subdivision

    Page(s): 231 - 242
    PDF (3056 KB) | HTML

    We review schemes for dividing cubic cells into simplices (tetrahedra) for interpolating from sampled data to ℝ³, present visual and geometric artifacts generated in isosurfaces and volume renderings, and discuss how these artifacts relate to the filter kernels corresponding to the subdivision schemes.

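    One of the reviewed schemes splits each cube into six tetrahedra sharing the main diagonal (the Freudenthal/Kuhn subdivision). A sketch that enumerates those tetrahedra by cube-corner index:

```python
from itertools import permutations

def freudenthal_tets():
    """Six-tetrahedra subdivision of a cube whose corners are indexed
    by bit pattern (x + 2y + 4z): each axis permutation yields one
    tetrahedron along the diagonal from corner 0 to corner 7."""
    axis_bit = {'x': 1, 'y': 2, 'z': 4}
    tets = []
    for order in permutations('xyz'):
        corner, tet = 0, [0]
        for axis in order:
            corner |= axis_bit[axis]
            tet.append(corner)
        tets.append(tuple(tet))
    return tets

for tet in freudenthal_tets():
    print(tet)   # six tetrahedra, all sharing the 0-7 diagonal
```
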
  • Discrete Sibson interpolation

    Page(s): 243 - 253
    PDF (2081 KB) | HTML

    Natural-neighbor interpolation methods, such as Sibson's method, are well-known schemes for multivariate data fitting and reconstruction. Despite its many desirable properties, Sibson's method is computationally expensive and difficult to implement, especially when applied to higher-dimensional data. The main reason for both problems is the method's implementation based on a Voronoi diagram of all data points. We describe a discrete approach to evaluating Sibson's interpolant on a regular grid, based solely on finding nearest neighbors and rendering and blending d-dimensional spheres. Our approach does not require us to construct an explicit Voronoi diagram, is easily implemented using commodity three-dimensional graphics hardware, leads to a significant speed increase compared to traditional approaches, and generalizes easily to higher dimensions. For large scattered data sets, we achieve two-dimensional (2D) interpolation at interactive rates and three-dimensional (3D) interpolation with computation times of a few seconds.

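    The discrete evaluation reads as a scatter: every grid point splats a disc whose radius is its distance to the nearest site, carrying that site's value, and blending the splats (sum divided by count) yields the interpolant. A 2D software sketch of this idea (the paper performs the blending on graphics hardware):

```python
import numpy as np
from scipy.spatial import cKDTree

def discrete_sibson(sites, values, shape):
    """Discrete Sibson interpolation of scattered 2D data on a grid."""
    grid = np.stack(np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                                indexing='ij'), axis=-1).reshape(-1, 2)
    dist, idx = cKDTree(sites).query(grid)   # nearest site per grid point
    acc, cnt = np.zeros(shape), np.zeros(shape)
    ii, jj = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing='ij')
    for p, r, v in zip(grid, dist, values[idx]):
        mask = (ii - p[0]) ** 2 + (jj - p[1]) ** 2 <= r * r
        acc[mask] += v                       # splat the nearest site's value
        cnt[mask] += 1
    return acc / np.maximum(cnt, 1)

sites = np.array([[8.0, 8.0], [24.0, 24.0], [8.0, 24.0]])
vals = np.array([0.0, 1.0, 0.5])
img = discrete_sibson(sites, vals, (32, 32))
print(img[8, 8], img[16, 16])
```
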
  • Noise-resistant fitting for spherical harmonics

    Page(s): 254 - 265
    PDF (3088 KB) | HTML

    Spherical harmonic (SH) basis functions have been widely used for representing spherical functions in modeling various illumination properties. They can compactly represent low-frequency spherical functions. However, when the unconstrained least-squares method is used for estimating the SH coefficients of a hemispherical function, the magnitude of these SH coefficients can be very large. Hence, the rendering result is very sensitive to quantization noise (introduced by modern texture compression like S3TC, the IEEE half-float data type on GPUs, or other lossy compression methods) in these SH coefficients. Our experiments show that, as the precision of the SH coefficients is reduced, the rendered images may exhibit annoying visual artifacts. To reduce the noise sensitivity of the SH coefficients, this paper first discusses how the magnitude of SH coefficients affects the rendering result when there is quantization noise. Then, two fast fitting methods for estimating noise-resistant SH coefficients are proposed. They can effectively control the magnitude of the estimated SH coefficients and, hence, suppress the rendering artifacts. Both statistical and visual results confirm our theory.

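    One standard way to bound coefficient magnitude in a least-squares fit is Tikhonov (ridge) regularization; the paper's two fitting methods are specific to hemispherical functions and are not reproduced here. A sketch of the damping effect on a hemispherical fit (basis order, samples, and the damping weight are all illustrative):

```python
import numpy as np

def sh4(d):
    """First two real SH bands for a unit direction d."""
    x, y, z = d
    return [0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x]

rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs[:, 2] = np.abs(dirs[:, 2])            # samples on the upper hemisphere only

A = np.array([sh4(d) for d in dirs])
b = dirs[:, 2] ** 8                         # a narrow hemispherical lobe

plain, *_ = np.linalg.lstsq(A, b, rcond=None)
lam = 0.05                                  # damping weight (illustrative)
damped = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ b)
print(np.linalg.norm(plain), np.linalg.norm(damped))   # damped norm is smaller
```
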
  • Accurate visible speech synthesis based on concatenating variable length motion capture data

    Page(s): 266 - 276
    PDF (1289 KB) | HTML

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated, and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on this approach. The system is currently used in more than 60 kindergarten through third-grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The results show that the proposed approach is accurate and powerful for visible speech synthesis.

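    Searching for optimal variable-length unit sequences is naturally a dynamic program over per-slot target costs and pairwise concatenation costs. A toy sketch of that search (the cost functions and candidate units are placeholders, not the paper's corpus features):

```python
def best_unit_sequence(candidates, target_cost, concat_cost):
    """Choose one unit per slot minimizing summed target costs plus
    concatenation costs between consecutive units."""
    best = [(target_cost(0, u), [u]) for u in candidates[0]]
    for t in range(1, len(candidates)):
        best = [min((c + concat_cost(path[-1], u) + target_cost(t, u), path + [u])
                    for c, path in best)
                for u in candidates[t]]
    return min(best)

# Prefer smooth transitions between units.
cost, path = best_unit_sequence(
    [[1, 5], [2, 6], [3, 9]],
    target_cost=lambda t, u: 0.0,
    concat_cost=lambda a, b: abs(a - b),
)
print(cost, path)   # 2.0 [1, 2, 3]
```
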
  • Optical merger of direct vision with virtual images for scaled teleoperation

    Page(s): 277 - 285
    PDF (1296 KB) | HTML

    Scaled teleoperation is increasingly prevalent in medicine, as well as in other applications of robotics. Visual feedback in such systems is essential and should make maximal use of natural hand-eye coordination. This paper describes a new method of visual feedback for scaled teleoperation in which the operator manipulates the handle of a remote tool in the presence of a registered virtual image of the target in real time. The method adapts a concept already used successfully in a new medical device called the sonic flashlight, which permits direct in situ visualization of ultrasound during invasive procedures. The sonic flashlight uses a flat-panel monitor and a half-silvered mirror to merge the visual outer surface of a patient with a simultaneous ultrasound scan of the patient's interior. Adapting the concept to scaled teleoperation involves removing the imaging device and the target to a remote location and adding a master-slave control device. This permits the operator to see his hands, along with what appears to be the tool, and the target, merged in a workspace that preserves natural hand-eye coordination. Three functioning prototypes are described, one based on ultrasound and two on light microscopy. The limitations and potential of the new approach are discussed.

  • [Advertisement]

    Page(s): 286
    PDF (825 KB)
    Freely Available from IEEE
  • [Advertisement]

    Page(s): 287
    PDF (725 KB)
    Freely Available from IEEE
  • [Advertisement]

    Page(s): 288
    PDF (357 KB)
    Freely Available from IEEE
  • TVCG Information for authors

    Page(s): c3
    PDF (90 KB)
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (228 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina