IEEE Transactions on Visualization and Computer Graphics

Issue 2 • March-April 2007


Displaying Results 1 - 23 of 23
  • [Front cover]

    Publication Year: 2007, Page(s): c1
    PDF (323 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2007, Page(s): c2
    PDF (86 KB)
    Freely Available from IEEE
  • Volume Splitting and Its Applications

    Publication Year: 2007, Page(s): 193 - 203
    Cited by: Papers (2)
    PDF (2722 KB) | HTML

    Splitting a volumetric object is a useful operation in volume graphics and its applications, but is not widely supported by existing systems for volume-based modeling and rendering. In this paper, we present an investigation into two main algorithmic approaches, namely, explicit and implicit splitting, for modeling and rendering splitting actions. We consider a generalized notion based on scalar fields, which encompasses discrete specifications (e.g., volume data sets) as well as procedural specifications (e.g., hypertextures) of volumetric objects. We examine the correctness, effectiveness, efficiency, and deficiencies of each approach in specifying and controlling splitting spatially and temporally. We propose methods for implementing these approaches and for overcoming their deficiencies. We present a modeling tool for creating specifications of splitting functions, and describe the use of volume scene graphs for facilitating direct rendering of volume splitting. We demonstrate the use of these approaches with examples of volume visualization, medical illustration, volume animation, and special effects. (An illustrative sketch follows.)

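    The abstract above contrasts explicit and implicit approaches to splitting a volumetric object defined by a scalar field. As a rough, hypothetical illustration of the implicit idea only (not the authors' formulation), the Python sketch below carves a simple sphere field into two halves with a time-dependent separating plane; the field, the split plane, and the gap parameter are invented for illustration.

    import numpy as np

    def sphere_field(x, y, z, r=0.4):
        # Scalar field of a sphere: positive inside, negative outside.
        return r - np.sqrt(x**2 + y**2 + z**2)

    def split_halves(x, y, z, t, gap=0.05):
        # Illustrative implicit split: intersect the object field with two
        # half-spaces that move apart over time t (hypothetical plane x = 0).
        f = sphere_field(x, y, z)
        left = np.minimum(f, -(x + t * gap))   # CSG-style intersection via min
        right = np.minimum(f, (x - t * gap))
        return left, right

    # Sample the fields on a small grid and count voxels inside each half.
    lin = np.linspace(-0.5, 0.5, 64)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    l, r = split_halves(X, Y, Z, t=1.0)
    print((l > 0).sum(), (r > 0).sum())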
  • Evaluation of a Low-Cost 3D Sound System for Immersive Virtual Reality Training Systems

    Publication Year: 2007, Page(s): 204 - 212
    Cited by: Papers (3)
    PDF (1305 KB) | HTML

    Since head mounted displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "virtual training systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  • A Survey on Hair Modeling: Styling, Simulation, and Rendering

    Publication Year: 2007, Page(s): 213 - 234
    Cited by: Papers (22) | Patents (11)
    PDF (2849 KB) | HTML

    Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.

  • Image-Based Color Ink Diffusion Rendering

    Publication Year: 2007, Page(s): 235 - 246
    Cited by: Papers (2)
    PDF (2596 KB) | HTML

    This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical basis to simulate the phenomenon of color colloidal ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically-based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance. (An illustrative sketch follows.)

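    The color mixing phase above relies on Kubelka-Munk (KM) theory to approximate one pigment painted over another. The sketch below shows only the standard two-flux KM layer-compositing relation, evaluated per color channel; the reflectance and transmittance values are invented for illustration and are not taken from the paper.

    import numpy as np

    def km_composite(R_layer, T_layer, R_bg):
        # Standard Kubelka-Munk compositing of a translucent pigment layer
        # (reflectance R_layer, transmittance T_layer) over a background of
        # reflectance R_bg, evaluated independently per color channel.
        return R_layer + (T_layer**2 * R_bg) / (1.0 - R_layer * R_bg)

    # Hypothetical per-RGB-channel values: a thin blue ink over yellowish paper.
    R_ink = np.array([0.05, 0.10, 0.40])
    T_ink = np.array([0.30, 0.40, 0.55])
    R_paper = np.array([0.80, 0.75, 0.20])
    print(km_composite(R_ink, T_ink, R_paper))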
  • A Streaming-Based Solution for Remote Visualization of 3D Graphics on Mobile Devices

    Publication Year: 2007, Page(s): 247 - 260
    Cited by: Papers (31)
    PDF (2277 KB) | HTML

    Mobile devices such as personal digital assistants, tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualizing complex 3D models is still hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, personal digital assistants (PDAs), and tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency, bit rate and quality of the generated stream, screen resolution, and the number of frames per second displayed.

  • Pose-Oblivious Shape Signature

    Publication Year: 2007, Page(s): 261 - 271
    Cited by: Papers (19)
    PDF (6613 KB) | HTML

    A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape, such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to topology changes of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distributions of two scalar functions defined on the boundary surface of the 3D shape. The first is a novel function called the local-diameter function, which measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function, which measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models. (An illustrative sketch follows.)

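    Of the two scalar functions described above, the centricity function (the average geodesic distance from a vertex to all other vertices) is easy to approximate. The sketch below uses shortest paths over the mesh edge graph as a stand-in for true geodesic distance; the tiny two-triangle mesh is a made-up example, and the paper's actual computation may differ.

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import dijkstra

    def centricity(vertices, edges):
        # For each vertex, the average edge-path distance to all other
        # vertices (an approximation of average geodesic distance).
        i, j = edges[:, 0], edges[:, 1]
        w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
        n = len(vertices)
        graph = coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
        dist = dijkstra(graph, directed=False)
        return dist.sum(axis=1) / (n - 1)

    # Toy example: a unit square made of two triangles (4 vertices, 5 edges).
    V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    E = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
    print(centricity(V, E))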
  • Caustics Mapping: An Image-Space Technique for Real-Time Caustics

    Publication Year: 2007, Page(s): 272 - 280
    Cited by: Papers (7)
    PDF (1367 KB) | HTML

    In this paper, we present a simple and practical technique for real-time rendering of caustics from reflective and refractive objects. Our algorithm, conceptually similar to shadow mapping, consists of two main parts: creation of a caustic map texture, and utilization of the map to render caustics onto nonshiny surfaces. Our approach avoids performing any expensive geometric tests, such as ray-object intersection, and involves no precomputation, both of which are common features in previous work. The algorithm is well suited for the standard rasterization pipeline and runs entirely on the graphics hardware. (An illustrative sketch follows.)

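    The technique above is a two-pass, shadow-mapping-like scheme: build a caustic map texture from the light's point of view, then project it onto receivers. The CPU sketch below only mimics the spirit of the first pass: refract a grid of light rays at a made-up wavy interface and splat the floor hit points into a small 2D texture. The surface shape, indices of refraction, and resolutions are assumptions; the actual technique runs on the GPU rasterization pipeline.

    import numpy as np

    def refract(d, n, eta):
        # Snell refraction of unit direction d at unit normal n (facing the
        # incident side), eta = n_incident / n_transmitted; None on TIR.
        cos_i = -np.dot(d, n)
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        return None if k < 0.0 else eta * d + (eta * cos_i - np.sqrt(k)) * n

    def caustic_map(res=32, amp=0.05, eta=1.0 / 1.33, floor_z=-1.0):
        # Shoot parallel light rays down onto a made-up wavy interface
        # z = amp*(sin(4x) + cos(4y)), refract them, intersect the floor
        # plane, and accumulate the hit points into a 'caustic map' texture.
        tex = np.zeros((res, res))
        d = np.array([0.0, 0.0, -1.0])
        for x in np.linspace(-1, 1, res):
            for y in np.linspace(-1, 1, res):
                n = np.array([-4 * amp * np.cos(4 * x), 4 * amp * np.sin(4 * y), 1.0])
                n /= np.linalg.norm(n)
                t = refract(d, n, eta)
                if t is None or t[2] >= 0.0:
                    continue
                s = (floor_z - amp * (np.sin(4 * x) + np.cos(4 * y))) / t[2]
                hx, hy = x + s * t[0], y + s * t[1]
                ix, iy = int((hx + 2) / 4 * res), int((hy + 2) / 4 * res)
                if 0 <= ix < res and 0 <= iy < res:
                    tex[iy, ix] += 1.0        # accumulate light energy
        return tex

    print(caustic_map().max())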
  • Personality and Emotion-Based High-Level Control of Affective Story Characters

    Publication Year: 2007, Page(s): 281 - 293
    Cited by: Papers (11)
    PDF (1648 KB) | HTML

    Human emotional behavior, personality, and body language are the essential elements in the recognition of a believable synthetic story character. This paper presents an approach using story scripts and action descriptions, in a form similar to the content description of storyboards, to predict specific personality and emotional states. By adopting the Abridged Big Five Circumplex (AB5C) Model of personality from the study of psychology as a basis for a computational model, we construct a hierarchical fuzzy rule-based system to facilitate the personality and emotion control of the body language of a dynamic story character. The story character can consistently perform specific postures and gestures based on his/her personality type. Story designers can devise a story context in the form of our story interface which predictably motivates personality and emotion values to drive the appropriate movements of the story characters. Our system takes advantage of relevant knowledge described by psychologists and researchers of storytelling, nonverbal communication, and human movement. Our ultimate goal is to facilitate the high-level control of a synthetic character. (An illustrative sketch follows.)

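    The system above drives body language from personality and emotion values through a hierarchical fuzzy rule base. The toy sketch below shows a single made-up rule (extraversion to gesture amplitude) with triangular memberships and weighted-average defuzzification; every rule shape and constant here is hypothetical, and the paper's AB5C-based system is far richer.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function peaking at b on support [a, c].
        return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

    def gesture_amplitude(extraversion):
        # IF extraversion IS low THEN amplitude IS small;
        # IF extraversion IS high THEN amplitude IS large (made-up rules).
        mu_low = tri(extraversion, -0.2, 0.0, 0.6)
        mu_high = tri(extraversion, 0.4, 1.0, 1.2)
        small, large = 0.2, 1.0                 # representative output values
        return (mu_low * small + mu_high * large) / (mu_low + mu_high + 1e-9)

    for e in (0.1, 0.5, 0.9):
        print(e, round(float(gesture_amplitude(e)), 3))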
  • Graph Visualization Techniques for Web Clustering Engines

    Publication Year: 2007, Page(s): 294 - 304
    Cited by: Papers (16)
    PDF (1474 KB) | HTML

    One of the most challenging issues in mining information from the World Wide Web is the design of systems that present the data to the end user by clustering them into meaningful semantic categories. We show that the analysis of the results of a clustering engine can benefit significantly from enhanced graph drawing and visualization techniques. We propose a graph-based user interface for Web clustering engines that makes it possible for the user to explore and visualize the different semantic categories and their relationships at the desired level of detail.

  • TopoLayout: Multilevel Graph Layout by Topological Features

    Publication Year: 2007, Page(s): 305 - 317
    Cited by: Papers (24)
    PDF (2645 KB) | HTML

    We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depend on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets.

  • Interactive Collision Detection for Deformable Models Using Streaming AABBs

    Publication Year: 2007, Page(s): 318 - 329
    Cited by: Papers (13)
    PDF (1616 KB) | HTML

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming encoding/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel dual-core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30~100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE, and observed about three times performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm and observed about two times improvement. (An illustrative sketch follows.)

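    The core streaming pass above performs massively parallel pairwise overlap tests between AABB streams. The NumPy sketch below is a CPU stand-in for that pass only: it returns the index pairs of overlapping boxes, which would then go to exact triangle-level tests. The brute-force all-pairs formulation and the toy data are illustrative, not the paper's GPU implementation.

    import numpy as np

    def aabb_overlap_pairs(mins_a, maxs_a, mins_b, maxs_b):
        # Boxes overlap iff they overlap on every axis; inputs are (n, 3)
        # and (m, 3) arrays of box corners.
        sep = (maxs_a[:, None, :] < mins_b[None, :, :]) | \
              (maxs_b[None, :, :] < mins_a[:, None, :])
        return np.argwhere(~sep.any(axis=2))

    # Toy example: two small sets of boxes around random centers.
    rng = np.random.default_rng(0)
    ca, cb = rng.random((5, 3)), rng.random((4, 3))
    print(aabb_overlap_pairs(ca - 0.2, ca + 0.2, cb - 0.2, cb + 0.2))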
  • Topology-Controlled Volume Rendering

    Publication Year: 2007, Page(s): 330 - 341
    Cited by: Papers (29)
    PDF (1401 KB) | HTML

    Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are: 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs. (An illustrative sketch follows.)

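    Contribution 1) above assigns a separate transfer function to each contour-tree branch. The sketch below assumes the per-voxel branch labels have already been computed (contour-tree construction is not shown) and only illustrates applying a different transfer function per region; the two transfer functions are arbitrary examples.

    import numpy as np

    def render_colors(values, branch_id, transfer_functions):
        # Map each sample's scalar value to RGBA using the transfer function
        # of the contour-tree branch it belongs to (labels assumed given).
        rgba = np.zeros(values.shape + (4,))
        for bid, tf in transfer_functions.items():
            mask = branch_id == bid
            rgba[mask] = tf(values[mask])
        return rgba

    # Hypothetical 1D example with two branches and two transfer functions.
    vals = np.linspace(0.0, 1.0, 8)
    bids = np.array([0, 0, 0, 1, 1, 1, 0, 0])
    tfs = {
        0: lambda v: np.stack([v, 0 * v, 0 * v, v], axis=-1),            # reddish
        1: lambda v: np.stack([0 * v, 0 * v, v, 0.5 + 0 * v], axis=-1),  # bluish
    }
    print(render_colors(vals, bids, tfs)[3])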
  • Light Scattering from Filaments

    Publication Year: 2007, Page(s): 342 - 356
    Cited by: Papers (1) | Patents (1)
    PDF (1832 KB) | HTML

    Photorealistic visualization of a huge number of individual filaments, as in the case of hair, fur, or knitwear, is a challenging task: explicit rendering approaches for simulating radiance transfer at a filament become impracticable with respect to rendering performance, and it is also not obvious how to derive efficient scattering functions for different levels of (geometric) abstraction or how to deal with very complex scattering mechanisms. We present a novel uniform formalism for light scattering from filaments in terms of radiance, which we call the bidirectional fiber scattering distribution function (BFSDF). We show that previous specialized approaches, which have been developed in the context of hair rendering, can be seen as instances of the BFSDF. Similar to the role of the BSSRDF for surface scattering functions, the BFSDF can be seen as a general approach for light scattering from filaments, which is suitable for deriving approximations in a canonic and systematic way. For the frequent cases of distant light sources and observers, we deduce an efficient far field approximation (bidirectional curve scattering distribution function, BCSDF). We show that, on the basis of the BFSDF, parameters for common rendering techniques can be estimated in a non-ad-hoc, but physically-based way.

  • A Model and Framework for Visualization Exploration

    Publication Year: 2007, Page(s): 357 - 369
    Cited by: Papers (30)
    PDF (2645 KB) | HTML

    Visualization exploration is the process of extracting insight from data via interaction with visual depictions of that data. Visualization exploration is more than presentation; the interaction with both the data and its depiction is as important as the data and depiction themselves. Significant visualization research has focused on the generation of visualizations (the depiction); less effort has focused on the exploratory aspects of visualization (the process). However, without formal models of the process, visualization exploration sessions cannot be fully utilized to assist users and system designers. Toward this end, we introduce the P-Set model of visualization exploration for describing this process and a framework to encapsulate, share, and analyze visual explorations. In addition, systems utilizing the model and framework are more efficient as redundant exploration is avoided. Several examples drawn from visualization applications demonstrate these benefits. Taken together, the model and framework provide an effective means to exploit the information within the visual exploration process.

  • Fracturing Rigid Materials

    Publication Year: 2007, Page(s): 370 - 378
    Cited by: Papers (4)
    PDF (1604 KB) | HTML

    We propose a novel approach to fracturing (and denting) brittle materials. To avoid the computational burden imposed by the stringent time step restrictions of explicit methods, or by solving nonlinear systems of equations for implicit methods, we treat the material as a fully rigid body in the limit of infinite stiffness. In addition to a triangulated surface mesh and level set volume for collisions, each rigid body is outfitted with a tetrahedral mesh upon which finite element analysis can be carried out to provide a stress map for fracture criteria. We demonstrate that the commonly used stress criteria can lead to arbitrary fracture (especially for stiff materials), and instead propose incorporating a time-averaged stress directly into the FEM analysis. When objects fracture, the virtual node algorithm provides new triangle and tetrahedral meshes in a straightforward and robust fashion. Although each new rigid body can be rasterized to obtain a new level set, small shards can be difficult to accurately resolve. Therefore, we propose a novel collision handling technique for treating both rigid bodies and rigid body thin shells represented by only a triangle mesh. (An illustrative sketch follows.)

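    The fracture criterion above uses a time-averaged rather than instantaneous stress. One possible reading, sketched below, keeps an exponential running average of each element's stress tensor and fractures when its largest principal value exceeds a threshold; the averaging scheme, time constant, and threshold are assumptions, not the paper's.

    import numpy as np

    def update_time_averaged_stress(avg_stress, new_stress, dt, tau=0.05):
        # Blend the instantaneous stress tensor into a running exponential
        # average with (made-up) time constant tau.
        alpha = 1.0 - np.exp(-dt / tau)
        return (1.0 - alpha) * avg_stress + alpha * new_stress

    def should_fracture(avg_stress, threshold=1.0e6):
        # Compare the largest principal (eigen) stress of the averaged
        # tensor against a material threshold.
        return np.linalg.eigvalsh(avg_stress).max() > threshold

    # Toy example: a single element under steadily growing uniaxial stress.
    avg = np.zeros((3, 3))
    for step in range(10):
        inst = np.diag([2.0e5 * step, 0.0, 0.0])
        avg = update_time_averaged_stress(avg, inst, dt=1.0 / 60.0)
    print(should_fracture(avg))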
  • Aura 3D Textures

    Publication Year: 2007, Page(s): 379 - 389
    Cited by: Papers (8)
    PDF (2132 KB) | HTML

    This paper presents a new technique, called aura 3D textures, for generating solid textures based on input examples. Our method is fully automatic and requires no user interaction in the process. Given an input texture sample, our method first creates its aura matrix representations and then generates a solid texture by sampling the aura matrices of the input sample constrained in multiple view directions. Once the solid texture is generated, any given object can be textured by the solid texture. We evaluate the results of our method based on extensive user studies. Based on the evaluation results using human subjects, we conclude that our algorithm can generate faithful results of both stochastic and structural textures with an average success rate of 76.4 percent. Our experimental results also show that the new method outperforms Wei and Levoy's method and is comparable to that proposed by Jagnow et al. (2004). (An illustrative sketch follows.)

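    The method above samples aura matrices of the input exemplar in multiple view directions. The sketch below only shows the basic counting behind a 2D gray-level aura matrix: entry A[i, j] counts how often a pixel of quantized level i has a neighbor of level j over a given set of offsets. The quantization, offsets, and toy image are illustrative.

    import numpy as np

    def aura_matrix(img, levels, offsets=((0, 1), (1, 0), (0, -1), (-1, 0))):
        # img is assumed to be a 2D array with values in [0, 1].
        q = np.clip((img * levels).astype(int), 0, levels - 1)   # quantize
        A = np.zeros((levels, levels), dtype=np.int64)
        h, w = q.shape
        for dy, dx in offsets:
            ys0 = slice(max(0, -dy), h + min(0, -dy))   # source pixels
            xs0 = slice(max(0, -dx), w + min(0, -dx))
            ys1 = slice(max(0, dy), h + min(0, dy))     # their neighbors
            xs1 = slice(max(0, dx), w + min(0, dx))
            np.add.at(A, (q[ys0, xs0].ravel(), q[ys1, xs1].ravel()), 1)
        return A

    # Toy example: an 8x8 horizontal gradient quantized to 4 gray levels.
    img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
    print(aura_matrix(img, levels=4))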
  • Fast Animation of Lightning Using an Adaptive Mesh

    Publication Year: 2007, Page(s): 390 - 402
    PDF (1522 KB) | HTML

    We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering. (An illustrative sketch follows.)

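    The paper above extends the dielectric breakdown model (DBM), in which growth sites are chosen with probability proportional to the local potential raised to a power. The sketch below is a minimal uniform-grid DBM growth step using plain Jacobi relaxation for the Laplace solve, instead of the adaptive octree and ICCG solver described in the paper; the grid size, eta exponent, and iteration count are arbitrary.

    import numpy as np

    def dbm_growth_step(channel, eta=2.0, iters=200):
        # channel: boolean mask of cells in the lightning channel (potential 0);
        # the outer grid boundary is held at potential 1.
        phi = np.ones(channel.shape)
        for _ in range(iters):                          # Jacobi sweeps for Laplace
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                      phi[1:-1, :-2] + phi[1:-1, 2:])
            phi[channel] = 0.0
            phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 1.0
        # Candidate growth sites: empty 4-neighbors of the channel.
        grown = np.zeros_like(channel)
        grown[1:-1, 1:-1] = (channel[:-2, 1:-1] | channel[2:, 1:-1] |
                             channel[1:-1, :-2] | channel[1:-1, 2:])
        cand = np.argwhere(grown & ~channel)
        p = phi[cand[:, 0], cand[:, 1]] ** eta          # growth probability ~ phi^eta
        p /= p.sum()
        y, x = cand[np.random.default_rng(0).choice(len(cand), p=p)]
        new_channel = channel.copy()
        new_channel[y, x] = True
        return new_channel

    # Seed a channel at the grid center and grow it by one cell.
    c = np.zeros((33, 33), dtype=bool)
    c[16, 16] = True
    print(np.argwhere(dbm_growth_step(c)))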
  • A Framework for Holographic Scene Representation and Image Synthesis

    Publication Year: 2007, Page(s): 403 - 415
    Cited by: Papers (2) | Patents (1)
    PDF (1661 KB) | HTML

    We present a framework for the holographic representation and display of graphics objects. As opposed to traditional graphics representations, our approach reconstructs the light wave reflected or emitted by the original object directly from the underlying digital hologram. Our novel holographic graphics pipeline consists of several stages, including the digital recording of a full-parallax hologram, the reconstruction and propagation of its wavefront, and rendering of the final image onto conventional, framebuffer-based displays. The required view-dependent depth image is computed from the phase information inherently represented in the complex-valued wavefront. Our model also comprises a correct physical modeling of the camera, taking into account optical elements such as lens and aperture. It thus allows for a variety of effects, including depth of field, diffraction, and interference, and features built-in anti-aliasing. A central feature of our framework is its seamless integration into conventional rendering and display technology, which enables us to elegantly combine traditional 3D object or scene representations with holograms. The presented work includes the theoretical foundations and allows for high quality rendering of objects consisting of large numbers of elementary waves while keeping the hologram at a reasonable size. (An illustrative sketch follows.)

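    A central step in a holographic pipeline of this kind is propagating a sampled complex wavefront between planes. The sketch below uses the standard angular-spectrum method for that single step; the wavelength, sampling, and aperture are made-up values, and this is not the paper's full recording and reconstruction pipeline.

    import numpy as np

    def propagate_angular_spectrum(u0, dx, wavelength, z):
        # Propagate the complex field u0 (square 2D array, sample spacing dx)
        # over distance z by filtering its angular spectrum.
        n = u0.shape[0]
        k = 2.0 * np.pi / wavelength
        fx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(fx, fx, indexing="ij")
        kz_sq = k * k - kx * kx - ky * ky
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        H = np.exp(1j * kz * z) * (kz_sq > 0)           # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    # Toy example: propagate a small square aperture by 5 mm at 633 nm.
    n = 256
    u0 = np.zeros((n, n), dtype=complex)
    u0[n // 2 - 8:n // 2 + 8, n // 2 - 8:n // 2 + 8] = 1.0
    u1 = propagate_angular_spectrum(u0, dx=10e-6, wavelength=633e-9, z=5e-3)
    print(float(np.abs(u1).max()))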
  • 180,000 articles in the IEEE Computer Society Digital Library [advertisement]

    Publication Year: 2007, Page(s): 416
    PDF (114 KB)
    Freely Available from IEEE
  • TVCG Information for authors

    Publication Year: 2007, Page(s): c3
    PDF (86 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2007, Page(s): c4
    PDF (323 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu