
IEEE Transactions on Visualization and Computer Graphics

Issue 3 • May-June 2005


Displaying Results 1 - 15 of 15
  • [Front cover]

    Publication Year: 2005 , Page(s): c1
    PDF (390 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2005 , Page(s): c2
    PDF (76 KB)
    Freely Available from IEEE
  • Editor's note

    Publication Year: 2005 , Page(s): 241 - 242
    PDF (103 KB)
    Freely Available from IEEE
  • Signed distance computation using the angle weighted pseudonormal

    Publication Year: 2005 , Page(s): 243 - 253
    Cited by:  Papers (37)
    PDF (507 KB) | HTML

    The normals of closed, smooth surfaces have long been used to determine whether a point is inside or outside such a surface. It is tempting to also use this method for polyhedra represented as triangle meshes. Unfortunately, this is not possible since, at the vertices and edges of a triangle mesh, the surface is not C^1 continuous; hence, the normal is undefined at these loci. In this paper, we undertake to show that the angle-weighted pseudonormal (originally proposed by Thürmer and Wüthrich and independently by Séquin) has the important property that it allows us to discriminate between points that are inside and points that are outside a mesh, regardless of whether a mesh vertex, edge, or face is the closest feature. This inside-outside information is usually represented as the sign of the signed distance to the mesh. In effect, our result shows that this sign can be computed as an integral part of the distance computation. Moreover, it provides an additional argument in favor of the angle-weighted pseudonormal being the natural extension of the face normal. Apart from the theoretical results, we also propose a simple and efficient algorithm for computing the signed distance to a closed C^0 mesh. Experiments indicate that the sign-computation overhead when running this algorithm is almost negligible.

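    A minimal sketch of the sign test described above, assuming the closest feature to the query point is a mesh vertex whose incident triangles are known and consistently oriented counterclockwise as seen from outside; it illustrates the idea, not the authors' implementation.

        import numpy as np

        def face_normal(v, p, q):
            # Unit normal of triangle (v, p, q), counterclockwise seen from outside.
            n = np.cross(p - v, q - v)
            return n / np.linalg.norm(n)

        def incident_angle(v, p, q):
            # Interior angle of the triangle at vertex v.
            e1 = (p - v) / np.linalg.norm(p - v)
            e2 = (q - v) / np.linalg.norm(q - v)
            return np.arccos(np.clip(np.dot(e1, e2), -1.0, 1.0))

        def angle_weighted_pseudonormal(v, incident_tris):
            # Sum of face normals weighted by the incident angle at v.
            # incident_tris: list of (p, q), the other two corners of each incident triangle.
            n = np.zeros(3)
            for p, q in incident_tris:
                n += incident_angle(v, p, q) * face_normal(v, p, q)
            return n

        def inside_outside_sign(query, closest_vertex, incident_tris):
            # Positive: query lies outside the closed mesh; negative: inside.
            n = angle_weighted_pseudonormal(closest_vertex, incident_tris)
            return np.sign(np.dot(query - closest_vertex, n))

    When the closest feature is a face or an edge, the same dot-product test applies with the face normal or the average of the two adjacent face normals, respectively.
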
  • Registration based on projective reconstruction technique for augmented reality systems

    Publication Year: 2005 , Page(s): 254 - 264
    Cited by:  Papers (11)  |  Patents (1)
    PDF (2013 KB) | HTML

    In augmented reality (AR) systems, registration is one of the most difficult problems currently limiting their application. In this paper, we propose a simple registration method using projective reconstruction. This method consists of two steps: embedding and tracking. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In tracking, a projective reconstruction technique is used to track these four specified points to compute the model view transformation for augmentation. This method is simple, as only four points need to be specified at the embedding stage and the virtual object can then be easily augmented onto a real scene from a video sequence. In addition, it can be extended to a scenario using the projective matrix that has been obtained from previous registration results with the same AR system. The proposed method has three advantages: 1) it is fast because the linear least-squares method can be used to estimate the related matrix in the algorithm and it is not necessary to calculate the fundamental matrix in the extended case; 2) a virtual object can still be superimposed on a related area even if some parts of the specified area are occluded during the whole process; and 3) the method is robust because it remains effective even when not all the reference points are detected during the whole process, as long as at least six pairs of corresponding reference points can be found. Some experiments have been conducted to validate the performance of the proposed method.

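    As a hedged illustration of the linear least-squares estimation the abstract refers to, the sketch below recovers a 3x4 projection matrix from at least six 3D-2D point correspondences with the standard direct linear transform (DLT); it is a generic stand-in, not necessarily the paper's exact formulation.

        import numpy as np

        def estimate_projection_matrix(points_3d, points_2d):
            # Build the homogeneous system A p = 0, two rows per correspondence.
            rows = []
            for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            A = np.asarray(rows, dtype=float)
            # Least-squares solution: right singular vector of the smallest singular value.
            _, _, vt = np.linalg.svd(A)
            return vt[-1].reshape(3, 4)
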
  • A method to generate soft shadows using a layered depth image and warping

    Publication Year: 2005 , Page(s): 265 - 272
    Cited by:  Papers (4)  |  Patents (1)
    PDF (2205 KB) | HTML

    We present an image-based method for propagating area light illumination through a layered depth image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. Light illumination of all pixels in the LDI is then determined for all the other sample points via warping, an image-based rendering technique, which approximates ray tracing in our method. We use an image-warping equation and McMillan's warp ordering algorithm to find the intersections between rays and polygons and to determine the order of those intersections. Experiments for opaque and nonrefractive transparent objects are presented. Results indicate that our approach generates soft shadows quickly and effectively. Advantages and disadvantages of the proposed method are also discussed.

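    The sketch below illustrates only the general principle behind area-light soft shadows: the shadow factor at a receiver point is the fraction of light samples that reach it. In the paper this visibility is obtained by warping a layered depth image; the occlusion-query callback here is an assumed placeholder, not the paper's method.

        def soft_shadow_factor(receiver, light_samples, is_occluded):
            # is_occluded(receiver, sample) -> True if the sample is blocked from the receiver.
            visible = sum(0 if is_occluded(receiver, s) else 1 for s in light_samples)
            return visible / len(light_samples)  # 1.0 = fully lit, 0.0 = umbra
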
  • An intelligent system approach to higher-dimensional classification of volume data

    Publication Year: 2005 , Page(s): 273 - 284
    Cited by:  Papers (41)
    PDF (1926 KB) | HTML

    In volume data visualization, the classification step is used to determine voxel visibility and is usually carried out through the interactive editing of a transfer function that defines a mapping between voxel value and color/opacity. This approach is limited by the difficulties in working effectively in the transfer function space beyond two dimensions. We present a new approach to the volume classification problem which couples machine learning and a painting metaphor to allow more sophisticated classification in an intuitive manner. The user works in the volume data space by directly painting on sample slices of the volume and the painted voxels are used in an iterative training process. The trained system can then classify the entire volume. Both classification and rendering can be hardware accelerated, providing immediate visual feedback as painting progresses. Such an intelligent system approach enables the user to perform classification in a much higher dimensional space without explicitly specifying the mapping for every dimension used. Furthermore, the trained system for one data set may be reused to classify other data sets with similar characteristics.

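    A minimal sketch of the painting-plus-learning idea: painted voxels provide labeled training samples, a classifier is trained on per-voxel features, and the trained model then labels the whole volume. The feature choice (value, gradient magnitude, position) and the scikit-learn MLP are illustrative substitutions, not the authors' exact design.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def voxel_features(volume):
            # Per-voxel features: scalar value, gradient magnitude, and grid position.
            gx, gy, gz = np.gradient(volume.astype(float))
            grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
            pos = np.indices(volume.shape).reshape(3, -1).T
            return np.column_stack([volume.ravel(), grad_mag.ravel(), pos])

        def classify_volume(volume, painted_indices, painted_labels):
            # painted_indices: flat indices of the voxels the user painted.
            feats = voxel_features(volume)
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
            clf.fit(feats[painted_indices], painted_labels)
            return clf.predict(feats).reshape(volume.shape)
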
  • Hardware-assisted visibility sorting for unstructured volume rendering

    Publication Year: 2005 , Page(s): 285 - 295
    Cited by:  Papers (33)
    PDF (821 KB) | HTML

    Harvesting the power of modern graphics hardware to solve the complex problem of real-time rendering of large unstructured meshes is a major research goal in the volume visualization community. While texture-based techniques for regular grids are well suited to current GPUs, the steps necessary for rendering unstructured meshes do not map as easily to current hardware. We propose a novel volume rendering technique that simplifies the CPU-based processing and shifts much of the sorting burden to the GPU, where it can be performed more efficiently. Our hardware-assisted visibility sorting algorithm is a hybrid technique that operates in both object space and image space. In object space, the algorithm performs a partial sort of the 3D primitives in preparation for rasterization. The goal of the partial sort is to create a list of primitives that generate fragments in nearly sorted order. In image space, the fragment stream is incrementally sorted using a fixed-depth sorting network. In our algorithm, the object-space work is performed by the CPU and the fragment-level sorting is done entirely on the GPU. A prototype implementation of the algorithm demonstrates that the fragment-level sorting achieves rendering rates of between one and six million tetrahedral cells per second on an ATI Radeon 9800.

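    A rough sketch of the object-space half of the approach: sorting cells by the distance from the eye to their centroids so that rasterization emits fragments in nearly sorted order. The fragment-level fixed-depth sorting network runs on the GPU and is not reproduced here.

        import numpy as np

        def centroid_sort(vertices, tetrahedra, eye):
            # vertices: (V, 3); tetrahedra: (T, 4) vertex indices; eye: (3,).
            centroids = vertices[tetrahedra].mean(axis=1)
            depth = np.linalg.norm(centroids - np.asarray(eye, dtype=float), axis=1)
            return tetrahedra[np.argsort(depth)[::-1]]  # back-to-front
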
  • Reflectance from images: a model-based approach for human faces

    Publication Year: 2005 , Page(s): 296 - 305
    Cited by:  Papers (6)  |  Patents (1)
    PDF (1584 KB) | HTML

    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and estimate their parameters in a least-squares fit from the image data. For each surface point, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and under novel lighting conditions.

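    A hedged sketch of the per-region least-squares fit the abstract describes, using a simple Lambertian-plus-Phong model as a stand-in for whichever analytical BRDF is chosen. Inputs are per-sample unit light, view, and normal vectors and the observed intensities for one material region.

        import numpy as np
        from scipy.optimize import least_squares

        def phong_model(params, L, V, N):
            # params = (kd, ks, shininess); L, V, N are (S, 3) arrays of unit vectors.
            kd, ks, shininess = params
            diffuse = np.clip(np.sum(N * L, axis=1), 0.0, None)
            R = 2.0 * np.sum(N * L, axis=1, keepdims=True) * N - L  # mirror direction
            specular = np.clip(np.sum(R * V, axis=1), 0.0, None) ** shininess
            return kd * diffuse + ks * specular

        def fit_region(L, V, N, observed):
            residual = lambda p: phong_model(p, L, V, N) - observed
            fit = least_squares(residual, x0=[0.5, 0.2, 20.0],
                                bounds=([0, 0, 1], [1, 1, 200]))
            return fit.x  # estimated kd, ks, shininess
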
  • Uniform remeshing with an adaptive domain: a new scheme for view-dependent level-of-detail rendering of meshes

    Publication Year: 2005 , Page(s): 306 - 316
    Cited by:  Papers (9)
    PDF (1704 KB) | HTML

    We present a new algorithm for view-dependent level-of-detail rendering of meshes. Not only can it resolve complex geometric features as effectively as edge collapse-based schemes, but it also produces meshes that modern graphics hardware can render efficiently. This is accomplished through a novel hybrid approach: for each frame, we view-dependently refine the progressive mesh (PM) representation of the original mesh and use the output as the base domain of uniform regular refinements. The algorithm exploits frame-to-frame coherence and only updates portions of the output mesh corresponding to modified domain triangles. The PM representation is built using a custom volume preservation-based error function. A simple k-d tree enhanced jump-and-walk scheme is used to quickly map from the dynamic base domain to the original mesh during regular refinements. In practice, the PM refinement provides a view-optimized base domain for the later regular refinements. The regular refinements ensure almost-everywhere regularity of the output meshes, allowing optimization for vertex cache coherence and caching of geometry data in high-performance graphics memory. Combined, they also allow our algorithm to operate on uniform clusters of triangles instead of individual ones, reducing CPU workload.

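    A small sketch of one uniform regular refinement step applied to a base-domain triangle, assuming the usual 1-to-4 midpoint split; mapping the new vertices back onto the original surface (the jump-and-walk step) is omitted.

        import numpy as np

        def refine_1_to_4(a, b, c):
            # Split triangle (a, b, c) at its edge midpoints into four subtriangles.
            a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
            ab, bc, ca = 0.5 * (a + b), 0.5 * (b + c), 0.5 * (c + a)
            return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
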
  • Creating and simulating skeletal muscle from the visible human data set

    Publication Year: 2005 , Page(s): 317 - 328
    Cited by:  Papers (34)  |  Patents (5)
    PDF (1218 KB) | HTML

    Simulation of the musculoskeletal system has important applications in biomechanics, biomedical engineering, surgery simulation, and computer graphics. The accuracy of the muscle, bone, and tendon geometry, as well as the accuracy of muscle and tendon dynamic deformation, is of paramount importance in all these applications. We present a framework for extracting and simulating high-resolution musculoskeletal geometry from the segmented Visible Human data set. We simulate 30 contact/collision-coupled muscles in the upper limb and describe a computationally tractable implementation using an embedded mesh framework. Muscle geometry is embedded in a nonmanifold, connectivity-preserving simulation mesh molded out of a lower resolution BCC lattice containing identical, well-shaped elements, leading to a relaxed time step restriction for stability and, thus, reduced computational cost. The muscles are endowed with a transversely isotropic, quasi-incompressible constitutive model that incorporates muscle fiber fields as well as passive and active components. The simulation takes advantage of a new robust finite element technique that handles both degenerate and inverted tetrahedra.

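    A small sketch of generating the points of a body-centered cubic (BCC) lattice, the structure from which the abstract's simulation mesh is molded: one point at every cube corner plus one at every cube center. Tetrahedralizing the lattice and embedding the muscle geometry are beyond this sketch.

        import numpy as np

        def bcc_lattice(n, spacing=1.0):
            g = np.arange(n, dtype=float)
            corners = np.stack(np.meshgrid(g, g, g, indexing='ij'), axis=-1).reshape(-1, 3)
            h = np.arange(n - 1, dtype=float) + 0.5  # cell centers, offset by half a cell
            centers = np.stack(np.meshgrid(h, h, h, indexing='ij'), axis=-1).reshape(-1, 3)
            return np.vstack([corners, centers]) * spacing
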
  • Dynamic interaction between deformable surfaces and nonsmooth objects

    Publication Year: 2005 , Page(s): 329 - 340
    Cited by:  Papers (5)
    PDF (2205 KB) | HTML

    In this paper, we introduce new techniques that improve the computational performance of interactions between sharp objects and deformable surfaces. The new formulation is based on a time-domain predictor-corrector model. For this purpose, we define a new kind of (π, β, I)-surface. Partitioning a deformable surface into a finite set of (π, β, I)-surfaces allows us to prune a large number of noncolliding feature pairs, which leads to a significant performance improvement in the collision detection process. The intrinsic collision detection is performed in the time domain. Although it is more expensive than the static interference test, it prevents portions of the surfaces from passing through each other in a single time step. In order to resolve all possible collision events at a given time, a penetration-free motion space is constructed for each colliding particle. By keeping the velocity of each particle inside its motion space, we guarantee that the current colliding feature pairs will not penetrate each other in the subsequent motion. A static analysis approach is adopted to handle friction by considering the forces acting on the particles and their velocities. In our formulation, we further reduce the computational complexity by eliminating the need to compute repulsive forces.

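    A hedged sketch of the idea of keeping a colliding particle's velocity inside a penetration-free motion space: for each active contact normal, remove any velocity component pointing into the obstacle. The motion-space construction in the paper is more general than this simple projection.

        import numpy as np

        def constrain_velocity(v, contact_normals):
            # contact_normals: unit normals pointing away from the colliding features.
            v = np.asarray(v, dtype=float)
            for n in contact_normals:
                vn = np.dot(v, n)
                if vn < 0.0:            # moving into the obstacle
                    v = v - vn * n      # keep only the tangential part
            return v
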
  • Creating speech-synchronized animation

    Publication Year: 2005 , Page(s): 341 - 352
    Cited by:  Papers (10)
    PDF (1720 KB) | HTML

    We present a facial model designed primarily to support animated speech. The model takes facial geometry as input and transforms it into a parametric deformable model. It uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. A highly deformable lip model is grafted onto the input facial geometry to provide the geometric complexity needed for creating lip shapes and high-quality renderings, and a highly deformable tongue model represents the shapes the tongue assumes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we deform the facial surface hierarchically. We also present a method to animate the facial model over time to create animated speech, using a model of coarticulation that blends visemes together with dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing them as curves instead of keyframes. We show the utility of these techniques by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized; the system then automatically creates accurate real-time animated speech from the input text. It can produce large amounts of animated speech with very low resource requirements.

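    A minimal sketch of blending viseme targets with dominance functions, in the spirit of dominance-function coarticulation models such as Cohen and Massaro's: each viseme has a target value and a decaying dominance curve, and the animated parameter is their dominance-weighted average over time. The dominance shape and constants are assumptions, not the paper's tuned values.

        import numpy as np

        def dominance(t, center, alpha=1.0, theta=4.0, c=1.0):
            # Exponentially decaying dominance of a viseme centered at `center`.
            return alpha * np.exp(-theta * np.abs(t - center) ** c)

        def blended_track(times, viseme_centers, viseme_targets):
            t = np.asarray(times, dtype=float)[:, None]            # (frames, 1)
            D = dominance(t, np.asarray(viseme_centers)[None, :])  # (frames, visemes)
            T = np.asarray(viseme_targets, dtype=float)[None, :]
            return (D * T).sum(axis=1) / D.sum(axis=1)             # one value per frame
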
  • TVCG Information for authors

    Publication Year: 2005 , Page(s): c3
    PDF (76 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2005 , Page(s): c4
    PDF (390 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu