
Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, 2002

Date: 9-11 Oct. 2002


Displaying Results 1 - 25 of 70
  • Proceedings 10th Pacific Conference on Computer Graphics and Applications

  • Hierarchical representation of time-varying volume data with ⁴√2 subdivision and quadrilinear B-spline wavelets

    Page(s): 346 - 355

    Multiresolution methods for representing data at multiple levels of detail are widely used for large-scale two- and three-dimensional data sets. We present a four-dimensional multiresolution approach for time-varying volume data. This approach supports a hierarchy with spatial and temporal scalability. The hierarchical data organization is based on ⁴√2 subdivision. The ⁿ√2-subdivision scheme only doubles the overall number of grid points in each subdivision step. This fact leads to fine granularity and high adaptivity, which is especially desirable in the spatial dimensions. For high-quality data approximation on each level of detail, we use quadrilinear B-spline wavelets. We present a linear B-spline wavelet lifting scheme based on ⁿ√2 subdivision to obtain narrow masks for the update rules. Narrow masks provide a basis for out-of-core data exploration techniques and view-dependent visualization of sequences of time steps.
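
    The lifting machinery the abstract refers to can be illustrated in one dimension. The sketch below is the standard linear B-spline wavelet lifting step (predict the odd samples from their even neighbours, then update the evens), not the authors' four-dimensional ⁿ√2 variant; the clamped boundary handling is an assumption.

```python
def lifting_forward(samples):
    """One level of a linear B-spline wavelet lifting transform (1D sketch).

    Returns (coarse, details); for a linear ramp the details vanish.
    """
    evens = samples[0::2]
    odds = samples[1::2]
    # Predict: estimate each odd sample as the mean of its even neighbours;
    # the residual becomes a detail (wavelet) coefficient.
    details = [o - 0.5 * (evens[i] + evens[min(i + 1, len(evens) - 1)])
               for i, o in enumerate(odds)]
    # Update: fold a fraction of the details back into the evens so the
    # coarse signal keeps the average of the original samples.
    coarse = [e + 0.25 * (details[max(i - 1, 0)] + details[min(i, len(details) - 1)])
              for i, e in enumerate(evens)]
    return coarse, details
```

    On a linear ramp such as [0, 1, 2, 3, 4], all detail coefficients are zero, which is exactly the property that makes the masks narrow and the hierarchy compact.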

  • Author index

    Page(s): 491 - 492
  • On-line graphics recognition

    Page(s): 256 - 264

    A novel, fast shape classification and regularization algorithm for on-line sketchy graphics recognition is proposed. We divide the on-line graphics recognition process into four stages: preprocessing, shape classification, shape fitting, and regularization. An attraction force model is proposed to progressively merge the vertices of the input sketchy stroke and reduce the total number of vertices before the type of shape is determined. After that, the shape is fitted and gradually rectified to a regular one, so that the regularized shape precisely fits the user-intended one. Experimental results show that this algorithm rapidly yields good recognition precision (above 90% on average) and a fine regularization effect. Consequently, it is especially suitable for weak computation environments such as PDAs, which rely solely on a pen-based user interface.
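
    As a rough illustration of the vertex-reduction stage, the sketch below merges adjacent stroke vertices that fall within a threshold distance by averaging them; the merge rule and the fixed threshold are simplifications standing in for the paper's attraction force model, not its actual formulation.

```python
import math

def reduce_stroke(points, min_dist):
    """Progressively merge adjacent stroke vertices closer than min_dist,
    a simplified stand-in for an attraction-force vertex reduction."""
    out = [points[0]]
    for p in points[1:]:
        q = out[-1]
        if math.dist(p, q) < min_dist:
            # Merge by averaging, as if the two vertices attracted each other.
            out[-1] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        else:
            out.append(p)
    return out
```

    Reducing the vertex count first is what makes the subsequent shape classification cheap enough for PDA-class hardware.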

  • Subdivision surface simplification

    Page(s): 477 - 480

    A modified quadric error metric (QEM) for simplification of Loop subdivision surfaces is presented. The suggested error metric not only measures the geometric difference but also controls the smoothness and well-shapedness of the triangles that result from the decimation process. By minimizing the error with respect to the original limit surface, our method allows for drastic simplification of Loop control meshes with convenient control over the reproduction of sharp features.

  • Discrete differential error metric for surface simplification

    Page(s): 276 - 283

    In this paper we propose a new discrete differential error metric for surface simplification. Many surface simplification algorithms have been developed to rapidly produce high-quality approximations of polygonal models, and the quadric error metric based on the distance error is the most popular and successful error metric so far. Even though such distance-based error metrics give visually pleasing results at reasonably fast speeds, it is hard to measure an accurate geometric error on a highly curved and thin region, since the error measured by the distance metric on such a region is usually small and causes a loss of visually important features. To overcome this drawback, we define a new error metric based on the theory of local differential geometry, in which the first- and second-order discrete differentials, approximated locally on a discrete polygonal surface, are integrated into the usual distance error metric. The benefits of our error metric are preservation of sharp feature regions after drastic simplification, small geometric errors, and computation speed comparable to existing methods.
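
    For context, the distance-based quadric error metric that the paper extends can be sketched as follows. This is the standard QEM formulation (sum of squared distances to the planes of a vertex's incident triangles), not the authors' differential variant:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric for the plane ax + by + cz + d = 0,
    with (a, b, c) a unit normal."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def quadric_error(Q, v):
    """Sum of squared distances from vertex v to all planes folded into Q,
    evaluated as h^T Q h in homogeneous coordinates."""
    h = np.append(np.asarray(v, dtype=float), 1.0)
    return float(h @ Q @ h)
```

    Quadrics of merged vertices simply add, which is what makes edge-collapse simplification with this metric so fast; the abstract's point is that on thin, highly curved regions this distance term alone underestimates the visual error.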

  • Visualization of multidimensional, multivariate volume data using hardware-accelerated non-photorealistic rendering techniques

    Page(s): 394 - 402

    This paper presents a set of feature enhancement techniques, coupled with hardware-accelerated non-photorealistic rendering, for generating more perceptually effective visualizations of multidimensional, multivariate volume data, such as those obtained from typical computational fluid dynamics simulations. For time-invariant data, one or more variables are used either to highlight important features in another variable or to add contextual information to the visualization. For time-varying data, rendering of each time step also takes into account the values at neighboring time steps to reinforce the perception of the changing features in the data over time. With hardware-accelerated rendering, interactive visualization becomes possible, leading to increased explorability and comprehension of the data.

  • Estimation of multiple directional light sources for synthesis of mixed reality images

    Page(s): 38 - 47

    We present a new method for the detection and estimation of multiple directional illuminants, using a single image of any object with known geometry and Lambertian reflectance. We use the resulting highly accurate estimates to virtually modify the illumination and geometry of a real scene and produce correctly illuminated mixed reality images. Our method obviates the need to modify the imaged scene by inserting calibration objects of any particular geometry, relying instead on partial knowledge of the geometry of the scene. Thus, the recovered multiple illuminants can be used both for image-based rendering and for shape reconstruction. Our method combines information from both the shading of the object and the shadows it casts on the scene. Initially we use a shadow-based method and a shading-based method independently. The shadow-based method utilizes brightness variation inside the shadows cast by the object, whereas the shading-based method utilizes brightness variation on the directly illuminated portions of the object. We demonstrate how the two sources of information complement each other on a number of occasions. We then describe an approach that integrates the two methods, with results superior to those obtained when the two methods are used separately. The resulting illumination information can be used (i) to render synthetic objects in a real photograph with correct illumination effects, and (ii) to virtually re-light the scene.

  • Compressing hexahedral volume meshes

    Page(s): 284 - 293

    Unstructured hexahedral volume meshes are of particular interest for visualization and simulation applications. They allow regular tiling of three-dimensional space and show good numerical behaviour in finite element computations. Despite these appealing properties, volume meshes take a huge amount of space when stored in a raw format. We present a technique for encoding the connectivity and geometry of unstructured hexahedral volume meshes. For connectivity compression, we extend the idea of coding with degrees, as pioneered by Touma and Gotsman (1998), to volume meshes. Hexahedral connectivity is coded as a sequence of edge degrees. This naturally exploits the regularity of typical hexahedral meshes. We achieve compression rates of around 1.5 bits per hexahedron (bph) that go down to 0.18 bph for regular meshes. On our test meshes the average connectivity compression ratio is 1:162.7. For geometry compression, we perform simple parallelogram prediction on uniformly quantized vertices within the side of a hexahedron. Tests show an average geometry compression ratio of 1:3.7 at a quantization level of 16 bits.
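
    The parallelogram prediction used for geometry compression is easy to sketch: given three known corners a, b, c of a quadrilateral face, the fourth is predicted as the parallelogram completion, so only the (typically small) residual needs to be encoded. Applying this within a hexahedron's side and the quantization step are omitted here.

```python
def parallelogram_predict(a, b, c):
    """Predict the fourth vertex d of a quadrilateral a-b-c-d as the
    parallelogram completion d = a + c - b; an encoder would store only
    the residual d_actual - d_predicted."""
    return tuple(ai + ci - bi for ai, bi, ci in zip(a, b, c))
```

    For the near-planar, near-rectangular faces typical of hexahedral meshes the residuals cluster around zero, which is why this simple predictor compresses well.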

  • "May I talk to you? :-)" - facial animation from text

    Page(s): 77 - 86

    We introduce a facial animation system that produces real-time animation sequences including speech synchronization and non-verbal speech-related facial expressions from plain text input. A state-of-the-art text-to-speech synthesis component performs linguistic analysis of the text input and creates a speech signal from phonetic and intonation information. The phonetic transcription is additionally used to drive a speech synchronization method for the physically based facial animation. Further high-level information from the linguistic analysis such as different types of accents or pauses as well as the type of the sentence is used to generate non-verbal speech-related facial expressions such as movement of head, eyes, and eyebrows or voluntary eye blinks. Moreover, emotions are translated into XML markup that triggers emotional facial expressions.

  • Cartoon motion capture by shape matching

    Page(s): 454 - 456

    This paper presents a novel approach for capturing cartoon motion from traditionally hand-drawn cartoon animations. Unlike previous methods based on a skeletal model or key-shape representation, the motion of a cartoon character is represented by a global affine transformation and local non-affine deformation using thin-plate splines (TPS). The shape of a cartoon character is represented by its contour. We directly match shapes between adjacent cartoon frames (or keyframes) and retarget the recovered motion to a new character, which then moves like the original one. The novelty of our approach lies in the ability to capture and retarget cartoon motion by shape matching without a prior model. Results are presented to demonstrate this ability.
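
    The TPS machinery can be illustrated with a minimal evaluator: a warp coordinate is an affine part plus a weighted sum of the thin-plate kernel U(r) = r² log r centred at control points. How the weights are solved from the shape correspondences (and the affine/non-affine decomposition the paper uses) is omitted; the function names here are illustrative.

```python
import math

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def tps_warp(point, controls, weights, affine):
    """Evaluate one coordinate of a 2D TPS warp: affine part plus a
    weighted sum of kernels centred at the control points."""
    x, y = point
    a0, ax, ay = affine
    value = a0 + ax * x + ay * y
    for (cx, cy), w in zip(controls, weights):
        value += w * tps_kernel(math.hypot(x - cx, y - cy))
    return value
```

    With all kernel weights zero the warp degenerates to the global affine transformation, mirroring the affine/non-affine split described in the abstract.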

  • Adaptive solid texturing for Web3D applications

    Page(s): 433 - 434

    Solid texturing is a well-known computer graphics technique, but it is problematic: it consumes too much time if every pixel is calculated on the fly, or has a very high memory requirement if all of the pixels are stored up front. Although some methods have been proposed, almost all of them need the support of specific hardware accelerators. Hence, these methods cannot be applied to all kinds of machines, especially low-cost ones available over the Internet. We therefore present a new method for procedural solid texturing. Our approach can render an object with solid texturing at nearly real-time rates using only a software solution. Furthermore, to demonstrate that our approach is widely applicable, we chose pure Java for its implementation, since it receives no benefit from hardware acceleration and can be executed over the Internet directly.
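
    A minimal example of a procedural solid texture evaluated purely in software (an illustration of the general idea, not the paper's method) is a 3D checkerboard looked up directly from object-space coordinates, so no 2D parameterization and no stored texture image are needed:

```python
import math

def solid_checker(x, y, z, scale=1.0):
    """Procedural solid texture: a 3D checkerboard evaluated directly from
    object-space coordinates. Returns 0 or 1 for the two cell colors."""
    return (math.floor(x / scale)
            + math.floor(y / scale)
            + math.floor(z / scale)) % 2
```

    Because the texel value is computed per query, an object "carved" from this texture shows consistent patterns across cuts and silhouettes, which is the appeal of solid texturing that the paper tries to make affordable without hardware support.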

  • Scalable self-orienting surfaces: a compact, texture-enhanced representation for interactive visualization of 3D vector fields

    Page(s): 356 - 365

    This paper presents a study of field line visualization techniques. To address both the computational and perceptual issues in visualizing the large-scale, complex, dense field line data commonly found in many scientific applications, a new texture-based field line representation, which we call self-orienting surfaces, is introduced. This scalable representation facilitates hardware-accelerated rendering and the incorporation of various perceptually effective techniques, resulting in intuitive visualization and interpretation of the data under study. An electromagnetic data set obtained from accelerator modeling and a fluid flow data set from aerodynamics modeling are used for evaluation and demonstration of the techniques.

  • Computing distances between surfaces using line geometry

    Page(s): 236 - 245

    We present an algorithm for computing the distance between two free-form surfaces. Using line geometry, the distance computation is reformulated as a simple instance of a surface-surface intersection problem, which leads to low-dimensional root finding in a system of equations. This approach produces an efficient algorithm for computing the distance between two ellipsoids, where the problem is reduced to finding a specific solution of a system of two equations in two variables. Similar algorithms can be designed for computing the distance between an ellipsoid and a simple surface (such as a cylinder, cone, or torus). In an experimental implementation (on a 500 MHz Windows PC), the distance between two ellipsoids was computed in less than 0.3 msec on average, and the distance between an ellipsoid and a simple convex surface in less than 0.15 msec on average.
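
    The flavour of such low-dimensional root finding can be shown on the simpler point-to-ellipsoid case, where the foot point is governed by a single Lagrange multiplier found by bisection. This sketch is mine, not the paper's two-ellipsoid formulation, and it assumes the query point lies outside the ellipsoid.

```python
import math

def point_ellipsoid_distance(p, semi_axes, iters=100):
    """Distance from an exterior point p to the ellipsoid
    sum((x_i / a_i)^2) = 1, via bisection on the Lagrange multiplier t.

    The foot point satisfies x_i = a_i^2 * p_i / (a_i^2 + t), and t is the
    root of a single scalar equation f(t) = 0.
    """
    a = semi_axes
    f = lambda t: sum((ai * pi / (ai * ai + t)) ** 2
                      for ai, pi in zip(a, p)) - 1.0
    # Bracket the root: f(0) > 0 for an exterior point, and f is
    # decreasing in t, so a sufficiently large upper bound works.
    lo, hi = 0.0, 2.0 * max(a) * max(abs(c) for c in p)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    foot = [ai * ai * pi / (ai * ai + t) for ai, pi in zip(a, p)]
    return math.dist(p, foot)
```

    The two-ellipsoid problem the paper solves is the analogous search over a system of two equations in two variables rather than one equation in one.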

  • Fast and memory efficient view-dependent trimmed NURBS rendering

    Page(s): 204 - 213

    The problem of rendering large trimmed NURBS models at interactive frame rates is of great interest to industry, since nearly all industrial models are designed on the basis of this surface type. Most existing approaches first transform the NURBS surfaces into a polygonal representation and subsequently build static levels of detail upon them, as current graphics hardware is optimized for rendering triangles. In this work, we present a method for memory-efficient, view-dependent rendering of trimmed NURBS surfaces that yields high-quality results at interactive frame rates. In contrast to existing algorithms, our approach need not store hierarchies of triangles: utilizing our special multiresolution seam graph data structure, we are able to generate the required triangulations on the fly.

  • Learning Kernel-based HMMs for dynamic sequence synthesis

    Page(s): 87 - 95

    In this paper we present an approach that synthesizes a dynamic sequence from another related sequence, and apply it to a virtual conductor: synthesizing linked-figure animation from an input music track. We propose that the mapping between two dynamic sequences can be modeled with a Kernel-based Hidden Markov Model, or KHMM. A KHMM is an HMM in which kernel-based functions are used to model the state observation density of the joint input and output distribution. Specifically, the state observation density is estimated by employing a likelihood-weighted sampling scheme. Our KHMM is ideal for dynamic sequence synthesis because the global dynamics are learned by the HMM, while subtle details of the dynamic mapping are kept in the kernel-based state density. We demonstrate our virtual conductor by synthesizing extensive animation sequences from input music sequences with different styles and beat patterns.

  • Texture mapping with a Jacobian-based spatially-variant filter

    Page(s): 460 - 461

    In this paper we describe a new method to map a texture on a surface with a spatially-variant filter. Our filter takes into consideration the effects of anisotropy using a Jacobian approximation while computing the sampling rate, and the interpolation weights are computed with a sinc function. We also discuss how to do forward and backward mapping with the filter and extend our algorithms to 3D meshes. Our experimental results verify our analysis.
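
    A spatially-variant filter needs a per-pixel estimate of how texture coordinates stretch under the screen-to-texture mapping. The finite-difference Jacobian below is an illustrative stand-in for the paper's approximation: its rows describe the texture-space footprint of a one-pixel step along each screen axis, from which anisotropic sampling rates can be derived.

```python
def texture_jacobian(uv_fn, x, y, eps=1e-4):
    """Finite-difference Jacobian of a screen -> texture mapping uv_fn at
    (x, y). Returns ((du/dx, du/dy), (dv/dx, dv/dy))."""
    u0, v0 = uv_fn(x, y)
    ux, vx = uv_fn(x + eps, y)      # step along screen x
    uy, vy = uv_fn(x, y + eps)      # step along screen y
    return ((ux - u0) / eps, (uy - u0) / eps), \
           ((vx - v0) / eps, (vy - v0) / eps)
```

    Unequal row norms signal anisotropy, the case where a spatially-variant filter pays off over an isotropic one.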

  • Lighting interpolation by shadow morphing using intrinsic lumigraphs

    Page(s): 58 - 65

    Densely-sampled image representations such as the light field or lumigraph have been effective in enabling photorealistic image synthesis. Unfortunately, lighting interpolation with such representations has not been shown to be possible without the use of accurate 3D geometry and surface reflectance properties. In this paper we propose an approach to image-based lighting interpolation that is based on estimates of geometry and shading from relatively few images. We decompose captured light fields at different lighting conditions into intrinsic images (reflectance and illumination images), and estimate view-dependent scene geometries using multi-view stereo. We call the resulting representation an intrinsic lumigraph. In the same way that the lumigraph uses geometry to permit more accurate view interpolation, the intrinsic lumigraph uses both geometry and intrinsic images to allow high-quality interpolation at different views and lighting conditions. Joint use of geometry and intrinsic images is effective in the computation of shadow masks for shadow prediction at new lighting conditions. We illustrate our approach with images of real scenes.

  • Progress in collision detection and response techniques for cloth animation

    Page(s): 444 - 445

    In the animation of deformable objects, collision detection and response are crucial for performance. Furthermore, a physically correct cloth simulation requires robust collision avoidance, since any overlap is visible and often results in expensive correction procedures. Much progress has been achieved in improving the numerical solution, and therefore most animations employ large time steps for fast simulations. This places even greater demands on accurate collision detection and response. In this work we show how collision detection for deformable meshes can be extended to detect proximities in advance. Several heuristics are introduced to save computation time, and constraints ensure an accurate collision response.

  • A simple method for modeling wrinkles on human skin

    Page(s): 166 - 175

    Realism of rendered human skin can be strongly enhanced by taking into account skin wrinkles. However, modeling wrinkles is a difficult task, and considerable time and effort are necessary to achieve satisfactory results. This paper presents a simple method to easily model wrinkles on human skin, taking into account the properties of real wrinkles. Wrinkles are specified using intuitive parameters, and are generated over a triangle mesh representing a body part, such as a hand or a face. Wrinkled skin surfaces are rendered at an interactive frame rate, dynamically modulating wrinkle amplitude according to skin surface deformation while animating the body part. We demonstrate the ability of our method to model realistic wrinkle shapes by comparing them with real wrinkles.

  • Geometric deformation-displacement maps

    Page(s): 156 - 165

    Texture mapping, bump mapping, and displacement maps are central instruments in computer graphics for achieving photo-realistic renderings. In all these techniques, the mapping is typically one-to-one, and a single surface location is assigned a single texture color, normal, or displacement. Other specialized techniques have also been developed for rendering supplementary surface details such as fur, hair, or scales. This work presents an extended view of these procedures that allows one to precisely assign to a single surface location a few continuously deformed displacements, each with a possibly different texture color or normal, employing trivariate functions in a way similar to freeform deformations. As a consequence, an arbitrary regular geometry can be employed as part of the presented scheme as supplementary surface texture detail. This work also augments recent results on texturing and parameterization of surfaces of arbitrary topology by providing more flexible control over the texture modeling phase. By completely and continuously parameterizing the space above the surface of the object as a trivariate vector function, we are able not only to control the mapping of the texture on the surface but also to control this mapping in the volume surrounding the surface.

  • Combining 2D feature tracking and volume reconstruction for online video-based human motion capture

    Page(s): 96 - 103

    The acquisition of human motion data is of major importance for creating interactive virtual environments, intelligent user interfaces, and realistic computer animations. Today's performance of off-the-shelf computer hardware enables marker-free non-intrusive optical tracking of the human body. In addition, recent research shows that it is possible to efficiently acquire and render volumetric scene representations in real-time. This paper describes a system to capture human motion at interactive frame rates without the use of markers or scene-intruding devices. Instead, 2D computer vision and 3D volumetric scene reconstruction algorithms are applied directly to the image data. A person is recorded by multiple synchronized cameras, and a multilayer hierarchical kinematic skeleton is fitted to each frame in a two-stage process. We present results with a prototype system running on two PCs.

  • Interactive construction of multi-segment curved handles

    Page(s): 429 - 430

    In this work, we present a method to interactively create multi-segment, curved handles between two star-shaped faces of an orientable 2-manifold mesh or to connect two 2-manifold meshes along such faces. The presented algorithm combines a very simple 2D morphing algorithm with Hermite interpolation to construct the handle. Based on the method, we have developed a user interface tool that allows users to simply and easily create multi-segment curved handles.

  • Mesh metamorphosis with topology transformations

    Page(s): 481 - 482

    3D mesh morphing based on a metamesh has fundamental limitations: complicated in-between meshes, and no topology (connectivity) changes during a metamorphosis. This paper presents a novel approach for 3D mesh morphing that is not based on a metamesh. The approach simultaneously interpolates the topology and geometry of the input meshes. In our approach, an in-between mesh contains only vertices from the source and target meshes. Since no additional vertices are introduced, the in-between meshes are much simpler than those generated by previous techniques.

  • ISpace: interactive volume data classification techniques using independent component analysis

    Page(s): 366 - 374

    This paper introduces an interactive classification technique for volume data, called ISpace, which uses Independent Component Analysis (ICA) and a multidimensional histogram of the volume data in a transformed space. Essentially, classification in the volume domain becomes equivalent to interactive clipping in the ICA space, which, as demonstrated using several examples, is a more intuitive and direct way for the user to classify data. The result is an opacity transfer function defined for rendering multivariate scalar volume data.
