
IEEE Computer Graphics and Applications

Issue 6 • Nov.-Dec. 2002

17 articles
  • Into the abstract

    Page(s): 4 - 5
    PDF (1015 KB) | HTML
  • Tracking: how hard can it be?

    Page(s): 22 - 23
    PDF (350 KB) | HTML
  • Motion tracking: no silver bullet, but a respectable arsenal

    Page(s): 24 - 38
    PDF (1883 KB) | HTML

    This article introduces the physical principles underlying the variety of approaches to motion tracking. Although no single technology will work for all purposes, certain methods work quite well for specific applications.
  • Real-time rendering in curved spaces

    Page(s): 90 - 99
    PDF (2484 KB) | HTML

    A hypersphere surface provides a finite 3D world in which the user can fly freely without encountering boundaries, while hyperbolic space provides a spacious environment. The algorithm for rendering a scene in a hypersphere is identical to the standard algorithm for rendering a scene in ordinary flat 3D space. Indeed, the computations are so similar that off-the-shelf 3D graphics cards, when fed the correct matrices, will do real-time animations in a hypersphere just as easily and as quickly as they do in flat space.
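    The abstract's central claim is that scene transformations on the hypersphere are ordinary 4x4 matrix operations, so the flat-space matrix pipeline carries over unchanged. A minimal sketch of that idea (illustrative only, not the article's code; the rotation plane and angle are arbitrary choices here):

```python
import numpy as np

def rotation_xw(theta):
    """4x4 rotation in the x-w plane of R^4. Restricted to the unit
    hypersphere S^3, this acts like a translation along a geodesic,
    which is why the flat-space 4x4 matrix pipeline carries over."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3] = c, -s
    R[3, 0], R[3, 3] = s, c
    return R

# A scene vertex, stored as a unit 4-vector on the hypersphere.
v = np.array([0.0, 0.0, 0.0, 1.0])

# "Translate" the vertex by moving it along a geodesic.
moved = rotation_xw(np.pi / 6) @ v

# The transformed vertex is still on the unit hypersphere, so the same
# hardware matrix pipeline used for flat 3D graphics applies unchanged.
print(np.allclose(np.linalg.norm(moved), 1.0))  # True
```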
  • Author index

    Page(s): 119 - 122
    PDF (242 KB)
    Freely Available from IEEE
  • Subject index

    Page(s): 122 - 131
    PDF (249 KB)
    Freely Available from IEEE
  • Real-time fingertip tracking and gesture recognition

    Page(s): 64 - 71
    PDF (1285 KB) | HTML

    Augmented desk interfaces and other virtual reality systems depend on accurate, real-time hand and fingertip tracking for seamless integration between real objects and associated digital information. We introduce a method for discerning fingertip locations in image frames and measuring fingertip trajectories across image frames. We also propose a mechanism for combining direct manipulation and symbolic gestures based on multiple fingertip motions. In addition to detecting fingertips in each image frame, our method uses a filtering technique to predict fingertip locations in successive image frames and to examine the correspondences between the predicted locations and detected fingertips. This lets us obtain multiple complex fingertip trajectories in real time and improves fingertip tracking. The method can track multiple fingertips reliably, even on a complex background under changing lighting conditions, without invasive devices or color markers.
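    The prediction-and-correspondence step the abstract describes is commonly implemented with a constant-velocity Kalman filter. A minimal sketch under that assumption (the frame rate, noise parameters, and nearest-neighbor matching rule here are illustrative choices, not the article's exact design):

```python
import numpy as np

dt = 1.0 / 30.0                      # assumed 30 fps video
F = np.array([[1, 0, dt, 0],         # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],          # we only observe position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                 # process noise (assumed)
Rm = np.eye(2) * 1e-2                # measurement noise (assumed)

def predict(x, P):
    """Predict the fingertip's state one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a matched detection z = (x, y)."""
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def nearest_detection(pred_xy, detections):
    """Match the predicted location to the closest detected fingertip."""
    d = [np.linalg.norm(pred_xy - z) for z in detections]
    return detections[int(np.argmin(d))]

# One step: a fingertip at (100, 50) px moving right at 60 px/s.
x = np.array([100.0, 50.0, 60.0, 0.0])
P = np.eye(4)
x_pred, P_pred = predict(x, P)
z = nearest_detection(x_pred[:2], [np.array([102.1, 50.2]),
                                   np.array([300.0, 200.0])])
x, P = update(x_pred, P_pred, z)
```

    Running one predict step per detected fingertip, then matching predictions to detections, is what lets multiple trajectories be carried across frames without confusing nearby fingers.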
  • Visual debugging

    Page(s): 6 - 10
    PDF (1180 KB) | HTML

    We developed an approach that uses our innate visual pattern recognition skills as part of the debugging process. Inspired by Huang's (1996) use of color to visualize energy distributions while untangling knots, we represented the particles graphically and color-coded them by energy value. Thus far, we've applied this approach to three domains: particle systems, cluster hardware configurations, and physics codes using finite element models. This debugging paradigm differs from software or program visualization in that we don't visualize software elements such as procedures, message passing between processors, or graph-based representations of data structures. In most application domains, developers who use algorithm visualization tools must decide what kind of visualization would best represent their code, and they must, in effect, code this visualization in addition to their application. For many developers, the time investment is too great compared to the perceived benefit, so they return to a traditional debugging approach. We believe that restricting the application domain makes visual debuggers easier to use. However, we go one step further by creating a visual tool tailored to a particular application domain that can use either captured data or simulation outputs and requires no coding effort on the part of the user.
  • Unwrapping and visualizing cuneiform tablets

    Page(s): 82 - 88
    PDF (1661 KB) | HTML

    Cuneiform inscriptions, which scholars consider the earliest form of writing, were made in moist clay tablets. We've developed a semiautomatic method for concisely displaying the tablets' inscribed writing, thereby providing a clear visualization that can be printed on paper. We first scan the tablets with 3D range scanners and use the scan data to construct a high-resolution 3D model (at a resolution of 50 microns). Next, we unwrap and warp the tablet surface to form a set of flat rectangles, one per side or edge of the tablet. This process permits all the writing to be seen at once, although necessarily slightly distorted. Finally, we apply curvature coloring and accessibility coloring to the unwrapped text, thereby replacing raking illumination with a nonphotorealistic rendering technique.
  • Real world teleconferencing

    Page(s): 11 - 13
    PDF (612 KB) | HTML

    We've been exploring how augmented reality (AR) technology can create fundamentally new forms of remote collaboration for mobile devices. AR involves the overlay of virtual graphics and audio on reality. Typically, the user views the world through a handheld or head-mounted display (HMD) that's either see-through or overlays graphics on video of the surrounding environment. Unlike other computer interfaces that draw users away from the real world and onto the screen, AR interfaces enhance the real world experience. For example, with this technology doctors could see virtual ultrasound information superimposed on a patient's body.
  • Augmented reality camera tracking with homographies

    Page(s): 39 - 45
    PDF (1425 KB) | HTML

    To realistically integrate 3D graphics into an unprepared environment, camera position must be estimated by tracking natural image features. We apply our technique to cases where feature positions in adjacent frames of an image sequence are related by a homography, or projective transformation. We describe this transformation's computation and demonstrate several applications. First, we use an augmented notice board to explain how a homography between two images of a planar scene completely determines the relative camera positions. Second, we show that the homography can also recover pure camera rotations, and we use this to develop an outdoor AR tracking system. Third, we use the system to measure head rotation and form a simple low-cost virtual reality (VR) tracking solution.
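    A standard way to compute the homography the abstract relies on is the direct linear transform (DLT) from four or more point correspondences. The sketch below is illustrative and not necessarily the article's own estimator; the notice-board corner coordinates are made up:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: find H (up to scale) with dst ~ H @ src,
    from four or more point correspondences. Each correspondence gives
    two linear equations in the nine entries of H; the SVD's smallest
    singular vector is the least-squares solution."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2,2] = 1

# Four corners of a planar notice board (hypothetical coordinates)
# and their detected positions in the image.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (30, 12), (28, 35), (8, 33)]
H = homography_dlt(src, dst)

# Check: H maps a source corner to its image position (in homogeneous
# coordinates, so divide by the last component).
p = H @ np.array([1.0, 0.0, 1.0])
print(np.allclose(p[:2] / p[2], [30, 12]))  # True
```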
  • Hybrid tracking for outdoor augmented reality applications

    Page(s): 54 - 63
    PDF (1054 KB) | HTML

    We've developed a fully mobile, wearable AR system that combines a vision-based tracker (primarily software algorithms) that uses natural landmarks with an inertial tracker (custom hardware and firmware) based on silicon micromachined accelerometers and gyroscopes. Unlike other vision-based and hybrid systems, both components recover the full 6 DOF pose. Fusing the two tracking subsystems gives us the benefits of both technologies, while the sensors' complementary nature helps overcome sensor-specific deficiencies. Our system is tailored to affordable, lightweight, energy-efficient mobile AR applications for urban environments, especially the historic centers of European cities.
  • Extracting 3D facial animation parameters from multiview video clips

    Page(s): 72 - 80
    PDF (881 KB) | HTML

    We propose an accurate and inexpensive procedure that estimates 3D facial motion parameters from mirror-reflected multiview video clips. We place two planar mirrors near a subject's cheeks and use a single camera to simultaneously capture a marker's front and side view images. We also propose a novel closed-form linear algorithm to reconstruct 3D positions from real versus mirrored point correspondences in an uncalibrated environment. Our computer simulations reveal that exploiting mirrors' reflective properties yields a more robust, accurate, and simpler 3D position estimation approach than general-purpose stereo vision methods that use a linear approach or maximum-likelihood optimization. Our experiments show a root mean square (RMS) error of less than 2 mm in 3D space with only 20-point correspondences. For semiautomatic 3D motion tracking, we use an adaptive Kalman predictor and filter to improve stability and infer the occluded markers' positions. Our approach tracks more than 50 markers on a subject's face and lips from 30-frame-per-second video clips. We've applied the facial motion parameters estimated from the proposed method to our facial animation system.
  • The analysis and statistics of line distribution

    Page(s): 100 - 107
    PDF (470 KB) | HTML

    We gathered more than a thousand different programs from various computer graphics applications on the Internet. We examined each program's source code and compiled statistical distributions of the lengths and orientations of the lines the programs draw. This article presents our data collection methods, lists the statistical data in detail, and discusses analytical results. We believe that our work will help researchers better understand the properties of line drawing in real applications and improve line scan-conversion methods. Detailed, accurate knowledge of the drawing environment in which an interface or functional capability will be used is the best basis for comparing algorithms or making design decisions about hardware graphics accelerator interfaces and software device driver interfaces.
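    The kind of statistics the article compiles (line-length and orientation distributions) can be sketched as follows; the segment data and bin edges here are hypothetical, not the article's collected corpus:

```python
import numpy as np

# Hypothetical sample of line segments (x0, y0, x1, y1), as might be
# extracted from the drawing calls of a graphics program.
segments = np.array([[0, 0, 10, 0],
                     [0, 0, 0, 5],
                     [2, 2, 6, 6],
                     [1, 1, 1, 9]], dtype=float)

dx = segments[:, 2] - segments[:, 0]
dy = segments[:, 3] - segments[:, 1]
lengths = np.hypot(dx, dy)                       # Euclidean lengths
angles = np.degrees(np.arctan2(dy, dx)) % 180    # orientation, 0-180 deg

# Bin into the kind of distributions the study tabulates.
length_hist, _ = np.histogram(lengths, bins=[0, 4, 8, 16, 32])
angle_hist, _ = np.histogram(angles, bins=[0, 45, 90, 135, 180])
print(length_hist)  # [0 2 2 0]
print(angle_hist)   # [1 1 2 0]
```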
  • Pose estimation for planar structures

    Page(s): 46 - 53
    PDF (1132 KB) | HTML

    We address the registration problem for interactive AR applications. Such applications require a real-time registration process. Although the registration problem has received a lot of attention in the computer vision community, it's far from solved. Ideally, an AR system should work in all environments without the need to prepare the scene ahead of time, and users should be able to walk anywhere they want. In the past, several AR systems have achieved accurate and fast tracking and registration by placing dots over objects and tracking the dots with a camera. We can also achieve registration by identifying features in the scene whose real-world coordinates we can carefully measure. However, such methods restrict the system's flexibility. Hence, we need to investigate registration methods that work in unprepared environments and reduce the need to know the geometry of objects in the scene. We propose an efficient solution to real-time camera tracking for scenes that contain planar structures. Our method handles many types of scenes. We show that our system is reliable and suitable for real-time applications, and we present results demonstrating real-time camera tracking on indoor and outdoor scenes.
  • Digital weaving, part 1

    Page(s): 108 - 118
    PDF (5842 KB) | HTML

    Woven cloth is so common these days that many of us take it for granted. But even a moment's examination of an everyday cloth like denim reveals some beautiful patterns. Weavers create cloth on a mechanical device called a loom. I describe the basics of weaving. My motivation is to discover new ways to create attractive visual patterns. Of course, nothing can beat actually going out and creating real, woven fabrics. The goal isn't to replace weaving, but to use the ideas of weaving to create software tools for making patterns.
  • Biomechanics and the cyberhuman

    Page(s): 14 - 20
    PDF (1780 KB) | HTML

    The first modern-day studies of the human body's mechanics (biomechanics) were done at Wayne State University (WSU) in Detroit, Michigan, in 1939. By the late 1930s, cars were becoming common and so were accidents. To make cars safer, engineers needed to know what the human body could withstand. At that time, engineers had detailed information about the mechanics of building materials like steel, wood, concrete, and glass, but not the human body. Researchers dropped steel balls on the heads of cadavers to determine the amount of force necessary to crack the human skull. The methods were crude, but the resulting data were extremely useful and long lasting. In 1972, these data formed the basis for the Head Injury Criteria (HIC) adopted by the newly formed National Highway Traffic Safety Administration. Although new information is replacing the HIC, it's still this kind of biomechanical information that engineers use to determine the safety of car designs. More recently, researchers have used information on the mechanical properties of the human body to validate finite-element models of the human body. These cyberhumans can give us more information than crash-test dummies about car design safety.

Aims & Scope

IEEE Computer Graphics and Applications bridges the theory and practice of computer graphics.


Meet Our Editors

Editor-in-Chief
L. Miguel Encarnação
University of Iowa