
IEEE/ACM International Symposium on Mixed and Augmented Reality, 2006 (ISMAR 2006)

Date: 22-25 October 2006


Displaying results 1-25 of 55
  • ISMAR 2006 - The Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality

    Page(s): c1
  • Proceedings - ISMAR 2006 - International Symposium on Mixed and Augmented Reality

  • Table of contents

    Page(s): v - viii
  • Message from the General Chairs

    Page(s): ix - x
  • Message from the Program Chairs

    Page(s): xi - xii
  • Committees, Chairs & Additional Reviewers

    Page(s): xiii - xviii
  • Workshop - Industrial augmented reality

    Page(s): xx
  • Tutorials - ISMAR Tutorial 1 (half day) & ISMAR Tutorial 2 (half day)

    Page(s): xxi
  • Keynote speeches - The poor man's palace: special effects in the real world / Do we have six brains?

    Page(s): xxii - xxiii

    Provides an abstract for each of the keynote presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.

  • Quantification of visual capabilities using augmented reality displays

    Page(s): 3 - 12

    In order to perceive and recognize objects or the surface properties of objects, one must be able to resolve their features. These perceptual tasks can be difficult for both graphical representations and real objects in augmented reality (AR) displays. This paper presents the results of objective measurements and two user studies. The first evaluation explores visual acuity and contrast sensitivity; the second explores color perception. Both experiments test users' capabilities with their natural vision against their capabilities using commercially available AR displays. The limited graphical resolution, reduced brightness, and uncontrollable visual context of the merged environment demonstrably reduce users' visual capabilities. The paper concludes by discussing the implications for display design and AR applications, as well as outlining possible extensions to the current studies.

  • Effective control of a car driver's attention for visual and acoustic guidance towards the direction of imminent dangers

    Page(s): 13 - 22

    In cars, augmented reality is becoming an interesting means of enhancing active safety in the driving task. Guiding a driver's attention to an imminent danger somewhere around the car is a potential application. In a research project with the automotive industry, we are exploring different approaches to alerting drivers to such dangers; first results were presented last year. We have extended two of these approaches: one uses AR to visualize the source of danger in the driver's frame of reference, while the other presents the information in a bird's-eye schematic map. Our extensions are the incorporation of a real head-up display, improved visual perception, and acoustic support. Both schemes were evaluated with and without 3D-encoded sound. This paper reports on a user test in which 24 participants provided objective and subjective measurements. The results indicate that the AR-based three-dimensional presentation scheme, with and without sound support, systematically outperforms the bird's-eye schematic map.

  • User evaluations on form factors of tangible magic lenses

    Page(s): 23 - 32

    A Magic Lens is a small inset window embedded in a large context display that provides an alternative view of a region of interest selected from the context view. This metaphor is used for 3D visualization in our Augmented Virtual Environment infrastructure, SCAPE (Stereoscopic Collaboration in Augmented and Projective Environments), which is composed of an immersive room display for a highly detailed, life-size virtual world and a workbench display for a simplified, god-like view of that world. A tangible Magic Lens is used on the workbench display to allow direct and intuitive selection of continuous levels of detail, bridging the gap between the two extreme levels of detail in SCAPE. This paper presents our first step toward user evaluations of the tangible Magic Lens. We conducted two sets of user evaluations, one mainly testing the lens aspect ratio and the other the lens size. In both tests, two types of tasks were conducted: gathering information and relating the detailed information to its context. We found that the aspect ratio of a lens plays a more important role in user preference for smaller lenses than for larger ones, while the size of a lens is the most important factor affecting user performance in the two types of tasks.

  • Evaluation of three input techniques for selection and annotation of physical objects through an augmented reality view

    Page(s): 33 - 36

    This paper presents results from a study of the usability issues of two tasks, selection and annotation of a physical object, for users operating mobile augmented reality systems. The study compared three modes of cursor manipulation: a handheld mouse, a head cursor, and an image-plane vision-tracked device. The selection task was evaluated based on the number of mouse button clicks, completion time, and a subjective survey. The annotation task was evaluated based on the accuracy of the annotation, completion time, and a subjective survey.

  • Automatic online walls detection for immediate use in AR tasks

    Page(s): 39 - 42

    This paper proposes a method to automatically detect and reconstruct planar surfaces for immediate use in AR tasks. Traditional methods for plane detection are typically based on comparing the transfer errors of a homography, which makes them sensitive to the choice of a discrimination threshold. We propose a very different approach: the image is divided into a grid, and rectangles that belong to the same planar surface are clustered around the local maxima of a Hough transform. As a result, we simultaneously obtain clusters of coplanar rectangles and the image of their intersection line with a reference plane, which leads easily to their 3D position and orientation. Results are shown on both synthetic and real data.
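
    A minimal sketch of the clustering step, assuming the per-rectangle line estimates are already given (all names and parameters below are hypothetical, not from the paper): each grid rectangle votes with its estimated intersection line (theta, rho), and rectangles whose votes land around the same accumulator maximum are grouped as one planar surface.

    ```python
    import numpy as np

    def hough_cluster(lines, n_theta=90, n_rho=64, rho_max=500.0):
        """lines: (N, 2) array of (theta in [0, pi), rho in [-rho_max, rho_max]),
        one estimated reference-plane intersection line per grid rectangle."""
        ti = np.minimum((lines[:, 0] / np.pi * n_theta).astype(int), n_theta - 1)
        ri = np.minimum(((lines[:, 1] + rho_max) / (2 * rho_max) * n_rho).astype(int), n_rho - 1)
        acc = np.zeros((n_theta, n_rho), dtype=int)
        np.add.at(acc, (ti, ri), 1)                         # Hough voting
        peak = np.unravel_index(np.argmax(acc), acc.shape)  # strongest plane
        members = np.flatnonzero((np.abs(ti - peak[0]) <= 1) & (np.abs(ri - peak[1]) <= 1))
        return members, acc      # indices of rectangles voting near the peak
    ```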

  • Predicting and estimating the accuracy of n-occular optical tracking systems

    Page(s): 43 - 51

    Marker-based optical tracking systems are widely used in augmented reality, medical navigation, and industrial applications. We propose a model for predicting the target registration error (TRE) in such tracking systems by estimating the fiducial location error (FLE) from two-dimensional errors on the image plane and propagating that error to a given point of interest. We have designed a set of experiments to estimate the actual parameters of the model for any given tracking system. We present the results of a study that demonstrates the effect of different sources of error. The method is applied to real applications to show its usefulness for any kind of augmented reality system. We also present a set of tools that can be used to visualize the expected accuracy at design time.
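
    The paper's model estimates the FLE from 2D image-plane errors; as an illustration of the propagation step alone, the sketch below uses the classical Fitzpatrick-West-Maurer approximation for isotropic FLE, which predicts the RMS TRE at a point of interest from the geometry of the marker's fiducial configuration.

    ```python
    import numpy as np

    def predict_tre(fiducials, target, fle_rms):
        """fiducials: (N, 3) fiducial positions on the tracked marker;
        target: (3,) point of interest; fle_rms: isotropic RMS FLE.
        Returns the predicted RMS TRE at the target."""
        N = len(fiducials)
        c = fiducials.mean(axis=0)
        X = fiducials - c
        _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes
        U = X @ Vt.T                   # fiducials in the principal frame
        t = Vt @ (target - c)          # target in the principal frame
        ratio = 0.0
        for k in range(3):
            o = [i for i in range(3) if i != k]
            d2 = np.sum(t[o] ** 2)                      # target dist^2 from axis k
            f2 = np.mean(np.sum(U[:, o] ** 2, axis=1))  # mean fiducial dist^2 from axis k
            ratio += d2 / f2
        return fle_rms * np.sqrt((1.0 + ratio / 3.0) / N)
    ```

    The formula makes the design trade-off explicit: the predicted error grows with the target's distance from the fiducial cloud and shrinks with more, and more widely spread, fiducials.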

  • Hybrid tracking algorithms for planar and non-planar structures subject to illumination changes

    Page(s): 52 - 55

    Augmented reality (AR) aims to fuse a virtual world with the real one in an image stream. When only a vision sensor is considered, AR relies on registration techniques that must be accurate and fast enough for online augmentation. This paper proposes a real-time, robust, and efficient 3D model-based tracking algorithm for a monocular vision system. A virtual visual servoing approach is used to estimate the pose between the camera and the object. Integrating texture information into the classical non-linear edge-based pose computation provides a more reliable tracker. Several illumination models have been considered and compared to better deal with illumination changes in the scene. The method presented in this paper has been validated on several video sequences for augmented reality applications.
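
    The abstract does not spell out which illumination models are compared; as one common example of such a model, the sketch below fits a per-patch affine (gain and bias) relation between a reference template and the current image patch before residuals are evaluated, which makes the error function invariant to lighting changes of that form.

    ```python
    import numpy as np

    def affine_illumination_residual(template, patch):
        """Fit patch ~= a * template + b in the least-squares sense, then
        return residuals that ignore this gain/bias lighting change."""
        T = template.ravel().astype(float)
        I = patch.ravel().astype(float)
        A = np.column_stack([T, np.ones_like(T)])
        (a, b), *_ = np.linalg.lstsq(A, I, rcond=None)  # closed-form gain/bias
        return I - (a * T + b)
    ```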

  • Online camera pose estimation in partially known and dynamic scenes

    Page(s): 56 - 65

    One of the key requirements of augmented reality systems is robust real-time camera pose estimation. In this paper we present a robust approach that depends neither on offline pre-processing steps nor on prior knowledge of the entire target scene. The connection between the real and the virtual world is made by a given CAD model of one object in the scene; however, the model is only needed for initialization. A line model is created from the object rendered at a given camera pose and registered to the image gradient to find the initial pose. In the tracking phase, the camera is no longer restricted to the modeled part of the scene: the scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness-invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid to make the 2D feature tracking robust to large changes in scale. Occlusion is detected already at the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time using an Extended Kalman Filter framework. A quality manager handles the influence of each feature on the estimation of the camera pose. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty are incorporated consistently into both processes. Finally, validation results on synthetic as well as real video sequences are presented.
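
    A sketch of the rough 3D initialization step named in the abstract, linear triangulation of a tracked feature from two frames; the recursive EKF refinement that follows it in the paper is omitted here.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 camera projection matrices for two frames;
        x1, x2: (u, v) pixel observations of the same feature.
        Returns the 3D point minimising the algebraic (DLT) error."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                 # null vector = homogeneous 3D point
        return X[:3] / X[3]        # dehomogenise
    ```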

  • An all-in-one solution to geometric and photometric calibration

    Page(s): 69 - 78

    We propose a fully automated approach to calibrating multiple cameras whose fields of view may not all overlap. The only manual intervention our technique requires is waving an arbitrary textured planar pattern in front of the cameras. The pattern is then automatically detected in the frames where it is visible and used to simultaneously recover geometric and photometric camera calibration parameters. In other words, even a novice user can use our system to extract all the information required to add virtual 3D objects into the scene and light them convincingly. This makes it ideal for augmented reality applications, and we distribute the code under a GPL license.
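
    The paper recovers geometric and photometric parameters together from the detected pattern; as a sketch of the standard geometric core of such a pipeline (Zhang's closed-form method, which the paper may or may not use in this exact form), the intrinsic matrix can be recovered from three or more plane-to-image homographies of the waved pattern.

    ```python
    import numpy as np

    def _v(H, i, j):
        # Zhang's constraint row built from columns i and j of homography H
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0]*hj[0],
                         hi[0]*hj[1] + hi[1]*hj[0],
                         hi[1]*hj[1],
                         hi[2]*hj[0] + hi[0]*hj[2],
                         hi[2]*hj[1] + hi[1]*hj[2],
                         hi[2]*hj[2]])

    def intrinsics_from_homographies(Hs):
        """Hs: three or more 3x3 plane-to-image homographies. Returns K."""
        V = []
        for H in Hs:
            V.append(_v(H, 0, 1))                # r1 orthogonal to r2
            V.append(_v(H, 0, 0) - _v(H, 1, 1))  # ||r1|| equals ||r2||
        _, _, Vt = np.linalg.svd(np.asarray(V))
        B11, B12, B22, B13, B23, B33 = Vt[-1]    # B = K^-T K^-1 up to scale
        v0 = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
        lam = B33 - (B13**2 + v0*(B12*B13 - B11*B23)) / B11
        alpha = np.sqrt(lam / B11)
        beta = np.sqrt(lam * B11 / (B11*B22 - B12**2))
        gamma = -B12 * alpha**2 * beta / lam
        u0 = gamma * v0 / beta - B13 * alpha**2 / lam
        return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
    ```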

  • A registration evaluation system using an industrial robot

    Page(s): 79 - 87

    This paper describes an evaluation system built around an industrial robot, constructed for the purpose of evaluating registration technology for Mixed Reality. In this evaluation system, the tip of the robot arm plays the role of the user's head and carries a head-mounted display. By using an industrial robot, we can obtain the ground truth of the camera pose with a high level of accuracy and robustness, and we can play back the same specified operations repeatedly under identical conditions. In addition to the system implementation, we propose evaluation methods for motion robustness, relative orientation robustness, relative distance robustness, and jitter, as well as an overall evaluation. We verify the validity of this system through experiments.
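
    Of the proposed metrics, jitter is the simplest to state precisely; a minimal sketch (our formulation, not necessarily the paper's) is the RMS deviation of the estimated camera position while the robot arm holds a pose perfectly still.

    ```python
    import numpy as np

    def translational_jitter_rms(positions):
        """positions: (T, 3) estimated camera positions for one static pose."""
        d = positions - positions.mean(axis=0)
        return np.sqrt((d ** 2).sum(axis=1).mean())
    ```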

  • Performance analysis of an outdoor augmented reality tracking system that relies upon a few mobile beacons

    Page(s): 101 - 104

    We describe and evaluate a new tracking concept for outdoor Augmented Reality: a few mobile beacons added to the environment correct errors in head-worn inertial and GPS sensors. We evaluate the accuracy through detailed simulation of many error sources. The most important parameters are the errors in measuring the positions of the beacons and of the user's head, and the geometric configuration of the beacons around the point to augment. Using Monte Carlo simulations, we identify combinations of beacon configurations and error parameters that meet a specified goal of 1 m net error at 100 m range.
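
    A simplified Monte Carlo in the spirit of the abstract (the error model and numbers are illustrative, not the paper's): perturb the beacon and head position measurements, solve for the orientation the tracker would infer from the beacon bearings (Wahba's problem via SVD), and measure how far that orientation error displaces an augmentation at 100 m. Direct positional error adds on top of this orientation-induced term.

    ```python
    import numpy as np

    def kabsch(A, B):
        """Rotation R minimising ||R @ A - B|| for 3xN direction sets."""
        U, _, Vt = np.linalg.svd(B @ A.T)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ D @ Vt

    def net_errors(beacons, sigma_beacon, sigma_head, rng, dist_m=100.0, trials=2000):
        """beacons: (N, 3) true beacon positions relative to the head.
        Returns per-trial displacement (m) of a point dist_m metres ahead."""
        target = np.array([dist_m, 0.0, 0.0])
        true_dirs = (beacons / np.linalg.norm(beacons, axis=1, keepdims=True)).T
        errs = []
        for _ in range(trials):
            head = rng.normal(0.0, sigma_head, 3)   # error in measured head position
            meas = beacons + rng.normal(0.0, sigma_beacon, beacons.shape)
            d = meas - head
            dirs = (d / np.linalg.norm(d, axis=1, keepdims=True)).T
            R_est = kabsch(true_dirs, dirs)         # orientation inferred from bearings
            errs.append(np.linalg.norm(R_est @ target - target))
        return np.array(errs)

    # Fraction of trials meeting the paper's stated 1 m goal at 100 m, e.g.:
    # errs = net_errors(np.array([[5., 0, 0], [0, 5, 0], [-4., 3, 2]]),
    #                   0.02, 0.05, np.random.default_rng(0))
    # print((errs < 1.0).mean())
    ```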

  • Spatial relationship patterns: elements of reusable tracking and calibration systems

    Page(s): 88 - 97

    As tracking setups become increasingly complex, it gets harder to find suitable algorithms for tracking, calibration, and sensor fusion. A large number of solutions exists in the literature for various combinations of sensors; however, no development methodology is available for the systematic analysis of tracking setups. When a system is modeled as a spatial relationship graph (SRG), which describes coordinate systems and the known transformations between them, all algorithms used for tracking and calibration correspond to certain patterns in the graph. This paper introduces a formal model for representing such spatial relationship patterns and presents a small catalog of patterns frequently used in augmented reality systems. We also describe an algorithm that identifies patterns in SRGs at runtime for the automatic construction of data flow networks for tracking and calibration.
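
    A toy illustration of the runtime idea, with hypothetical data structures: store the SRG as edges labelled with known 4x4 transforms, and apply the simplest pattern, transitive composition, until no new relationship can be derived (a real system would also match calibration and fusion patterns).

    ```python
    import numpy as np

    def close_transitively(edges):
        """edges: dict mapping (src, dst) -> 4x4 transform taking src
        coordinates to dst coordinates. Returns the derivable closure."""
        edges = dict(edges)
        changed = True
        while changed:
            changed = False
            for (a, b), T_ab in list(edges.items()):
                for (b2, c), T_bc in list(edges.items()):
                    if b == b2 and a != c and (a, c) not in edges:
                        edges[(a, c)] = T_bc @ T_ab  # a -> b -> c
                        changed = True
        return edges
    ```

    For example, from ('camera', 'marker') and ('marker', 'tool') the closure yields a ('camera', 'tool') transform, which is the kind of data flow the paper constructs automatically.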

  • A mobile markerless AR system for maintenance and repair

    Page(s): 105 - 108

    We present a solution for AR-based repair guidance that covers software as well as hardware issues. In particular, we developed a markerless, CAD-based tracking system that can deal with varying illumination during tracking, partial occlusions, and rapid motion. The system is also able to recover automatically from occasional tracking failures. On the hardware side, the system is based on an off-the-shelf notebook and a wireless mobile setup consisting of a wide-angle video camera and an analog video transmission system. This setup has been tested with a monocular full-color video-see-through HMD and additionally with a monochrome optical-see-through HMD. Our system underwent several extensive test series under real industrial conditions and proved useful for different maintenance and repair scenarios.

  • Going out: robust model-based tracking for outdoor augmented reality

    Page(s): 109 - 118

    This paper presents a model-based hybrid tracking system for outdoor augmented reality in urban environments, enabling accurate, real-time overlays on a handheld device. The system combines several well-known approaches to provide a robust experience that surpasses each of the individual components alone: an edge-based tracker for accurate localisation, gyroscope measurements to deal with fast motions, measurements of gravity and the magnetic field to avoid drift, and a back store of reference frames with online frame selection to re-initialize automatically after dynamic occlusions or failures. A novel edge-based tracker dispenses with the conventional edge model and instead uses a coarse but textured 3D model. This yields several advantages: scale-based detail culling is automatic, appearance-based edge signatures can be used to improve matching, and the required models are more commonly available. The accuracy and robustness of the resulting system are demonstrated through comparisons with map-based ground truth data.
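
    The key idea of deriving edges from a rendered view of the textured model, rather than from a hand-built CAD edge model, can be sketched as follows; the renderer and the matching step are assumed, and this is our reading of the approach, not the paper's code.

    ```python
    import cv2
    import numpy as np

    def edge_samples_from_render(rendered_gray, step=10):
        """Extract edge control points, with local gradient orientation,
        from a rendering of the textured model at the predicted pose."""
        edges = cv2.Canny(rendered_gray, 50, 150)
        gx = cv2.Sobel(rendered_gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(rendered_gray, cv2.CV_32F, 0, 1)
        ys, xs = np.nonzero(edges)
        return [(int(x), int(y), float(np.arctan2(gy[y, x], gx[y, x])))
                for x, y in zip(xs[::step], ys[::step])]  # (x, y, edge normal)
    ```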

  • LightSense: enabling spatially aware handheld interaction devices

    Page(s): 119 - 122

    The vision of spatially aware handheld interaction devices has been hard to realize. Several research groups have addressed the difficulties of solving the general tracking problem for small devices; typical issues are performance, hardware availability, and platform independence. We present LightSense, an approach that employs commercially available components to achieve robust tracking of cell phone LEDs without any modification to the device. Cell phones can thus be promoted to interaction and display devices in ubiquitous installations of systems such as the ones we present here. This could enable a new generation of spatially aware handheld interaction devices that unobtrusively empower and assist us in our everyday tasks.
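
    A minimal sketch of the sensing step, assuming a camera observing the surface on which the phone rests (thresholds and names hypothetical): the LED appears as a small saturated blob, so simple thresholding plus connected components yields a 2D position to track.

    ```python
    import cv2

    def detect_led(frame_gray, min_area=4, max_area=400):
        """Return the (x, y) centroid of a plausible LED blob, or None."""
        _, mask = cv2.threshold(frame_gray, 240, 255, cv2.THRESH_BINARY)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
        best = None
        for i in range(1, n):                    # label 0 is the background
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
                best = tuple(centroids[i])
        return best
    ```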

  • Interactive laser-projection for programming industrial robots

    Page(s): 125 - 128

    A method for intuitive and efficient programming of industrial robots based on Augmented Reality (AR) is presented, in which tool trajectories and target coordinates are interactively visualized and manipulated in the robot's environment by means of laser projection. For intuitive and efficient user input, spatial interaction techniques have been developed that enable the user to draw the desired motion paths for processing a workpiece surface directly onto the respective object. The method has been implemented in an integrated AR user interface and initially evaluated in an experimental programming scenario. The results indicate that it enables significantly faster and easier programming of processing tasks than currently available shop-floor programming methods.
