
2010 9th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)

13-16 October 2010


Entries 1 - 25 of 102
  • [Title page]

    Page(s): i
  • [Copyright notice]

    Page(s): ii
  • Contents

    Page(s): iii - vii
  • Supporting organizations

    Page(s): viii
  • From the symposium general chairs

    Page(s): ix
  • From the Science & Technology program chairs

    Page(s): x - xi
  • IEEE Visualization and Graphics Technical Committee (VGTC)

    Page(s): xii
  • Task force on Human centered Computing (TFHCC)

    Page(s): xiii
  • Conference committee

    Page(s): xiv
  • International Program Committee and Reviewers

    Page(s): xi - xiii
  • Augmenting reality for medicine, training, presence and telepresence

    Page(s): xiv
  • Augmented dreams

    Page(s): xv
  • Science & technology papers [breaker page]

    Page(s): 1
  • Perceptual issues in augmented reality revisited

    Page(s): 3 - 12

    This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.

  • The effect of out-of-focus blur on visual discomfort when using stereo displays

    Page(s): 13 - 17

    Visual discomfort is a major problem for head-mounted displays and other stereo displays. One effect that is known to reduce visual comfort is double vision, which can occur due to high disparities. Previous studies suggest that adding artificial out-of-focus blur increases the fusional limits within which the left and right images can be fused without double vision. We investigate the effect of adding artificial out-of-focus blur on visual discomfort using two different setups. One uses a stereo monitor and an eye tracker to change the depth of focus based on the gaze of the user. The other uses a video see-through head-mounted display. A study involving 18 subjects showed that the viewing comfort when using blur is significantly higher in both setups for virtual scenes. However, we cannot confirm beyond doubt that the higher viewing comfort is related only to an increase of the fusional limits, as many subjects reported that double vision did not occur during the experiment. Results for additional photographed images shown to the subjects were less significant. A first prototype of an AR system that extracts a depth map from stereo images and adds artificial out-of-focus blur is presented.

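    For readers who want a feel for the technique, the following is a minimal sketch of gaze-contingent depth-of-field blur given a depth map and a fixation depth; it is an illustrative reconstruction, not the authors' implementation, and the mapping from depth difference to blur strength (depth_scale, max_sigma) is an assumption.

    ```python
    import numpy as np
    import cv2

    def depth_of_field_blur(image, depth, fixation_depth, max_sigma=5.0, depth_scale=0.1):
        """Blur each pixel according to its depth distance from the fixation depth.

        image          -- HxWx3 uint8 frame
        depth          -- HxW float depth map (e.g. from stereo matching)
        fixation_depth -- depth the user is assumed to fixate (e.g. from an eye tracker)
        """
        # Desired blur strength per pixel, clamped to [0, max_sigma].
        sigma_map = np.clip(np.abs(depth - fixation_depth) * depth_scale, 0.0, max_sigma)

        # Pre-blur the frame at a few discrete levels and pick the nearest level per pixel.
        levels = np.linspace(0.0, max_sigma, 6)
        stack = [image.copy()]
        for s in levels[1:]:
            stack.append(cv2.GaussianBlur(image, (0, 0), s))

        idx = np.clip(np.digitize(sigma_map, levels) - 1, 0, len(levels) - 1)
        out = np.empty_like(image)
        for i, blurred in enumerate(stack):
            mask = idx == i
            out[mask] = blurred[mask]
        return out
    ```
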
  • Image-based ghostings for single layer occlusions in augmented reality

    Page(s): 19 - 26

    In augmented reality displays, X-Ray visualization techniques make hidden objects visible by combining the physical view with an artificial rendering of the hidden information. An important step in X-Ray visualization is to decide which parts of the physical scene should be kept and which should be replaced by overlays. The combination should provide users with essential perceptual cues to understand the depth relationship between the hidden information and the physical scene. In this paper we present an approach that addresses this decision in unknown environments by analyzing camera images of the physical scene and using the extracted information for occlusion management. Pixels are grouped into perceptually coherent image regions and a set of parameters is determined for each region. The parameters change the X-Ray visualization to either preserve existing structures or generate synthetic structures. Finally, users can customize the overall opacity of foreground regions to adapt the visualization.

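    As a rough illustration of the region-based idea described above (grouping pixels into coherent regions and preserving structured ones), the sketch below derives a per-pixel opacity map from superpixels and their edge density; it is not the paper's pipeline, and the segmentation method, thresholds, and weighting are placeholder assumptions.

    ```python
    import numpy as np
    import cv2
    from skimage.segmentation import slic

    def ghosting_alpha(frame_bgr, base_opacity=0.6, n_segments=300):
        """Opacity of the physical (occluding) layer for a simple X-Ray composite.

        Regions with dense edges are kept more opaque so they remain visible as
        depth cues; flat regions become transparent so hidden content shows through.
        """
        labels = slic(frame_bgr, n_segments=n_segments, compactness=10, start_label=0)
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150) > 0

        alpha = np.zeros(frame_bgr.shape[:2], np.float32)
        for region in np.unique(labels):
            mask = labels == region
            edge_density = edges[mask].mean()          # fraction of edge pixels in the region
            alpha[mask] = base_opacity * min(1.0, 5.0 * edge_density)
        return alpha  # composite: out = alpha * frame + (1 - alpha) * hidden_rendering
    ```
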
  • An Augmented Reality X-Ray system based on visual saliency

    Page(s): 27 - 36

    In the past, several systems have been presented that enable users to view occluded points of interest using Augmented Reality X-ray visualizations. It is challenging to design a visualization that provides correct occlusions between occluder and occluded objects while maximizing legibility. We have previously published an Augmented Reality X-ray visualization that renders edges of the occluder region over the occluded region to facilitate correct occlusions while providing foreground context. While this approach is simple and works in a wide range of situations, it provides only minimal context of the occluder object. In this paper, we present the background, design, and implementation of a novel visualization technique that aims to provide users with richer context of the occluder object. While our previous visualization employed only one salient feature (edges) to determine which parts of the occluder to display, our novel technique is an initial attempt to explore the design space of employing multiple salient features for this task. The prototype presented in this paper employs three additional salient features: hue, luminosity, and motion. We conducted two evaluations with human participants to investigate the benefits and limitations of our prototype compared to our previous system. The first evaluation showed that although the novel visualization provides a richer context of the occluder object, it does not impede users from selecting objects in the occluded area; however, it also indicated problems in our prototype. In the second evaluation, we investigated these problems through an online survey with systematically varied occluder and occluded scenes, focusing on the qualitative aspects of our visualizations. The results were encouraging, but pointed out that our novel visualization needs a higher level of adaptiveness.

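    The combination of multiple salient features can be illustrated with the following sketch, which fuses edge, hue, luminosity, and motion cues into a single saliency map used as the occluder's opacity; the particular operators and weights are illustrative assumptions, not the paper's design.

    ```python
    import numpy as np
    import cv2

    def occluder_saliency(prev_gray, frame_bgr):
        """Fuse simple edge, hue, luminosity and motion cues into one saliency map."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        hue = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)

        def norm(x):
            x = np.abs(x.astype(np.float32))
            return x / (x.max() + 1e-6)

        edge = norm(cv2.Canny(gray, 50, 150))
        hue_contrast = norm(cv2.Laplacian(hue, cv2.CV_32F))
        lum_contrast = norm(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F))
        motion = norm(cv2.absdiff(gray, prev_gray))

        saliency = 0.4 * edge + 0.2 * hue_contrast + 0.2 * lum_contrast + 0.2 * motion
        return np.clip(saliency, 0.0, 1.0)   # use as per-pixel opacity of the occluder layer
    ```
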
  • Determining the point of minimum error for 6DOF pose uncertainty representation

    Page(s): 37 - 45

    In many augmented reality applications, in particular in the medical and industrial domains, knowledge about tracking errors is important. Most current approaches characterize tracking errors by 6×6 covariance matrices that describe the uncertainty of a 6DOF pose, where the center of rotational error lies in the origin of a target coordinate system. This origin is assumed to coincide with the geometric centroid of a tracking target. In this paper, we show that, in the case of a multi-camera fiducial tracking system, the geometric centroid of a body does not necessarily coincide with the point of minimum error. The latter is not fixed to a particular location, but moves depending on the individual observations. We describe how to compute this point of minimum error given a covariance matrix and verify the validity of the approach using Monte Carlo simulations on a number of scenarios. Looking at the movement of the point of minimum error, we find that it can be located surprisingly far away from its expected position. This is further validated by an experiment using a real camera system.

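    To make the computation concrete, here is a small numerical sketch that propagates a 6×6 pose covariance to an offset point and searches for the offset with the smallest positional uncertainty. It assumes a standard small-angle error model, e(p) = δt + δθ × p, and a block ordering of rotation first, then translation; this is an illustrative reconstruction, not the paper's derivation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def skew(p):
        return np.array([[0.0, -p[2], p[1]],
                         [p[2], 0.0, -p[0]],
                         [-p[1], p[0], 0.0]])

    def position_covariance(cov6, p):
        """First-order positional covariance at offset p from the target origin.

        cov6 blocks: S_rr (rotation), S_rt / S_tr (cross terms), S_tt (translation).
        Small-angle error model: e(p) = dt + dtheta x p = dt - [p]_x dtheta.
        """
        S_rr, S_rt = cov6[:3, :3], cov6[:3, 3:]
        S_tr, S_tt = cov6[3:, :3], cov6[3:, 3:]
        A = -skew(p)
        return S_tt + A @ S_rt + S_tr @ A.T + A @ S_rr @ A.T

    def point_of_minimum_error(cov6, x0=np.zeros(3)):
        """Offset (in target coordinates) minimising the trace of the positional covariance."""
        res = minimize(lambda p: np.trace(position_covariance(cov6, p)), x0)
        return res.x
    ```
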
  • Accurate real-time tracking using mutual information

    Page(s): 47 - 56

    In this paper we present a direct tracking approach that uses Mutual Information (MI) as an alignment metric. The proposed approach is robust, runs in real time, and gives an accurate estimate of the displacement, which makes it well suited to augmented reality applications. MI is a measure of the quantity of information shared by two signals and has been widely used in medical applications. Although MI enables robust alignment under illumination changes, multi-modality, and partial occlusions, few works have applied it to object tracking in image sequences because of optimization problems. In this work, we propose an optimization method that is adapted to the MI cost function and gives a practical solution for augmented reality applications. We show that by refining the computation of the Hessian matrix and using a specific optimization approach, the tracking results are far more robust and accurate than existing solutions. A new approach is also proposed to speed up the computation of the derivatives while keeping the same optimization efficiency. To validate the advantages of the proposed approach, several experiments are performed. The ESM and the proposed MI tracking approaches are compared on a standard dataset. We also show the robustness of the proposed approach on registration applications with different sensor modalities: map versus satellite images and satellite images versus airborne infrared images within different AR applications.

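    For readers unfamiliar with mutual information as an alignment metric, the sketch below computes MI between two grayscale patches from their joint intensity histogram; a direct tracker maximises this score over warp parameters. This is a generic textbook formulation, not the optimization scheme proposed in the paper.

    ```python
    import numpy as np

    def mutual_information(patch_a, patch_b, bins=8):
        """Mutual information of two uint8 grayscale patches via a joint histogram."""
        a = (patch_a.astype(np.int64) * bins // 256).ravel()
        b = (patch_b.astype(np.int64) * bins // 256).ravel()

        joint = np.zeros((bins, bins))
        np.add.at(joint, (a, b), 1.0)                  # joint histogram
        joint /= joint.sum()

        pa = joint.sum(axis=1, keepdims=True)          # marginals
        pb = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))
    ```
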
  • Point-and-shoot for ubiquitous tagging on mobile phones

    Page(s): 57 - 64

    We propose a novel way to augment a real scene with minimal user intervention on a mobile phone: the user only has to point the phone camera at the desired location of the augmentation. Our method is valid only for vertical or horizontal surfaces, but this is not a restriction in practice in man-made environments, and it avoids going through any reconstruction of the 3D scene, which is still a delicate process. Our approach is inspired by recent work on perspective patch recognition; we show how to modify it for better performance on mobile phones and how to exploit the phone's accelerometers to relax the need for fronto-parallel views. In addition, our implementation allows the augmentations and the required data to be shared over peer-to-peer communication to build a shared AR space on mobile phones.

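    The use of the accelerometers to relax the need for fronto-parallel views can be sketched as follows for the horizontal-surface case: the gravity direction measured in the camera frame gives a rotation that synthesises an approximately top-down view, applied as a homography before patch description. This is a hypothetical illustration under a pinhole model; vertical surfaces and degenerate orientations are not handled.

    ```python
    import numpy as np
    import cv2

    def gravity_rectifying_homography(K, gravity_cam):
        """Homography warping the view of a horizontal surface towards fronto-parallel.

        K           -- 3x3 camera intrinsics
        gravity_cam -- gravity vector in the camera frame (from the accelerometer)
        """
        g = gravity_cam / np.linalg.norm(gravity_cam)
        z = np.array([0.0, 0.0, 1.0])                  # desired viewing direction

        # Rotation bringing the gravity direction onto the optical axis
        # (Rodrigues formula for the rotation between two unit vectors).
        v = np.cross(g, z)
        s, c = np.linalg.norm(v), float(np.dot(g, z))
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / (s ** 2 + 1e-12))

        return K @ R @ np.linalg.inv(K)

    # usage (hypothetical): H = gravity_rectifying_homography(K, g)
    #                       warped = cv2.warpPerspective(frame, H, (w, h))
    ```
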
  • Foldable augmented maps

    Page(s): 65 - 72

    This paper presents folded surface detection and tracking for augmented maps. For detection, plane detection is iteratively applied to 2D correspondences between an input image and a reference plane, because the folded surface is composed of multiple planes. In order to compute the exact folding line from the detected planes, the intersection line of the planes is computed from their positional relationship. After detection, each plane is individually tracked by a frame-by-frame descriptor update. For natural augmentation on the folded surface, we overlay virtual geographic data on each detected plane. The user can interact with the geographic data by finger pointing, because the user's fingertip is also detected during tracking. As a usage scenario, several interactions on the folded surface are introduced. Experimental results show the accuracy and performance of folded surface detection, demonstrating the effectiveness of our approach.

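    The iterative plane-detection step described above can be sketched with OpenCV's RANSAC homography fitting: fit one homography to the 2D correspondences, remove its inliers, and fit the next facet on the remainder. Fold-line computation and per-plane tracking are omitted, and the thresholds are illustrative assumptions.

    ```python
    import numpy as np
    import cv2

    def detect_folded_planes(ref_pts, img_pts, min_inliers=20, max_planes=2):
        """Fit one homography per planar facet of the folded map, iteratively."""
        ref = np.asarray(ref_pts, np.float32)
        img = np.asarray(img_pts, np.float32)
        remaining = np.arange(len(ref))
        planes = []

        while len(remaining) >= min_inliers and len(planes) < max_planes:
            H, mask = cv2.findHomography(ref[remaining], img[remaining],
                                         cv2.RANSAC, ransacReprojThreshold=3.0)
            if H is None:
                break
            inlier_mask = mask.ravel().astype(bool)
            if inlier_mask.sum() < min_inliers:
                break
            planes.append((H, remaining[inlier_mask]))
            remaining = remaining[~inlier_mask]        # fit the next facet on the rest
        return planes
    ```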