IEEE Virtual Reality Conference 2008 (VR '08)

Date: 8–12 March 2008

Displaying results 1–25 of 86
  • [Breaker page]

    Publication Year: 2008, Page(s): 1
  • [Breaker page]

    Publication Year: 2008, Page(s): 1
  • Table of contents

    Publication Year: 2008, Page(s): iii - vii
  • Contributor listings

    Publication Year: 2008, Page(s): viii
  • [Opinion]

    Publication Year: 2008, Page(s): ix - x
  • [Opinion]

    Publication Year: 2008, Page(s): xi
  • [Society related material]

    Publication Year: 2008, Page(s): xii
  • [Society related material]

    Publication Year: 2008, Page(s): xiii
  • List of reviewers

    Publication Year: 2008, Page(s): xiv
  • List of reviewers

    Publication Year: 2008, Page(s): xv
  • Awards

    Publication Year: 2008, Page(s): xvi
  • Awards

    Publication Year: 2008, Page(s): xvii
  • [Opinion]

    Publication Year: 2008, Page(s): xix
  • Breaker pages: Papers

    Publication Year: 2008, Page(s): 1
  • Providing a Wide Field of View for Effective Interaction in Desktop Tangible Augmented Reality

    Publication Year: 2008, Page(s): 3 - 10
    Cited by: Papers (6)

    This paper proposes to generate wide field of view (FOV) augmented reality (AR) imagery by mosaicing images from smaller fields of moving views in "desktop" tangible AR (DTAR) environments. AR systems usually offer a limited FOV into the interaction space, constrained by the FOV of the camera and/or the display, which causes serious usability problems, especially when the interaction space is large and many tangible props/markers are used. This problem is more apparent in DTAR environments, in which an upright frontal display is used instead of a head-mounted display. It can be solved partly by placing the camera relatively far away or by using multiple cameras to increase the working FOV. However, with the former solution, the large distance between the interaction space and the fixed camera decreases the tracking and recognition reliability of the tangible markers, while the latter introduces significant additional set-up, cost, and computational load. Thus, we propose to use a mosaiced image to provide wide-FOV AR imagery. We experimentally compare our solution, i.e., offering a view of the entire interaction space at once, to other nominal AR set-ups. The experimental results show that, despite some visual artifacts due to imperfect mosaicing, the proposed solution can improve task performance and usability for a typical DTAR system. Our findings should contribute to making AR systems more practical and usable for the masses.

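    A minimal sketch of the mosaicing step, using OpenCV's generic image stitcher as an illustrative stand-in (this is not the authors' DTAR pipeline; frame capture, marker tracking, and the AR overlay are omitted):

        import cv2

        def build_mosaic(frames):
            """Stitch overlapping BGR views of the desktop interaction
            space into one wide field-of-view mosaic."""
            stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
            status, mosaic = stitcher.stitch(frames)
            if status != cv2.Stitcher_OK:
                raise RuntimeError(f"stitching failed with status {status}")
            return mosaic

    In a live DTAR set-up the mosaic would be rebuilt incrementally as the camera moves, rather than from a batch of frames.
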
  • Capturing Images with Sparse Informational Pixels using Projected 3D Tags

    Publication Year: 2008, Page(s): 11 - 18
    Cited by: Papers (2)

    In this paper, we propose a novel imaging system that enables the capture of photos and videos with sparse informational pixels. Our system is based on the projection and detection of 3D optical tags. We use an infrared (IR) projector to project temporally-coded (blinking) dots onto selected points in a scene. These tags are invisible to the human eye, but appear as clearly visible time-varying codes to an IR photosensor. As a proof of concept, we have built a prototype camera system (consisting of co-located visible and IR sensors) to simultaneously capture visible and IR images. When a user takes an image of a tagged scene using such a camera system, all the scene tags that are visible from the system's viewpoint are detected. In addition, tags that lie in the field of view but are occluded, and ones that lie just outside the field of view, are also automatically generated for the image. Associated with each tagged pixel is its 3D location and the identity of the object that the tag falls on. Our system can interface with conventional image recognition methods for efficient scene authoring, enabling objects in an image to be robustly identified using cheap cameras, minimal computations, and no domain knowledge. We demonstrate several applications of our system, including photo browsing, e-commerce, augmented reality, and object localization.

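    A minimal sketch of decoding one blinking tag from an IR pixel's intensity time series, assuming camera-projector synchronization (one sample per projected frame) and a plain binary code; the paper's actual coding scheme is not reproduced here:

        import numpy as np

        def decode_tag(intensities):
            """Turn an IR pixel's per-frame intensities into an integer tag ID."""
            samples = np.asarray(intensities, dtype=float)
            # Threshold halfway between the observed "off" and "on" levels.
            threshold = 0.5 * (samples.min() + samples.max())
            bits = (samples > threshold).astype(int)
            # Read the on/off pattern as a big-endian binary number.
            return int("".join(map(str, bits)), 2)
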
  • Envisor: Online Environment Map Construction for Mixed Reality

    Publication Year: 2008, Page(s): 19 - 26
    Cited by: Papers (8)

    One of the main goals of anywhere augmentation is the development of automatic algorithms for scene acquisition in augmented reality systems. In this paper, we present Envisor, a system for online construction of environment maps in new locations. To accomplish this, Envisor uses vision-based frame-to-frame and landmark orientation tracking for long-term, drift-free registration. For additional robustness, a gyroscope/compass orientation unit can optionally be used for hybrid tracking. The tracked video is then projected into a cubemap frame by frame. Feedback is presented to the user to help avoid gaps in the cubemap, while any remaining gaps are filled by texture diffusion. The resulting environment map can be used for a variety of applications, including shading of virtual geometry and remote presence.

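    A minimal sketch of the per-frame cubemap projection: each tracked view ray is mapped to one of the six cube faces and a texel within it. The face naming and (u, v) conventions here are assumptions, not Envisor's code:

        def direction_to_cubemap(d):
            """Map a unit direction (x, y, z) to (face, u, v), u and v in [0, 1]."""
            x, y, z = d
            ax, ay, az = abs(x), abs(y), abs(z)
            if ax >= ay and ax >= az:      # dominant X axis
                face = "+x" if x > 0 else "-x"
                u, v = (-z / ax if x > 0 else z / ax), -y / ax
            elif ay >= az:                 # dominant Y axis
                face = "+y" if y > 0 else "-y"
                u, v = x / ay, (z / ay if y > 0 else -z / ay)
            else:                          # dominant Z axis
                face = "+z" if z > 0 else "-z"
                u, v = (x / az if z > 0 else -x / az), -y / az
            return face, 0.5 * (u + 1.0), 0.5 * (v + 1.0)
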
  • A Mixed Reality Approach for Merging Abstract and Concrete Knowledge

    Publication Year: 2008, Page(s): 27 - 34
    Cited by: Papers (4)

    Mixed reality's (MR) ability to merge real and virtual spaces is applied to merging different knowledge types, such as abstract and concrete knowledge. To evaluate whether merging knowledge types can benefit learning, MR was applied to an interesting problem in anesthesia machine education. The virtual anesthesia machine (VAM) is an interactive, abstract 2D transparent reality simulation of the internal components and invisible gas flows of an anesthesia machine. It is widely used in anesthesia education. However, when presented with a real anesthesia machine, some students have difficulty transferring abstract VAM knowledge to the concrete real device. This paper presents the augmented anesthesia machine (AAM). The AAM applies a magic-lens approach to combine the VAM simulation and a real anesthesia machine. The AAM allows students to interact with the real anesthesia machine while visualizing how these interactions affect the internal components and invisible gas flows in the real-world context. To evaluate the AAM's learning benefits, a user study was conducted. Twenty participants were assigned to either the VAM (abstract only) or the AAM (concrete+abstract) condition. The results of the study show that MR can help users bridge their abstract and concrete knowledge, thereby improving their knowledge transfer into real-world domains.

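    A minimal sketch of magic-lens compositing in the spirit of the AAM: the simulation layer appears only inside the lens region over the camera view of the real machine. The circular lens and the pre-registered overlay are illustrative assumptions:

        import numpy as np

        def composite_magic_lens(camera_rgb, overlay_rgb, lens_center, lens_radius):
            """Show overlay pixels inside the lens; camera pixels elsewhere."""
            h, w = camera_rgb.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            cx, cy = lens_center
            inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= lens_radius ** 2
            out = camera_rgb.copy()
            out[inside] = overlay_rgb[inside]
            return out
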
  • An Empirical Study of Hear-Through Augmented Reality: Using Bone Conduction to Deliver Spatialized Audio

    Publication Year: 2008, Page(s): 35 - 42
    Cited by: Papers (1)

    Augmented reality (AR) is the mixing of computer-generated stimuli with real-world stimuli. In this paper, we present results from a controlled, empirical study comparing three ways of delivering spatialized audio for AR applications: a speaker array, headphones, and a bone-conduction headset. Analogous to optical see-through AR in the visual domain, hear-through AR allows users to receive computer-generated audio through the bone-conduction headset and real-world audio through their unoccluded ears. Our results show that subjects achieved the best accuracy with a speaker array physically located around the listener when stationary sounds were played, that there was no difference in accuracy between the speaker array and the bone-conduction device for moving sounds, and that both devices outperformed standard headphones for moving sounds. Subjective comments from subjects after the experiment support these performance data.

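    For context, a minimal sketch of one standard cue behind spatialized audio, the interaural time difference (ITD), using the Woodworth spherical-head approximation; the paper compares delivery devices and does not publish this formula:

        import math

        HEAD_RADIUS_M = 0.0875   # assumed average head radius
        SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

        def woodworth_itd(azimuth_rad):
            """ITD in seconds for a distant source at the given azimuth
            (0 = straight ahead, positive toward the right ear)."""
            theta = max(-math.pi / 2, min(math.pi / 2, azimuth_rad))
            return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)
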
  • User Boresighting for AR Calibration: A Preliminary Analysis

    Publication Year: 2008, Page(s): 43 - 46

    The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors, and the three-dimensional cloud of errors is displayed by orthogonal two-dimensional density plots. These data will lead to an understanding of the limits of user-introduced calibration error in augmented reality systems.

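    A minimal sketch of the Weibull fit the paper applies to head positional error magnitudes, using scipy's weibull_min on synthetic placeholder data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        errors_mm = 2.0 * rng.weibull(1.8, size=500)   # placeholder magnitudes

        # Fix the location at zero: an error magnitude cannot be negative.
        shape, loc, scale = stats.weibull_min.fit(errors_mm, floc=0.0)
        print(f"shape k = {shape:.2f}, scale = {scale:.2f} mm")
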
  • Using an Eye-Tracking System to Improve Camera Motions and Depth-of-Field Blur Effects in Virtual Environments

    Publication Year: 2008, Page(s): 47 - 50
    Cited by: Papers (8) | Patents (1)

    This paper describes the use of the user's focus point to improve visual effects in virtual environments (VEs). First, we describe how to retrieve the user's focus point in the 3D VE using an eye-tracking system. Second, we propose adaptations of two rendering techniques that use this focus point to improve users' sensations during first-person navigation: (1) a camera motion that simulates eye movement when walking, corresponding to the vestibulo-ocular and vestibulocollic reflexes by which the eyes compensate for body and head movements in order to maintain gaze on a specific target, and (2) a depth-of-field (DoF) blur effect that simulates the fact that humans perceive sharp objects only within some range of distances around the focal distance. Finally, we describe the results of an experiment conducted to study users' subjective preferences concerning these visual effects during first-person navigation. Participants globally preferred the effects when they were dynamically adapted to the focus point in the VE. Taken together, our results suggest that visual effects exploiting the user's focus point could be used in several VR applications involving first-person navigation, such as visits to architectural sites, training simulations, and video games.

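    A minimal sketch of a gaze-contingent depth-of-field blur: pixels are blended toward a blurred copy in proportion to how far their depth lies from the tracked focal depth. A single blur level keeps the sketch short; the paper's rendering pipeline is not reproduced:

        import cv2
        import numpy as np

        def gaze_dof_blur(image, depth, focus_depth, sharp_range=0.5, max_range=3.0):
            """Blend a blurred copy over `image` outside the in-focus depth band."""
            blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=6)
            # 0 inside the sharp band around focus_depth, ramping to 1 at max_range.
            t = (np.abs(depth - focus_depth) - sharp_range) / (max_range - sharp_range)
            alpha = np.clip(t, 0.0, 1.0)[..., None]
            return (image * (1 - alpha) + blurred * alpha).astype(image.dtype)
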
  • Object-Capability Security in Virtual Environments

    Publication Year: 2008, Page(s): 51 - 58

    Access control is an important aspect of shared virtual environments. Resource access may depend not only on prior authorization, but also on the context of usage, such as distance or position in the scene graph hierarchy. In virtual worlds that allow user-created content, participants must be able to define and exchange access rights to control the usage of their creations. Using object capabilities, fine-grained access control can be exerted at the object level. We describe our experiences in applying the object-capability model for access control to object-manipulation tasks common in collaborative virtual environments. We also report on a prototype implementation of an object-capability-safe virtual environment that allows anonymous, dynamic exchange of access rights between users, scene elements, and autonomous actors.

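    A minimal sketch of the object-capability pattern the paper builds on: the holder of a capability can invoke only the methods it grants, and the grantor can revoke it at any time. The class and its use are illustrative, not the paper's implementation:

        class Revocable:
            """A capability granting a fixed set of methods on a scene object."""

            def __init__(self, target, allowed):
                self._target = target
                self._allowed = frozenset(allowed)
                self._revoked = False

            def invoke(self, method, *args, **kwargs):
                if self._revoked or method not in self._allowed:
                    raise PermissionError(f"capability does not grant '{method}'")
                return getattr(self._target, method)(*args, **kwargs)

            def revoke(self):
                self._revoked = True

        # A creator might grant another avatar the right to move an object,
        # but not to delete it, and withdraw the grant later:
        #   cap = Revocable(scene_object, {"move"}); cap.invoke("move", 1, 0, 0)
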
  • Mobile Group Dynamics in Large-Scale Collaborative Virtual Environments

    Publication Year: 2008, Page(s): 59 - 66

    We have developed techniques called mobile group dynamics (MGDs), which help groups of people work together while they travel around large-scale virtual environments. MGDs explicitly show the groups that people have formed, and help people move around together and communicate over extended distances. The techniques were evaluated in the context of an urban planning application by providing one batch of participants with MGDs and another with an interface based on conventional collaborative virtual environments (CVEs). Participants with MGDs spent nearly twice as much time in close proximity (within 10 m of their nearest neighbor), communicated seven times more than participants with the conventional interface, and exhibited real-world patterns of behavior such as staying together over an extended period of time and regrouping after periods of separation. The study has implications for CVE designers because it shows how MGDs improve groupwork in CVEs.

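    A minimal sketch of the proximity measure reported above: counting participants as "together" when chained within 10 m of a neighbor amounts to single-linkage grouping at a 10 m threshold. The grouping code illustrates the measure, not the MGD techniques themselves:

        import math

        def proximity_groups(positions, radius=10.0):
            """Group avatar positions [(x, z), ...] by chained proximity."""
            parent = list(range(len(positions)))

            def find(i):                      # union-find with path compression
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            for i, p in enumerate(positions):
                for j in range(i + 1, len(positions)):
                    if math.dist(p, positions[j]) <= radius:
                        parent[find(i)] = find(j)

            groups = {}
            for i in range(len(positions)):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())
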
  • Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences

    Publication Year: 2008, Page(s): 67 - 70
    Cited by: Papers (7) | Patents (1)

    Massively Multiplayer Online Worlds (MMOs) are persistent virtual environments where people play, experiment, and socially interact. In this paper, we demonstrate that MMOs also provide a powerful platform for Augmented Reality (AR) applications, in which we blend locations in physical space with corresponding places in the virtual world. We introduce the notion of AR stages: persistent, evolving spaces that encapsulate AR experiences in online three-dimensional virtual worlds. We discuss the concepts and technology necessary to use an MMO for AR, including a novel set of design concepts aimed at keeping such a system easy to learn and use. By leveraging the features of the commercial MMO Second Life, we have created a powerful AR authoring environment accessible to a large, diverse set of users.

  • Symmetric Model of Remote Collaborative MR Using Tangible Replicas

    Publication Year: 2008, Page(s): 71 - 74
    Cited by: Papers (1)

    Research into collaborative mixed reality (MR) and augmented reality has recently been active. Previous studies showed that MR was preferred for collocated collaboration, while immersive virtual reality was preferred for remote collaboration, mainly because a physical object in a remote space cannot be handled directly. However, MR using tangible objects is still attractive for remote collaborative systems, because it enables seamless interaction with real objects enhanced by virtual information, together with the sense of touch. Here we introduce "tangible replicas" (dual objects that have the same shape, size, and surface) and propose a symmetric model for remote collaborative MR. The results of our experiments show that pointing and drawing functions on the tangible replica work well despite the limited shared information.

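    A minimal sketch of the symmetric-replica idea: because both sites hold physically identical replicas, a pointing or drawing event need only be shipped as coordinates in the replica's local frame to appear in the right place remotely. The message format is an assumption; the transport layer is omitted:

        import json

        def encode_annotation(kind, local_xyz):
            """Serialize a pointing/drawing event in replica-local coordinates."""
            return json.dumps({"kind": kind, "pos": list(local_xyz)})

        def apply_annotation(message, render_marker):
            """On the remote site, draw at the same local coordinates."""
            data = json.loads(message)
            render_marker(data["kind"], tuple(data["pos"]))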