Virtual Reality, 2005. Proceedings. VR 2005. IEEE

Date: 12-16 March 2005

Showing results 1-25 of 89
  • IEEE Virtual Reality 2005 (IEEE Cat. No.05CH37649)

  • Cover Image Credits

    Page(s): 0_2
  • Copyright

    Page(s): ii
  • Table of contents

    Page(s): iii - vii
  • Supporting organizations

    Page(s): viii
  • Messages from the General Chairs and Program Chairs

    Page(s): ix
  • IEEE Visualization and Graphics Technical Committee

    Page(s): x
  • Conference Committee

    Page(s): xi
  • Program Committee

    Page(s): xii
  • Keynote Address

    Page(s): xiii
  • The hand is slower than the eye: a quantitative exploration of visual dominance over proprioception

    Page(s): 3 - 10

    Without force feedback, a head-mounted display user's avatar may penetrate virtual objects. Some virtual environment designers prevent visual interpenetration, making the assumption that prevention improves user experience. However, preventing visual avatar interpenetration causes discrepancy between visual and proprioceptive cues. We investigated users' detection thresholds for visual interpenetration (the depth at which they see that two objects have interpenetrated) and sensory discrepancy (the displacement at which they notice mismatched visual and proprioceptive cues). We found that users are much less sensitive to visual-proprioceptive conflict than they are to visual interpenetration. We present our plan for using this result to create a better technique for dealing with virtual object penetration.
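
    The technique under study, preventing visual interpenetration while the tracked hand keeps moving, can be sketched in a few lines. The wall-clamp geometry and the threshold constant below are illustrative assumptions, not the paper's implementation or its measured thresholds.

        # Sketch: clamp the rendered hand at a wall (x = 0) while the tracked,
        # proprioceptive hand may pass through it, and report the resulting
        # visual-proprioceptive discrepancy. All numbers are made up.

        def render_hand_x(tracked_x, wall_x=0.0):
            """Prevent visual interpenetration: never draw the hand past the wall."""
            return min(tracked_x, wall_x)

        DISCREPANCY_THRESHOLD = 0.045  # metres; hypothetical, not the paper's result

        for tracked_x in (-0.10, -0.02, 0.03, 0.08):  # hand approaches, then enters
            visual_x = render_hand_x(tracked_x)
            offset = tracked_x - visual_x              # zero until penetration
            print(f"tracked={tracked_x:+.2f} m  rendered={visual_x:+.2f} m  "
                  f"offset={offset:.2f} m  noticeable={offset > DISCREPANCY_THRESHOLD}")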
  • An empirical user-based study of text drawing styles and outdoor background textures for augmented reality

    Page(s): 11 - 18

    A challenge in presenting augmenting information in outdoor augmented reality (AR) settings lies in the broad range of uncontrollable environmental conditions that may be present, specifically large-scale fluctuations in natural lighting and wide variations in likely backgrounds or objects in the scene. In this paper, we present a user-based study which examined the effects of outdoor background textures, changing outdoor illuminance values, and text drawing styles on user performance of a text identification task with an optical, see-through augmented reality system. We report significant effects for all of these variables, and discuss design guidelines and ideas for future work.
  • Influence of control/display ratio on the perception of mass of manipulated objects in virtual environments

    Page(s): 19 - 25

    This paper describes two psychophysical experiments which were conducted to evaluate the influence of the control/display (C/D) ratio on the perception of mass of manipulated objects in virtual environments (VE). In both experiments, a discrimination task was used in which participants were asked to identify the heavier object between two virtual balls. Participants could weigh each ball via a haptic interface and look at its synthetic display on a computer screen. Unknown to the participants, two parameters varied between each trial: the difference of mass between the balls and the C/D ratio used in the visual display when weighing the comparison ball. The data collected demonstrated that the C/D ratio significantly influenced the result of the mass discrimination task and sometimes even reversed it. The absence of gravity force largely increased this effect. These results suggest that if the visual motion of a manipulated virtual object is amplified when compared to the actual motion of the user's hand (i.e. if the C/D ratio used is smaller than 1), the user tends to feel that the mass of the object decreases. Thus, decreasing or amplifying the motions of the user in a VE can strongly modify the perception of haptic properties of objects that he/she manipulates. Designers of virtual environments could use these results for simplification considerations and also to avoid potential perceptual aberrations.
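
    The relation the abstract relies on can be written down directly: with a control/display ratio below 1, the displayed motion is larger than the hand motion. A minimal sketch, with all names and values invented for illustration:

        # Sketch: map real hand displacement to displayed object displacement
        # through a control/display (C/D) ratio. C/D < 1 amplifies the visual
        # motion, which the study links to a lower perceived mass.

        def displayed_motion(hand_displacement_m, cd_ratio):
            """C/D ratio = control (hand) amplitude / display amplitude."""
            return hand_displacement_m / cd_ratio

        hand_move = 0.10  # metres of real hand motion while weighing the ball
        for ratio in (0.5, 1.0, 2.0):
            print(f"C/D={ratio:.1f}: hand {hand_move:.2f} m -> "
                  f"displayed {displayed_motion(hand_move, ratio):.2f} m")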
  • Supporting scalable peer to peer virtual environments using frontier sets

    Page(s): 27 - 34

    We present a scalable implementation of a network partitioning scheme that we have called frontier sets. Frontier sets build on the notion of a potentially visible set (PVS). In a PVS, a world is sub-divided into cells and for each cell all the other cells that can be seen are computed. In contrast, a frontier set considers pairs of cells, A and B. For each pair, it lists two sets of cells, F_AB and F_BA. By definition, from no cell in F_AB is any cell in F_BA visible, and vice-versa. Our initial use of frontier sets has been to enable scalability in distributed networking. In this paper we build on previous work by showing how to avoid pre-computing frontier sets. Our previous algorithm required O(N³) space in the number of cells to store pre-computed frontier sets. Our new algorithm pre-computes an enhanced potentially visible set that requires only O(N²) space and then computes frontiers only as needed. Network simulations using code based on the Quake II engine show that frontiers have significant promise and may allow a new class of scalable peer-to-peer game infrastructures to emerge.
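
    As a toy illustration of the definition (not the authors' precomputation algorithm), a frontier pair can be grown greedily from a symmetric PVS so that the mutual-invisibility invariant holds by construction:

        # Sketch: grow a frontier pair (F_AB, F_BA) for cells A and B from a
        # potentially visible set. Invariant: no cell in F_AB sees any cell
        # in F_BA. The greedy growth below is illustrative only.

        def frontier_pair(pvs, a, b):
            """pvs maps each cell to the set of cells visible from it (symmetric)."""
            if b in pvs[a]:
                return None, None              # A and B see each other: no frontier
            f_ab, f_ba = {a}, {b}
            for cell in pvs:
                if cell in f_ab or cell in f_ba:
                    continue
                if not (pvs[cell] & f_ba):     # cell sees nothing in F_BA
                    f_ab.add(cell)
                elif not (pvs[cell] & f_ab):   # cell sees nothing in F_AB
                    f_ba.add(cell)
            return f_ab, f_ba

        # Four cells in a row; each cell sees only its neighbours.
        pvs = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
        print(frontier_pair(pvs, 1, 4))        # ({1, 2}, {4})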
  • DICE: Internet delivery of immersive voice communication for crowded virtual spaces

    Page(s): 35 - 41

    This paper develops a scalable system design for the creation, and delivery over the Internet, of a realistic voice communication service for crowded virtual spaces. Examples of crowded spaces include virtual market places or battlefields in online games. A realistic crowded audio scene including spatial rendering of the voices of surrounding avatars is impractical to deliver over the Internet in a peer-to-peer manner due to access bandwidth limitations and cost. A brute force server model, on the other hand, will face significant computational costs and scalability issues. This paper presents a novel server-based architecture for this service that performs simple operations in the servers (including weighted mixing of audio streams) to cope with access bandwidth restrictions of clients, and uses spatial audio rendering capabilities of the clients to reduce the computational load on the servers. This paper then examines the performance of two components of this architecture: angular clustering and grid summarization. The impact of two factors, namely a high density of avatars and realistic access bandwidth limitations, on the quality and accuracy of the audio scene is then evaluated using simulation results.
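
    A minimal sketch of the server-side step the abstract names, weighted mixing of surrounding avatars' voices into a few summary streams that the client then spatializes. The eight-sector layout and the 1/distance weighting are assumptions for illustration, not the DICE design:

        import math

        # Sketch: mix the voice frames of surrounding avatars into one summary
        # stream per angular sector around the listener.

        def mix_sectors(listener, avatars, frames, n_sectors=8):
            """avatars: {id: (x, y)}; frames: {id: list of PCM samples}."""
            frame_len = len(next(iter(frames.values())))
            sectors = [[0.0] * frame_len for _ in range(n_sectors)]
            for aid, (x, y) in avatars.items():
                dx, dy = x - listener[0], y - listener[1]
                dist = math.hypot(dx, dy) or 1e-6
                sector = int((math.atan2(dy, dx) % (2 * math.pi))
                             / (2 * math.pi) * n_sectors)
                weight = 1.0 / dist            # closer voices dominate the mix
                for i, sample in enumerate(frames[aid]):
                    sectors[sector][i] += weight * sample
            return sectors                     # one stream per sector for the client

        avatars = {"a": (1.0, 0.0), "b": (0.0, 2.0)}
        frames = {"a": [0.1, 0.2], "b": [0.3, -0.1]}
        print(mix_sectors((0.0, 0.0), avatars, frames))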
  • Automatic data exchange and synchronization for knowledge-based intelligent virtual environments

    Page(s): 43 - 50

    Advanced VR simulation systems are composed of several components with independent and heterogeneously structured databases. To guarantee a closed and consistent world simulation, flexible and robust data exchange between these components has to be realized. This multiple-database problem is well known in many distributed application domains, but it is central for VR setups composed of diverse simulation components. Particularly complicated is the exchange between object-centered and graph-based representation formats, where entity attributes may be distributed over the graph structure. This article presents an abstract declarative attribute representation concept, which handles different representation formats uniformly and enables automatic data exchange and synchronization between them. This mechanism is tailored to support the integration of a central knowledge component, which provides a uniform representation of the accumulated knowledge of the several simulation components involved. This component handles the incoming, and possibly conflicting, world changes propagated by the diverse components, and it becomes the central instance for process-flow synchronization of several autonomous evaluation loops.
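
    The declarative idea can be sketched as a binding object that knows how to read one logical attribute from an object-centered record and push it into a graph-based store. All class and field names below are hypothetical:

        # Sketch: one logical attribute bound to two representations, so a
        # change on the object side can be propagated into the graph store.

        class AttributeBinding:
            """Declarative link between one logical attribute and two stores."""
            def __init__(self, get_obj, get_graph, set_graph):
                self._get_obj = get_obj
                self._get_graph, self._set_graph = get_graph, set_graph

            def sync(self):
                """Propagate the object-side value into the graph representation."""
                value = self._get_obj()
                if self._get_graph() != value:
                    self._set_graph(value)

        entity = {"position": (1.0, 2.0, 3.0)}                 # object-centered side
        graph = {("entity1", "transform", "translation"): (0.0, 0.0, 0.0)}

        binding = AttributeBinding(
            get_obj=lambda: entity["position"],
            get_graph=lambda: graph[("entity1", "transform", "translation")],
            set_graph=lambda v: graph.__setitem__(
                ("entity1", "transform", "translation"), v),
        )
        binding.sync()
        print(graph)   # the graph store now agrees with the object-centered value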
  • Flexible parametrization of scene graphs

    Page(s): 51 - 58

    Scene graphs have become an established tool for developing interactive 3D applications, but development has focused on support for multi-processor and multi-pipeline systems, for distributed applications, and for advanced rendering effects. In contrast, this work focuses on the expressiveness of the scene graph structure as a central tool for developing 3D user interfaces. We present the idea of a context for the traversal of a scene graph, which makes it possible to parameterize a scene graph and reuse it for different purposes. Such context-sensitive scene graphs improve the inherent flexibility of a scene graph, which acts as a template whose parameters are bound during traversal. An implementation of this concept using an industry-standard scene graph library is described, and its use in a set of applications from the area of mobile augmented reality is demonstrated.
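
    The traversal-context idea fits in a few lines: nodes may hold parameter references instead of fixed values, and the context supplied at traversal time resolves them, so one graph acts as a template for several uses. Node and parameter names are invented:

        # Sketch: a scene graph whose node attributes may be parameter
        # references, resolved against a context bound at traversal time.

        class Param:
            def __init__(self, key):
                self.key = key

        class Node:
            def __init__(self, name, colour, children=()):
                self.name, self.colour, self.children = name, colour, list(children)

        def traverse(node, context, depth=0):
            colour = (context[node.colour.key]
                      if isinstance(node.colour, Param) else node.colour)
            print("  " * depth + f"{node.name}: {colour}")
            for child in node.children:
                traverse(child, context, depth + 1)

        # One template graph, reused under two different traversal contexts.
        scene = Node("root", "grey", [Node("marker", Param("highlight"))])
        traverse(scene, {"highlight": "red"})
        traverse(scene, {"highlight": "blue"})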
  • Quick-CULLIDE: fast inter- and intra-object collision culling using graphics hardware

    Page(s): 59 - 66

    We present a fast collision culling algorithm for performing inter- and intra-object collision detection among complex models using graphics hardware. Our algorithm is based on CULLIDE (Govindaraju et al., 2003) and performs visibility queries on the GPU to eliminate a subset of geometric primitives that are not in close proximity. We present an extension to CULLIDE that detects intra-object collisions (self-collisions) within complex models. Furthermore, we describe a novel visibility-based classification scheme to compute potentially colliding and collision-free subsets of objects and primitives, which considerably improves the culling performance. We have implemented our algorithm on a PC with an NVIDIA GeForce 6800 Ultra graphics card and applied it to three complex simulations, each consisting of objects with tens of thousands of triangles. In practice, we are able to compute all the self-collisions for cloth simulation up to image-space precision at interactive rates.
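
    The culling structure can be sketched if the GPU visibility query is replaced by a stand-in. Below, "fully visible with respect to all others" is approximated by an axis-aligned box disjointness test; this illustrates the pruning logic only, not the rasterization-based queries the paper uses:

        # Sketch: an object that is "fully visible" when tested against all
        # others cannot be colliding, so it leaves the potentially colliding
        # set (PCS). AABB disjointness stands in for the GPU query.

        def fully_visible(box, others):
            """Stand-in for the GPU query: True if box overlaps no other box."""
            lo, hi = box
            for olo, ohi in others:
                if all(l <= oh and ol <= h
                       for l, h, ol, oh in zip(lo, hi, olo, ohi)):
                    return False               # overlap found: not fully visible
            return True

        def prune_pcs(boxes):
            """Keep only the objects that fail the full-visibility test."""
            return [i for i, b in enumerate(boxes)
                    if not fully_visible(b, boxes[:i] + boxes[i + 1:])]

        boxes = [((0, 0, 0), (1, 1, 1)),       # overlaps the second box
                 ((0.5, 0, 0), (1.5, 1, 1)),
                 ((5, 5, 5), (6, 6, 6))]       # isolated: culled from the PCS
        print(prune_pcs(boxes))                # [0, 1]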
  • An analysis of orientation prediction and filtering methods for VR/AR

    Page(s): 67 - 74

    To enable a user to perform virtual reality tasks as efficiently as possible, reducing tracking inaccuracies from noise and latency is crucial. Much work has been done to improve tracking performance by using predictive filtering methods. However, it is unclear what the benefits of each of these methods are in practice, which parameters influence their performance, and what the extent of this influence is. We present an analysis of various orientation prediction and filtering methods using hand tasks and synthetic signals, and evaluate their performance in relation to each other. We identify critical parameters and analyse their influence on accuracy. Our results show that for the tested datasets, the use of an extended Kalman filter (EKF) is sufficient for orientation prediction in VR/AR.
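
    For contrast with the filters evaluated here, the simplest predictor assumes constant angular velocity: estimate the rotation rate from the two most recent orientation samples and extrapolate it over the prediction horizon. A sketch using numpy, with quaternions as (w, x, y, z); this is a baseline, not the paper's EKF:

        import numpy as np

        # Sketch: constant-angular-velocity orientation prediction, a simple
        # baseline next to the EKF the paper evaluates. Quaternions: (w, x, y, z).

        def q_mul(a, b):
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def q_conj(q):
            return q * np.array([1.0, -1.0, -1.0, -1.0])

        def predict(q_prev, q_curr, dt, horizon):
            """Extrapolate q_curr assuming the last-step rotation rate persists."""
            q_rel = q_mul(q_curr, q_conj(q_prev))         # rotation over dt
            angle = 2.0 * np.arccos(np.clip(q_rel[0], -1.0, 1.0))
            if angle < 1e-9:
                return q_curr                             # effectively no motion
            axis = q_rel[1:] / np.sin(angle / 2.0)
            step = angle * horizon / dt                   # scale rate to horizon
            q_step = np.concatenate(([np.cos(step / 2.0)],
                                     np.sin(step / 2.0) * axis))
            return q_mul(q_step, q_curr)

        deg = np.pi / 180.0
        q_prev = np.array([1.0, 0.0, 0.0, 0.0])
        q_curr = np.array([np.cos(5*deg), 0, 0, np.sin(5*deg)])  # 10 deg about z
        q_pred = predict(q_prev, q_curr, dt=0.01, horizon=0.02)
        print(2 * np.arccos(q_pred[0]) / deg)  # ~30: 10 observed + 20 extrapolated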
  • Self-calibrating optical motion tracking for articulated bodies

    Page(s): 75 - 82

    Building intuitive user interfaces for virtual reality applications is a difficult task, since one of the main goals is to provide a "natural" yet efficient input device for interacting with the virtual environment. One particularly interesting approach is to track and retarget the complete motion of a subject. Established techniques for full-body motion capture, such as optical motion tracking, exist. However, due to their computational complexity and their reliance on pre-specified models, they fail to meet the demanding requirements of virtual reality environments such as real-time response, immersion, and ad hoc configurability. Our goal is to support the use of motion capture as a general input device for virtual reality applications. In this paper we present a self-calibrating framework for optical motion capture, enabling the reconstruction and tracking of arbitrary articulated objects in real time. Our method automatically estimates all relevant model parameters on the fly, without any information on the initial tracking setup or the marker distribution, and computes the geometry and topology of multiple tracked skeletons. Moreover, we show how the model can make the motion capture phase robust against marker occlusions by exploiting the redundancy in the skeleton model and by reconstructing missing inner limbs and joints of the subject from partial information. Meeting the above requirements, our system is well suited to a wide range of virtual-reality-based applications where unconstrained tracking and flexible retargeting of motion data are desirable.
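
    One way to see how topology can be recovered without a prior model: marker pairs riding on the same rigid limb keep a near-constant mutual distance across frames, so low distance variance suggests a rigid link. The sketch below uses this classic observation; it is not the paper's estimator:

        import numpy as np

        # Sketch: infer which marker pairs lie on the same rigid limb by
        # checking that their mutual distance barely varies across frames.

        def rigid_pairs(frames, tol=1e-3):
            """frames: array (n_frames, n_markers, 3); returns stable pairs."""
            n = frames.shape[1]
            pairs = []
            for i in range(n):
                for j in range(i + 1, n):
                    d = np.linalg.norm(frames[:, i] - frames[:, j], axis=1)
                    if d.std() < tol:
                        pairs.append((i, j, d.mean()))
            return pairs

        # Two markers on one rotating limb (0.3 m apart) plus a free marker.
        rng = np.random.default_rng(0)
        angles = rng.uniform(0, 2 * np.pi, 50)
        m0 = np.zeros((50, 3))
        m1 = np.stack([0.3 * np.cos(angles), 0.3 * np.sin(angles),
                       np.zeros(50)], axis=1)
        m2 = rng.uniform(-1, 1, (50, 3))       # unrelated marker
        frames = np.stack([m0, m1, m2], axis=1)
        print(rigid_pairs(frames))             # [(0, 1, 0.3)]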
  • The Hedgehog: a novel optical tracking method for spatially immersive displays

    Page(s): 83 - 89

    Existing commercial technologies do not adequately meet the requirements for tracking in fully-enclosed VR displays. We present the Hedgehog, which overcomes several limitations imposed by existing sensors and tracking technology. The tracking system robustly and reliably estimates the 6DOF pose of the device with high accuracy and a reasonable update rate. The system is composed of several cameras viewing the display walls and an arrangement of laser diodes secured to the user. The light emitted from the lasers projects onto the display walls and the 2D centroids of the projections are tracked to estimate the 6DOF pose of the device. The system is able to handle ambiguous laser projection configurations, static and dynamic occlusions of the lasers, and incorporates an intelligent laser activation control mechanism that determines which lasers are most likely to improve the pose estimate. The Hedgehog is also capable of performing auto-calibration of the necessary camera parameters through the use of the SCAAT algorithm. A preliminary evaluation reveals that the system has an angular resolution of 0.01 degrees RMS and a position resolution of 0.2 mm RMS.
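
    The core geometry is easy to state: body-fixed laser rays hit known wall planes, and the pose follows from least squares on the observed spot centroids. The sketch below (numpy/scipy, reduced to yaw plus 2D position against a single wall at x = 1) illustrates that recovery; it is an assumption-laden toy, not the full 6DOF SCAAT-based estimator:

        import numpy as np
        from scipy.optimize import least_squares

        # Sketch: recover a reduced pose (x, y, yaw) from the wall positions
        # of body-fixed laser spots, with the wall plane fixed at x = 1.

        DIRS = np.array([[1.0, 0.0, 0.0], [1.0, 0.3, 0.0],
                         [1.0, 0.0, 0.3], [1.0, -0.3, 0.2]])  # device-frame rays

        def spots(pose):
            x, y, yaw = pose
            c, s = np.cos(yaw), np.sin(yaw)
            rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            out = []
            for d in DIRS:
                w = rot @ d
                t = (1.0 - x) / w[0]                  # intersect wall plane x = 1
                out.append([y + t * w[1], t * w[2]])  # spot (y, z) on the wall
            return np.ravel(out)

        true_pose = np.array([0.2, -0.1, 0.15])
        observed = spots(true_pose)                   # centroids the cameras see
        fit = least_squares(lambda p: spots(p) - observed, x0=[0.0, 0.0, 0.0])
        print(fit.x)                                  # ~[0.2, -0.1, 0.15]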