Virtual Reality Short Papers and Posters (VRW), 2012 IEEE

Date: 4-8 March 2012

Displaying Results 1 - 25 of 111
  • IEEE Virtual Reality Conference 2012 [Title page]

    Page(s): 1
  • [Copyright notice]

    Page(s): 1
  • Table of contents

    Page(s): iii - viii
  • Supporters

    Page(s): ix
  • Preface

    Page(s): x - xi
  • A message from the program chairs

    Page(s): xii
  • IEEE Visualization and Graphics Technical Committee (VGTC)

    Page(s): xiii
  • Organizing Committee

    Page(s): xiv
  • Program Committee

    Page(s): xv
  • Reviewers

    Page(s): xvi - xvii
  • Keynote presentation: Taking the “virtual” out of virtual reality

    Page(s): xviii

    Summary form only given. Today's graphics programs can not only produce stunning photo-realistic images or convincingly real scene displays for interactive exploration, they can also produce physical output, thanks to the emergence of several different layered manufacturing technologies. For many design activities, creating tangible models through some rapid-prototyping process is a new and crucial feedback loop for debugging the functionality or customer appeal of a new product. Dr. Séquin has two decades of experience with creating mathematical visualization models and designs ranging from university buildings to abstract geometrical sculptures. Turning these virtual creations into physical realities, however, raises a whole new set of issues that are often overlooked in the initial virtual design phase.

  • Banquet presentation: What's next?: The third wave in computer graphics and interactive techniques

    Page(s): xix
  • Capstone presentation: Isn't all reality really virtual?

    Page(s): xx
  • Short papers

    Page(s): 1
  • Crowd simulation using Discrete Choice Model

    Page(s): 3 - 6

    We present a new algorithm to simulate a variety of crowd behaviors using the Discrete Choice Model (DCM). DCM has been widely studied in econometrics to examine and predict customers' or households' choices. Our DCM formulation can simulate virtual agents' goal selection, and we highlight our algorithm by simulating heterogeneous crowd behaviors: evacuation, shopping, and rioting scenarios.

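    As a rough illustration of the goal-selection idea in the abstract above, the sketch below samples an agent's goal with a multinomial logit model, the most common discrete choice formulation. The utility attributes (distance, crowding), weights, and goal names are hypothetical placeholders, not the paper's actual formulation.

    ```python
    import math
    import random

    def choose_goal(agent_pos, goals, weights, crowding):
        """Pick a goal via a multinomial-logit discrete choice model (illustrative sketch)."""
        utilities = []
        for g in goals:
            dist = math.dist(agent_pos, g["pos"])
            # Hypothetical linear-in-parameters utility: closer, less crowded goals are more attractive.
            u = -weights["distance"] * dist - weights["crowding"] * crowding[g["id"]]
            utilities.append(u)

        # Softmax over utilities gives the logit choice probabilities (numerically stabilized).
        m = max(utilities)
        exps = [math.exp(u - m) for u in utilities]
        total = sum(exps)
        probs = [e / total for e in exps]

        # Sample a goal according to the choice probabilities.
        return random.choices(goals, weights=probs, k=1)[0]

    # Example: an agent at the origin choosing between two hypothetical exits.
    goals = [{"id": "exit_a", "pos": (10.0, 0.0)}, {"id": "exit_b", "pos": (4.0, 3.0)}]
    crowding = {"exit_a": 0.2, "exit_b": 0.8}
    weights = {"distance": 1.0, "crowding": 2.0}
    print(choose_goal((0.0, 0.0), goals, weights, crowding))
    ```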
  • Spatial augmented reality for environmentally-lit real-world objects

    Page(s): 7 - 10

    One augmented reality approach is to use digital projectors to alter the appearance of a physical scene, avoiding the need for head-mounted displays or special goggles. Instead, spatial augmented reality (SAR) systems depend on having sufficient light radiance to compensate the surface's colors toward those of a target visualization. However, standard SAR systems in dark room settings may suffer from insufficient light radiance, causing bright colors to exhibit unexpected color shifts and resulting in a misleading visualization. We introduce a SAR framework that focuses on minimally altering the appearance of arbitrarily shaped and colored objects, exploiting the presence of environment/room light as an additional light source to achieve compliance for bright colors. While previous approaches have compensated for environment light, none have explicitly exploited the environment light to achieve bright, previously incompliant colors. We implemented a full working system and compared our results to solutions achievable with standard SAR systems.

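    The radiance-compensation idea in the abstract above can be pictured with a simple additive reflection model in which room light contributes to the observed color alongside the projector. The model, variable names, and clamping rule below are assumptions for illustration only, not the paper's method.

    ```python
    import numpy as np

    def projector_compensation(target, albedo, env_light, projector_max=1.0):
        """Per-pixel projector values under an assumed linear SAR model (sketch).

        Assumed model:  observed = albedo * (projector + env_light)
        Solving for the projector input that reproduces `target`:
            projector = target / albedo - env_light
        Values outside the projector's range are clamped; where the clamp is hit,
        the target color is not achievable ("incompliant") under this model.
        """
        albedo = np.clip(albedo, 1e-3, None)          # avoid division by zero
        needed = target / albedo - env_light          # ideal projector radiance
        compensated = np.clip(needed, 0.0, projector_max)
        achievable = np.isclose(needed, compensated)  # mask of compliant pixels
        return compensated, achievable

    # Example: a bright target on a darker surface, without and with room light.
    target = np.array([0.9, 0.9, 0.9])
    albedo = np.array([0.5, 0.5, 0.5])
    print(projector_compensation(target, albedo, env_light=0.0))   # clamped: incompliant
    print(projector_compensation(target, albedo, env_light=0.9))   # room light makes it reachable
    ```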
  • The effects of navigational control and environmental detail on learning in 3D virtual environments

    Page(s): 11 - 14

    Studying what design features are necessary and effective for educational virtual environments (VEs), we focused on two design issues: level of environmental detail and method of navigation. In a controlled experiment, participants studied animal facts distributed among different locations in an immersive VE. Participants viewed the information either as an automated tour through the environment or with full navigational control. The experiment also compared two levels of environmental detail: a sparse environment with only the animal fact cards and a detailed version that also included landmark items and ground textures. The experiment tested memory and understanding of the animal information. Though neither environmental detail nor navigation type significantly affected learning outcomes, the results suggest that manual navigation may have negatively affected the learning activity. Also, learning scores were correlated with both spatial ability and video game usage, suggesting that educational VEs may not be an appropriate presentation method for some learners.

  • Room-sized informal telepresence system

    Page(s): 15 - 18

    We present a room-sized telepresence system for informal gatherings rather than conventional meetings. Unlike conventional systems, which constrain participants to sit in fixed positions, our system aims to facilitate casual conversations between people at two sites. The system consists of a wall of large flat displays at each of the two sites, showing a panorama of the remote scene constructed from multiple color and depth cameras. The main contribution of this paper is a solution that ameliorates the eye contact problem during conversation in typical scenarios while still maintaining a consistent view of the entire room for all participants. We achieve this by using two sets of cameras: a cluster of "Panorama Cameras" located at the center of the display wall, used to capture a panoramic view of the entire room, and a set of "Personal Cameras" distributed along the display wall to capture front views of nearby participants. A robust segmentation algorithm assisted by the depth cameras and an image synthesis algorithm work together to generate a consistent view of the entire scene. In our experience this new approach generates fewer distracting artifacts than conventional 3D reconstruction methods, while effectively correcting for eye gaze.

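    The depth-assisted segmentation step mentioned above might be pictured as follows; the background-subtraction rule and the thresholds are illustrative assumptions rather than the paper's algorithm.

    ```python
    import numpy as np

    def foreground_mask(depth, background_depth, color_diff,
                        depth_margin=0.05, color_thresh=30.0):
        """Depth-assisted foreground segmentation (illustrative sketch).

        A pixel is treated as a participant (foreground) if it is noticeably
        closer than a pre-captured empty-room depth map, or if depth is missing
        but the color differs strongly from the background color model.
        """
        valid = depth > 0.0                                   # 0 = no depth reading
        closer = valid & (depth < background_depth - depth_margin)
        color_cue = (~valid) & (color_diff > color_thresh)    # fall back to color difference
        return closer | color_cue
    ```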
  • Increasing agent physicality to raise social presence and elicit realistic behavior

    Page(s): 19 - 22

    The concepts of immersion and presence focus on the virtual environment itself. We instead focus on embodied conversational agents (ECAs), which occupy the virtual environment as interactive partners. We propose that the ECA analogues of immersion and presence are physicality and social presence. We performed a study to determine the effect of an ECA's physicality on social presence and on eliciting realistic behavior from the user. The results showed that increasing physicality can elicit realistic behavior and increase social presence, but there was also an interaction effect with plausibility.

  • Evaluation of visual and force feedback in virtual assembly verifications

    Page(s): 23 - 26

    This work presents an evaluation study of two collision feedback modalities for virtual assembly verification: visual and force feedback. Forty-three subjects performed several assembly tasks (peg-in-hole, narrow passage) designed with two levels of difficulty. The haptic rendering algorithm used is based on voxel and point data structures. Both objective measures (time and collision performance) and subjective measures were recorded and analyzed. The comparison of the feedback modalities revealed a clear and highly significant superiority of force feedback in virtual assembly scenarios. The objective data show that whereas assembly time is similar in most cases for both conditions, force collision feedback yields significantly smaller collision forces, which indicates higher assembly precision. The subjective ratings of the participants identify the force feedback condition as the most appropriate for determining clearances and correcting collision configurations, making it the modality best suited to predicting mountability.

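    The voxel- and point-based haptic rendering mentioned above follows the general voxmap/pointshell idea: surface points of the moving part are tested against a voxelized environment and penalty forces are accumulated. The sketch below shows that generic idea with an assumed stiffness and data layout, not the specific algorithm evaluated in the study.

    ```python
    import numpy as np

    def penalty_force(points, normals, voxel_occupancy, voxel_size, origin, stiffness=500.0):
        """Accumulate a penalty force for a point-sampled part in a voxel map (sketch).

        Each surface point of the moving part is mapped to a voxel of the static
        environment; if that voxel is occupied, a force along the point's inward
        normal (scaled by a hypothetical stiffness) is added.
        """
        force = np.zeros(3)
        for p, n in zip(points, normals):
            idx = tuple(((p - origin) // voxel_size).astype(int))
            if voxel_occupancy.get(idx, False):          # point lies in a colliding voxel
                force += stiffness * voxel_size * n      # push the part back out
        return force
    ```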
  • Puzzle assembly training: Real world vs. virtual environment

    Page(s): 27 - 30

    While training participants to assemble a 3D wooden burr puzzle, we compared results of training in a stereoscopic, head-tracked virtual assembly environment utilizing haptic devices and data gloves with real-world training. While virtual training took participants about three times longer, the group that used the virtual environment was able to assemble the physical test puzzle about three times faster than the group trained with the physical puzzle. We present several possible cognitive explanations for these results and our plans for future exploration of the factors that improve the effectiveness of virtual process training over real-world experience.

  • Can physical motions prevent disorientation in naturalistic VR?

    Page(s): 31 - 34

    Most virtual reality simulators have a serious flaw: users tend to get easily lost and disoriented as they navigate. According to the prevailing opinion, this is because of the lack of actual physical motion to match the visually simulated motion: e.g., using HMD-based VR, Klatzky et al. [1] showed that participants failed to update visually simulated rotations unless they were accompanied by physical rotation of the observer, even if passive. If we use more naturalistic environments (but no salient landmarks) instead of just optic flow, would physical motion cues still be needed to prevent disorientation? To address this question, we used a paradigm inspired by Klatzky et al.: after visually displayed passive movements along curved streets in a city environment, participants were asked to point back to where they started. In half of the trials the visually displayed turns were accompanied by a matching physical rotation. Results showed that adding physical motion cues did not improve pointing performance. This suggests that physical motions might be less important for preventing disorientation if the visuals are naturalistic enough. Furthermore, two participants unexpectedly and consistently failed to update the visually simulated heading changes, even when they were accompanied by physical rotations. This suggests that physical motion cues do not necessarily improve spatial orientation ability in VR (by inducing obligatory spatial updating). These findings have noteworthy implications for the design of effective motion simulators.

  • Self-motion illusions (vection) in VR — Are they good for anything?

    Page(s): 35 - 38

    When we locomote through real or virtual environments, self-to-object relationships constantly change. Nevertheless, in real environments we effortlessly maintain an ongoing awareness of roughly where we are with respect to our immediate surroundings, even in the absence of any direct perceptual support (e.g., in darkness or with eyes closed). In virtual environments, however, we tend to get lost far more easily. Why is that? Research suggests that physical motion cues are critical in facilitating this “automatic spatial updating” of self-to-surround relationships during perspective changes. However, allowing for full physical motion in VR is costly and often unfeasible. Here, we demonstrated for the first time that the mere illusion of self-motion (“circular vection”) can provide a benefit similar to that of actual self-motion: while blindfolded, participants were asked to imagine facing new perspectives in a well-learned room and to point to previously learned objects. As expected, this task was difficult when participants could not physically rotate to the instructed perspective. Performance was significantly improved, however, when they perceived illusory self-rotation to the novel perspective (even though they did not physically move). This circular vection was induced by a combination of rotating sound fields (“auditory vection”) and biomechanical vection from stepping along a carousel-like rotating floor platter. In summary, illusory self-motion was shown to indeed facilitate perspective switches and thus spatial orientation. These findings have important implications both for our understanding of human spatial cognition and for the design of more effective yet affordable VR simulators. In fact, intelligently utilizing self-motion illusions might ultimately enable us to relax the need for physical motion in VR.

  • Sensor-fusion walking-in-place interaction technique using mobile devices

    Page(s): 39 - 42

    This paper describes a sensor-fusion-based wireless walking-in-place (WIP) interaction technique. We devised a new human-walking detection algorithm based on a fusion of the acceleration and magnetic sensors integrated within a smartphone. Our sensor-fusion approach can be useful when the detection capability of a single sensor is limited to a certain range of walking speeds, when the system power source is limited, and/or when computation power is limited. The proposed algorithm is versatile enough to handle possible data loss and random delay in the wireless communication environment, resulting in reduced wireless communication load and computation overhead. An initial study demonstrated that the algorithm can detect dynamic speeds of human walking. The algorithm can be implemented on any mobile device equipped with magnetic and acceleration sensors.

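    One way to picture an acceleration/magnetometer fusion for walking-in-place, in the spirit of the abstract above: steps are detected from peaks in acceleration magnitude, and heading is taken from the magnetometer so the virtual step is applied in the faced direction. The thresholds and the fusion rule are illustrative assumptions, not the paper's detection algorithm.

    ```python
    import math

    def detect_step(accel_history, threshold=1.3, gravity=9.81):
        """Return True when the latest acceleration sample looks like a step peak.

        A step is assumed when the acceleration magnitude crosses a hypothetical
        multiple of gravity on a rising edge (simple peak detection).
        """
        if len(accel_history) < 2:
            return False
        mag = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in accel_history[-2:]]
        return mag[-2] < threshold * gravity <= mag[-1]

    def heading_from_magnetometer(mx, my):
        """Yaw (radians) from the horizontal magnetic field components."""
        return math.atan2(my, mx)

    def wip_velocity(accel_history, mag_sample, step_length=0.7):
        """Translate a detected in-place step into a forward displacement vector."""
        if not detect_step(accel_history):
            return (0.0, 0.0)
        yaw = heading_from_magnetometer(*mag_sample)
        return (step_length * math.cos(yaw), step_length * math.sin(yaw))
    ```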
  • A taxonomy for deploying redirection techniques in immersive virtual environments

    Page(s): 43 - 46

    Natural walking can provide a compelling experience in immersive virtual environments, but it remains an implementation challenge due to the physical space constraints imposed on the size of the virtual world. The use of redirection techniques is a promising approach that relaxes the space requirements of natural walking by manipulating the user's route in the virtual environment, causing the real-world path to remain within the boundaries of the physical workspace. In this paper, we present and apply a novel taxonomy that separates redirection techniques according to their geometric flexibility versus the likelihood that they will be noticed by users. Additionally, we conducted a user study of three reorientation techniques, which confirmed that participants were less likely to experience a break in presence when reoriented using the techniques classified as subtle in our taxonomy. Our results also suggest that reorientation with change blindness illusions may give the impression of exploring a more expansive environment than continuous rotation techniques, but at the cost of negatively impacting spatial knowledge acquisition.

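    Redirection techniques of the kind this taxonomy classifies typically scale or offset the mapping between real and virtual motion; a common example is a subtle rotation gain. The sketch below shows that generic idea with assumed gain values; it is not any specific technique from the paper.

    ```python
    def redirect_rotation(real_delta_yaw, toward_center=True, gain_up=1.2, gain_down=0.85):
        """Apply a rotation gain so virtual turns differ subtly from real turns (sketch).

        Turns that steer the user's real-world path back toward the center of the
        tracking space are amplified, opposite turns are damped; the gain values
        here are illustrative assumptions, not measured detection thresholds.
        """
        gain = gain_up if toward_center else gain_down
        return gain * real_delta_yaw

    # Example: a 10-degree real head turn rendered as a 12-degree virtual turn.
    print(redirect_rotation(10.0, toward_center=True))
    ```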