
IEEE Transactions on Visualization and Computer Graphics

Issue 4 • April 2013


Displaying Results 1 - 25 of 36
  • IEEE Virtual Reality Conference 2013 [table of contents]

    Publication Year: 2013 , Page(s): i - iii
    PDF (137 KB)
    Freely Available from IEEE
  • Message from the Editor-in-Chief

    Publication Year: 2013 , Page(s): v
    PDF (68 KB)
    Freely Available from IEEE
  • Message from the Paper Chairs and Guest Editors

    Publication Year: 2013 , Page(s): vi
    PDF (150 KB)
    Freely Available from IEEE
  • IEEE Visualization and Graphics Technical Committee (VGTC)

    Publication Year: 2013 , Page(s): vii
    PDF (77 KB)
    Freely Available from IEEE
  • Conference Committee

    Publication Year: 2013 , Page(s): viii
    PDF (49 KB)
    Freely Available from IEEE
  • International Program Committee and Steering Committee

    Publication Year: 2013 , Page(s): ix
    PDF (54 KB)
    Freely Available from IEEE
  • Paper reviewers

    Publication Year: 2013 , Page(s): x - xi
    PDF (91 KB)
    Freely Available from IEEE
  • Keynote Speaker: Virtual Reality: Current Uses in Medical Simulation and Future

    Publication Year: 2013 , Page(s): xii
    PDF (111 KB)
    Freely Available from IEEE
  • Keynote Speaker: Welcome to the Future! Technology and Innovation at Disney

    Publication Year: 2013 , Page(s): xiii
    PDF (129 KB)
    Freely Available from IEEE
  • Keynote Speaker: Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution

    Publication Year: 2013 , Page(s): xiv
    PDF (125 KB)
    Freely Available from IEEE
  • The 2012 Virtual Reality Career Award

    Publication Year: 2013 , Page(s): xv
    PDF (191 KB)
    Freely Available from IEEE
  • The 2012 Virtual Reality Technical Achievement Award

    Publication Year: 2013 , Page(s): xvi
    PDF (311 KB)
    Freely Available from IEEE
  • The 2013 Virtual Reality Career Award

    Publication Year: 2013 , Page(s): xvii
    PDF (115 KB)
    Freely Available from IEEE
  • The 2013 Virtual Reality Technical Achievement Award

    Publication Year: 2013 , Page(s): xviii
    PDF (154 KB)
    Freely Available from IEEE
  • Validation of the MR Simulation Approach for Evaluating the Effects of Immersion on Visual Analysis of Volume Data

    Publication Year: 2013 , Page(s): 529 - 538
    Cited by:  Papers (1)
    Multimedia
    PDF (14939 KB)

    In our research agenda to study the effects of immersion (level of fidelity) on various tasks in virtual reality (VR) systems, we have found that the most generalizable findings come not from direct comparisons of different technologies, but from controlled simulations of those technologies. We call this the mixed reality (MR) simulation approach. However, the validity of MR simulation, especially when different simulator platforms are used, can be questioned. In this paper, we report the results of an experiment examining the effects of field of regard (FOR) and head tracking on the analysis of volume visualized micro-CT datasets, and compare them with those from a previous study. The original study used a CAVE-like display as the MR simulator platform, while the present study used a high-end head-mounted display (HMD). Out of the 24 combinations of system characteristics and tasks tested on the two platforms, we found that the results produced by the two different MR simulators were similar in 20 cases. However, only one of the significant effects found in the original experiment for quantitative tasks was reproduced in the present study. Our observations provide evidence both for and against the validity of MR simulation, and give insight into the differences caused by different MR simulator platforms. The present experiment also examined new conditions not present in the original study, and produced new significant results, which confirm and extend existing knowledge on the effects of FOR and head tracking. We provide design guidelines for choosing display systems that can improve the effectiveness of volume visualization applications.
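
    To make the MR simulation approach concrete, the following minimal sketch (not the authors' code; the function and all parameter values are illustrative assumptions) shows how a high-fidelity head-tracked platform can mimic a lower-fidelity display by blanking content outside a simulated field of regard and freezing the view when head tracking is disabled:

        # Illustrative sketch of MR simulation: the two fidelity variables from
        # the study (field of regard and head tracking) are mimicked by blanking
        # content outside the simulated display extent and by freezing the
        # tracked orientation. All names here are hypothetical.
        def simulated_view(tracked_yaw_deg, for_extent_deg, head_tracking_on):
            """Return the yaw to render, or None to show blank (off-display)."""
            yaw = tracked_yaw_deg if head_tracking_on else 0.0  # frozen if untracked
            if abs(yaw) > for_extent_deg / 2.0:
                return None  # looking past the simulated display: render blank
            return yaw

        # Simulating a 90-degree-FOR display without head tracking always yields
        # the fixed forward view, as on the real lower-fidelity system.
        print(simulated_view(60.0, 90.0, head_tracking_on=False))  # prints 0.0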

  • Applying Mixed Reality to Simulate Vulnerable Populations for Practicing Clinical Communication Skills

    Publication Year: 2013 , Page(s): 539 - 546
    Multimedia
    PDF (5236 KB) | HTML

    Health sciences students often practice and are evaluated on interview and exam skills by working with standardized patients (people who role-play having a disease or condition). However, standardized patients do not exist for certain vulnerable populations such as children and the intellectually disabled. As a result, students receive little to no exposure to vulnerable populations before becoming working professionals. To address this problem and thereby increase exposure to vulnerable populations, we propose using virtual humans to simulate members of vulnerable populations. We created a mixed reality pediatric patient that allowed students to practice pediatric developmental exams. Practicing several exams is necessary for students to understand how to properly interact with and correctly assess a variety of children. Practice also increases a student's confidence in performing the exam. Effective practice requires students to treat the virtual child realistically. Treating the child realistically might be affected by how the student and virtual child physically interact, so we created two object interaction interfaces - a natural interface and a mouse-based interface. We tested the complete mixed reality exam and also compared the two object interaction interfaces in a within-subjects user study with 22 participants. Our results showed that the participants accepted the virtual child as a child and treated it realistically. Participants also preferred the natural interface, but the interface did not affect how realistically participants treated the virtual child.

  • The Effects of Visual Realism on Search Tasks in Mixed Reality Simulation

    Publication Year: 2013 , Page(s): 547 - 556
    Cited by:  Papers (1)
    PDF (8448 KB) | HTML

    In this paper, we investigate the validity of Mixed Reality (MR) Simulation by conducting an experiment studying the effects of the visual realism of the simulated environment on various search tasks in Augmented Reality (AR). MR Simulation is a practical approach to conducting controlled and repeatable user experiments in MR, including AR. This approach uses a high-fidelity Virtual Reality (VR) display system to simulate a wide range of equal or lower fidelity displays from the MR continuum, for the express purpose of conducting user experiments. For the experiment, we created three virtual models of a real-world location, each with a different perceived level of visual realism. We designed and executed an AR experiment using the real-world location and repeated the experiment within VR using the three virtual models we created. The experiment examined how fast users could search for both physical and virtual information present in the scene. Our experiment demonstrates the usefulness of MR Simulation and provides early evidence for its validity with respect to AR search tasks performed in immersive VR.

  • Auditory Perception of Geometry-Invariant Material Properties

    Publication Year: 2013 , Page(s): 557 - 566
    Multimedia
    PDF (8781 KB)

    Accurately modeling the intrinsic material-dependent damping property for interactive sound rendering is a challenging problem. The Rayleigh damping model is commonly regarded as an adequate engineering model for interactive sound synthesis in virtual environment applications, but this assumption has never been rigorously analyzed. In this paper, we conduct a formal evaluation of this model. Our goal is to determine if auditory perception of material under the Rayleigh damping assumption is 'geometry-invariant', i.e., whether this approximation model is transferable across different shapes and sizes. First, audio recordings of same-material objects in various shapes and sizes are analyzed to determine if they can be approximated by the Rayleigh damping model with a single set of parameters. Next, we design and conduct a series of psychoacoustic experiments in which subjects evaluate whether audio clips synthesized using the Rayleigh damping model are from the same material, when we alter the material, shape, and size parameters. Through both quantitative and qualitative evaluation, we show that the acoustic properties of the Rayleigh damping model for a single material are generally preserved across different geometries of objects consisting of homogeneous materials, making it a suitable, geometry-invariant sound model. Our study results also show that, consistent with prior crossmodal expectations, visual perception of geometry can affect the auditory perception of materials. These findings facilitate the wide adoption of Rayleigh damping for interactive auditory systems and enable reuse of material parameters under this approximation model across different shapes and sizes, without laborious per-object parameter tuning.
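
    As a concrete illustration of the model under test, here is a minimal modal-synthesis sketch assuming the standard Rayleigh formulation C = alpha*M + beta*K (the parameter values are illustrative; this is not the paper's code):

        # Modal sound synthesis under Rayleigh damping C = alpha*M + beta*K.
        # A mode with angular frequency omega decays at rate
        # (alpha + beta*omega**2) / 2, so one (alpha, beta) pair characterizes
        # the material independently of object shape and size.
        import numpy as np

        def damped_mode(omega, alpha, beta, amp=1.0, duration=1.0, sr=44100):
            """One damped sinusoid for a vibration mode of frequency omega."""
            t = np.arange(int(duration * sr)) / sr
            decay = 0.5 * (alpha + beta * omega ** 2)  # material-dependent rate
            return amp * np.exp(-decay * t) * np.sin(omega * t)

        # A struck object is a sum of such modes; geometry changes the modal
        # frequencies and amplitudes while (alpha, beta) stays fixed, which is
        # the geometry-invariance the paper evaluates. Frequencies are made up.
        clip = (damped_mode(2 * np.pi * 440.0, alpha=1.0, beta=1e-7) +
                damped_mode(2 * np.pi * 1130.0, alpha=1.0, beta=1e-7, amp=0.5))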

  • Aural Proxies and Directionally-Varying Reverberation for Interactive Sound Propagation in Virtual Environments

    Publication Year: 2013 , Page(s): 567 - 575
    Multimedia
    PDF (6577 KB)

    We present an efficient algorithm to compute spatially-varying, direction-dependent artificial reverberation and reflection filters in large dynamic scenes for interactive sound propagation in virtual environments and video games. Our approach performs Monte Carlo integration of local visibility and depth functions to compute directionally-varying reverberation effects. The algorithm also uses a dynamically-generated rectangular aural proxy to efficiently model 2-4 orders of early reflections. These two techniques are combined to generate reflection and reverberation filters which vary with the direction of incidence at the listener. This combination leads to better sound source localization and immersion. The overall algorithm is efficient, easy to implement, and can handle moving sound sources, listeners, and dynamic scenes, with minimal storage overhead. We have integrated our approach with the audio rendering pipeline in Valve's Source game engine, and use it to generate realistic directional sound propagation effects in indoor and outdoor scenes in real time. We demonstrate, through quantitative comparisons as well as evaluations, that our approach leads to enhanced, immersive multi-modal interaction.
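
    The following rough sketch shows the general flavor of Monte Carlo depth sampling around the listener (it assumes a hypothetical scene.raycast(origin, direction) helper returning a hit distance; the paper's actual estimator and filter construction are more involved):

        # Cast uniformly distributed rays from the listener and bucket hit
        # distances by coarse direction; a larger mean depth in a direction
        # suggests a longer, more reverberant response arriving from it.
        import math, random

        def directional_depths(scene, listener_pos, num_rays=256):
            buckets = {}  # coarse direction bin -> list of hit distances
            for _ in range(num_rays):
                z = random.uniform(-1.0, 1.0)             # uniform direction
                phi = random.uniform(0.0, 2.0 * math.pi)  # on the unit sphere
                r = math.sqrt(max(0.0, 1.0 - z * z))
                d = (r * math.cos(phi), r * math.sin(phi), z)
                dist = scene.raycast(listener_pos, d)     # assumed helper
                key = tuple(round(c) for c in d)          # coarse angular bin
                buckets.setdefault(key, []).append(dist)
            # Per-direction mean depths would then parameterize the
            # directionally varying reverberation filters.
            return {k: sum(v) / len(v) for k, v in buckets.items()}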

  • Reflective and Refractive Objects for Mixed Reality

    Publication Year: 2013 , Page(s): 576 - 582
    Multimedia
    PDF (2972 KB)

    In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. Such objects are special from the point of view of light interaction, as they reflect and refract incident rays and may therefore cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large number of virtual point lights (VPLs) and is therefore not desirable under real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method computes more accurate reflections than simple cubemaps by using impostors. Another calculates two refractions in real time, and the third uses small quads to create caustic effects. Our proposed method replaces the parts of light paths that belong to reflective or refractive objects using these three methods and thus integrates tightly into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression of MR scenarios.
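
    For reference, the per-interface ray bend that a two-refraction pass builds on is the standard vector form of Snell's law; a generic sketch (not the paper's shader code) follows:

        # Standard vector-form Snell refraction and mirror reflection.
        # A two-refraction pass applies refract() at the object's front
        # surface (entry) and again at its back surface (exit).
        import numpy as np

        def refract(incident, normal, eta):
            """Refract a unit direction at a surface with unit normal.
            eta = n1 / n2; returns None on total internal reflection."""
            cos_i = -float(np.dot(incident, normal))
            sin2_t = eta * eta * (1.0 - cos_i * cos_i)
            if sin2_t > 1.0:
                return None                    # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * incident + (eta * cos_i - cos_t) * normal

        def reflect(incident, normal):
            """Mirror a unit direction about the surface normal."""
            return incident - 2.0 * float(np.dot(incident, normal)) * normal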

  • Human Tails: Ownership and Control of Extended Humanoid Avatars

    Publication Year: 2013 , Page(s): 583 - 590
    PDF (1268 KB)

    This paper explores body ownership and control of an 'extended' humanoid avatar that features a distinct and flexible tail-like appendage protruding from its coccyx. Thirty-two participants took part in a between-groups study to puppeteer the avatar in an immersive CAVE-like system. Participants' body movement was tracked, and the avatar's humanoid body synchronously reflected this motion. However, sixteen participants experienced the avatar's tail moving around randomly and asynchronously to their own movement, while the other participants experienced a tail that they could, potentially, control accurately and synchronously through hip movement. Participants in the synchronous condition experienced a higher degree of body ownership and agency, suggesting that visuomotor synchrony enhanced the probability of ownership over the avatar body despite its extra-human form. Participants experiencing body ownership were also more likely to be anxious and attempt to avoid virtual threats to the tail and body. The higher task performance of participants in the synchronous condition indicates that people are able to quickly learn how to remap normal degrees of bodily freedom in order to control virtual bodies that differ from the humanoid form. We discuss the implications and applications of extended humanoid avatars as a method for exploring the plasticity of the brain's representation of the body and for gestural human-computer interfaces.
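
    One plausible way to realize such a hip-to-tail mapping (purely illustrative; the paper does not publish its controller) is a delay line that propagates tracked hip rotation down the chain of tail joints:

        # Hypothetical hip-driven tail controller: each tail joint replays the
        # tracked hip yaw with a growing delay, producing a whip-like motion
        # that stays synchronous with the participant's hips.
        from collections import deque

        class HipDrivenTail:
            def __init__(self, num_joints=8, delay_frames=3, gain=1.5):
                self.histories = [deque([0.0] * (delay_frames * (i + 1)),
                                        maxlen=delay_frames * (i + 1))
                                  for i in range(num_joints)]
                self.gain = gain  # exaggerate hip motion so the tail is visible

            def update(self, hip_yaw_rad):
                """Feed the current tracked hip yaw; return per-joint angles."""
                angles = []
                for history in self.histories:
                    history.append(hip_yaw_rad)
                    angles.append(self.gain * history[0])  # delayed sample
                return angles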

  • An Evaluation of Self-Avatar Eye Movement for Virtual Embodiment

    Publication Year: 2013 , Page(s): 591 - 596
    Cited by:  Papers (1)
    PDF (29197 KB) | HTML

    We present a novel technique for animating self-avatar eye movements in an immersive virtual environment without the use of eye-tracking hardware, and evaluate our technique via a two-alternative, forced-choice-with-confidence experiment that compares this simulated-eye-tracking condition to a no-eye-tracking condition and a real-eye-tracking condition in which the avatar's eyes were rotated with an eye tracker. Viewing the reflection of a tracked self-avatar is often used in virtual-embodiment scenarios to induce in the participant the illusion that the virtual body of the self-avatar belongs to them; however, current tracking methods do not account for the movements of the participant's eyes, potentially lessening this body-ownership illusion. The results of our experiment indicate that, although blind to the experimental conditions, participants noticed differences between eye behaviors, and found that the real and simulated conditions represented their behavior better than the no-eye-tracking condition. Additionally, no statistical difference was found when choosing between the real and simulated conditions. These results suggest that adding eye movements to self-avatars produces a subjective increase in self-identification with the avatar due to a more complete representation of the participant's behavior, which may be beneficial for inducing virtual embodiment, and that effective results can be obtained without the need for any specialized eye-tracking hardware.
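
    A minimal sketch of what hardware-free eye animation can look like (a generic saccade-and-fixation loop with assumed ranges and timings; the paper's model is not reproduced here):

        # Generic simulated gaze: pick a target within a comfortable oculomotor
        # range, saccade toward it at a capped angular speed, hold a fixation,
        # then repeat. Angles are in degrees; all constants are illustrative.
        import random

        class SimulatedGaze:
            def __init__(self):
                self.yaw = self.pitch = 0.0
                self.target = (0.0, 0.0)
                self.fixation_frames = 0

            def update(self, dt, saccade_speed=600.0):  # deg/s, assumed value
                if self.fixation_frames <= 0:
                    self.target = (random.uniform(-25, 25),
                                   random.uniform(-15, 15))
                    self.fixation_frames = random.randint(20, 120)
                step = saccade_speed * dt  # cap rotation per frame
                self.yaw += max(-step, min(step, self.target[0] - self.yaw))
                self.pitch += max(-step, min(step, self.target[1] - self.pitch))
                self.fixation_frames -= 1
                return self.yaw, self.pitch  # applied to the avatar's eye bones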

  • Drumming in Immersive Virtual Reality: The Body Shapes the Way We Play

    Publication Year: 2013 , Page(s): 597 - 605
    Cited by:  Papers (2)
    Multimedia
    PDF (2456 KB)

    It has been shown that it is possible to generate perceptual illusions of ownership in immersive virtual reality (IVR) over a virtual body seen from a first-person perspective, in other words over a body that visually substitutes the person's real body. This can occur even when the virtual body is quite different in appearance from the person's real body. However, investigation of the psychological, behavioral and attitudinal consequences of such body transformations remains an interesting problem with much to be discovered. Thirty-six Caucasian people participated in a between-groups experiment where they played a West African djembe hand drum while immersed in IVR and with a virtual body that substituted their own. The virtual hand drum was registered with a physical drum. They were alongside a virtual character that played a drum in a supporting, accompanying role. In a baseline condition participants were represented only by plainly shaded white hands, so that they were able merely to play. In the experimental condition they were represented either by a casually dressed dark-skinned virtual body (Casual Dark-Skinned - CD) or by a formal suited light-skinned body (Formal Light-Skinned - FL). Although participants of both groups experienced a strong body ownership illusion towards the virtual body, only those with the CD representation showed significant increases in their movement patterns for drumming compared to the baseline condition and compared with those embodied in the FL body. Moreover, the stronger the illusion of body ownership in the CD condition, the greater this behavioral change. A path analysis showed that the observed behavioral changes were a function of the strength of the illusion of body ownership towards the virtual body and its perceived appropriateness for the drumming task. These results demonstrate that full body ownership illusions can lead to substantial behavioral and possibly cognitive changes depending on the appearance of the virtual body. This could be important for many applications such as learning, education, training, psychotherapy and rehabilitation using IVR.

  • Smelling Screen: Development and Evaluation of an Olfactory Display System for Presenting a Virtual Odor Source

    Publication Year: 2013 , Page(s): 606 - 615
    PDF (9889 KB) | HTML

    We propose a new olfactory display system that can generate an odor distribution on a two-dimensional display screen. The proposed system has four fans on the four corners of the screen. The airflows that are generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. By introducing odor vapor into the airflows, the odor distribution is as if an odor source had been placed on the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen. The position of this virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. Most users do not immediately notice the odor presentation mechanism of the proposed olfactory display system because the airflow and perceived odor come from the display screen rather than the fans. The airflow velocity can even be set below the threshold for airflow sensation, such that the odor alone is perceived by the user. We present experimental results showing the airflow field and odor distribution generated by the proposed system. We also report sensory test results showing how the generated odor distribution is perceived by the user and the issues that must be considered in odor presentation.
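
    The fan balancing can be pictured with a toy control law (an assumed illustration only; the actual system relies on measured airflow behavior):

        # Toy steering law for the four corner fans: each fan blows harder the
        # farther its corner is from the target, so the colliding streams meet
        # (and the odor appears to emanate) near the target screen position.
        import math

        def fan_speeds(x, y, max_speed=1.0):
            """x, y in [0, 1]: target odor position; (0, 0) = bottom-left."""
            corners = {"bottom_left": (0.0, 0.0), "bottom_right": (1.0, 0.0),
                       "top_left": (0.0, 1.0), "top_right": (1.0, 1.0)}
            diag = math.sqrt(2.0)  # farthest possible corner-target distance
            return {name: max_speed * math.hypot(x - cx, y - cy) / diag
                    for name, (cx, cy) in corners.items()}

        # Example: centering the virtual odor source drives all fans equally.
        print(fan_speeds(0.5, 0.5))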

  • Immersive Group-to-Group Telepresence

    Publication Year: 2013 , Page(s): 616 - 625
    Cited by:  Papers (6)
    Multimedia
    PDF (6703 KB)

    We present a novel immersive telepresence system that allows distributed groups of users to meet in a shared virtual 3D world. Our approach is based on two coupled projection-based multi-user setups, each providing multiple users with perspectively correct stereoscopic images. At each site the users and their local interaction space are continuously captured using a cluster of registered depth and color cameras. The captured 3D information is transferred to the respective other location, where the remote participants are virtually reconstructed. We explore the use of these virtual user representations in various interaction scenarios in which local and remote users are face-to-face, side-by-side or decoupled. Initial experiments with distributed user groups indicate that pointing and tracing gestures are mutually understood independent of whether they are performed by local or remote participants. Our users were excited about the new possibilities of jointly exploring a virtual city, where they relied on a world-in-miniature metaphor for mutual awareness of their respective locations.
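
    On the capture side, the core per-camera step is standard depth back-projection into a shared world frame; a generic sketch (assuming pinhole intrinsics fx, fy, cx, cy and a 4x4 camera-to-world matrix; not the system's pipeline code) follows:

        # Back-project a metric depth image into a world-space point cloud that
        # the remote site can render as the reconstructed participants.
        import numpy as np

        def depth_to_points(depth, fx, fy, cx, cy, cam_to_world):
            """depth: HxW metric depths; returns an Nx3 world-space cloud."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)
            pts = pts.reshape(-1, 4)
            pts = pts[pts[:, 2] > 0]              # drop invalid zero-depth pixels
            return (pts @ cam_to_world.T)[:, :3]  # into the shared world frame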


Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.


Meet Our Editors

Editor-in-Chief
Leila De Floriani
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genova
16146 Genova (Italy)
ldf4tvcg@umiacs.umd.edu