2007 IEEE Symposium on 3D User Interfaces (3DUI '07)

Date: 10-11 March 2007

Displaying results 1-25 of 35
  • IEEE Symposium on 3D User Interfaces 2007

    Publication Year: 2007, Page(s): C1
  • Author index

    Publication Year: 2007
  • Cover Image Credits

    Publication Year: 2007
  • IEEE Symposium on 3D User Interfaces 2007

    Publication Year: 2007

    The following topics are dealt with: virtual reality; 3D movement; sequences & gestures; devices; mixed & augmented reality; 3D selection; forces; and 3D navigation & entertainment.

  • [Copyright notice]

    Publication Year: 2007
  • Table of contents

    Publication Year: 2007
  • IEEE Visualization and Graphics Technical Committee (VGTC)

    Publication Year: 2007
  • Message from the Symposium Chairs

    Publication Year: 2007
  • Symposium Committee

    Publication Year: 2007
  • Reviewers

    Publication Year: 2007
  • The Visual Appearance of User's Avatar Can Influence the Manipulation of Both Real Devices and Virtual Objects

    Publication Year: 2007
    Cited by:  Papers (2)

    This paper describes two experiments conducted to study the influence of the visual appearance of the user's avatar (or 3D cursor) on the manipulation of both interaction devices and virtual objects in 3D virtual environments (VE). In both experiments, participants were asked to pick up a virtual cube and place it at a random location in a VE. The first experiment showed that the visual appearance of a 3D cursor could influence the way participants manipulated the real interaction device: participants changed the orientation of their hand as a function of the orientation suggested visually by the shape of the 3D cursor. The second experiment showed that one visual property of the avatar (i.e., the presence or absence of a directional cue) could influence the way participants picked up the cube in the VE. When using avatars or 3D cursors with a strong directional cue (e.g., arrows pointing to the left or right), participants generally picked up the cube by a specific side (e.g., right or left side). When using 3D cursors with no main directional cue, participants picked up the virtual cube more frequently by its front or top side. Taken together, our results suggest that some visual aspects (such as directional cues) of the avatars or 3D cursors chosen to display the user in the VE could partially determine his/her behaviour during manipulation tasks. Such an influence could be used to prevent wrong uses, or to favour optimal uses, of manipulation interfaces such as haptic devices in virtual environments.

  • An Exploration of Interaction-Display Offset in Surround Screen Virtual Environments

    Publication Year: 2007

    We present a study exploring the effect of positional offset between the user's interaction frame-of-reference (the physical location of input) and the display frame-of-reference (where graphical feedback appears) in a surround-screen virtual environment (SSVE). Our research hypothesis states that, in such an environment, task performance improves given an offset between the two frames-of-reference. In our experiment, users were asked to match a target color using a 3D color widget under three different display-interaction offset conditions: no offset (i.e., collocation), a three-inch offset, and a two-foot offset. Our results suggest that collocation of the display and interaction frames-of-reference may degrade accuracy in widget-based tasks and that collocation does not necessarily lead the user to spend more time on the task. In addition, these results contrast with previous studies performed with head-mounted display (HMD) platforms, which have demonstrated significant performance advantages for collocation and the "direct manipulation" of virtual objects. Moreover, a previous study with a different task performed in a projector-based VE has also demonstrated that collocation is not detrimental to user performance. Our conclusion is that the most effective positional offset is dependent upon the specific display hardware and VE task.

  • Exploring 3D Interaction in Alternate Control-Display Space Mappings

    Publication Year: 2007
    Cited by:  Papers (1)  |  Patents (32)

    The desire to have intuitive, seamless 3D interaction fuels research exploration into new approaches to 3D interaction. However, within these explorations we continue to rely on Brunelleschi's perspective for display and map the interactive control space directly into it, without much thought about the effect that this default mapping has. In contrast, there are many possibilities for creating 3D interaction spaces, making it important to run user studies to examine these possibilities. Options in mapping the control space to the display space for 3D interaction have previously focused on the manipulation of control-display ratio, or gain. In this paper, we present a conceptual framework that provides a more general control-display description that includes mappings for flip, rotation, and skew, as well as scale (gain). We conduct a user study to explore 3D selection and manipulation tasks in three of these different mappings in comparison to the commonly used mapping (perspective mapping of control space to a perspective display). Our results show interesting differences between interactions and user preferences in these mappings and indicate that all may be considered viable alternatives. Together, this framework and study open the door to further exploration of 3D interaction variations.

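    A minimal sketch (not code from the paper) of the idea behind such a framework: compose a control-to-display transform from gain, flip, and skew terms and apply it to a tracked control-space point. The 4x4 homogeneous representation and all names here are assumptions; the paper's framework also covers rotation.

        import numpy as np

        def make_control_display_map(gain=(1.0, 1.0, 1.0), flip=(1, 1, 1),
                                     skew_xy=0.0):
            # Homogeneous 4x4 control-to-display transform combining
            # per-axis scale (gain), axis flips, and an x-by-y skew.
            M = np.diag([gain[0] * flip[0], gain[1] * flip[1],
                         gain[2] * flip[2], 1.0])
            M[0, 1] = skew_xy
            return M

        def control_to_display(M, p_control):
            # Map a tracked 3D control-space point into display space.
            p = np.append(p_control, 1.0)
            q = M @ p
            return q[:3] / q[3]
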
  • An Empirical Comparison of Task Sequences for Immersive Virtual Environments

    Publication Year: 2007
    Cited by:  Papers (1)

    System control - the issuing of commands - is a critical, but largely unexplored, task in 3D user interfaces (3DUIs) for immersive virtual environments (IVEs). The task sequence (the order of operations in a system control task) is an important aspect of the design of a system control interface (SCI), because it affects the way the user must think about accomplishing the task. Most command line interfaces are based on the action-object task sequence (e.g., "rm foo.txt"). Most graphical user interfaces (GUIs) are based on the object-action task sequence (e.g., click on an icon, then select "delete" from a pop-up menu). An SCI for an IVE should be transparent and induce minimal cognitive load, but it is not clear which task sequences support this goal. We designed an experiment using an interior design application context to determine the cognitive loads induced by various task sequences in IVEs. By subtracting the expected time for a user to complete the task from the total time, we estimated the cognitive time, dependent only on task sequence. Our experiment showed that task sequence has a significant effect on the cognitive loads induced in IVEs. The object-action sequence and similar task sequences induce smaller cognitive loads than those induced by the action-object sequence. These results can be used to create guidelines for creating 3DUIs for IVEs.

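    A one-line sketch of the estimation step the abstract describes (the numbers below are hypothetical, not data from the paper): the cognitive time is the measured total completion time minus the expected, purely motor, completion time.

        def cognitive_time(total_time_s, expected_motor_time_s):
            # Time attributable to thinking about the task sequence,
            # per the subtraction described in the abstract.
            return total_time_s - expected_motor_time_s

        print(cognitive_time(9.2, 6.5))  # hypothetical trial: 2.7 s cognitive
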
  • Design and Development of a Pose-Based Command Language for Triage Training in Virtual Reality

    Publication Year: 2007

    Triage is a medical term that describes the process of prioritizing and delivering care to multiple casualties within a short time frame. Because of the inherent limitations of traditional methods of teaching triage, such as paper-based scenarios and the use of actors as standardized patients, computer-based simulations and virtual reality (VR) scenarios are being advocated. We present our system for VR triage, focusing on the design and development of a pose- and gesture-based interface that allows a learner to navigate in a virtual space among multiple simulated casualties. The learner is also able to manipulate virtual instruments effectively in order to complete required training tasks.

  • Optical Sight Metaphor for Virtual Environments

    Publication Year: 2007

    Optical sight is a new metaphor for selecting distant objects or precisely pointing at close objects in virtual environments. Optical sight combines ray-casting, hand-based camera control, and variable zoom into one virtual instrument that can be easily implemented for a variety of virtual, mixed, and augmented reality systems. The optical sight can be modified into a wide family of tools for viewing and selecting objects. Optical sight scales well from desktop environments to fully immersive systems.

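    A minimal sketch, under assumed conventions, of the two ingredients the abstract combines: a pick ray derived from the tracked hand pose and a variable zoom that narrows the field of view around that ray. The forward axis and function names are assumptions, not the paper's implementation.

        import numpy as np

        def pick_ray(hand_pos, hand_rot):
            # Ray-cast from the hand pose; assumes the hand's forward
            # axis is -Z and hand_rot is a 3x3 rotation matrix.
            d = hand_rot @ np.array([0.0, 0.0, -1.0])
            return np.asarray(hand_pos), d / np.linalg.norm(d)

        def zoomed_fov(base_fov_deg, zoom):
            # Variable zoom: magnification by `zoom` narrows the view
            # frustum around the sight ray.
            half = np.radians(base_fov_deg) / 2.0
            return np.degrees(2.0 * np.arctan(np.tan(half) / zoom))
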
  • AutoEval mkII - Interaction Design for a VR Design Review System

    Publication Year: 2007
    Cited by:  Papers (1)

    This paper summarizes the experience drawn from designing and revising a design review application prototype interface using immersive virtual reality technology, and puts this experience into context with previous research in the field of 3D human-computer interaction. AutoEval was originally developed in collaboration with a major car manufacturer to enable intuitive analysis and manipulation of 3D models for users without a CAD or computer science background. This paper introduces the system and discusses the 3D interaction design decisions taken, based on the observation of and informal feedback from a large number of users.

  • An Exploration of Non-Isomorphic 3D Rotation in Surround Screen Virtual Environments

    Publication Year: 2007
    Cited by:  Papers (5)

    Non-isomorphic rotational mappings have been shown to be an effective technique for rotation of virtual objects in 3D desktop environments. In this paper, we present an experimental study that explores the performance characteristics of isomorphic and non-isomorphic rotation techniques in a surround screen virtual environment. Our experiment compares isomorphic rotation with non-isomorphic rotation techniques utilizing three separate amplification factors, two different thresholds for task completion, and two different angular ranges for virtual object rotation. Our results show that a non-isomorphic mapping with an amplification factor of three is both optimal in terms of completion time and accuracy and is most preferred by our test subjects. In addition, our results suggest that, in a surround screen virtual environment, rotation tasks using both isomorphic and non-isomorphic rotational mappings can be completed faster and more accurately compared to previous studies exploring rotation in 3D user interfaces.

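    A minimal sketch of a non-isomorphic rotational mapping consistent with the abstract: the device rotation's angle is amplified by a constant factor (the study's preferred factor was three) while the rotation axis is preserved. The quaternion order (w, x, y, z) and names are assumptions.

        import numpy as np

        def amplify_rotation(q, k=3.0):
            # q: unit quaternion (w, x, y, z) of the device rotation
            # relative to a reference pose; scale its angle by k.
            w = np.clip(q[0], -1.0, 1.0)
            theta = 2.0 * np.arccos(w)            # original rotation angle
            axis = np.asarray(q[1:], dtype=float)
            n = np.linalg.norm(axis)
            if n < 1e-9:                          # identity: nothing to amplify
                return np.array([1.0, 0.0, 0.0, 0.0])
            half = 0.5 * k * theta                # amplified half-angle
            return np.concatenate(([np.cos(half)], np.sin(half) * axis / n))
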
  • Cross-section Projector: Interactive and Intuitive Presentation of 3D Volume Data using a Handheld Screen

    Publication Year: 2007
    Cited by:  Papers (2)

    A novel display system is proposed that presents cross-sectional images of 3D volume data in an intuitive and interactive way. A screen panel is manipulated by the user; the position and orientation of the screen are measured by sensors; and the cross-sectional image of the 3D volume data at the screen plane is generated and projected onto the screen panel. By supporting this interaction up to relatively high-frequency motion of the screen panel, a volumetric image of the 3D data is provided to the user when the screen panel is moved quickly. The integrated presentation of cross-sectional and volumetric images is thought to compensate for each presentation's drawbacks: the volumetric image provides a holistic view of the spatial structure, while the cross-sectional image provides more precise information within the volume data. A sensing system that measures the motion of the screen plane using laser displacement sensors is designed, and a method to cancel the delay from measurement to projection by predicting the motion of the screen panel is devised. Through implementation of a prototype system, the feasibility of our approach is demonstrated, and future work required to improve the system is discussed.

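    A minimal sketch, not the authors' implementation, of the core resampling step: given the tracked screen pose, sample the volume on the screen plane to form the projected cross-sectional image. Coordinates are assumed to be in voxel units, and nearest-neighbor lookup stands in for proper interpolation.

        import numpy as np

        def cross_section(volume, origin, u, v, half_extent, res=256):
            # Sample `volume` on the plane through `origin` spanned by the
            # orthonormal screen axes u and v (all in voxel coordinates).
            img = np.zeros((res, res), dtype=volume.dtype)
            coords = np.linspace(-half_extent, half_extent, res)
            for i, s in enumerate(coords):
                for j, t in enumerate(coords):
                    p = np.round(origin + s * u + t * v).astype(int)
                    if np.all(p >= 0) and np.all(p < volume.shape):
                        img[i, j] = volume[tuple(p)]
            return img
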
  • A Family of New Ergonomic Harness Mechanisms for Full-Body Natural Constrained Motions in Virtual Environments

    Publication Year: 2007

    A family of new virtual reality harness mechanisms has been developed by this investigator to constrain an immersed user within the field of view of a virtual locomotion sensing system while permitting natural motions such as twisting, turning, jogging in place, dropping to the knees, or moving to a prone position. The author has also developed a generalized synthesis approach to the design of such harness systems. Unwanted rotational inertial loads felt by the user are minimized, while compliant constraints have been tailored to provide natural feedback forces. These ergonomic forces enhance the experience of virtual motion by partially substituting for the missing real-world dynamic loads encountered in locomotion. They also provide subtle, natural cues that aid the immersed user in remaining centered. Unlike some other virtual locomotion systems, these devices are passive, relatively low-cost, and easy and natural to use, making them minimally intrusive on the process of learning the simulated task.

  • Cascading Hand and Eye Movement for Augmented Reality Videoconferencing

    Publication Year: 2007
    Cited by:  Papers (1)  |  Patents (3)

    We have implemented an augmented reality videoconferencing system that inserts virtual graphics overlays into the live video stream of remote conference participants. The virtual objects are manipulated using a novel interaction technique that cascades bimanual tangible interaction and eye tracking. User studies show that our user interface enriches remote collaboration by offering hitherto unexplored ways of collaborative object manipulation, such as gaze-controlled ray-picking of remote physical and virtual objects.

  • Balloon Selection: A Multi-Finger Technique for Accurate Low-Fatigue 3D Selection

    Publication Year: 2007
    Cited by:  Papers (10)  |  Patents (1)

    Balloon selection is a 3D interaction technique that is modeled after the real-world metaphor of manipulating a helium balloon attached to a string. Balloon selection allows for precise 3D selection in the volume above a tabletop surface by using multiple fingers on a multi-touch-sensitive surface. The 3DOF selection task is decomposed in part into a 2DOF positioning task performed by one finger on the tabletop in an absolute 2D Cartesian coordinate system and a 1DOF positioning task performed by another finger on the tabletop in a relative 2D polar coordinate system. We have evaluated balloon selection in a formal user study that compared it to two well-known interaction techniques for selecting a static 3D target: a 3DOF tracked wand and keyboard cursor keys. We found that balloon selection was significantly faster than using cursor keys and had a significantly lower error rate than the wand. The lower error rate appeared to result from the user's hands being supported by the tabletop surface, resulting in significantly reduced hand tremor and arm fatigue.

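    A minimal sketch of the decomposition the abstract describes: the anchor finger supplies the balloon's absolute (x, y), and the second finger's radial distance from the anchor supplies the 1DOF height. The specific mapping below (pulling the "string" finger away lowers the balloon) is an assumption consistent with the metaphor, not taken from the paper.

        import numpy as np

        def balloon_position(anchor_xy, string_xy, string_length):
            # 2DOF: anchor finger in absolute Cartesian coordinates.
            # 1DOF: radial distance of the string finger (relative polar).
            d = np.linalg.norm(np.asarray(string_xy) - np.asarray(anchor_xy))
            height = max(0.0, string_length - d)   # assumed height mapping
            return anchor_xy[0], anchor_xy[1], height
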
  • Usability of Hybrid, Physical and Virtual Objects for Basic Manipulation Tasks in Virtual Environments

    Publication Year: 2007

    Integrating physical and virtual environments has been shown to improve usability of virtual reality (VR) applications. Objects within these mixed realities (MR (Milgram and Kishino, 1994)) can be hybrid physical/virtual objects that are physically manipulatable and have flexible shape and texture. We compare usability of hybrid objects for basic manipulation tasks (rotation, positioning) to physical and virtual objects. The results suggest that hybrid objects are manipulated faster than virtual objects, but not more accurately. Physical objects outperform both hybrid and virtual objects in terms of speed and accuracy. On the other hand, users felt most stimulated by the virtual objects, followed by the hybrid and physical objects. The study shows that hybrid objects "work" in virtual environments, but further investigations regarding the factors influencing their usability are needed.

  • Character Interaction System with Autostereoscopic Display and Range Sensor

    Publication Year: 2007
    Cited by:  Patents (1)

    Many types of autostereoscopic displays have been developed. With these displays, users do not require any special glasses to view 3-D images, which spares them inconvenience and presents 3-D images in a natural setting. We have been researching an autostereoscopic display that shows objects in 3-D as if they existed in the real world. We designed our display, which is based on integral photography (IP) (Lippman, 1908), to make autostereoscopic images more realistic. We extended the IP concept to display not only still images but also videos, a concept we named integral videography (IV) (Liao et al., 2002). To demonstrate a possible application of this concept, we developed a character interaction system consisting of a range sensor (for 3-D input) and an IV display (for 3-D output). In this system, a virtual 3-D object on the screen moves around and responds to user input interactively. User input should not involve pushing buttons but should be achieved through natural body movement, because the virtual 3-D object "exists" in the real world; we therefore use ultrasonic range sensors to sense body movements. We adopt a Tamagotchi character as the 3-D moving object, which moves around in a 3-D park. When a user extends a hand toward it, it notices and moves to the front of the park to greet him or her.

  • Virtual Pads: Decoupling Motor Space and Visual Space for Flexible Manipulation of 2D Windows within VEs

    Publication Year: 2007

    The ability to access external 2D applications from within 3D worlds can greatly enhance the possibilities of many VE applications. In this paper, we present a new interaction metaphor for fast, accurate, and comfortable manipulation of external GUIs displayed as texture-mapped rectangles. The main idea is to decouple the motor space from the visual space so that the external application can be manipulated within a user-defined working volume whose location and size are completely independent of the application's visual representation. This decoupling is accomplished through a virtual pad which receives user actions and maps them into cursor movements. The main advantage of our approach is that both the working space and the visual space can be adjusted independently to suit user preferences. This allows users to seamlessly balance speed and accuracy without affecting the visual representation of the application's GUI. We have implemented an interaction technique adopting our metaphor in combination with a pointing technique, and we have evaluated its effectiveness in terms of task performance and user preference. Our experiments indicate that the proposed technique increases user comfort while providing dynamic management of the speed/accuracy tradeoff.

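    A minimal sketch of the decoupling idea: a tracked point inside a user-defined working box (motor space) is normalized and mapped to cursor coordinates on the application's GUI texture (visual space), so resizing the box trades speed against accuracy without touching the visual representation. The names, and the choice to ignore the depth axis, are assumptions of this sketch.

        import numpy as np

        def pad_cursor(p, box_min, box_max, gui_size):
            # Normalize the tracked point within the motor-space box,
            # then scale to pixel coordinates on the GUI texture.
            lo, hi = np.asarray(box_min), np.asarray(box_max)
            t = np.clip((np.asarray(p) - lo) / (hi - lo), 0.0, 1.0)
            return t[0] * gui_size[0], (1.0 - t[1]) * gui_size[1]
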