IEEE Symposium on 3D User Interfaces 2009 (3DUI 2009)

Date: 14-15 March 2009

Entries 1-25 of 52
  • IEEE Symposium on 3D User Interfaces 2009

    Page(s): i
  • [Copyright notice]

    Page(s): ii
  • Contents

    Page(s): iii - v
  • Message

    Page(s): vi
  • IEEE Visualization and Graphics Technical Committee

    Page(s): vii
  • Committee

    Page(s): viii
  • List of reviewers

    Page(s): ix
  • Keynote address

    Page(s): x

    Provides an abstract for each of the keynote presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.

  • Papers

    Page(s): 1
  • Measuring the effect of gaming experience on virtual environment navigation tasks

    Page(s): 3 - 10

    Virtual environments are synthetic 3D worlds typically viewed from a first-person point of view, with many potential applications in areas such as visualisation, entertainment and training simulators. To effectively develop and utilise virtual environments, user-centric evaluations are commonly performed. Anecdotal evidence suggests that factors such as prior experience with computer games may affect the results of such evaluations. This paper examines the effects of previous computer gaming experience, user-perceived gaming ability and actual gaming performance on navigation tasks in a virtual environment. Two computer games and a virtual environment were developed to elicit performance metrics for use within a user study. Results indicated that perceived gaming skill and progress in a first-person-shooter (FPS) game were the most consistent metrics, showing significant correlations with performance in time-based navigation tasks. There was also strong evidence that these relations were significantly intensified by the inclusion of participants who play FPS games. In addition, it was found that increased gaming experience decreased spatial perception performance.

  • A direct manipulation interface for time navigation in scientific visualizations

    Page(s): 11 - 18

    Scientific visualization tools are applied to gain understanding of time-varying simulations. When these simulations have a high temporal resolution or simulate a long time span, efficient navigation in the temporal dimension of the visualization is mandatory. For this purpose, we propose direct manipulation of visualization objects to control time. By dragging objects along their three-dimensional trajectory, a user can navigate in time by specifying spatial input. We propose two interaction techniques for different kinds of trajectories. In the design phase of these methods, we conducted expert evaluations. To show the benefits of the techniques, we compare them in a user study with the traditional slider-based interface. (A minimal sketch of the drag-to-time mapping follows this entry.)

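The core idea above, dragging an object along its precomputed trajectory to scrub through time, reduces to projecting the drag position onto the sampled path and reading off the time of the closest sample. A minimal sketch under that reading; the nearest-sample search, the `time_from_drag` name, and the helix data are illustrative assumptions, not the paper's actual projection scheme:

```python
import numpy as np

def time_from_drag(trajectory: np.ndarray, times: np.ndarray,
                   drag_point: np.ndarray) -> float:
    """Project a dragged 3D point onto the object's sampled trajectory
    and return the time of the nearest sample (assumed stand-in for the
    paper's projection method)."""
    d2 = np.sum((trajectory - drag_point) ** 2, axis=1)  # squared distances
    return float(times[np.argmin(d2)])

# Hypothetical usage: a helical path sampled over a 10-second simulation.
times = np.linspace(0.0, 10.0, 500)
trajectory = np.stack([np.cos(times), np.sin(times), 0.1 * times], axis=1)
print(time_from_drag(trajectory, times, np.array([1.0, 0.1, 0.2])))
```
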
  • Tech-note: ScrutiCam: Camera manipulation technique for 3D object inspection

    Page(s): 19 - 22

    Inspecting a 3D object is a common task in 3D applications. However, such camera movements are not trivial, and standard tools do not provide a single efficient technique for them. ScrutiCam is a new 3D camera manipulation technique. It is based on the “click-and-drag” mouse move, where the user “drags” the point of interest on the screen to perform different camera movements such as zooming, panning and rotating around a model. ScrutiCam can stay aligned with the surface of the model in order to keep the area of interest visible. ScrutiCam is also based on the Point-Of-Interest (POI) approach, where the final camera position is specified by clicking on the screen. Contrary to other POI techniques, ScrutiCam allows the user to control the animation of the camera along the trajectory. It is also inspired by the “Trackball” technique, where the virtual camera moves along the bounding sphere of the model. However, ScrutiCam's camera stays close to the surface of the model, whatever its shape. It can be used with mice as well as with touch screens, as it only needs a 2D input and a single button. (A minimal sketch of a surface-aligned camera step follows this entry.)

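The surface-hugging camera behavior described above can be approximated by blending the camera toward a pose that views the clicked surface point from along its normal. A minimal sketch of that blend; `scruticam_step`, the `offset` parameter, and the sphere example are hypothetical stand-ins for the paper's actual trajectory computation:

```python
import numpy as np

def scruticam_step(cam_pos, surface_point, surface_normal, t, offset=0.5):
    """Blend the camera toward a pose that looks at the clicked surface
    point from along its normal; t in [0, 1] is how far the user has
    dragged. Offset keeps the camera a fixed distance off the surface."""
    goal = surface_point + offset * surface_normal
    new_pos = (1.0 - t) * cam_pos + t * goal
    view_dir = surface_point - new_pos
    return new_pos, view_dir / np.linalg.norm(view_dir)

# Hypothetical usage: inspecting a unit sphere, where the normal at a
# clicked point is the point itself.
cam = np.array([0.0, 0.0, 3.0])
p = np.array([0.0, 0.8, 0.6])
for t in (0.25, 0.5, 1.0):              # the user drags further each frame
    pos, look = scruticam_step(cam, p, p, t)
print(pos, look)
```
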
  • Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents

    Page(s): 23 - 30

    A common approach when simulating face-to-face interpersonal scenarios with virtual humans is to afford users only verbal interaction while providing rich verbal and non-verbal interaction from the virtual human. This is due to the difficulty in providing robust recognition of user non-verbal behavior and interpretation of these behaviors within the context of the verbal interaction between user and virtual human. To afford robust hand and tool-based non-verbal interaction with life-sized virtual humans, we propose virtual multi-tools. A single hand-held, tracked interaction device acts as a surrogate for the virtual multi-tools: the user's hand, multiple tools, and other objects. By combining six degree-of-freedom, high update rate tracking with extra degrees of freedom provided by buttons and triggers, a commodity device, the Nintendo Wii Remote, provides the kinesthetic and haptic feedback necessary to provide a high-fidelity estimation of the natural, unencumbered interaction provided by one's hands and physical hand-held tools. These qualities allow virtual multi-tools to be a less error-prone interface to social and task-oriented non-verbal interaction with a life-sized virtual human. This paper discusses the implementation of virtual multi-tools for hand and tool-based interaction with life-sized virtual humans, and provides an initial evaluation of the usability of virtual multi-tools in the medical education scenario of conducting a neurological exam of a virtual human. (A minimal sketch of the tool-switching idea follows this entry.)

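The surrogate-device idea above amounts to routing one tracked 6-DOF pose to whichever virtual tool is active, with buttons switching tools. A minimal sketch of such a dispatcher; the `Tool` names and the `MultiTool` class are illustrative, not the paper's implementation:

```python
from enum import Enum, auto

class Tool(Enum):
    HAND = auto()           # the user's virtual hand
    REFLEX_HAMMER = auto()  # illustrative exam tools, not the paper's exact set
    PENLIGHT = auto()

class MultiTool:
    """One tracked, buttoned device (e.g. a Wii Remote) stands in for
    several virtual tools: a button press cycles the active tool, while
    the device's pose drives whichever tool is active."""
    def __init__(self):
        self.tools = list(Tool)
        self.index = 0

    def next_tool(self):
        self.index = (self.index + 1) % len(self.tools)
        return self.tools[self.index]

    def pose_update(self, position, orientation):
        # Drive the active virtual tool with the device's 6-DOF pose.
        return self.tools[self.index], position, orientation

mt = MultiTool()
print(mt.next_tool())                                 # Tool.REFLEX_HAMMER
print(mt.pose_update((0.1, 1.2, 0.4), (0, 0, 0, 1)))  # pose drives the tool
```
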
  • A multiscale interaction technique for large, high-resolution displays

    Page(s): 31 - 38

    This paper explores the link between users' physical navigation, specifically their distance from their current object(s) of focus, and their interaction scale. We define a new 3D interaction technique, called multiscale interaction, which links users' scale of perception and their scale of interaction. The technique exploits users' physical navigation in the 3D space in front of a large high-resolution display, using it to explicitly control scale of interaction, in addition to scale of perception. Other interaction techniques for large displays have not previously considered physical navigation to this degree. We identify the design space of the technique, which other researchers can continue to explore and build on, and evaluate one implementation of multiscale interaction to begin to quantify the benefits of the technique. We show evidence of a natural psychological link between scale of perception and scale of interaction and that exploiting it as an explicit control in the user interface can be beneficial to users in problem solving tasks. In addition, we show that designing against this philosophy can be detrimental. (A minimal sketch of a distance-to-scale mapping follows this entry.)

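The abstract's link between scale of perception and scale of interaction suggests a monotonic mapping from the user's head-to-display distance to an interaction scale. A minimal sketch of one such mapping; the log-space interpolation and every constant are assumptions, since the abstract does not specify the function used:

```python
def interaction_scale(distance_m, d_near=0.5, d_far=4.0,
                      s_fine=1.0, s_coarse=16.0):
    """Map the user's physical distance from the display to a scale of
    interaction: close-up yields fine control, stepping back yields
    coarse control. Log-space interpolation; all constants invented."""
    t = min(max((distance_m - d_near) / (d_far - d_near), 0.0), 1.0)
    return s_fine * (s_coarse / s_fine) ** t

for d in (0.5, 1.5, 4.0):
    print(f"{d} m -> scale {interaction_scale(d):.1f}")
```
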
  • Tech-note: Device-free interaction spaces

    Page(s): 39 - 42

    Existing approaches to 3D input on wall-sized displays include tracking users with markers, using stereo or depth cameras, or having users carry devices like the Nintendo Wiimote. Markers make ad hoc usage difficult, and in public settings devices may easily get lost or stolen. Further, most camera-based approaches limit the area where users can interact.

  • Effects of tracking technology, latency, and spatial jitter on object movement

    Page(s): 43 - 50

    We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and 3D object movement tasks. First, we characterize jitter and latency in a 3D tracking device and an optical mouse used as a baseline comparison. We then present an experiment based on ISO 9241-9, which measures performance characteristics of pointing devices. We artificially introduce latency and jitter to the mouse and compare the results to the 3D tracker. Results indicate that latency has a much stronger effect on human performance than low amounts of spatial jitter. In a second study, we use a subset of conditions from the first to test latency and jitter on 3D object movement. The results indicate that large, uncharacterized jitter “spikes” significantly impact 3D performance. (A minimal sketch of injecting latency and jitter into an input stream follows this entry.)

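Artificially degrading a clean input stream, as the experiment does for the mouse, can be modeled as a fixed delay queue plus additive zero-mean Gaussian noise. A minimal sketch under that model; the `DegradedPointer` name and all parameter values are invented, and the study varied these factors systematically rather than fixing them:

```python
import collections
import random

class DegradedPointer:
    """Wrap a 2D pointer stream with artificial latency (a fixed-length
    delay queue) and Gaussian spatial jitter."""
    def __init__(self, latency_frames=5, jitter_std_px=0.3):
        self.buf = collections.deque(maxlen=latency_frames + 1)
        self.jitter_std_px = jitter_std_px

    def sample(self, x, y):
        self.buf.append((x, y))
        ox, oy = self.buf[0]  # oldest sample: latency_frames old once warm
        return (ox + random.gauss(0.0, self.jitter_std_px),
                oy + random.gauss(0.0, self.jitter_std_px))

ptr = DegradedPointer()
for i in range(10):
    print(ptr.sample(float(i), 0.0))  # output lags ~5 frames behind input
```
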
  • Selection performance based on classes of bimanual actions

    Page(s): 51 - 58

    We evaluated four selection techniques for volumetric data based on the four classes of bimanual action: symmetric-synchronous, asymmetric-synchronous, symmetric-asynchronous, and asymmetric-asynchronous. The purpose of this study was to determine the relative performance characteristics of each of these classes. In addition, we compared two types of data representations to determine whether these selection techniques were suitable for interaction in different environments. The techniques were evaluated in terms of accuracy, completion times, TLX overall workload, TLX physical demand, and TLX cognitive demand. Our results suggest that symmetric and synchronous selection strategies both contribute to faster task completion. Our results also indicate that no class of bimanual selection was a significant contributor to reducing or increasing physical demand, while asynchronous action significantly increased cognitive demand in asymmetric techniques and decreased ease of use in symmetric techniques. However, for users with greater computer usage experience, accuracy performance differences diminished between the classes of bimanual action. No significant differences were found between the two types of data representations.

  • The influence of input device characteristics on spatial perception in desktop-based 3D applications

    Page(s): 59 - 66

    In desktop applications 3D input devices are mostly operated by the non-dominant hand to control 3D viewpoint navigation, while selection and geometry manipulations are handled by the dominant hand using the regular 2D mouse. This asymmetric bi-manual interface is an alternative to commonly used keyboard and mouse input, where the non-dominant hand assists the dominant hand with keystroke input to toggle modes. Our first study compared the keyboard and mouse interface to bi-manual interfaces using the 3D input devices SpaceTraveller and Globefish in a coarse spatial orientation task requiring egocentric and exocentric viewpoint navigation. The different interface configurations performed similarly with respect to task completion times, but the bi-manual techniques resulted in significantly fewer errors. This result is likely due to better workload balancing between the two hands, allowing the user to focus on a single task for each hand. Our second study focused on a bi-manual 3D point selection task, which required the selection of small targets and good depth perception. The Globefish interface employing position control for rotations performed significantly better than the SpaceTraveller interface for this task.

  • Wayfinding techniques for multiscale virtual environments

    Page(s): 67 - 74

    Wayfinding in multiscale virtual environments can be rather complex, as users can and sometimes have to change their scale to access the entire environment. Hence, this work focuses on the understanding and classification of information needed for travel, as well as on the design of navigation techniques that provide this information. To this end, we first identified two kinds of information necessary for traveling effectively in this kind of environment: hierarchical information, based on the hierarchical structure formed by the levels of scale; and spatial information, related to orientation, distance between objects in different levels of scale and spatial localization. Based on this, we designed and implemented one technique for each kind of information. The developed techniques were evaluated and compared to a baseline set of travel and wayfinding aid techniques for traveling through multiple scales. Results show that the developed techniques perform better and provide a better solution for both travel and wayfinding aid.

  • Arch-Explore: A natural user interface for immersive architectural walkthroughs

    Page(s): 75 - 82

    In this paper we propose the Arch-Explore user interface, which supports natural exploration of architectural 3D models at different scales in a real walking virtual reality (VR) environment such as head-mounted display (HMD) or CAVE setups. We discuss in detail how user movements can be transferred to the virtual world to enable walking through virtual indoor environments. To overcome the limited interaction space in small VR laboratory setups, we have implemented redirected walking techniques to support natural exploration of comparably large-scale virtual models. Furthermore, the concept of virtual portals provides a means to cover long distances intuitively within architectural models. We describe the software and hardware setup and discuss benefits of Arch-Explore. (A minimal sketch of a redirected-walking rotation gain follows this entry.)

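Redirected walking, which the paper uses to fit large virtual models into a small lab, works by subtly scaling the user's tracked motions before applying them to the virtual camera, so the real walking path bends without the user noticing. A minimal sketch of a rotation gain only; the gain value is illustrative, and the paper's actual parameters and redirection strategy are not given in the abstract:

```python
def redirected_yaw(real_yaw_delta_deg, gain=1.3):
    """Scale the user's tracked head rotation before applying it to the
    virtual camera. Gains around 1.3 are commonly reported as hard for
    users to notice; this exact value is an assumption."""
    return real_yaw_delta_deg * gain

virtual_yaw = 0.0
for real_delta in (10.0, 10.0, 10.0):  # the user physically turns 30 degrees
    virtual_yaw += redirected_yaw(real_delta)
print(virtual_yaw)                      # the camera has turned 39 degrees
```
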
  • Tech-note: VTrail: Supporting trailblazing in virtual environments

    Page(s): 83 - 86

    Trails are a proven means of improving performance in virtual environments (VEs), but there is very little understanding or support for the role of the trailblazer. The Use-IT Lab is currently designing a tool, the VTrail system, to support trailblazing in VEs. The objective of this document is to introduce the concept of trailblazing, present the initial prototype of a tool designed specifically to support trailblazing, and discuss results from an initial usability study.

  • A tactile distribution sensor which enables stable measurement under high and dynamic stretch

    Page(s): 87 - 93

    Recently, we have been studying various tactile distribution sensors based on electrical impedance tomography (EIT), a non-invasive technique that measures the resistance distribution of a conductive material only from its boundary and needs no wiring inside the sensing area. In this paper, we present a newly developed conductive structure which is pressure sensitive but stretch insensitive, based on the concept of contact resistance between (1) a network of stretchable, wave-like conductive yarns with high resistance and (2) a conductive stretchable sheet with low resistance. Based on this newly developed structure, we have realized a novel tactile distribution sensor which enables stable measurement under dynamic and large stretch from various directions. Stable measurement of pressure distribution under dynamic and complex deformations, such as pinching and pushing on a balloon surface, is demonstrated. The sensor was originally designed for implementation over interactive robots with soft and highly deformable bodies, but can also be used as a novel user interface device or an ordinary pressure distribution sensor. Among the most remarkable specifications of the developed tactile sensor are high stretchability up to 140% and toughness under adverse load conditions. The sensor also has a realistic potential of becoming as thin and stretchable as stocking fabric. A goal of this research is to combine this thin sensor with stretch distribution sensors so that richer and more sophisticated tactile interactions can be realized.

  • Tech-note: Multimodal feedback in 3D target acquisition

    Page(s): 95 - 98

    We investigated dynamic target acquisition within a 3D scene, rendered on a 2D display. Our focus was on the relative effects of specific perceptual cues provided as feedback. Participants were asked to use a specially designed input device to control the position of a volumetric cursor, and acquire targets as they appeared one by one on the screen. To compensate for the limited depth cues afforded by 2D rendering, additional feedback was offered through audio, visual and haptic modalities. Cues were delivered either as discrete multimodal feedback given only when the target was completely contained within the cursor, or continuously in proportion to the distance between the cursor and the target. Discrete feedback prevailed by improving accuracy without compromising selection times. Continuous feedback resulted in lower accuracy compared to discrete. In addition, reaction to the haptic stimulus was faster than for visual feedback. Finally, while the haptic modality helped decrease completion time, it led to a lower success rate. (A minimal sketch of the discrete and continuous feedback mappings follows this entry.)

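The discrete-versus-continuous distinction in the abstract maps onto two ways of computing a feedback level from cursor-target geometry: a containment test versus a distance-proportional ramp. A minimal sketch for a spherical cursor and target; the `feedback_intensity` function and its normalization are assumptions, not the study's exact stimulus mapping:

```python
import numpy as np

def feedback_intensity(cursor_c, cursor_r, target_c, target_r,
                       mode="discrete"):
    """Feedback level in [0, 1] for a spherical volumetric cursor and
    target. 'discrete': full feedback only once the target is completely
    contained in the cursor; 'continuous': grows as the centers approach."""
    d = float(np.linalg.norm(np.asarray(cursor_c) - np.asarray(target_c)))
    if mode == "discrete":
        return 1.0 if d + target_r <= cursor_r else 0.0
    return max(0.0, 1.0 - d / (cursor_r + target_r))

print(feedback_intensity((0, 0, 0), 2.0, (0.5, 0, 0), 1.0))               # 1.0
print(feedback_intensity((0, 0, 0), 2.0, (4.0, 0, 0), 1.0, "continuous"))  # 0.0
```
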