
3D User Interfaces (3DUI), 2011 IEEE Symposium on

Date: 19-20 March 2011


Displaying Results 1 - 25 of 49
  • [Title page]

    Publication Year: 2011 , Page(s): i
    PDF (211 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2011 , Page(s): ii
    PDF (23 KB)
    Freely Available from IEEE
  • Contents

    Publication Year: 2011 , Page(s): iii - v
    PDF (84 KB)
    Freely Available from IEEE
  • Supporting organizations

    Publication Year: 2011 , Page(s): vi
    PDF (555 KB)
    Freely Available from IEEE
  • Message from the symposium chairs

    Publication Year: 2011 , Page(s): vii
    PDF (31 KB) | HTML
    Freely Available from IEEE
  • IEEE Visualization and Graphics Technical Committee (VGTC)

    Publication Year: 2011 , Page(s): viii
    PDF (73 KB)
    Freely Available from IEEE
  • Symposium Committee

    Publication Year: 2011 , Page(s): ix
    PDF (50 KB)
    Freely Available from IEEE
  • 3D spatial interaction for entertainment

    Publication Year: 2011 , Page(s): x
    PDF (88 KB) | HTML
    Freely Available from IEEE
  • Papers

    Publication Year: 2011 , Page(s): 1 - 2
    PDF (164 KB)
    Freely Available from IEEE
  • A reusable library of 3D interaction techniques

    Publication Year: 2011 , Page(s): 3 - 10
    Cited by:  Papers (1)
    PDF (1275 KB) | HTML

    We present a library of reusable, abstract, low-granularity components for the development of novel interaction techniques. Based on the InTml language and through an iterative process, we have designed 7 selection and 5 travel techniques from [5] as dataflows of reusable components. The result is a compact set of 30 components that represent interactive content and useful behavior for interaction. We added a library of 20 components for device handling in order to create complete, portable applications. By design, we achieved 68% component reusability, measured as the number of components used in more than one technique over the total number of components used. As a reusability test, we used this library to describe some interaction techniques in [1], a task that required only 2% new components.

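The 68% reusability figure in the abstract above is a simple ratio: components used in more than one technique over all components used. A minimal sketch of that metric, with hypothetical technique and component names (not the actual InTml library):

```python
# Reusability as described above: the fraction of components that
# appear in more than one interaction technique. Names are placeholders.
from collections import Counter

techniques = {
    "ray_casting": ["selector", "ray", "feedback"],
    "go_go":       ["selector", "arm_extension", "feedback"],
    "steering":    ["navigator", "ray"],
}

def reusability(techniques):
    counts = Counter()
    for components in techniques.values():
        for c in set(components):  # count each component once per technique
            counts[c] += 1
    reused = sum(1 for n in counts.values() if n > 1)
    return reused / len(counts)

print(f"{reusability(techniques):.0%}")  # → 60% for this toy library
```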
  • A reconfigurable architecture for multimodal and collaborative interactions in Virtual Environments

    Publication Year: 2011 , Page(s): 11 - 14
    Cited by:  Papers (2)
    Multimedia
    PDF (2268 KB) | HTML

    Many studies have been carried out on multimodal and collaborative systems in VR. Although these two aspects are usually studied separately, they share interesting similarities. This paper focuses on the reconfigurable aspect and the implementation of a multimodal and collaborative supervisor for Virtual Environments (VEs). The aim of this supervisor is to merge information from VR devices in order to control immersive multi-user applications through the main human communication and sensorimotor channels. Beyond the architectural aspect, we describe the modularity and genericity of our system, implemented in C++, which can be embedded into different VR platforms. Moreover, its XML-based configuration system allows it to be applied easily to many different contexts. The reconfigurable features are then illustrated via two scenarios: a cognitively oriented assembly task with single-user multimodal interactions, and an industrial assembly task with multimodal and collaborative interactions in a co-located multi-user environment.

  • A multimode immersive conceptual design system for architectural modeling and lighting

    Publication Year: 2011 , Page(s): 15 - 18
    Multimedia
    PDF (843 KB) | HTML

    We present a new immersive system which allows initial conceptual design of simple architectural models, including lighting. Our system allows the manipulation of simple elements such as windows, doors and rooms while the overall model is automatically adjusted to the manipulation. The system runs on a four-sided stereoscopic, head-tracked immersive display. We also provide simple lighting design capabilities, with an abstract representation of sunlight and its effects when shining through a window. Our system provides three different modes of interaction: a miniature-model table mode, a full-scale immersive mode, and a combination of the two which we call mixed mode. We performed an initial pilot user test to evaluate the relative merits of each mode for a set of basic tasks, such as resizing and moving windows or walls, and a basic light-matching task. The study indicates that users appreciated the immersive nature of the system, and found interaction to be natural and pleasant. In addition, the results indicate that mean performance times are quite similar across the different modes, opening up the possibility of their combined usage for effective immersive modeling systems for novice users.

  • Joyman: A human-scale joystick for navigating in virtual worlds

    Publication Year: 2011 , Page(s): 19 - 26
    Cited by:  Papers (3)
    PDF (9705 KB) | HTML

    In this paper, we propose a novel interface called Joyman, designed for immersive locomotion in virtual environments. Whereas many previous interfaces preserve or stimulate the user's proprioception, the Joyman aims at preserving equilibrioception in order to improve the feeling of immersion during virtual locomotion tasks. The proposed interface is based on the metaphor of a human-scale joystick. The device has a simple mechanical design that allows a user to indicate his virtual navigation intentions by leaning accordingly. We also propose a control law inspired by the biomechanics of human locomotion to transform the measured leaning angle into a walking direction and speed - i.e., a virtual velocity vector. A preliminary evaluation was conducted in order to assess the advantages and drawbacks of the proposed interface and to better outline future expectations for such a device.

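The Joyman control law described above transforms a measured leaning angle into a walking direction and speed. A minimal sketch of such a mapping; the dead zone, saturation angle and maximum speed used here are assumed illustrative parameters, not the authors' actual biomechanics-inspired law:

```python
import math

def leaning_to_velocity(lean_angle, lean_direction,
                        dead_zone=math.radians(2),
                        max_angle=math.radians(15),
                        max_speed=1.5):
    """Map a leaning angle (rad) and leaning direction (yaw, rad)
    to a 2D virtual velocity vector (m/s).

    Hypothetical behavior: angles inside the dead zone produce no
    motion; angles beyond max_angle saturate at max_speed.
    """
    if lean_angle <= dead_zone:
        return (0.0, 0.0)
    t = min((lean_angle - dead_zone) / (max_angle - dead_zone), 1.0)
    speed = t * max_speed
    return (speed * math.cos(lean_direction),
            speed * math.sin(lean_direction))
```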
  • Effects of navigation design on Contextualized Video Interfaces

    Publication Year: 2011 , Page(s): 27 - 34
    PDF (1816 KB) | HTML

    Real-time monitoring and responding to events observed across multiple surveillance cameras can pose an overwhelmingly high mental workload. Contextualized Video Interfaces (which place the surveillance videos within their spatial context) can be used to support these tasks. In order for users to integrate information from the videos and the spatial context as the events progress in real time, navigation interfaces are required. However, different tasks seem to favor different navigation techniques. In this paper, we describe the formal evaluation of four navigation designs for Contextualized Video Interfaces. The four designs arise from the consideration of two important factors of navigation techniques: navigation mode (manual or semi-automatic) and navigation context (overview or detailed view). To avoid a piecemeal understanding of the navigation techniques, we evaluated them using three tasks that have different information requirements. While semi-automatic navigation was generally preferable, low-DOF manual navigation techniques were found to be useful in certain situations. The choice between overview navigation and detailed-view navigation depends primarily on the user's information requirement in the task. Based on the findings, we provide guidelines on how to select designs according to task features.

  • Effects of redirection on spatial orientation in real and virtual environments

    Publication Year: 2011 , Page(s): 35 - 38
    Cited by:  Papers (4)
    PDF (6122 KB) | HTML

    We report a user study that investigated the effect of redirection in an immersive virtual environment on spatial orientation relative to both real world and virtual stimuli. Participants performed a series of spatial pointing tasks with real and virtual targets, during which they experienced three within-subjects conditions: rotation-based redirection, change blindness redirection, and no redirection. Our results indicate that when using the rotation technique, participants spatially updated both their virtual and real world orientations during redirection, resulting in pointing accuracy to the targets' recomputed positions that was strikingly similar to the control condition. While our data also suggest that a similar spatial updating may have occurred when using a change blindness technique, the realignment of targets appeared to be more complicated than a simple rotation, and was thus difficult to measure quantitatively.

  • Influence of the bimanual frame of reference with haptics for unimanual interaction tasks in virtual environments

    Publication Year: 2011 , Page(s): 39 - 46
    PDF (2449 KB) | HTML

    In this paper, we present the results of a user study with a bimanual haptic setup. The goal of the experiment was to evaluate whether Guiard's theory of the bimanual frame of reference can be applied to interaction tasks in virtual environments (VE) with haptic rendering. This theory proposes an influence of the non-dominant hand (NDH) on the dominant hand (DH). The experiment was conducted with multiple trials under two different conditions: bimanual and unimanual. The interaction task in this scenario was a sequence of pointing, alignment and docking sub-tasks for the dominant hand. In the bimanual condition, an asynchronous pointing task was added for the non-dominant hand. This additional task was primarily designed to bring the non-dominant hand closer to the other hand and thus enable the creation of a frame of reference. Our results show the potential of this task design extension (with NDH utilization). Task completion times are significantly lower in the bimanual condition compared to the unimanual case, without significant impact on overall precision. Furthermore, the bimanual condition shows better mean accuracy over several measures, e.g., lateral displacement and penetration depth. Additionally, subject performance was compared not only across all participants, but also between subgroups: medical vs. non-medical and gamer vs. non-gamer. User preference for a bimanual system over a unimanual system was indicated by a post-test questionnaire.

  • Enhancing robot teleoperator situation awareness and performance using vibro-tactile and graphical feedback

    Publication Year: 2011 , Page(s): 47 - 54
    Cited by:  Papers (1)
    Multimedia
    PDF (1140 KB) | HTML

    Most of the feedback received by operators of current robot-teleoperation systems is graphical. When a large variety of robot data needs to be displayed, however, this may lead to operator overload. The research presented in this paper focuses on off-loading part of the feedback to other human senses, specifically to the sense of touch, to reduce the load imposed by the interface and, as a consequence, to increase the level of operator situation awareness. Graphical and vibro-tactile versions of feedback delivery for collision interfaces were evaluated in a search task using a virtual teleoperated robot. Parameters measured included task time, number of collisions between the robot and the environment, number of objects found, and the quality of post-experiment reports through the use of sketch maps. Our results indicate that the combined use of graphical and vibro-tactile feedback interfaces led to an increase in the quality of sketch maps, a possible indication of increased levels of operator situation awareness, as well as a slight decrease in the number of robot collisions.

  • Enabling multi-point haptic grasping in virtual environments

    Publication Year: 2011 , Page(s): 55 - 58
    PDF (705 KB) | HTML

    Haptic interaction has received increasing research interest in recent years. Currently, most commercially available haptic devices provide the user with a single point of interaction. Multi-point haptic devices present a logical progression in device design and enable the operator to experience a far wider range of haptic interactions, particularly the ability to grasp via multiple fingers. This is highly desirable for various haptically enabled applications including virtual training, telesurgery and telemanipulation. This paper presents a gripper attachment which utilises two low-cost commercially available haptic devices to facilitate multi-point haptic grasping. It renders forces to the user's fingers independently, and the use of Phantom Omni haptic devices offers several benefits over more complex approaches, such as low cost, reliability, and ease of programming. The workspace of the gripper attachment is considered and, in order to haptically render the desired forces to the user's fingers, kinematic analysis is discussed and the necessary formulations are presented. The integrated multi-point haptic platform is presented and exploration of a virtual environment using CHAI 3D is demonstrated.

  • Dropping the ball: Releasing a virtual grasp

    Publication Year: 2011 , Page(s): 59 - 66
    Cited by:  Papers (2)
    PDF (6053 KB) | HTML

    We present a method for improved release of whole-hand virtual grasps. It addresses the problem of objects “sticking” during release after the user's (real) fingers interpenetrate virtual objects due to the lack of physical motion constraints. This problem may be especially distracting for grasp techniques that introduce mismatches between tracked and visual hand configurations to prevent visual interpenetration. Our method includes heuristic analysis of finger motion and a transient incremental motion metaphor to manage a virtual hand during grasp release. We incorporate the method into a spring model for whole-hand virtual grasping. We show that the new spring model improves speed and accuracy for a targeted ball-drop task, and users report a subjective preference for the new behavior. In contrast to a standard spring-based grasping method, measured release quality does not depend notably on object size.

  • Rapid and accurate 3D selection by progressive refinement

    Publication Year: 2011 , Page(s): 67 - 74
    Cited by:  Papers (9)
    PDF (9571 KB) | HTML

    Issues such as hand and tracker jitter negatively affect user performance with the ray-casting selection technique in 3D environments. This makes it difficult for users to perform tasks that require them to select objects that have a small visible area, since small targets require high levels of precision. We introduce an approach to address this issue that uses progressive refinement of the set of selectable objects to reduce the required precision of the task. We present a design space of progressive refinement techniques and an exemplar technique called Sphere-casting refined by QUAD-menu (SQUAD). We explore the tradeoffs between progressive refinement and immediate selection techniques in an evaluation comparing SQUAD to ray-casting. Both an analytical evaluation based on a distal pointing model and an empirical evaluation demonstrate that progressive refinement selection can be better than immediate selection. SQUAD was much more accurate than ray-casting, and SQUAD was faster than ray-casting for small targets in less cluttered environments.

  • Multi-touch RST in 2D and 3D spaces: Studying the impact of directness on user performance

    Publication Year: 2011 , Page(s): 75 - 78
    Cited by:  Papers (4)
    Multimedia
    PDF (7797 KB) | HTML

    The RST multi-touch technique allows one to simultaneously control Rotations, Scaling, and Translations from multi-touch gestures. We conducted a user study to better understand the impact of directness on user performance for a RST docking task, for both 2D and 3D visualization conditions. This study showed that direct-touch shortens completion times, but indirect interaction improves efficiency and precision, and this is particularly true for 3D visualizations. The study also showed that users' trajectories are comparable for all conditions (2D/3D and direct/indirect). This tends to show that indirect RST control may be valuable for interactive visualization of 3D content. To illustrate this finding, we present a demo application that allows novice users to arrange 3D objects on a 2D virtual plane in an easy and efficient way.

  • Squeeze me and I'll change: An exploration of frustration-triggered adaptation for multimodal interaction

    Publication Year: 2011 , Page(s): 79 - 86
    PDF (1909 KB) | HTML

    Complex 3D interaction in virtual environments may inhibit user interaction and cause frustration. Supporting adaptivity based on detected user frustration is one promising solution to enhance user interaction. Our work provides adaptive assistance to users who become frustrated during their interaction with 3D user interfaces in virtual environments. The obtrusiveness of physiological measurements for detecting frustration inspired us to investigate the pressure patterns exerted on a 3D input device for this purpose. The experiment presented in this paper shows great potential for using finger pressure measures as an alternative to physiological measures to indicate user frustration during interaction. Furthermore, the findings in this particular context showed that adaptation of haptic interaction was effective in increasing user performance and making users feel less frustrated when performing their tasks in the 3D environment.

  • Pointing at 3D targets in a stereo head-tracked virtual environment

    Publication Year: 2011 , Page(s): 87 - 94
    Cited by:  Papers (7)
    PDF (722 KB) | HTML

    We present three experiments that systematically examine pointing tasks in fish tank VR using the ISO 9241-9 standard. All experiments used a tracked stylus for both a direct touch technique and a ray-based technique. Mouse-based techniques were also studied. Our goal was to investigate means of comparing 2D and 3D pointing techniques. The first experiment used a 2D task constrained to the display surface, allowing direct validation against other 2D studies. The second experiment used targets stereoscopically presented above and parallel to the display, i.e., the same task, but without the tactile feedback afforded by the screen. The third experiment used targets varying in all three dimensions. Results of these studies suggest that the conventional 2D formulation of Fitts' law works well for planar pointing tasks even without tactile feedback, and with stereo display. Fully 3D motions using the ray- and mouse-based techniques are less well modeled.

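The "conventional 2D formulation of Fitts' law" referred to above is, in the Shannon form commonly used with ISO 9241-9, MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch with hypothetical regression coefficients a and b:

```python
import math

def fitts_mt(D, W, a=0.1, b=0.15):
    """Shannon formulation of Fitts' law: predicted movement time (s)
    for a target at distance D with width W.
    The intercept a and slope b are hypothetical example values;
    in practice they are fit by regression to pointing data."""
    ID = math.log2(D / W + 1)  # index of difficulty, in bits
    return a + b * ID

# Smaller or more distant targets yield a higher index of difficulty,
# and thus a longer predicted movement time:
assert fitts_mt(400, 20) > fitts_mt(100, 20)
```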
  • Design and evaluation of methods to prevent frame cancellation in real-time stereoscopic rendering

    Publication Year: 2011 , Page(s): 95 - 98
    PDF (779 KB) | HTML

    Frame cancellation arises from the conflict between two depth cues: stereo disparity and occlusion by the screen border. When this conflict occurs, the user suffers from poor depth perception of the scene. It also leads to uncomfortable viewing and eyestrain due to problems in fusing left and right images. In this paper we propose a novel method to avoid frame cancellation in real-time stereoscopic rendering. To solve the disparity/frame occlusion conflict, we propose rendering only the part of the viewing volume that is free of conflict by using clipping methods available in standard real-time 3D APIs. This volume is called the "Stereo Compatible Volume" (SCV) and the method is named "Stereo Compatible Volume Clipping" (SCVC). Black Bands, a proven method initially designed for stereoscopic movies, was also implemented for comparison. Twenty-two people were asked to answer open questions and to score criteria for SCVC, Black Bands, and a control method with no specific treatment. Results show that subjective preference and the user's depth perception near the screen edge seem improved by SCVC, and that Black Bands did not achieve the performance we expected. At a time when stereoscopy-capable hardware is available on the mass consumer market, the disparity/frame occlusion conflict in stereoscopic rendering will become more noticeable, and SCVC could be a recommended solution. SCVC's simplicity of implementation makes the method suitable for a wide range of rendering software, from VR applications to game engines.

  • Posters

    Publication Year: 2011 , Page(s): 99 - 100
    PDF (164 KB)
    Freely Available from IEEE