
17th International Conference on Artificial Reality and Telexistence

28-30 November 2007

  • 17th International Conference on Artificial Reality and Telexistence - Cover

    Page(s): c1
    PDF (676 KB) | Freely Available from IEEE
  • 17th International Conference on Artificial Reality and Telexistence - Title page

    Page(s): i - iii
    PDF (1566 KB) | Freely Available from IEEE
  • 17th International Conference on Artificial Reality and Telexistence - Copyright

    Page(s): iv
    PDF (104 KB) | Freely Available from IEEE
  • 17th International Conference on Artificial Reality and Telexistence - TOC

    Page(s): v - x
    PDF (879 KB) | Freely Available from IEEE
  • Welcome from the Organizing Committee Chairs

    Page(s): xi
    PDF (198 KB) | Freely Available from IEEE
  • Foreword: Esbjerg - Gateway to Scandinavia

    Page(s): xii - xiii
    PDF (221 KB) | Freely Available from IEEE
  • Organizing Committees

    Page(s): xiv - xvi
    PDF (217 KB) | Freely Available from IEEE
  • Sponsors

    PDF (633 KB) | Freely Available from IEEE
  • Syncretic Fields: Art, Mind, and the Many Realities

    Page(s): 3
    PDF (269 KB) | HTML

    In the late 20th century, the formative issues in digital art were connectivity and interaction. Now, at the start of the third millennium, our post-digital objectives will increasingly be technoetic and syncretic. During the previous two centuries, there was much ado about e pluribus unum, out of many, one: a unified culture, unified self, unified mind, unity of time and space. Now, at the start of this century, the reverse applies. E unum pluribus, out of one, many: many selves, many presences, many locations, many levels of consciousness. The many realities we inhabit (material, virtual, and spiritual, for example) are accompanied by our sense of being present simultaneously in many worlds: physical presence in ecospace, apparitional presence in spiritual space, telepresence in cyberspace, and vibrational presence in nanospace. In this respect, Second Life is the rehearsal room for future scenarios in which we will endlessly re-invent our many selves. As artists, we deal with the complexities of media that are at once immaterial and moist, numinous and grounded, and with the complexity of the technoetic mind that both inhabits the body and is distributed across time and space. Where all these differences could be at odds with each other, we are in fact developing a capacity, mostly unconsciously, to syncretise: to analogise and reconcile contradictions while melding differences, such that art and reality are becoming syncretic. What today we build in the immateriality of cyberspace will tomorrow be realised concretely with nanotechnology. Our syncretic reality will emerge partly through the cultural coherence that intensive interconnectivity elicits, partly through the nano and quantum coherence at the base of our world-building, and partly through the spiritual coherence that informs the field of our multi-layered consciousness.

  • Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    Page(s): 7 - 14
    PDF (3564 KB) | HTML

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during virtual reality sessions, and the gaze-map chromatic gradient coding makes it possible to collect and use this important information about the subject's gaze-avoidance behavior. We present the technological solution and its relevance for therapeutic needs, as well as the experiments performed to demonstrate its usability in a medical context. Results show that the gaze-map technique is fully compatible with different VR exposure systems and provides clinically meaningful data.
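
    The abstract does not spell out the gradient coding itself; the sketch below is a minimal Python illustration of the general idea, accumulating gaze dwell time per region of interest and mapping it onto a blue-to-red ramp. The region names, sampling rate, and color ramp are all assumptions, not the paper's scheme.

    ```python
    # Minimal sketch: accumulate gaze dwell time per region of interest and
    # encode it as a chromatic gradient (blue = briefly viewed, red = long
    # dwell). Region names, the 60 Hz rate, and the linear ramp are assumed.
    from collections import defaultdict

    SAMPLE_DT = 1.0 / 60.0  # assumed eye-tracker sampling period (s)

    def accumulate_dwell(gaze_samples, regions):
        """gaze_samples: iterable of (x, y); regions: {name: (x0, y0, x1, y1)}."""
        dwell = defaultdict(float)
        for x, y in gaze_samples:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    dwell[name] += SAMPLE_DT
        return dwell

    def dwell_to_rgb(dwell_s, max_s):
        """Map dwell time onto a blue-to-red color ramp."""
        t = min(dwell_s / max_s, 1.0) if max_s > 0 else 0.0
        return (int(255 * t), 0, int(255 * (1.0 - t)))

    regions = {"spider": (100, 100, 200, 200), "exit": (400, 0, 512, 80)}
    samples = [(150, 150)] * 120 + [(450, 40)] * 30  # 2 s on spider, 0.5 s on exit
    for name, t in accumulate_dwell(samples, regions).items():
        print(name, round(t, 2), "s ->", dwell_to_rgb(t, max_s=2.0))
    ```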

  • A New Framework for Tracking by Maintaining Multiple Global Hypotheses for Augmented Reality

    Page(s): 15 - 22
    PDF (1160 KB) | HTML

    Several tracking techniques for augmented reality have been proposed. In feature-point tracking, a pose is computed by minimizing the error between the observed 2D feature points and the feature points back-projected from the 3D scene model. This minimization problem is usually solved by nonlinear optimization. The main advantage of this approach is its accuracy. However, it is difficult to compute the correct pose unless an appropriate initial value is used. In addition, when an observation contains errors, this approach does not guarantee a correct pose even if it converges to the global minimum. Therefore, once an incorrect pose is computed in a frame, the tracking in the next frame may fail, or the result will deviate from the correct pose. In this paper, we propose a new tracking framework for augmented reality. The proposed method tracks features as multiple local hypotheses based not on just one pose but on multiple poses computed from pose estimation in the previous frame. Since multiple poses are maintained as global hypotheses, tracking can continue even in difficult situations, such as a scene with simple repetitive structure and high-speed movement, as long as the correct pose is contained in the hypotheses.
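
    A minimal sketch of the framework's central idea, assuming hypotheses can be kept as a list of candidate poses: refine every surviving pose, score each by reprojection error, and retain the k best rather than committing to a single optimum. `refine_pose` and `reprojection_error` are hypothetical stand-ins for the paper's nonlinear optimization.

    ```python
    # Multi-hypothesis tracking sketch: keep the K lowest-error poses per
    # frame so tracking can recover if the current best is a false minimum.
    import heapq

    K = 4  # number of global hypotheses kept per frame (illustrative)

    def track_frame(hypotheses, observations, refine_pose, reprojection_error):
        """hypotheses: list of pose parameters surviving from the previous frame."""
        refined = [refine_pose(pose, observations) for pose in hypotheses]
        scored = [(reprojection_error(pose, observations), pose) for pose in refined]
        best = heapq.nsmallest(K, scored, key=lambda s: s[0])
        return [pose for _, pose in best]

    # Toy demo: poses are scalars, the "optimizer" nudges toward the observations.
    obs = [2.0, 2.1, 1.9]
    refine = lambda p, o: p + 0.5 * (sum(o) / len(o) - p)
    err = lambda p, o: sum((p - x) ** 2 for x in o)
    print(track_frame([0.0, 1.0, 5.0, 2.0], obs, refine, err))
    ```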

  • Real-Time Recognition of Body Motion for Virtual Dance Collaboration System

    Page(s): 23 - 30
    PDF (681 KB) | HTML

    A method for real-time recognition of body motion for a virtual dance collaboration system is described. Fourteen feature values are extracted from motion-captured body-motion data, and the dimensionality of the data is reduced using principal component analysis (PCA). In the training phase, templates for motion recognition are constructed from training samples of several types of motion. In the recognition phase, feature values obtained from a real dancer's motion data are projected into the subspace obtained by PCA, and the system recognizes the real dancer's motion by comparing the projection with the motion templates. In this paper, the method and experiments using seven kinds of basic motions are presented. The recognition experiment showed that the method can be used for motion recognition. A preliminary experiment in which a real dancer and a virtual dancer collaborate through body motion was also carried out.
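
    The pipeline described above maps naturally onto a few lines of NumPy; the sketch below is a rough illustration with synthetic stand-in data, an assumed 5-dimensional subspace, and nearest-template matching by Euclidean distance (the paper's exact settings are not given in the abstract).

    ```python
    # PCA template matching sketch: project 14-D feature vectors into a PCA
    # subspace, then label a frame by its nearest per-class motion template.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in training data: 7 motion classes x 20 samples x 14 features.
    train_X = rng.normal(size=(140, 14)) + np.repeat(np.arange(7), 20)[:, None]
    train_y = np.repeat(np.arange(7), 20)

    # PCA via SVD on mean-centered data; 5 components is an assumed choice.
    mean = train_X.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_X - mean, full_matrices=False)
    components = Vt[:5]

    def project(x):
        return (x - mean) @ components.T

    # One template per class: the mean of its projected training samples.
    templates = np.array([project(train_X[train_y == c]).mean(axis=0)
                          for c in range(7)])

    def recognize(feature_vec):
        d = np.linalg.norm(templates - project(feature_vec), axis=1)
        return int(np.argmin(d))

    print(recognize(train_X[25]))  # classify a sample whose true class is 1
    ```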

  • A Display-Based Tracking System: Display-Based Computing for Measurement Systems

    Page(s): 31 - 38
    PDF (2455 KB) | HTML

    In this paper, we introduce a two-dimensional display-based tracking system. The system consists of a regular display device and simple photo sensors, and it measures the position and direction of a receiver using fiducial graphics. The measurement result is obtained in the same coordinate system as the graphics, so the system no longer requires the measurement devices to be calibrated against the display devices. This is beneficial for mixed reality applications that synthesize virtual and real environments.
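
    The abstract does not describe the fiducial graphics themselves. One classic way to let a photo sensor localize itself directly in display coordinates is a temporal Gray-code pattern, sketched below for a single axis; this illustrates the general principle only and is not necessarily the paper's method.

    ```python
    # Gray-code localization sketch: the screen flashes frames in which every
    # column emits a unique black/white sequence; decoding the sensor's
    # brightness sequence yields its x coordinate in display coordinates.
    def gray_encode(n):
        return n ^ (n >> 1)

    def column_bit(x, frame, bits=10):
        """Pixel value (0/1) of display column x in Gray-code frame `frame`."""
        return (gray_encode(x) >> (bits - 1 - frame)) & 1

    def decode(samples):
        """Recover the column index from the sensor's per-frame samples."""
        g = 0
        for s in samples:
            g = (g << 1) | s
        n = 0
        while g:            # inverse Gray code: n = g ^ (g>>1) ^ (g>>2) ^ ...
            n ^= g
            g >>= 1
        return n

    x_true = 345
    samples = [column_bit(x_true, f) for f in range(10)]
    print(decode(samples))  # -> 345
    ```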

  • An Adaptable Rear-Projection Screen Using Digital Pens And Hand Gestures

    Page(s): 49 - 54
    PDF (1941 KB) | HTML

    INTOI is a rear-projection setup that combines accurate pen tracking with hand-gesture recognition. The hardware consists of an Anoto pattern printed on a special rear-projection foil and an infrared tracking system. INTOI is a low-cost system that is scalable and provides highly accurate input (to less than 1 mm). Finally, our setup supports novel multi-user interaction that combines simultaneous hand-gesture and pen input.

  • Volumetric Display for Augmented Reality

    Page(s): 55 - 62
    PDF (3617 KB) | HTML

    In our previous paper, we proposed an augmented reality display based on the Pepper's ghost configuration that was able to show two-dimensional images on image planes at different physical depths. In this paper, we propose the next generation of the display. Our latest display can show images at different physical depths simultaneously, and can therefore present virtual objects with real depth, binocular parallax, and motion parallax without the use of special glasses. Using the Pepper's ghost setup, we are able to display real-world objects and virtual objects in the same space. Furthermore, since the rendered virtual objects have real physical depth, our system does not suffer from the accommodation-convergence mismatch problem. We describe the hardware setup and software system, followed by two user experiments that evaluate the system.

  • LUMAR: A Hybrid Spatial Display System for 2D and 3D Handheld Augmented Reality

    Page(s): 63 - 70
    PDF (915 KB) | HTML

    LUMAR is a hybrid system for spatial displays that allows cell phones to be tracked in 2D and 3D through combined egocentric and exocentric techniques based on the LightSense and UMAR frameworks. LUMAR differs from most other spatial display systems based on mobile phones in its three-layered information space: printed matter is augmented with context-sensitive, dynamic 2D media when the device rests on the surface, and with overlaid 3D visualizations when it is held in mid-air.
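
    The three-layer behavior can be summarized as a simple state switch; the sketch below is an illustrative reconstruction (the tracker interface and layer names are assumptions, not LUMAR's API).

    ```python
    # Three-layer dispatch sketch: static print when untracked, dynamic 2D
    # overlays on the surface, 3D overlays in mid-air.
    from enum import Enum, auto

    class Layer(Enum):
        PRINT = auto()       # untracked: the printed page itself
        SURFACE_2D = auto()  # on-surface: context-sensitive dynamic 2D media
        MIDAIR_3D = auto()   # mid-air: overlaid 3D visualization

    def select_layer(on_surface: bool, pose_3d_valid: bool) -> Layer:
        if on_surface:
            return Layer.SURFACE_2D   # exocentric 2D tracking (LightSense-style)
        if pose_3d_valid:
            return Layer.MIDAIR_3D    # egocentric 3D tracking (UMAR-style)
        return Layer.PRINT

    print(select_layer(on_surface=False, pose_3d_valid=True))  # Layer.MIDAIR_3D
    ```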

  • Development of an Active Display

    Page(s): 71 - 78
    PDF (883 KB) | HTML

    In this paper, a novel display system named "active display" is proposed for bi-directional telexistence systems. It consists of a five-bar spherical parallel mechanism, an LCD, and three actuators. The LCD is moved by the parallel mechanism along a spherical surface whose center is at the operator's head, and its motions are synchronized with a remote camera system. As a result, the operator can look anywhere in the remote environment with a realistic sensation similar to that of a head-mounted display (HMD). In addition, the facial information of the operator can easily be acquired because the monitor is separated from his or her face, a significant advantage over an HMD. Furthermore, the system allows the operator to access peripheral devices very easily. The concept of the active display, the details of the developed mechanisms, the control system, and the human interface are discussed. In addition, several experiments are carried out to confirm feasibility.

  • Room-sized Immersive Projection Display for Tele-immersion Environment

    Page(s): 79 - 86
    PDF (709 KB) | HTML

    Although an immersive projection display provides a high-quality sense of presence, it requires a large space and is costly to install. This paper proposes a room-sized immersive projection display system named the CC room. In this system, the rounded corner of an ordinary room is used as the screen, and projectors equipped with fish-eye lenses project wide-angle images. With this system, an immersive virtual environment that covers the user's view can be generated using one PC and one projector. The system was connected to the JGN2 network and applied to tele-immersive communication using a video avatar. This paper describes the construction of the CC room, the distortion-correction method, the evaluation experiments, and the tele-immersion applications.
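
    The abstract does not give the correction formula. As a rough illustration of the lens half of the problem, the sketch below maps a view direction to fish-eye image coordinates under the common equidistant model r = f·θ; the focal length is an assumed value, and the paper's method additionally handles the rounded-corner screen geometry.

    ```python
    # Equidistant fish-eye projection sketch (r = f * theta). Constants are
    # illustrative assumptions, not the CC room's calibration.
    import math

    F_PIX = 500.0  # assumed fish-eye focal length in pixels

    def world_dir_to_fisheye(x, y, z, cx=512.0, cy=512.0):
        """Map a view direction (x, y, z), z forward, to fish-eye image coords."""
        theta = math.atan2(math.hypot(x, y), z)    # angle from the optical axis
        phi = math.atan2(y, x)                     # azimuth around the axis
        r = F_PIX * theta                          # equidistant projection
        return cx + r * math.cos(phi), cy + r * math.sin(phi)

    print(world_dir_to_fisheye(0.0, 0.0, 1.0))   # on-axis -> image center
    print(world_dir_to_fisheye(1.0, 0.0, 1.0))   # 45 degrees off-axis
    ```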

  • Extensible Virtual Environment Systems Using System of Systems Engineering Approach

    Page(s): 89 - 96
    PDF (616 KB) | HTML

    The development of virtual environment (VE) systems is a challenging endeavor with a complex problem domain. Experience gained over the past decade has contributed significantly to various measures of software quality in the resulting VE systems. However, the resulting solutions remain monolithic in nature, without successfully addressing the issues of system interoperability and software aging. This paper argues that the problem resides in the traditional system-centric approach and that an alternative approach based on system-of-systems engineering is necessary. Accordingly, the paper presents a layered reference architecture in which only the core is required for deployment and all other layers are optional. The paper also presents an evaluation methodology for assessing the validity of the resulting architecture, which was applied to the proposed core layer in individual sessions with 12 experts in developing VE systems.
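
    A minimal sketch of the layered idea, with hypothetical layer names and registry interface: a mandatory core that discovers optional systems at run time, so individual systems can evolve or be replaced without rebuilding the whole VE application.

    ```python
    # Core-plus-optional-layers sketch; names and interface are assumptions.
    from typing import Callable, Dict

    class Core:
        """The only mandatory layer; everything else is an optional system."""
        def __init__(self):
            self._systems: Dict[str, Callable[[float], None]] = {}

        def register(self, name: str, update: Callable[[float], None]):
            self._systems[name] = update   # e.g. "rendering", "haptics", "audio"

        def tick(self, dt: float):
            for update in self._systems.values():
                update(dt)                 # each registered system runs in turn

    core = Core()
    core.register("rendering", lambda dt: print(f"render frame (dt={dt:.4f})"))
    core.register("haptics", lambda dt: print(f"update haptics (dt={dt:.4f})"))
    core.tick(1 / 60)
    ```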

  • Presentation Technique of Scent to Avoid Olfactory Adaptation

    Page(s): 97 - 104
    PDF (1418 KB) | HTML

    Trials on the transmission of olfactory information together with audio/visual information are currently being conducted in the field of multimedia. However, continuous emission of a scent creates the problem of human adaptation to the lingering olfactory stimulus: during long movie scenes, viewers cannot keep detecting a continuously emitted scent. To overcome this problem, we applied pulse ejection, repeatedly emitting the scent for short periods of time so that the olfactory stimuli do not remain in the air and cause adaptation. This study presents the procedure for deciding the ejection interval Δt while considering the olfactory characteristics of the subjects. The developed method provided the user with an olfactory experience over a long duration while avoiding adaptation.
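
    The paper derives Δt from measured olfactory characteristics of each subject; the decision procedure itself is not in the abstract. Purely as an illustration, the sketch below picks the smallest interval at which an assumed exponentially decaying residual falls below a subject's detection threshold before the next pulse.

    ```python
    # Interval-selection sketch: the exponential decay model and all constants
    # are illustrative assumptions, not the paper's procedure.
    import math

    def min_interval(peak_conc, threshold, decay_tau_s):
        """Smallest Delta t (s) with residual below threshold at the next pulse."""
        # peak * exp(-dt / tau) < threshold  =>  dt > tau * ln(peak / threshold)
        return decay_tau_s * math.log(peak_conc / threshold)

    dt = min_interval(peak_conc=1.0, threshold=0.05, decay_tau_s=1.2)
    print(round(dt, 2))  # ~3.59 s between pulses for this (assumed) subject
    ```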

  • Direct-Projected AR Based Interactive User Interface for Medical Surgery

    Page(s): 105 - 112
    PDF (1029 KB) | HTML

    In the field of computer-aided surgery, augmented reality (AR) technology has been used successfully to enhance the accuracy of surgery and to assist surgeons visually in performing a number of complicated and time-consuming medical operations. However, some medical procedures still do not benefit from AR technology. As a representative example, surgeons still use an ink pen to mark surgical targets when scheduling an operation. The ink pen is inconvenient because a mark drawn with it is not easily modified or deleted, and it is also unlikely to be sanitary. In this paper, we propose an interactive user interface based on direct-projected augmented reality (DirectAR) technology that addresses these problems, and we demonstrate its validity experimentally.

  • A PDA-based See-through Interface within an Immersive Environment

    Page(s): 113 - 118
    PDF (1247 KB) | HTML

    With an immersive display system such as the CAVE, the user can perceive a 3D environment realistically. However, interaction on such systems faces inherent difficulties, such as inaccurate tracking, lack of depth cues, and unstable spatial manipulation without the sense of touch. In this paper, we propose a see-through lens interface using a PDA (personal digital assistant) to support spatial manipulation within an immersive display. Compared to related techniques, a PDA-based see-through lens interface offers the following advantages, which we believe provide more natural and flexible manipulation and interaction in an immersive environment: physical contact with the screen surface for easy 2D manipulation in 3D space, built-in controls for immediate command execution, and a built-in display for a flexible GUI (graphical user interface). This paper describes the basic ideas and implementation details of the proposed system, along with some functionalities provided by the interface, such as image-based selection of a 3D object.

  • Force/Shape Reappearance of MSD Rheology Model Calibrated by Force/Shape Sequence

    Page(s): 121 - 128
    PDF (925 KB) | HTML

    In this paper, we first push or pull real and virtual rheology objects with the same displacement or force sequence. Then, by minimizing the difference between the shape or force sequences of the real and virtual objects, we calibrate the many parameters of a rheological MSD (mass-spring-damper) model, together with a pull-off force model and a friction force model between a rigid object and the rheology object it pushes or pulls. The calibration is performed by a probabilistic search (a genetic algorithm). In recent years, we have investigated in depth the combinations "pushed, calibrated, and evaluated by shape sequence" and "pushed, calibrated, and evaluated by force sequence"; in this paper, we compare all eight possibilities. We find that pushing and calibrating by shape sequence gives the best visual fidelity, while pushing and calibrating by force sequence gives the best haptic fidelity. Moreover, we confirm that the force or shape sequence is reproduced even when the sensing data (pushing operation) used in calibration differ from those used in evaluation. Finally, we identify a practical limitation on manipulating a virtual deformable object in a 3D CG environment: a human operator cannot push or pull a rheology object with a rigid body via a force sequence, because no force sensor is attached to the operator's hand. To overcome this problem in the future, we should build a collision model between rigid and rheology objects that transforms pushing displacement and velocity into pushing force in the 3D CG environment.
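
    As a rough illustration of the MSD model being calibrated, the sketch below advances a 1-D chain of mass-spring-damper elements by semi-implicit Euler steps while a force sequence pushes its free end. All parameter values are illustrative; the paper's model (plus its pull-off and friction terms) has many more parameters, which are fitted with a genetic algorithm.

    ```python
    # 1-D mass-spring-damper chain sketch: mass 0 is anchored, the free end
    # is pushed by an external force sequence. Constants are assumptions.
    import numpy as np

    N, M, K, C, DT = 5, 0.01, 50.0, 0.5, 1e-3   # masses, kg, N/m, N s/m, s
    REST = 0.02                                  # rest length per element (m)

    x = np.arange(N) * REST    # positions (m)
    v = np.zeros(N)            # velocities (m/s)

    def step(x, v, tip_force=0.0):
        f = np.zeros(N)
        stretch = np.diff(x) - REST          # element elongations
        dv = np.diff(v)                      # relative velocities
        elem = K * stretch + C * dv          # spring + damper force per element
        f[:-1] += elem                       # element pulls its left mass...
        f[1:] -= elem                        # ...and reacts on its right mass
        f[-1] += tip_force                   # external push/pull at the free end
        f[0] = 0.0                           # mass 0 is anchored
        v = v + (f / M) * DT                 # semi-implicit Euler
        v[0] = 0.0
        return x + v * DT, v

    for _ in range(1000):                    # push the tip for 1 s
        x, v = step(x, v, tip_force=-0.05)
    print(np.round(x, 4))                    # chain compressed toward the anchor
    ```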

  • Haptic Navigation for Broad Social Applications by Kinesthetic Illusion of Pulling Sensation

    Page(s): 129 - 134
    PDF (1152 KB) | HTML

    This paper discusses the potential of force-perception technologies for realizing hand-held devices for social applications. We introduce an interactive system based on force perception called "come over here, or catch you!", a force-sensation-based navigation system for waiters. It consists of our new hand-held haptic interface, which can provide a perceptually continuous translational force, and a position and posture identification system. Since the proposed compact haptic interface does not require external grounding, it can be used outside the laboratory and does not interfere with human behavior. We verify the feasibility of the system in trials.
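
    The abstract does not detail the actuation. One known way to produce an ungrounded pulling illusion is asymmetric acceleration, where a brief strong phase exceeds the perceptual threshold while a longer, weaker return phase does not; whether this matches the device described here is an assumption. A minimal sketch:

    ```python
    # Asymmetric-acceleration sketch: each cycle integrates to zero momentum,
    # so the mass does not drift, yet only the strong phase is perceived,
    # giving a net "pull". Timing and amplitude values are illustrative.
    def cycle_accel(t, period=0.1, fast_frac=0.2, a_fast=8.0):
        """Acceleration (m/s^2) at time t within one cycle; integrates to zero."""
        a_slow = -a_fast * fast_frac / (1.0 - fast_frac)  # balances momentum
        return a_fast if (t % period) < fast_frac * period else a_slow

    # Net velocity change over one cycle is (numerically) zero:
    dt = 1e-4
    dv = sum(cycle_accel(i * dt) * dt for i in range(int(0.1 / dt)))
    print(round(dv, 6))  # ~0.0
    ```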
