Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004)

Date: 2-5 Nov. 2004

Displaying results 1-25 of 65
  • [Cover page]

    Publication Year: 2004, Page(s): c1
    PDF (468 KB)
    Freely Available from IEEE
  • [Title page]

    Publication Year: 2004, Page(s): i - iv
    PDF (336 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2004, Page(s): v - viii
    PDF (171 KB)
    Freely Available from IEEE
  • Message from the General Chairs

    Publication Year: 2004, Page(s): ix
    PDF (34 KB) | HTML
    Freely Available from IEEE
  • Message from the Program Chairs

    Publication Year: 2004, Page(s): x
    PDF (34 KB) | HTML
    Freely Available from IEEE
  • Organizing Committee

    Publication Year: 2004, Page(s): xi
    PDF (33 KB)
    Freely Available from IEEE
  • Program Committee

    Publication Year: 2004, Page(s): xii - xiii
    PDF (38 KB)
    Freely Available from IEEE
  • Additional reviewers

    Publication Year: 2004, Page(s): xiv - xv
    PDF (33 KB)
    Freely Available from IEEE
  • The transcendent Greek [keynote speech abstract]

    Publication Year: 2004, Page(s): 1
    PDF (361 KB) | HTML

    Summary form only given, as follows. Ever wish you were better at getting a date? Or just remembering names? Have trouble getting a fair shake at your annual job review? Are you the last one to hear about the corporate reorg? Computers are now becoming socially aware, and that means we can begin to augment our social reality. I will describe a series of machine perception tools that sense social signals and map social networks, and then use AR interfaces that may someday help you get a date, get a job, and get a raise.

  • Augmenting This ... Augmented That: Maximizing Human Performance

    Publication Year: 2004, Page(s): 1
    Cited by: Papers (1)
    PDF (361 KB) | HTML
    Freely Available from IEEE
  • OSGAR: a scene graph with uncertain transformations

    Publication Year: 2004, Page(s): 6 - 15
    Cited by: Papers (9) | Patents (1)
    PDF (1280 KB) | HTML

    An important problem for augmented reality is registration error. No system can be perfectly tracked, calibrated or modeled. As a result, the overlaid graphics are not aligned perfectly with objects in the physical world. This can be distracting, annoying or confusing. In this paper, we propose a method for mitigating the effects of registration errors that enables application developers to build dynamically adaptive AR displays. Our solution is implemented in a programming toolkit called OSGAR. Built upon OpenSceneGraph (OSG), OSGAR statistically characterizes registration errors, monitors those errors and, when a set of criteria are met, dynamically adapts the display to mitigate the effects of the errors. Because the architecture is based on a scene graph, it provides a simple, familiar and intuitive environment for application developers. We describe the components of OSGAR, discuss how several previously proposed methods for mitigating registration error can be implemented within it, and illustrate its use through a set of examples.

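    The toolkit itself is C++ on OpenSceneGraph; as a rough, hypothetical numpy sketch of the underlying idea (first-order propagation of pose uncertainty through one scene-graph transform; all names here are illustrative, not OSGAR's API):

        import numpy as np

        def skew(v):
            """Cross-product matrix [v]x of a 3-vector."""
            return np.array([[0, -v[2], v[1]],
                             [v[2], 0, -v[0]],
                             [-v[1], v[0], 0]])

        def propagate_uncertainty(R, t, Sigma_rot, Sigma_trans, p, Sigma_p):
            """Propagate a point p with covariance Sigma_p through the rigid
            transform y = R p + t, where the transform itself carries
            small-angle rotation noise Sigma_rot (3x3) and translation
            noise Sigma_trans (3x3). First-order (Jacobian) propagation."""
            y = R @ p + t
            J_rot = -R @ skew(p)          # sensitivity of R p to a small rotation
            Sigma_y = (R @ Sigma_p @ R.T
                       + J_rot @ Sigma_rot @ J_rot.T
                       + Sigma_trans)
            return y, Sigma_y

    Chaining such steps from the root of the scene graph down to a leaf yields the kind of screen-space error estimate that an adaptive display could react to.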
  • A compact optical see-through head-worn display with occlusion support

    Publication Year: 2004, Page(s): 16 - 25
    Cited by: Papers (5)
    PDF (600 KB) | HTML

    We propose an optical see-through head-worn display capable of mutual occlusion. Mutual occlusion is an attribute of an augmented reality display whereby real objects can occlude virtual objects and virtual objects can occlude real objects. For a user to perceive no difference between virtual imagery and the real environment on which it is superimposed, mutual occlusion is a strongly desired attribute in certain applications. This paper presents a breakthrough in display hardware by the criteria of mobility (i.e., compactness), resolution, and switching speed. Specifically, we focus on enabling virtual objects to occlude real objects. The core of the system is a spatial light modulator (SLM) and polarization-based optics, which allow us to block or pass selected parts of the scene viewed through the head-worn display. An objective lens images the scene onto the SLM, and the modulated image is mapped back onto the original scene via an eyepiece. Computer-generated imagery is combined with the modulated version of the scene to form the final image the user sees.

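    A hypothetical per-pixel model of the light path just described, assuming the SLM blocks real light wherever virtual content should occlude it and CG imagery is then added optically:

        import numpy as np

        def occlusion_composite(scene, cg, cg_alpha):
            """scene: real-world image as seen through the optics;
            cg: computer-generated imagery; cg_alpha: occlusion mask in
            [0, 1], 1 where the virtual object should occlude the real.
            All arrays are floats of identical shape."""
            slm_transmittance = 1.0 - cg_alpha   # SLM passes only unoccluded scene light
            return scene * slm_transmittance + cg * cg_alpha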
  • Projected augmentation - augmented reality using rotatable video projectors

    Publication Year: 2004, Page(s): 26 - 35
    Cited by: Papers (4) | Patents (1)
    PDF (1088 KB) | HTML

    In this paper, we propose a new way of augmenting our environment with information without making the user carry any devices. We use video projection to display the augmentation directly on the objects. The projector can be rotated and otherwise controlled remotely by a computer, allowing it to follow objects carrying a marker. The main contribution of this paper is a system that keeps the augmentation displayed in the correct place while the object or the projector moves. We describe the hardware and software design of our system, how functions such as following the marker and keeping it in focus are implemented, and how the many parameters of all the subsystems are calibrated.

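    The paper gives no code; a minimal sketch of one piece, aiming a pan-tilt projector at a tracked marker position given in the projector's base frame (axes assumed x right, y up, z forward):

        import numpy as np

        def aim_projector(target):
            """Pan/tilt angles (radians) that point the optical axis at a
            3D target. Assumes pan rotates about the y axis, then tilt
            about the rotated x axis."""
            x, y, z = target
            pan = np.arctan2(x, z)
            tilt = np.arctan2(y, np.hypot(x, z))
            return pan, tilt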
  • Sensor fusion and occlusion refinement for tablet-based AR

    Publication Year: 2004, Page(s): 38 - 47
    Cited by: Papers (6)
    PDF (632 KB) | HTML

    This paper presents a set of technologies which enable robust, accurate, high resolution augmentation of live video, delivered via a tablet PC to which a video camera has been attached. By combining several technologies, this is achieved without the use of contrived markers in the environment: An outside-in tracker observes the tablet to generate robust, low-accuracy pose estimates. An inside-out tracker running on the tablet observes the video feed from the tablet-mounted camera and provides high-accuracy pose estimates by tracking natural features in the environment. Information from both of these trackers is combined in an extended Kalman filter. Finally, to maximise the quality of the augmented imagery, boundaries where the real world occludes the virtual imagery are identified and another tracker is used to refine the boundaries between real and virtual imagery so that their synthesis is as convincing as possible.

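    A minimal sketch of the fusion step, assuming a position-only state and an identity measurement model (the paper's extended Kalman filter also handles orientation and nonlinear measurement models):

        import numpy as np

        def kalman_update(x, P, z, R_meas):
            """One measurement update: state x with covariance P is
            corrected by measurement z with noise covariance R_meas."""
            S = P + R_meas                    # innovation covariance (H = I)
            K = P @ np.linalg.inv(S)          # Kalman gain
            x_new = x + K @ (z - x)
            P_new = (np.eye(len(x)) - K) @ P
            return x_new, P_new

        # Each frame: update with the robust outside-in measurement, then,
        # when available, with the high-accuracy inside-out measurement.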
  • Combining edge and texture information for real-time accurate 3D camera tracking

    Publication Year: 2004, Page(s): 48 - 56
    Cited by: Papers (40) | Patents (7)
    PDF (1120 KB) | HTML

    We present an effective way to combine the information provided by edges and by feature points for the purpose of robust real-time 3-D tracking. This lets our tracker handle both textured and untextured objects. As it can exploit more of the image information, it is more stable and less prone to drift than purely edge-based or feature-based ones. We start with a feature-point-based tracker we developed in earlier work and integrate the ability to take edge information into account. Achieving optimal performance in the presence of cluttered or textured backgrounds, however, is far from trivial because of the many spurious edges that bedevil typical edge detectors. We overcome this difficulty with a method for handling multiple hypotheses for potential edge locations that is similar in speed to approaches that consider only a single hypothesis, and therefore much faster than conventional multiple-hypothesis ones. The result is a real-time 3-D tracking algorithm that exploits both texture and edge information without being sensitive to misleading background information, and that does not drift over time.

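    A hypothetical sketch of the multiple-hypothesis step: along the normal of each projected model edge, keep several candidate edge locations rather than only the strongest, leaving the final choice to the robust estimator:

        import numpy as np

        def edge_hypotheses(profile, k=3, min_grad=10.0):
            """profile: image intensities sampled along an edge normal.
            Returns up to k indices of local maxima of |gradient| above a
            threshold, strongest first."""
            g = np.abs(np.gradient(profile.astype(float)))
            is_max = (g > np.roll(g, 1)) & (g >= np.roll(g, -1)) & (g > min_grad)
            is_max[0] = is_max[-1] = False    # ignore wrap-around artifacts
            idx = np.flatnonzero(is_max)
            return idx[np.argsort(g[idx])[::-1]][:k]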
  • Handling uncertain sensor data in vision-based camera tracking

    Publication Year: 2004, Page(s): 58 - 67
    Cited by: Papers (8)
    PDF (368 KB) | HTML

    A hybrid approach for real-time markerless tracking is presented. Robust and accurate tracking is obtained from the coupling of camera and inertial sensor data. Unlike previous approaches, we use sensor information only when the image-based system fails to track the camera. In addition, sensor errors are measured and taken into account at each step of our algorithm. Finally, we address the camera/sensor synchronization problem and propose a method to resynchronize these two devices online. We demonstrate our method in two example sequences that illustrate the behavior and benefits of the new tracking method.

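    A minimal sketch of the stated policy, with poses as numpy 4x4 matrices and gyro_delta the inertially measured camera motion since the last frame (all names hypothetical):

        def hybrid_pose(vision_pose, vision_ok, last_pose, gyro_delta):
            """Trust the image-based tracker while it succeeds; when it
            fails, dead-reckon from the last good pose using the inertial
            increment until vision recovers."""
            if vision_ok:
                return vision_pose
            return gyro_delta @ last_pose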
  • Display-relative calibration for optical see-through head-mounted displays

    Publication Year: 2004, Page(s): 70 - 78
    Cited by: Papers (7) | Patents (2)
    PDF (360 KB) | HTML

    Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality applications, but their utility in practice has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection, using either instrumentation or manual point correspondences, and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two-phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I captures the parameters of the display system relative to a normalized reference frame and is performed in a jig, with no human-factors issues. The second phase optimizes the display for a specific user and for the placement of the display on the head. Several phase-II alternatives provide flexibility in a variety of applications, including those involving untrained users.

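    The specifics of the two phases are in the paper; as a generic stand-in for the kind of 3D-to-2D mapping being estimated and refined, a direct linear transform (DLT) that recovers a 3x4 projection from point correspondences:

        import numpy as np

        def dlt_projection(X, x):
            """Estimate P (3x4) from n >= 6 correspondences between 3D
            points X (n,3) and 2D points x (n,2), via the standard DLT."""
            rows = []
            for (Xw, Yw, Zw), (u, v) in zip(X, x):
                Xh = [Xw, Yw, Zw, 1.0]
                rows.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
                rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            return Vt[-1].reshape(3, 4)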
  • Automated initialization for marker-less tracking: a sensor fusion approach

    Publication Year: 2004, Page(s): 79 - 88
    Cited by: Papers (6) | Patents (4)
    PDF (968 KB) | HTML

    We introduce a sensor fusion approach for the automated initialization of marker-less tracking systems. Given a 3D model of the objects or of the real scene, it is not limited in tracking range or working environment. This is achieved through statistical analysis and probabilistic estimation of the uncertainties of the tracking sensors. The explicit representation of the error distribution allows the fusion of data from different sensors. We applied this methodology to an augmented reality system using a mobile camera and several stationary tracking sensors; it can easily be extended to additional sensors. To solve the initialization problem, we adapt, modify and integrate advanced techniques such as plenoptic viewing, intensity-based registration, and ICP, so that the registration error is minimized in 3D object space rather than in the 2D image. Experimental results show how complex objects can be registered efficiently and accurately to a single image.

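    A minimal sketch of the statistical core, assuming the sensor errors are represented as Gaussians over the same quantity: two estimates are fused by covariance weighting (the product-of-Gaussians rule):

        import numpy as np

        def fuse_gaussians(mu1, S1, mu2, S2):
            """Combine two independent Gaussian estimates (mean, covariance)
            of the same quantity into one, weighting each by its inverse
            covariance."""
            W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)
            S = np.linalg.inv(W1 + W2)
            mu = S @ (W1 @ mu1 + W2 @ mu2)
            return mu, S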
  • A marker calibration method utilizing a priori knowledge on marker arrangement

    Publication Year: 2004, Page(s): 89 - 98
    Cited by: Patents (7)
    PDF (264 KB) | HTML

    This paper describes a calibration method for markers used for registration in mixed reality (MR) applications. Many vision-based approaches have been proposed as registration methods in MR. When multiple markers are utilized in a vision-based method, the geometric information of the marker arrangement, such as marker positions and orientations, must be known in advance. In this paper, we propose a hybrid method combining the "bundle adjustment method," a photogrammetric technique that calculates the geometric information from a set of images, with constraints on the marker arrangement that are known a priori (e.g., that the markers lie on a single plane). After considering the marker arrangements seen in many MR systems, we summarize the constraints that commonly appear in them. We then explain the basic framework of the method, as well as solution methods under several practical constraints. Finally, we describe several experiments and their results to show the effectiveness of the method.

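    A hypothetical sketch of how such a constraint can enter the bundle adjustment: markers known to lie on one plane are parameterized as (x, y, 0), so the constraint holds by construction rather than by penalty; a least-squares solver minimizes these residuals:

        import numpy as np

        def reprojection_residuals(marker_xy, cam_poses, K, observations):
            """marker_xy: (m,2) in-plane marker coordinates; cam_poses:
            list of (R, t); K: 3x3 intrinsics; observations: list of
            (cam_index, marker_index, u, v). Returns stacked residuals."""
            res = []
            for ci, mi, u, v in observations:
                R, t = cam_poses[ci]
                X = np.array([marker_xy[mi][0], marker_xy[mi][1], 0.0])
                p = K @ (R @ X + t)
                res += [p[0] / p[2] - u, p[1] / p[2] - v]
            return np.array(res)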
  • Embedding imperceptible patterns into projected images for simultaneous acquisition and display

    Publication Year: 2004, Page(s): 100 - 109
    Cited by: Papers (20) | Patents (19)
    PDF (1384 KB) | HTML

    We introduce a method to imperceptibly embed arbitrary binary patterns into ordinary color images displayed by unmodified off-the-shelf digital light processing (DLP) projectors. The encoded images are visible only to cameras synchronized with the projectors and exposed for a short interval, while the original images appear only minimally degraded to the human eye. To achieve this goal, we analyze and exploit the micro-mirror modulation pattern used by the projection technology to generate intensity levels for each pixel and color channel. Our real-time embedding process maps the user's original color image values to the nearest values whose camera-perceived intensities are the ones desired by the binary image to be embedded. The color differences caused by this mapping process are compensated by error-diffusion dithering. The non-intrusive nature of our approach allows simultaneous (immersive) display and acquisition under controlled lighting conditions, as defined on a pixel level by the binary patterns. We therefore introduce structured light techniques into human-inhabited mixed and augmented reality environments, where they were previously often too intrusive.

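    A hypothetical grayscale sketch of the mapping-plus-dithering loop: each pixel is snapped to the nearest intensity level whose camera-perceived value encodes the desired bit, and the resulting error is diffused to unprocessed neighbours with Floyd-Steinberg weights (the candidate level sets would come from measuring the projector's mirror modulation and are assumed given here):

        import numpy as np

        def embed_pattern(img, bits, candidates):
            """img: float image in [0, 255]; bits: same-shape array of
            0/1; candidates[b]: intensity levels that read as bit b to
            the synchronized camera."""
            out = img.astype(float).copy()
            h, w = out.shape
            for y in range(h):
                for x in range(w):
                    q = min(candidates[bits[y, x]], key=lambda v: abs(v - out[y, x]))
                    err = out[y, x] - q
                    out[y, x] = q
                    if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
                    if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
                    if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
            return out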
  • Scene modelling, recognition and tracking with invariant image features

    Publication Year: 2004, Page(s): 110 - 119
    Cited by: Papers (38) | Patents (7)
    PDF (552 KB) | HTML

    We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object, and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.

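    This pipeline maps naturally onto modern feature tooling; a hypothetical OpenCV sketch of the tracking stage, where frame_gray, model_desc, model_points, and intrinsics K are assumed to come from the modelling stage:

        import cv2
        import numpy as np

        sift = cv2.SIFT_create()
        kp, desc = sift.detectAndCompute(frame_gray, None)

        # Match against descriptors of the reconstructed 3D model,
        # keeping only distinctive matches via Lowe's ratio test.
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, model_desc, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]

        obj = np.float32([model_points[m.trainIdx] for m in good])
        img = np.float32([kp[m.queryIdx].pt for m in good])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)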
  • A method for designing marker-based tracking probes

    Publication Year: 2004, Page(s): 120 - 129
    Cited by: Papers (2)
    PDF (448 KB) | HTML

    Many tracking systems determine the pose of objects in an environment using collections of fiducial markers arranged in rigid configurations, called tracking probes. In this paper, we present a technique for designing tracking probes called the viewpoints algorithm. The algorithm is applicable to any tracking system that uses at least three fiducial marks to determine the pose of an object. We used the algorithm to create an integrated, head-mounted display tracking probe. The predicted accuracy of this probe was 0.032 ± 0.02 degrees in orientation and 0.09 ± 0.07 mm in position; the measured accuracy was 0.028 ± 0.01 degrees in orientation and 0.11 ± 0.01 mm in position. These results translate to a predicted static positional overlay error of less than 0.5 mm for a virtual object presented at 1 m. The algorithm is part of a larger framework for designing tracking probes based upon performance goals and environmental constraints.

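    The viewpoints algorithm itself is the paper's contribution; the underlying pose problem any such probe must solve, recovering the rigid transform from n >= 3 corresponding fiducial positions, has the standard closed-form least-squares solution (Kabsch/Horn) sketched here:

        import numpy as np

        def absolute_orientation(A, B):
            """Least-squares R, t with B ~ R A + t, from corresponding
            marker positions A, B of shape (n, 3), via SVD."""
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            H = (A - ca).T @ (B - cb)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cb - R @ ca
            return R, t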
  • Collaborative mixed reality visualization of an archaeological excavation

    Publication Year: 2004, Page(s): 132 - 140
    Cited by: Papers (9) | Patents (1)
    PDF (704 KB) | HTML

    We present VITA (visual interaction tool for archaeology), an experimental collaborative mixed reality system for offsite visualization of an archaeological dig. Our system allows multiple users to visualize the dig site in a mixed reality environment in which tracked, see-through, head-worn displays are combined with a multi-user, multi-touch, projected table surface, a large screen display, and tracked hand-held displays. We focus on augmenting existing archaeological analysis methods with new ways to organize, visualize, and combine the standard 2D information available from an excavation (drawings, pictures, and notes) with textured, laser range-scanned 3D models of objects and the site itself. Users can combine speech, touch, and 3D hand gestures to interact multimodally with the environment. Preliminary user tests were conducted with archaeology researchers and students, and their feedback is presented here.

  • Agents that talk and hit back: animated agents in augmented reality

    Publication Year: 2004, Page(s): 141 - 150
    Cited by: Papers (6)
    PDF (632 KB) | HTML

    AR Puppet is a hierarchical animation framework for augmented reality agents, a research area combining augmented reality (AR), sentient computing and autonomous animated agents into a single coherent human-computer interface paradigm. While sentient computing systems use the physical environment as an input channel, AR outputs virtual information superimposed on real-world objects. To enhance man-machine communication with more natural and efficient information presentation, this framework adds to AR applications animated agents that make autonomous decisions based on their perception of the real environment. These agents are able to turn physical objects into interactive, responsive entities collaborating with both anthropomorphic and non-anthropomorphic virtual characters, extending AR with a previously unexplored output modality. AR Puppet explores the requirements for context-aware animated agents concerning visualization, appearance and behavior, as well as associated technologies and application areas. A demo application in which a virtual repairman collaborates with an augmented LEGO® robot illustrates our concepts.

  • Outdoor see-through vision utilizing surveillance cameras

    Publication Year: 2004, Page(s): 151 - 160
    Cited by: Papers (7) | Patents (4)
    PDF (1320 KB) | HTML

    This paper presents a new outdoor mixed-reality system for users carrying a small camera-equipped handheld device in an outdoor scene in which a number of surveillance cameras are embedded. We propose a new capability for outdoor mixed reality: the handheld device can display the live status of areas hidden by structures such as buildings and walls. The function is implemented on a camera-equipped subnotebook handheld PC (HPC). Videos of the invisible areas are taken by the surveillance cameras and precisely overlaid on the video of the HPC camera, so the user can notice objects in the invisible areas and see directly what those objects are doing. We use the surveillance cameras for two purposes: (1) to obtain videos of the invisible areas, which are trimmed and warped so they can be composited into the video of the HPC camera; and (2) to update the textures of the calibration markers in order to handle texture changes in the real outdoor world. We have implemented a preliminary system with four surveillance cameras and shown that it can visualize invisible areas in real time.

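    A hypothetical OpenCV sketch of the overlay step, assuming >= 4 point correspondences on a common surface between the surveillance view and the handheld (HPC) view have already been established, e.g. via the calibration markers (pts_surveillance, pts_handheld, and the frames are assumed given):

        import cv2

        # Plane-induced homography from the surveillance image to the HPC image.
        H, _ = cv2.findHomography(pts_surveillance, pts_handheld, cv2.RANSAC)

        # Warp the hidden-area video into the handheld view and blend it
        # semi-transparently over the occluding structure.
        h, w = handheld_frame.shape[:2]
        warped = cv2.warpPerspective(surveillance_frame, H, (w, h))
        see_through = cv2.addWeighted(handheld_frame, 0.5, warped, 0.5, 0)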