Robot and Human Interactive Communication, 2008. RO-MAN 2008. The 17th IEEE International Symposium on

Date: 1-3 Aug. 2008

Displaying Results 1 - 25 of 133
  • Welcome message

    Page(s): i
  • Committees

    Page(s): ii
  • Sponsors

    Page(s): iii
  • Memory and learning for social robots

    Page(s): iv

    While most research in social robotics embraces the challenge of designing and studying the interaction between robots and humans itself, this talk will discuss the utility of social interaction in facilitating more flexible robotics. What can a robot gain with respect to learning and adaptation from being able to interact sociably? What are basic learning-enabling behaviors? And how do inexperienced humans tutor robots in a sociable way? To answer these questions, we consider the challenge of learning by interaction as a systemic one, comprising appropriate perception, system design, and feedback. Basic abilities of robots will be outlined that resemble concepts of developmental learning in infants, apply linguistic models of interaction management, and treat tutoring as a joint task of a human and a robot. However, in order to tackle the challenge of learning by interaction, the robot has to couple and coordinate these behaviors in a very flexible and adaptive manner. The active memory, an architectural concept particularly suitable for learning-enabled robots, will be briefly discussed as a foundation for the coordination and integration of such interactive robotic systems. The talk will build a bridge from the construction of integrated robotic systems to their evaluation and analysis, and back. It will outline why we intend to enable our robots to learn by interacting and how this paradigm impacts the design of systems and interaction behaviors.

  • Human-robot interaction - What we learned from robot helpers and dance partner robots

    Page(s): v

    This talk addresses issues relating to human-robot interaction based on our developments of robot helpers and dance partner robots. First, the Mobile Robot Helper and Distributed Robot Helpers are introduced as assistive systems for handling an object in coordination with a human. These robots are controlled passively, based on the intentional force/moment applied to the object by the user. Several past experiments unveiled the limitations of intentional-force-based coordination, although the concept could be applied to some kinds of tasks. A dance partner robot is then introduced as a research platform for human-robot interaction. The dance partner robot PBDR (Partner Ballroom Dance Robot) was developed for the Aichi Expo in 2005. It dances a waltz as a female dancer together with a human male dancer. A waltz consists of several steps, and the step transition is controlled by the male dancer based on a transition rule. The transition rule allows the male dancer to select a step from a class of steps determined by the current step, and the female dance partner estimates the following step through physical interaction with the male dancer. The dance partner robot has a database of the waltz and its transition rule, which is used to estimate the following dance step and to generate an appropriate step motion. The step estimation is based on the time-series data of force/torque applied by the male dancer to the upper body of the robot. The robot motion for the estimated step is generated in real time from the step motion in the database, compliantly with respect to the interaction force/moment between the human dancer and the robot. We are continuing the development of the robot; the current version can watch the human's dance step throughout the dance, and if the step differs from the estimated one, the step is corrected according to the human's step. The development of the dance partner robot suggests important issues for future robots interacting with humans. Finally, why we are developing the dance partner robot and how the concept will be applied to other systems will also be discussed in the talk.

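    The step estimation described above invites a small sketch: given the current step and a window of force/torque samples, choose the admissible successor step, per the waltz transition rule, whose stored template best matches the observation. The step names, the template database, and the nearest-template matching below are illustrative assumptions, not the paper's actual estimator.

        import numpy as np

        # Illustrative subset of a waltz transition rule:
        # current step -> admissible successor steps.
        TRANSITIONS = {
            "natural_turn": ["reverse_turn", "whisk"],
            "reverse_turn": ["natural_turn", "chasse"],
        }

        rng = np.random.default_rng(0)
        # Hypothetical database: one force/torque template per step,
        # stored as a (T, 6) time series (3 force + 3 torque components).
        TEMPLATES = {step: rng.normal(size=(50, 6))
                     for steps in TRANSITIONS.values() for step in steps}

        def estimate_next_step(current_step, ft_window):
            """Pick the admissible successor whose template lies closest
            (in Euclidean distance) to the observed force/torque window."""
            candidates = TRANSITIONS[current_step]
            return min(candidates,
                       key=lambda s: np.linalg.norm(TEMPLATES[s] - ft_window))

        observed = rng.normal(size=(50, 6))  # stand-in for real sensor data
        print(estimate_next_step("natural_turn", observed))
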
  • Cognition, control and learning for everyday manipulation tasks in human environments

    Page(s): vi

    Summary form only given. In recent years we have seen tremendous advances in the mechatronic, sensing and computational infrastructure of robots, enabling them to act faster, stronger and more accurately than humans do. Yet, when it comes to accomplishing manipulation tasks in everyday settings, robots often do not even reach the sophistication and performance of young children. This is partly due to humans having developed their brains into computational and control devices that facilitate knowledge-informed decision making, perspective taking, envisioning activities and their consequences, and predictive control. Brains orchestrate these learning and reasoning mechanisms in order to produce flexible, adaptive, and reliable behavior in real-time. Household chores are an activity domain where the superiority of the cognitive mechanisms in the brain and their role in competent activity control is particularly evident.

  • Manipulation strategies and Imitation learning in humanoid robots

    Page(s): vii

    Summary form only given. The development and emergence of cognition rely on artificial embodiments having complex and rich perceptual and motor capabilities. The impressive advance of research and development in robotics over the past years has led to the development of humanoid robots that are rich in sensory and motor capabilities and hence provide a suitable framework for studying cognition. To date, the different disciplines related to the development of cognitive humanoids have usually been explored independently, leading to significant results within each discipline. However, the big challenge is how the different pieces of results fit together to achieve complete processing models and an integrative system architecture, and how to evaluate results at the system level rather than focusing on the performance of component algorithms.

  • Robotic musicianship

    Page(s): viii - ix

    Inspired and motivated by the prospect of innovating the core of the musical experience, the author has explored a number of research directions in which digital technology bears the promise of revolutionizing the medium. The research directions identified - gestural expression, collaborative networks, and constructionist learning - aim at creating musical experiences that cannot be facilitated by traditional means. The first direction builds on the notion that, through novel sensing and mapping techniques, new expressive musical gestures can be discovered that are not supported by current acoustic instruments. Such gestures, unconstrained by the physical limitations of acoustic sound production, can provide infinite possibilities for expressive and creative musical experiences for novice as well as trained musicians. The second research direction utilizes the digital network in an effort to create new collaborative experiences, allowing players to take an active role in determining and influencing not only their own musical output but also that of their co-performers. By using the network to interdependently share and control musical materials in a group, musicians can combine their musical ideas into a constantly evolving collaborative musical activity that is novel and inspiring. The third research direction utilizes constructionist learning, which bears the promise of revolutionizing music education by providing hands-on access to programmable music making.

  • Imitation and robotics - background, theories, and practice

    Page(s): x - xiii

    Imitation is a powerful mechanism that allows agents to learn via their interactions within a social context. An artificial system capable of exploiting this imitative learning capability would be able to acquire new skills and tasks from interaction with another agent (typically a human or another robot). Imitative social learning therefore presents a very interesting paradigm in robotics and computer science, and within this paradigm robotics researchers are heavily influenced by interdisciplinary studies, typically in biology, ethology and psychology. This tutorial takes such an interdisciplinary approach and aims to present the background and theories of imitation from biology, ethology and psychology, together with some of their practical implementations in robotics. The aim of the tutorial is to disseminate this research field to a wider audience.

  • Session overview

    Page(s): xiv - xx
  • Book of abstracts


    Summary form only given. Provides an abstract for each of the presentations of the conference proceedings.

  • Vision system for wearable and robotic uses

    Page(s): 53 - 58

    Visual perception is thought to provide us with the illusion of a stable visual world that is seamless in time and space while it is continuously explored with saccades. The oculomotor system ensures retinal image stabilization during head, object, and surround motion. Prior to manipulation, objects are fixated with top-down-driven look-ahead saccades, and similarly, the locomotion path is visually inspected about two steps ahead. In human-human interaction tasks, gaze is not only crucial for motor intention recognition but is also essential in detecting the direction of social attention. A new prototype of a camera motion control unit was developed that provides a sufficiently short latency and a lightweight setup for both a wearable gaze-controlled and a humanoid stereo camera system. The camera system will serve as a binocular eye plant for a humanoid active vision system. The long-term aim is to integrate eye-tracking capabilities into the vision system, equipping the humanoid with the ability to infer the target of a human's gaze in human-machine cooperation scenarios. The eye-tracking technology has been improved by extending it towards calibration-free operation. The anthropomorphic camera motion control system was integrated into the humanoid JOHNNIE, creating a new experimental tool that will help to evaluate the relevance of gaze and look-ahead fixations in the interaction of humans with humanoids in social contexts or during (humanoid) locomotion.

  • The Assistive Kitchen — A demonstration scenario for cognitive technical systems

    Page(s): 1 - 8

    This paper introduces the assistive kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the assistive kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that need to be solved to realize the scenarios successfully.

  • Force skill training with a hybrid trainer model

    Page(s): 9 - 14

    In this work, we present novel VR training strategies that incorporate a hybrid trainer model to train force skills. For modeling the trainer skill, a weighted k-means algorithm in parameter space with least-squares (LS) optimization is implemented. The efficiency of the training strategies is verified via user tests in the frame of a bone-drilling training application. An objective evaluation method based on n-dimensional Euclidean distances is introduced to assess the user test results. It is shown that the proposed strategies improve the student's skill and accelerate force learning.

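    The abstract names a weighted k-means algorithm in parameter space; below is a minimal generic sketch of weighted k-means, in which each centroid is the weighted mean of its assigned samples. It assumes nothing about the authors' parameter space or their LS optimization step.

        import numpy as np

        def weighted_kmeans(X, w, k, iters=50, seed=0):
            """k-means where sample i carries weight w[i]; each centroid
            is the weighted mean of the samples assigned to it."""
            rng = np.random.default_rng(seed)
            C = X[rng.choice(len(X), size=k, replace=False)].copy()
            for _ in range(iters):
                # assign every sample to its nearest centroid
                d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
                a = d.argmin(axis=1)
                for j in range(k):
                    m = a == j
                    if m.any():
                        C[j] = np.average(X[m], axis=0, weights=w[m])
            return C, a

        X = np.random.default_rng(1).normal(size=(200, 3))  # parameter-space samples
        w = np.ones(len(X))                                 # uniform weights here
        centroids, assignment = weighted_kmeans(X, w, k=4)
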
  • Framework for haptic interaction with virtual avatars

    Page(s): 15 - 20

    In this paper we present an integrative framework centered on haptic interaction with virtual avatars. The framework is devised for general prototyping and collaborative scenario studies with haptic feedback. First we present the software architecture of the framework and give details on some of its components. Then we show how the framework can be used to quickly derive a virtual reality simulation. In this simulation, a user directly interacts with a virtual avatar to collaboratively manipulate a virtual object, with haptic feedback, using fast dynamics computation and constraint-based methods with friction.

  • Surface perception in a large workspace encounter interface

    Page(s): 21 - 26

    Haptic interaction with virtual objects is typically tool mediated, or alternatively it constrains the user's body in some way, as happens with exoskeletons, which cannot be totally transparent. Encounter-type haptic interfaces aim at hands-free haptic interaction, which is more natural and can be applied in contexts in which the user moves in the space around the interface. This paper presents a system that allows palm-based haptic interaction in a large workspace using the principle of encountered haptics. The system is evaluated in a surface exploration task and compared against the same task performed with a standard haptic interface. This type of interface is better suited to such a task, providing smoother feedback to the hand during movement over the surface.

  • Anticipative generation and in-situ adaptation of maneuvering affordance in naturally complex scene

    Page(s): 27 - 32

    A fractal representation of maneuvering affordance has been introduced for anticipative decision making through a satellite-vehicle-roadway network. Based on the randomness ineluctably distributed in naturally complex scenes, a probability of capturing a not-yet-identified fractal attractor is estimated and applied to the anticipative generation of roadway models. Through anticipative adaptation, the fractal code is transferable to a cooperative road-following process, where the boundary of open spaces is defined through in-situ adaptation of the maneuvering affordance to observed scenes. The coding and adaptation scheme was applied to typical scenes to demonstrate the randomness-based fractal coding.

  • A barebones communicative robot based on social contingency and Infomax Control

    Page(s): 33 - 34

    In this paper, we present a barebones robot capable of interacting with humans based on social contingency. It extends previous work on a contingency detector with both human-model updating (a developmental capability) and policy improvement (a learning capability) based on the framework of Infomax control. The proposed controller interacts with humans in both active and responsive ways, handling the turn-taking between them.

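    One concrete reading of Infomax control is a greedy information-maximization loop: the robot maintains a belief over a human-model parameter and scores a probing action by its expected entropy reduction. The discretized belief, the single Bernoulli responsiveness parameter, and the one-step horizon in this sketch are all simplifying assumptions.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        # Discrete belief over r = P(human responds | robot probes).
        r_grid = np.linspace(0.05, 0.95, 19)
        belief = np.full(len(r_grid), 1.0 / len(r_grid))

        def expected_info_gain(belief):
            """Expected entropy reduction from one probing action."""
            h0 = entropy(belief)
            p_resp = (belief * r_grid).sum()
            post_resp = belief * r_grid / p_resp              # Bayes update if response
            post_none = belief * (1 - r_grid) / (1 - p_resp)  # Bayes update if silence
            return h0 - (p_resp * entropy(post_resp)
                         + (1 - p_resp) * entropy(post_none))

        # Greedy Infomax policy: probe while a probe is still informative.
        action = "probe" if expected_info_gain(belief) > 0.01 else "respond"
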
  • ITACO: Constructing an emotional relationship between human and robot

    Page(s): 35 - 40

    In this paper we describe the ITACO system, which is able to construct an emotional relationship between humans and interactive systems by means of a migratable agent. The agent in the ITACO system can migrate between interactive systems within an environment. We conducted psychological experiments to verify whether the ITACO system can construct an emotional relationship between humans and interactive systems such as a robot and a table lamp. The experimental results show that emotional relationships were constructed between humans and interactive systems, and that these relationships influenced the humans' behavior and cognitive abilities.

  • “Good robot”, “bad robot” — Analyzing users’ feedback in a human-robot teaching task

    Page(s): 41 - 46

    This paper describes an experimental study in which we analyze how users give multimodal positive and negative feedback by speech, gesture and touch when teaching easy game tasks to a pet robot. The tasks are designed to allow the robot to explore freely and provoke human reward behavior. By choosing game-based tasks, we ensure that the training can be carried out without stressing or boring the user. This way, we can observe natural, situated reward behavior.

  • Smoothing human-robot speech interaction with blinking-light expressions

    Page(s): 47 - 52

    We propose a method to enable smooth speech interaction between a user and a robot. Our method is based on a subtle expression whereby the robot blinks a small LED attached to its chest. We performed experiments in which participants played last-and-first word games with the robot; we counted the number of repetitions made by the participants and analyzed their impressions of the game and the robot. The experimental results suggest that the blinking light can prevent utterance collisions between the user and the robot and can create familiar and attentive impressions of the game in users.

  • 3D gaze tracking with easy calibration using stereo cameras for robot and human communication

    Page(s): 59 - 64

    This paper presents a method to estimate the optic and visual axes of an eye and the point of gaze (POG) on the basis of Listing's law, using stereo cameras for three-dimensional (3D) gaze tracking. By using two cameras and two light sources, the optic axis of the eye can be estimated on the basis of a spherical model of the cornea. A one-point calibration is required to estimate the angle of the visual axis relative to the optic axis. However, a real cornea has an aspheric shape, and it is therefore difficult to estimate the POG accurately in all directions. We use three light sources to improve the estimation of the POG; the two light sources nearest the center of the pupil in the camera image are used for estimating the POG. The experimental results show that the error in the estimation of the visual axis is below about 1°.

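    Given a visual axis estimated for each eye, one standard way to obtain a 3D POG is to triangulate the two axes as the midpoint of their shortest connecting segment. The sketch below shows that generic least-squares construction; it is not necessarily the construction used in the paper.

        import numpy as np

        def triangulate_pog(o1, d1, o2, d2):
            """Midpoint of the shortest segment between two visual-axis
            rays p(t) = o + t*d (directions need not be unit length)."""
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            b = o2 - o1
            c = d1 @ d2
            # Normal equations of min over t1, t2 of |o1 + t1*d1 - o2 - t2*d2|^2
            t1, t2 = np.linalg.solve([[1.0, -c], [c, -1.0]], [b @ d1, b @ d2])
            return (o1 + t1 * d1 + o2 + t2 * d2) / 2

        left = np.array([-0.03, 0.0, 0.0])    # eye centers in metres (illustrative)
        right = np.array([0.03, 0.0, 0.0])
        target = np.array([0.0, 0.0, 0.5])
        pog = triangulate_pog(left, target - left, right, target - right)
        # pog is approximately [0, 0, 0.5], the common fixation point
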
  • Pose-robust face recognition based on texture mapping

    Page(s): 65 - 70

    A human face provides a variety of communicative functions such as identification, the perception of emotional expression, and lip-reading. Many applications in robotics require recognizing a human face. A face recognition system should be able to deal with various changes in face images, such as pose, illumination, and expression, among which pose variation is the most difficult to deal with. For this reason, face registration is key to face recognition: if we can register face images into frontal views, the recognition task becomes much easier. To align a face image into a canonical frontal view, we need to know the pose of the head. A human head can be approximately modeled as a 3D texture-mapped ellipsoid, so any face image can be considered a 2D projection of a 3D ellipsoid at a certain pose. In this paper, both training and test face images are back-projected onto the surface of a 3D ellipsoid according to their estimated poses and registered into canonical frontal-view images. Simple and efficient frontal face recognition can then be carried out in the texture-map domain instead of the original image domain. To evaluate the feasibility of the proposed approach, several recognition experiments are conducted using subspace-based face recognition methods such as PCA, PCA+LDA, and DCV. Experiments on our laboratory database and the Yale Face Database B show that the proposed algorithm performs well even under large out-of-plane rotations.

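    As a sketch of the subspace-matching stage that follows registration, here is plain PCA (eigenfaces) with nearest-neighbor matching in the subspace; the ellipsoid back-projection itself is omitted, and the array shapes are assumptions.

        import numpy as np

        def pca_fit(gallery, n_components):
            """gallery: (n_images, n_pixels) frontalized texture maps."""
            mean = gallery.mean(axis=0)
            _, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
            W = Vt[:n_components]           # principal axes, (k, n_pixels)
            feats = (gallery - mean) @ W.T  # gallery features, (n_images, k)
            return mean, W, feats

        def recognize(probe, mean, W, feats, labels):
            """Nearest-neighbor matching in the PCA subspace."""
            f = (probe - mean) @ W.T
            return labels[np.linalg.norm(feats - f, axis=1).argmin()]

        rng = np.random.default_rng(0)
        gallery = rng.normal(size=(10, 32 * 32))  # stand-in texture maps
        labels = np.arange(10)
        mean, W, feats = pca_fit(gallery, n_components=5)
        probe = gallery[3] + 0.01 * rng.normal(size=32 * 32)
        print(recognize(probe, mean, W, feats, labels))  # should print 3
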
  • Real time facial feature points tracking with Pyramidal Lucas-Kanade algorithm

    Page(s): 71 - 76

    In this paper, we present an algorithm for detecting and tracking facial feature points from real-time camera input. To locate and extract the face image, we use a modified face detector based on Haar-like features. For feature point detection, we use the good-features-to-track criterion of Shi and Tomasi. The facial feature points are then tracked with the pyramidal Lucas-Kanade feature tracker. Real-time results indicate that the proposed algorithm can accurately extract and track facial feature points.

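    The ingredients named in the abstract (Haar-like face detection, Shi-Tomasi good features, pyramidal Lucas-Kanade) all have stock OpenCV implementations, so the pipeline can be sketched as follows. This is a generic OpenCV recipe with assumed parameters, not the authors' code, and it assumes a face is visible in the first frame.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        ok, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        x, y, w, h = detector.detectMultiScale(gray, 1.3, 5)[0]  # first face
        # Shi-Tomasi corners inside the detected face region
        pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], maxCorners=50,
                                      qualityLevel=0.01, minDistance=5)
        pts = pts + np.array([x, y], dtype=np.float32)  # back to image coords

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            nxt = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade tracking over 3 pyramid levels
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                gray, nxt, pts, None, winSize=(15, 15), maxLevel=3)
            pts, gray = new_pts[status.ravel() == 1], nxt
            for px, py in pts.reshape(-1, 2):
                cv2.circle(frame, (int(px), int(py)), 2, (0, 255, 0), -1)
            cv2.imshow("facial feature tracking", frame)
            if cv2.waitKey(1) == 27:  # Esc quits
                break
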
  • Are emotional robots more fun to play with?

    Page(s): 77 - 82

    In this paper we describe a robotic game buddy whose emotional behaviour is influenced by the state of the game. Using the iCat robot and chess as the game scenario, we developed an architecture for generating emotions from a heuristic evaluation of the state of the game. The game buddy was evaluated in two ways. First, we investigated the effects of the character's emotional behaviour on the user's perception of the game state. Second, we compared a robotic with a screen-based version of the iCat in terms of their influence on the user's enjoyment. The results suggest that the user's perception of the game state improves with the iCat's emotional behaviour, and that enjoyment is higher when interacting with the robotic version.

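    A plausible shape for the heuristic-to-emotion mapping the abstract describes: translate the game evaluation (from the robot's perspective) into valence, and the evaluation swing between moves into arousal. The scaling constants and the valence/arousal parameterization are assumptions for illustration.

        def emotion_from_eval(score, prev_score):
            """Map a chess evaluation in pawn units to (valence, arousal)
            values in [-1, 1] and [0, 1] that drive facial expression."""
            valence = max(-1.0, min(1.0, score / 5.0))                   # winning feels good
            arousal = max(0.0, min(1.0, abs(score - prev_score) / 3.0))  # big swings excite
            return valence, arousal

        # After losing a queen (evaluation drops from +1 to -8):
        print(emotion_from_eval(-8.0, 1.0))  # (-1.0, 1.0): distressed and agitated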