
Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2003)

Date: 31 Oct.–2 Nov. 2003


Displaying results 1–25 of 70
  • Roaming stripes: smooth reactive navigation in a partially known environment

    Page(s): 19 - 24

    Service mobile robots must be able to operate within human-populated environments to carry out different tasks, such as surveillance of banks and warehouses, transportation of goods, and escorting people in exhibitions and museums. The paper describes a novel hybrid approach to mobile robot navigation, which integrates a priori knowledge of the environment with local perceptions in order to carry out the assigned tasks efficiently and safely. Moreover, the system is able to generate smooth trajectories that take into account the kinematic and dynamic constraints on the robot's motion due to its geometrical and physical characteristics.

  • Model-based walking support system with wearable walking helper

    Page(s): 61 - 66

    In this paper, a wearable walking support system for people who have difficulty walking because of weakened lower extremities is proposed. We propose a wearable walking support device, referred to as the wearable walking helper, for supporting the antigravity muscles of the lower extremities, together with a model-based control algorithm for the device that does not use biological signals: the supporting knee-joint moment is calculated from the antigravity term of the necessary knee-joint moment, which is estimated with a human model. The control algorithm is implemented in the wearable walking helper, and experimental results illustrate the potential of the proposed system.

  • Gait control of human and humanoid on irregular terrain considering interaction with environment

    Page(s): 277 - 284

    Humanoid robots are expected to move in various environments, including outdoors. At present, however, a humanoid can move only in known environments with few sources of disturbance, owing to its poor ability to recognize the environment and its lack of robustness against disturbances. One solution to these shortcomings is to refer to human strategies: since humans can walk in various environments by adapting to each situation, this approach can be effective. In this research we focus on terrain whose surface sinks under load, such as sand and marshland, among real irregular terrains. The purpose of this research is to realize a humanoid walk suited to such irregular terrain by referring to the human walking strategy. To obtain the strategy, we measured human walking; the measured data are the floor reaction force, the joint angles, and the COG position. We analyzed how humans control their bodies to cope with disturbances. As it turned out, we obtained two main strategies: a statically stable walk (static walk) and ankle control against disturbances. By applying these human strategies to a humanoid, we realized humanoid walking on irregular terrain in simulation. Consequently, the effectiveness of the approach, which consists of measuring human motion, extracting strategies, and applying them to the humanoid, was shown.
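    The static-walk criterion mentioned in the abstract can be made concrete: the walk is statically stable whenever the ground projection of the COG stays inside the support polygon. A minimal sketch of that test, assuming a convex foot polygon with counter-clockwise vertices (the function and the numbers are illustrative, not the authors' implementation):

    ```python
    def statically_stable(cog_xy, support_polygon):
        """Static-walk test: the ground projection of the COG must lie inside
        the support polygon (assumed convex, vertices counter-clockwise)."""
        x, y = cog_xy
        n = len(support_polygon)
        for i in range(n):
            x1, y1 = support_polygon[i]
            x2, y2 = support_polygon[(i + 1) % n]
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
                return False                  # COG projection outside this edge
        return True

    # Single support over a 0.20 m x 0.10 m footprint (illustrative numbers).
    foot = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.0, 0.1)]
    print(statically_stable((0.10, 0.05), foot))      # True: statically stable
    ```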

  • Interactive experiments between creature and robot as a basic research for coexistence between human and robot

    Page(s): 347 - 352

    The goal of our research is to clarify the factors necessary for coexistence between humans and robots. However, evaluating communication between humans and robots is quite difficult due to the uncertain factors inherent in human communication and the variety of personalities. Therefore, as a first step, a simpler species, the rat, was selected, and we conducted interactive experiments between rats and robots. We investigate the basic factors necessary for a symbiotic relationship between living creatures and robots. To conduct such experiments, we developed a new rat-robot (WM-6) and experimental systems. Three experiments have been conducted: "calling robot", "recognizing behavior", and "choosing reward". The results of these experiments show that we succeeded in making a rat act on the robot by itself. Moreover, the rat could recognize two behavioral patterns of the robot ("translation" and "rotation"). Finally, the rat could change its interaction with the robot based on its needs. Consequently, we succeeded in creating various interactions between rat and robot.

  • Human walk pitch extraction by robot vision - towards human robot synchronized walking based on neural oscillator entrainment

    Page(s): 285 - 290

    This paper presents a method to extract the human walk pitch by robot vision, intended for use in robot-human synchronized walking. In human-robot collaboration and interaction, synchronization between robot and human is crucial for safety, comfort, and a sense of compatibility; this holds even when a robot follows or accompanies its human master. In this work, the authors plan to realize the synchronization by exploiting the entrainment of a neural oscillator. A legged robot walks based on the self-oscillation pattern generated by the oscillator in the absence of any stimulus. When the visually extracted human walk-pitch data are fed into the neural oscillator as an external stimulus, the oscillator is entrained by the pitch data. In this paper the authors present both a method and an experiment for extracting the walk pitch by tracking the human heel. Experimental results support the effectiveness of the method. A simulation of the entrainment of the neural oscillator is also presented, in which experimentally extracted human walk-pitch data are used as the external stimulus. The simulation results demonstrate the feasibility of the work.
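    For readers unfamiliar with neural-oscillator entrainment, the sketch below simulates a two-neuron Matsuoka-style oscillator driven by an external walk-pitch signal; all parameter values and the sinusoidal stand-in stimulus are assumptions for illustration, not the authors' model.

    ```python
    import math

    def matsuoka_step(state, inp, dt, tau=0.05, tau_p=0.6,
                      beta=2.5, w=2.5, s=1.0, k=1.0):
        """One Euler step of a two-neuron Matsuoka oscillator driven by an
        external input (here, a stand-in for the extracted walk pitch).
        Parameter values are illustrative assumptions."""
        u1, u2, v1, v2 = state
        y1, y2 = max(u1, 0.0), max(u2, 0.0)      # half-wave rectified outputs
        du1 = (-u1 - beta * v1 - w * y2 + s + k * inp) / tau
        du2 = (-u2 - beta * v2 - w * y1 + s - k * inp) / tau
        dv1 = (-v1 + y1) / tau_p                 # slow adaptation (fatigue)
        dv2 = (-v2 + y2) / tau_p
        return (u1 + dt * du1, u2 + dt * du2, v1 + dt * dv1, v2 + dt * dv2)

    # Drive the oscillator with a synthetic 0.9 Hz "walk pitch" sine wave;
    # after a transient its output locks onto the stimulus (entrainment).
    dt, state = 0.001, (0.1, 0.0, 0.0, 0.0)
    for i in range(int(10.0 / dt)):
        stimulus = math.sin(2 * math.pi * 0.9 * i * dt)  # stand-in for vision data
        state = matsuoka_step(state, stimulus, dt)
    output = max(state[0], 0.0) - max(state[1], 0.0)     # entrained output signal
    ```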

  • Probabilistic integration of audiovisual information to localize sound source in human-robot interaction

    Page(s): 229 - 234

    This paper proposes a method to estimate a sound source position by fusing auditory and visual information with a Bayesian network in human-robot interaction. We first integrate multi-channel audio signals and a depth image of the environment to generate a likelihood map for sound source localization. However, this integration, denoted "MICs", does not always localize the sound source correctly. To correct localization failures, we integrate the likelihood values generated by "MICs" with the skin-color distribution in an image, according to the result of classifying the audio signal into speech/non-speech categories. The audio classifier is based on a support vector machine (SVM), and the skin-color distribution is modeled with a GMM. With the evidence given by MICs, the SVM, and the GMM, we infer whether pixels in the image correspond to the sound source according to the trained Bayesian network. Finally, experimental results are presented to show the effectiveness of the proposed method.
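    A minimal sketch of the fusion idea: per-pixel combination of an acoustic likelihood map with a skin-colour likelihood, gated by the speech/non-speech decision. The naive conditional-independence weighting here is a simplifying assumption standing in for the paper's trained Bayesian network.

    ```python
    import numpy as np

    def fuse_likelihoods(mic_lik, skin_lik, is_speech):
        """Per-pixel fusion of a microphone-array likelihood map ("MICs")
        with a skin-colour likelihood map, gated by the SVM's speech /
        non-speech decision (illustrative sketch)."""
        joint = mic_lik * skin_lik if is_speech else mic_lik
        return joint / joint.sum()                       # posterior over pixels

    mic = np.random.rand(48, 64)                         # stand-in likelihood maps
    skin = np.random.rand(48, 64)
    post = fuse_likelihoods(mic, skin, is_speech=True)
    y, x = np.unravel_index(post.argmax(), post.shape)   # estimated source pixel
    ```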

  • Interactive trajectory generation using evolutionary programming for a partner robot

    Page(s): 335 - 340

    This paper proposes an integrated method for generating a human-friendly trajectory. First, the robot detects the position of the human facing it; then the robot generates a trajectory realizing a hand-to-hand behavior by using evolutionary programming. Human evaluation is essential for generating robotic behavior, but the structure of human evaluation is not known beforehand. Therefore, a fuzzy state-value function is used to estimate the structure of human evaluation, and we apply a profit-sharing plan that uses the human evaluation to update the fuzzy state-value function. Furthermore, we propose a temperature-scheduling method for Boltzmann selection that depends on the time series of human evaluations in the interactive evolutionary programming. Several experimental results show the proposed method can generate a human-friendly trajectory with few human evaluations.
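    To make the selection mechanism concrete, here is a sketch of Boltzmann selection with a cooling schedule driven by recent human evaluations; both the cooling rule and the parameter values are hypothetical, not the paper's schedule.

    ```python
    import numpy as np

    def boltzmann_select(fitness, temperature, rng=np.random.default_rng(0)):
        """Sample one candidate-trajectory index by Boltzmann selection:
        low temperature exploits the best-rated trajectory, high explores."""
        p = np.exp((fitness - fitness.max()) / temperature)  # shifted for stability
        p /= p.sum()
        return rng.choice(len(fitness), p=p)

    def schedule_temperature(t0, evaluations, decay=0.9):
        """Hypothetical cooling rule: cool faster when the last few human
        evaluations are consistent (low spread)."""
        consistency = 1.0 / (1.0 + np.std(evaluations[-5:]))
        return max(t0 * decay ** (len(evaluations) * consistency), 1e-3)

    ratings = [0.2, 0.6, 0.9]                        # human ratings of candidates
    T = schedule_temperature(1.0, ratings)
    idx = boltzmann_select(np.asarray(ratings), T)   # trajectory to show next
    ```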

  • Toward gaze-based proactive support for Web readers

    Page(s): 121 - 126

    In this paper, we describe ongoing work to create a gaze-based support application for Web readers; the support takes the form of proactive translation of difficult words for non-native speakers. The basic idea of the application reflects the concept of interactive interfaces mediating the redistribution of cognitive tasks between people and machines. As the proactive principle advocates that computers monitor the user instead of waiting for his/her commands, we present a framework for realizing this system by analyzing gaze patterns with the help of time-series analysis techniques and by using a safe gaze-tracking system that does not harm the user's eyes over long hours of use.

  • To help or not to help a service robot

    Page(s): 379 - 384

    This paper reports an experimental study in which people who had never encountered our service robot before were requested to assist it with a task. We call these visiting users "bystanders" to differentiate them from people who belong to the social setting and group in which the robot operates and who are thus familiar with it. In our study, 32 subjects were exposed to our robot and requested by it to provide a cup of coffee as part of a delivery mission. We anticipated that people in general would help the robot, depending on whether they were busy or had received a demonstration of the robot as an introduction. Our results indicate that the willingness of bystanders to help a robot is not only a consequence of the robot-initiated interaction but equally depends on the situation and state of occupation people are in when requested to interact with and assist the robot.

  • Visual recognition of gestures using dynamic naive Bayesian classifiers

    Page(s): 133 - 138

    Visual recognition of gestures is an important field of study in human-robot interaction research. Although several approaches exist for recognizing gestures, on-line learning of visual gestures has not received the same attention. For teaching a new gesture, a recognition model that can be trained with just a few examples is required. In this paper we propose an extension of naive Bayesian classifiers for gesture recognition that we call dynamic naive Bayesian classifiers. Their observation variables combine motion and posture information of the user's right hand. We tested the model with a set of gestures for commanding a mobile robot and compared it with hidden Markov models. When the number of training samples is high, the recognition rate is similar for both types of models, but when the number of training samples is low, dynamic naive Bayesian classifiers perform better. We also show that including posture attributes, in the form of spatial relationships between the right hand and other parts of the human body, improves the recognition rate significantly.
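    A dynamic naive Bayesian classifier can be viewed as an HMM whose observation at each time step factorises into independent attributes (here, motion and posture features). Below is a sketch of the scaled forward algorithm under that assumption; the discretised observations and the factorisation are modelling assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def dnbc_loglik(obs_streams, pi, A, emit):
        """Scaled forward algorithm for a dynamic naive Bayesian classifier.
        obs_streams: one integer sequence per attribute (all length T).
        pi: (n,) initial distribution; A: (n, n) transition matrix;
        emit: one (n, n_symbols) emission matrix per attribute.
        Returns log P(observations | model); to classify a gesture, score
        it against each gesture's model and take the argmax."""
        joint = lambda t: np.prod(
            [B[:, o[t]] for B, o in zip(emit, obs_streams)], axis=0)
        alpha = pi * joint(0)
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for t in range(1, len(obs_streams[0])):
            alpha = (alpha @ A) * joint(t)      # predict, then weight by evidence
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()                # rescale to avoid underflow
        return loglik
    ```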

  • CDBMS: database management system for a communication robot

    Page(s): 205 - 210

    This paper proposes a new database management system, called CDBMS (communication database management system), for a communication robot. A DBMS for a communication robot has to satisfy the following three conditions: first, the internal states of the robot must depend on the robot's actions; second, the execution of reflective actions must be given priority; third, the influence of the robot's past actions must be taken into account. CDBMS satisfies these three conditions with the C-index and an EDF scheduler. The C-index represents internal states as a directed graph that reflects the robot's current action and the sequence of its past actions, while the EDF scheduler achieves the prioritized execution of reflective actions. We conducted experiments which confirmed that CDBMS satisfies the first and second conditions.
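    The EDF (earliest-deadline-first) idea is easy to illustrate: reflective actions are submitted with tight deadlines so they are served before deliberative work. A minimal sketch, assuming a simple priority queue rather than CDBMS's actual implementation:

    ```python
    import heapq

    class EDFScheduler:
        """Earliest-deadline-first queue: tasks are served in order of
        deadline, so reflective actions with tight deadlines pre-empt
        deliberative queries (illustrative sketch)."""
        def __init__(self):
            self._heap, self._seq = [], 0        # seq breaks deadline ties
        def submit(self, deadline, task):
            heapq.heappush(self._heap, (deadline, self._seq, task))
            self._seq += 1
        def next_task(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    sched = EDFScheduler()
    sched.submit(5.0, "update internal-state graph (C-index)")
    sched.submit(0.1, "reflex: avoid sudden obstacle")   # tighter deadline
    assert sched.next_task().startswith("reflex")        # reflex runs first
    ```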

  • Skill transfer in a simulated underactuated dynamic task

    Page(s): 315 - 320

    Machine-mediated teaching of dynamic task completion is typically implemented with passive intervention via virtual fixtures or active assistance by means of record-and-replay strategies. During interaction with a real dynamic system, however, the user relies on both visual and haptic feedback in order to elicit the desired motions. This work investigates skill transfer from assisted to unassisted modes for a Fitts-type targeting task with an underactuated dynamic system. Performance, in terms of between-target tap times, is measured during an unassisted baseline session and during various types of assisted training sessions. It is hypothesized that passive and active assist modes implemented during training of a dynamic task could improve skill transfer to a real environment or to an unassisted simulation of the task. Results indicate that the transfer of skill is slight but significant for the assisted training modes.
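    For context, the standard Shannon formulation of Fitts' law relates movement time to target amplitude A and width W; this is the textbook form, offered as background rather than the paper's exact formulation.

    ```python
    import math

    def index_of_difficulty(amplitude, width):
        """Shannon form of Fitts' index of difficulty, in bits."""
        return math.log2(amplitude / width + 1.0)

    def movement_time(a, b, amplitude, width):
        """Fitts' law MT = a + b * ID; a and b are fitted per condition."""
        return a + b * index_of_difficulty(amplitude, width)

    # e.g. targets 0.30 m apart and 0.05 m wide: ID ~ 2.81 bits
    print(index_of_difficulty(0.30, 0.05))
    ```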

  • Previous notice method of three dimensional robotic arm motion for suppressing threat to humans

    Page(s): 353 - 357

    This paper describes a previous-notice method for three-dimensional robotic arm motion, aimed at suppressing the threat felt by humans, using a device in which LED markers are arranged like a coordinate frame. First, a method for giving advance notice of the three-dimensional position of the arm's endpoint is shown. Second, the relationship between the dimensionality of the motion and the human's feeling is investigated using the notice device. Finally, the effectiveness of the notice device is evaluated for three-dimensional arm motion. The device is effective for all motions and is especially effective for motions above the human's eye level.

  • Active teaching for an interactive learning robot

    Page(s): 181 - 186

    We have proposed a fast learning method that enables a mobile robot to acquire autonomous behaviors from interaction between human and robot. In this research we develop a behavior learning method, ICS (interactive classifier system), which uses interactive evolutionary computation while considering the operator's teaching cost. As a result, a mobile robot is able to learn rules quickly through direct teaching by an operator. ICS is a novel evolutionary-robotics approach based on a classifier system. In this paper, we investigate the teacher's physical and mental load and propose a teaching method based on the timing of instruction using ICS.

  • Co-creative communication in musical performance

    Page(s): 241 - 246

    There is communication between players in a cooperative musical performance, and through it players create relationships and new musical expressions. Such communication is called "co-creation", and it has been analyzed. The object of this study is to analyze the co-creative communication of musical performance and to establish a design principle for co-creative communication systems between humans and artificial agents. The experimental results show that (a) where musical difficulty is high, the musical rhythms did not synchronize well, but the respiration rhythms did; (b) there was musical interaction between players, and a new music tempo pattern emerged; and (c) where musical difficulty is high, the musical rhythm coupled strongly with the respiration rhythm. To interpret these results, we hypothesize that players pay more attention to the difficult parts of the music, propose a new musical communication model, and discuss the design principle of a co-creative communication system.
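    The degree of rhythm coupling reported above can be quantified with a standard phase-locking index between two phase time series, sketched below; the measure is offered as an illustration, not as the authors' analysis method.

    ```python
    import numpy as np

    def sync_index(phase_a, phase_b):
        """Phase-locking index in [0, 1] between two phase time series
        (e.g. musical-beat phase vs. respiration phase); 1 means the
        rhythms keep a constant relative phase."""
        diff = np.asarray(phase_a) - np.asarray(phase_b)
        return np.abs(np.exp(1j * diff).mean())

    t = np.linspace(0.0, 10.0, 1000)
    print(sync_index(2 * np.pi * 1.0 * t, 2 * np.pi * 1.0 * t + 0.3))  # ~1: locked
    print(sync_index(2 * np.pi * 1.0 * t, 2 * np.pi * 1.3 * t))        # low: unlocked
    ```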

  • KITARO: Kyutech intelligent and tender autonomous robot

    Page(s): 341 - 346

    The interaction between robots and humans is a research topic that has received much attention recently. At our laboratory, we have developed an interactive robot that works in a human living space. This robot is named KITARO (Kyutech intelligent and tender autonomous robot). KITARO is designed to collect and understand several types of human information by moving around in a human living space. Additionally, KITARO has the ability to imitate human behavior based on time-series information. In this paper, we describe the hardware and software that make up KITARO. In order to realize a system for collecting human visual information around KITARO's working area, we have integrated several modules into KITARO. The first module is HeadFinder, which detects and tracks a human head. The second module is HeadClassifier, which estimates several categories, e.g., the gender and age of a human, based on head-image information from HeadFinder. By using these modules, KITARO can understand and abstract the trajectory of a person walking in front of it.

  • A method for the coupling of belief systems through human-robot language interaction

    Page(s): 385 - 390

    This paper describes a method of multi-modal language processing that reflects experiences shared by the user and the robot. Through incremental online optimization in the process of interaction, the robot's system of beliefs, represented by a stochastic model, is formed in coupling with that of the user. The belief system of the robot consists of belief modules, a confidence that each belief is shared by the user (local confidence), and a confidence that all the belief modules and the local confidences are identical to those of the user (global confidence). Based on this system of beliefs, the robot can interpret even fragmentary and ambiguous utterances, and can act and generate utterances appropriately for a given situation.

  • Translucent view for robot tele-operation

    Page(s): 7 - 12

    This paper proposes a presentation method for robot tele-operation that renders occluding objects in the working environment translucent. Although vision is important and useful for tele-operators to obtain information about the working environment, the target objects of a task, especially a manipulation task, may be occluded by the robot's body or arm. If operators can see the target objects through the occluding objects, tele-operation using vision becomes easier. The translucent view is generated from the cameras on the robot and/or around the robot's working environment. The translucent view can be applied in a presentation method called object-centered view (OV), in which the target object of the task is always centered in the view. Tele-operators can perform tasks efficiently using the translucent view within the object-centered view.
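    At its simplest, a translucent view is an alpha blend of the occluding foreground with a second camera's view of what lies behind it. A minimal sketch under that assumption; the paper's viewpoint registration between cameras is omitted here.

    ```python
    import numpy as np

    def translucent_composite(behind_view, occluder_view, alpha=0.5):
        """Alpha-blend the view captured behind the occluder (from another
        camera) with the occluding foreground, so the operator sees the
        target "through" the robot arm. Inputs: uint8 images, equal size."""
        blend = (alpha * occluder_view.astype(float)
                 + (1.0 - alpha) * behind_view.astype(float))
        return blend.astype(np.uint8)
    ```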

  • Generating animatable 3D virtual faces from scan data

    Page(s): 43 - 48

    In this paper a new adaptation-based approach is presented to reconstruct animatable facial models of individual people from scan data with minimal user intervention. A generic control model that represents both the face shape and the layered biomechanical structure serves as the starting point for our face adaptation algorithm. After a minimal set of anthropometric landmarks has been specified on the 2D images, the algorithm automatically recovers their 3D positions on the face surface using a projection-mapping approach. Based on a series of measurements between the 3D landmarks, a global adaptation is carried out to align the generic control model with the measured surface data using affine transformations. A local adaptation then deforms the geometry of the generic model to fit all of its vertices to the scanned surface. The reconstructed model accurately represents the shape of the individual face and can synthesize various expressions using transferred muscle actuators. Key features of our method are a near-automated reconstruction process, no restrictions on the position and orientation of the generic model and the scanned surface, and an efficient framework for animating any human data set.
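    The global adaptation step, aligning the generic model to the scan via an affine transform over landmark correspondences, reduces to a linear least-squares problem. A sketch, assuming at least four non-coplanar landmark pairs (the data below are illustrative):

    ```python
    import numpy as np

    def fit_affine_3d(src, dst):
        """Least-squares affine map taking generic-model landmarks (src, Nx3)
        onto scanned landmarks (dst, Nx3). Returns a 3x4 matrix [A | t]."""
        X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
        M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ M ~= dst
        return M.T

    src = np.random.rand(6, 3)                          # illustrative landmarks
    A = np.array([[1.1, 0.0, 0.0], [0.0, 0.9, 0.05], [0.0, 0.0, 1.0]])
    dst = src @ A.T + np.array([0.01, -0.02, 0.03])     # synthetic "scan" positions
    M = fit_affine_3d(src, dst)                         # recovers A and translation
    ```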

  • Development of character robots for human-robot mutual communication

    Page(s): 31 - 36

    This paper describes a robot-head system as a communication device for human-robot interaction. Most robotic systems with natural modalities have facial expression functions, since facial expressiveness is regarded as a key component, along with prosodic expressiveness, for developing personal attachment. Most conventional facial robots have adopted Ekman's FACS. However, due to mechanical constraints, there are limits to how completely the FACS model can be adopted in facial robots. In the first part of this paper, we introduce a character robot, CRF2, which has richer facial expressions than the first prototype character robot, CRF1, since the facial design of CRF2 implements an eyelid mechanism and is based on a 3D deformation model. However, because the recognition rate of some expressions, such as disgust and fear, was not much improved, we concluded that a character robot needs other communication channels, such as voice, to convey its emotional states properly to the user, in addition to a renovated mechanical facial part. As a result we developed a renovated character robot, CRF3. In addition to facial expressiveness, CRF3 implements speech synthesis and neck motions. Further, CRF3 has visual, auditory, and tactile sensors and an expandable configuration that can connect additional sensor or actuator modules to the system. To evaluate applications of the CRF series, we applied CRF3 to a home environment as a home network manager that selects tasks based on the robot's mood.

  • Attention coupling as a prerequisite for social interaction

    Page(s): 109 - 114

    This paper proposes "attention coupling", that is, the spatio-temporal coordination of each other's attention, as a prerequisite for human-robot social interaction in which the human interactant attributes mental states to the robot, and possibly vice versa. As a realization of attention coupling, we implemented on our robots the capabilities of eye contact (mutually looking into each other's eyes) and joint attention (looking at a shared target together). Observation of interactions with human babies/children showed that the robots with the attention-coupling capability facilitated social behavior in the babies/children, including showing, giving, and verbal interactions such as asking questions.

  • Co-creation in man-machine interaction

    Page(s): 321 - 324

    The purpose of our research group is to realize a "co-creation system" [Miyake, Y., 1997] [Shimizu, H. et al., 2000]. Co-creation means the co-emergence of real-time coordination by sharing cognitive space and time. Human communication with emergent reality like this requires two kinds of processing at the same time [Miyake, Y. et al., 2001]: one is explicit communication, such as the exchange of messages, and the other is implicit embodied interaction, such as sympathy and direct experience. Using this dual-processing complementarity, we are developing co-creative man-machine interfaces and interactive media [Miyake, Y. et al., 2001; Muto, T. and Miyake, Y., 2002; Yamamoto, T. and Miyake, Y., 2002; Takanashi, H. and Miyake, Y., 2003] [Miyake, Y. and Miyagawa, T., 1999]. This new technology is effective for recovering the human linkage, social ethics, and mutual reliability that have been lost in the IT society.

  • Grasping a waving object for a humanoid robot using a biologically-inspired active vision system

    Page(s): 115 - 120

    Grasping a waving object is a very useful behavior for a humanoid robot. This paper presents how a humanoid robot performs this behavior utilizing an active vision system. The gaze control of the active vision system is strongly inspired by biological systems and is implemented in four basic behaviors: saccade, smooth pursuit, vergence, and vestibulo-ocular reflex. This gaze-control method also simplifies the visual servoing task after the waving motion is detected. Waving motions are detected by optical flow, with background motion estimated by either a motion histogram or phase correlation.
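    Phase correlation, one of the two background-motion estimators mentioned, recovers the dominant inter-frame translation from the normalised cross-power spectrum. A pure-NumPy sketch, assuming a single dominant shift between grayscale frames:

    ```python
    import numpy as np

    def phase_correlation(frame_a, frame_b):
        """Estimate the dominant translation (dy, dx) between two grayscale
        frames; in the pipeline above this models recovering background
        (ego-)motion so the waving motion stands out against it."""
        F = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        h, w = frame_a.shape
        if dy > h // 2: dy -= h                     # wrap into signed shifts
        if dx > w // 2: dx -= w
        return dy, dx
    ```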

  • Subjective evaluation of a seal robot at the National Museum of Science and Technology in Stockholm

    Page(s): 397 - 402

    This paper describes research on mental commit robots, which seek a direction different from that of industrial robots and are not so rigidly dependent on objective measures such as accuracy and speed. The main goal of this research is to explore a new area in robotics, with an emphasis on human-robot interaction. In previous research, we categorized robots into four categories in terms of appearance. We then introduced a cat robot and a seal robot and evaluated them by interviewing many people. The results showed that physical interaction improved subjective evaluation, and that a subject's a priori knowledge strongly influences the subjective interpretation and evaluation of a mental commit robot. In this paper, 133 subjects evaluated the seal robot, Paro, by questionnaire at an exhibition at the National Museum of Science and Technology in Stockholm, Sweden. This paper reports the results of a statistical analysis of the evaluation data.

  • An embodied agent that sends nonverbal conversational signals consistent with those of the partner during a dialogue

    Page(s): 247 - 252

    In this article, we discuss how to control the nonverbal conversational signals (NCS), including eye gaze, nods, and facial expressions, displayed by a pair of embodied agents in dialogue. Work on information agents proposes employing embodied agents to present information to viewers. Since embodied agents have faces and bodies in order to be "embodied", viewers read various meanings into the NCS on the faces and bodies of the agents, even if the agents are not actually designed to send NCS to their viewers but only to speak. To cope with this problem, some previous work has proposed making the NCS appropriate for the agent's speech utterances or task. However, when two embodied agents are in dialogue, we also need to consider the interdependencies between the NCS displayed by those agents. We discuss how to maintain those interdependencies when producing dialogues between a pair of embodied agents for information presentation.
