Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003. The 12th IEEE International Workshop on

Date 31 Oct.-2 Nov. 2003

Displaying Results 1 - 25 of 70
  • Multi-scale imaging for non-symbolic visualization of maneuvering affordance

    Page(s): 13 - 18

    An imaging scheme is presented for detecting maneuvering affordance in noisy imagery. Under the constraint of the mental maneuvering process, image features are associated with affordance. By representation on a directional scale image, the expansion of the maneuvering affordance is visualized on the observed imagery. The detectability of affordance patterns has been verified through experimental studies.

  • Ultrasonic sensor disk for detecting muscular force

    Page(s): 291 - 295

    Many researchers are studying and developing various kinds of man-machine systems. In particular, the wearable robot, such as an exoskeleton power suit, is one of the most remarkable fields. In this field, a more accurate and reliable sensing system for detecting human motion intention is strongly required. In most conventional man-machine systems, torque sensors, tactile pressure sensors and EMG sensors are used in the man-machine interface to detect human motion intention. These sensors, however, have some limitations. For example, it is hard to install and secure torque sensors on the joints of a human body. It is not easy to correlate the data from a tactile pressure sensor with human motion intention. Although the EMG sensor can detect human motion intention, the sensor system is complex and expensive, and suffers from electrical noise. We have been developing an innovative sensor suit which, just like a wet suit, can be conveniently put on by an operator to detect his or her motion intention by non-invasively monitoring muscle conditions such as shape, stiffness and density. This sensor suit is made of soft, elastic fabric embedded with arrays of MEMS sensors such as strain gauges, ultrasonic sensors and optical fiber sensors, to measure different kinds of human muscle conditions. In a previous paper, a muscle stiffness sensor for detecting muscular force was developed, exploiting the fact that a muscle gains stiffness as it is activated. Its superior performance was reported through experiments in which the sensor was applied to an assisting device for the disabled. In this paper, the ultrasonic sensor disk is proposed as one of the sensor disks embedded in the sensor suit. This sensor is based on an original principle and non-invasively detects the activity of a specific muscle. The square of the ultrasonic transmission speed is proportional to the elasticity of the object and inversely proportional to its density. The elasticity and density of a muscle are expected to change as the muscle is energized, so muscular activity should be measurable with the ultrasonic sensor. In this study, the feasibility of an ultrasonic sensor for detecting muscular force is shown through experiments.

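The speed-elasticity-density relation this abstract relies on is the standard longitudinal wave equation, v = sqrt(E / ρ). A minimal sketch; the function name and numeric values are illustrative assumptions, not measurements from the paper:

```python
import math

# Longitudinal ultrasonic speed in a medium: v = sqrt(E / rho).
# Hence v**2 is proportional to elasticity E and inversely
# proportional to density rho, as the abstract states.
def wave_speed(elasticity_pa, density_kg_m3):
    return math.sqrt(elasticity_pa / density_kg_m3)

# Illustrative (assumed) values only: if activation stiffens the
# muscle proportionally more than it densifies it, the measured
# transmission speed rises with muscular force.
relaxed_speed = wave_speed(2.0e9, 1060.0)
active_speed = wave_speed(2.2e9, 1100.0)
```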
  • Embodied vision - perceiving objects from actions

    Page(s): 365 - 371

    For a robot to visually manipulate objects, it has to learn about object properties as well as the actions it may apply to them. This paper presents strategies to acquire such competencies based on human-robot interactions. Perception is driven by manipulation from an actor, either human or robotic. Interaction with human teachers facilitates robot learning of new objects and their functionality, or the acquisition of new competencies as an actor. Self-exploration of the world extends the robot's knowledge of object properties and consolidates the execution of learned tasks.

  • Subjective evaluation of a seal robot at the National Museum of Science and Technology in Stockholm

    Page(s): 397 - 402

    This paper describes research on mental commit robots, which pursue a different direction from industrial robots and do not depend so rigidly on objective measures such as accuracy and speed. The main goal of this research is to explore a new area in robotics, with an emphasis on human-robot interaction. In previous research, we categorized robots into four categories in terms of appearance. We then introduced a cat robot and a seal robot and evaluated them by interviewing many people. The results showed that physical interaction improved subjective evaluation, and that a subject's a priori knowledge has much influence on the subjective interpretation and evaluation of a mental commit robot. In this paper, 133 subjects evaluated the seal robot, Paro, by questionnaire at an exhibition at the National Museum of Science and Technology in Stockholm, Sweden. This paper reports the results of a statistical analysis of the evaluation data.

  • Dynamic work space surveillance for mobile robot assistants

    Page(s): 25 - 30

    Intelligent mobile robot assistants are important components for increasing flexibility in future production processes. These robot assistants must be able to work autonomously but must also be able to interactively learn from and cooperate with the human worker in a common (shared) work space. They may under no circumstances endanger the human worker, which is why new methods for work space surveillance are called for. In this paper we present the current state of the DaimlerChrysler manufacturing Assistant and the safety concept for the dynamic sensor-based surveillance of its work space.

  • Motion control of omni-directional type walking support system "Walking Helper"

    Page(s): 85 - 90

    In this paper, we develop a prototype of an intelligent walking support system referred to as Walking Helper and propose a motion control algorithm for it. Walking Helper consists of an omni-directional mobile base, a body force sensor, a support frame and a cover around the mobile base. The omni-directional mobile base and body force sensor give Walking Helper good maneuverability and high safety. In addition, we propose a motion control algorithm, referred to as adaptive caster action, to utilize Walking Helper effectively in environments such as homes, offices and hospitals. The proposed control algorithm is applied experimentally to the developed Walking Helper, and its validity is illustrated by the experimental results.

  • A control algorithm and preliminary user studies for a bone drilling medical training system

    Page(s): 153 - 158

    Bone drilling procedures demand a high level of surgical skill. The required core skills are recognizing the drilling end-point and applying a constant, sufficient, but non-excessive feed velocity and thrust force. Although several simulators and training systems have been developed for various kinds of surgery, a bone drilling medical training system does not yet exist. In this paper, a bone drilling medical training system is proposed and a novel control algorithm for the problem is presented. A graphical user interface is developed to complete the medical training system structure. Experimental results for controller performance are satisfactory. Additional experiments were performed to check whether the developed system improves the skill of trainees. Early results suggest that training on the developed system is a promising way to teach medical students to drill into bone.

  • Can we feel a gaze pressure from a robot? Development of an eye-contact robot

    Page(s): 103 - 106

    We are developing an eye-contact robot: a head robot that can make eye contact with humans. The robot can move its head and eyes (gaze directions) freely in the pan and tilt directions. Each eye of the robot has two cameras, one wide-range and one telephoto. With the image from the wide-range camera, it can search for and follow objects (such as human faces) in real time. With the image from the telephoto camera, it can detect the head movement and gaze direction of a human and mimic his or her behavior. A gaze interaction experiment between the robot and human subjects was conducted, and some interesting results were obtained concerning the effect of gaze manipulations.

  • Development of dental training system with haptic display

    Page(s): 159 - 164

    This paper discusses the development of a dental training system with a haptic display capability. The system architecture is first proposed for two typical operations in dental surgery, probing and cutting. A triangle mesh model is used for the tooth to reduce computation time. Real-time collision detection is realized between the tooth and a spherical tool. The operation force is determined from the penetration between the tool and the tooth. Material removal from the tooth is realized using a vertex deformation method. A force filtering approach is proposed to eliminate vibration of the haptic device. Experimental results show that the system can provide stable simulation of probing and cutting operations.

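Determining the operation force from tool-tooth penetration, as this abstract describes, is commonly done with penalty-based haptic rendering: push back along the surface normal with a force proportional to penetration depth. A minimal sketch for a spherical tool against a plane; the function name, stiffness value and geometry are illustrative assumptions, not the paper's implementation:

```python
# Penalty-based haptic force for a spherical tool against a planar
# surface patch: if the sphere penetrates the surface, return a
# restoring force k * depth along the surface normal, else zero.
def contact_force(tool_center, tool_radius, surface_point, normal, stiffness):
    # Signed distance of the tool center above the surface plane.
    d = sum((c - s) * n for c, s, n in zip(tool_center, surface_point, normal))
    penetration = tool_radius - d
    if penetration <= 0.0:
        return (0.0, 0.0, 0.0)  # no contact
    return tuple(stiffness * penetration * n for n in normal)

# Tool of radius 1 whose center sits 0.5 above the plane z = 0:
# penetration depth is 0.5, so the force points up along +z.
f = contact_force((0.0, 0.0, 0.5), 1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 200.0)
```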
  • Matching robot appearance and behavior to tasks to improve human-robot cooperation

    Page(s): 55 - 60

    A robot's appearance and behavior provide cues to its abilities and propensities. We hypothesize that an appropriate match between a robot's social cues and its task improves people's acceptance of and cooperation with the robot. In an experiment, people systematically preferred robots for jobs when the robot's humanlikeness matched the sociability required in those jobs. In two other experiments, people complied more with a robot whose demeanor matched the seriousness of the task.

  • Roaming stripes: smooth reactive navigation in a partially known environment

    Page(s): 19 - 24

    Service mobile robots must be able to operate within human-populated environments to carry out different tasks, such as surveillance of banks and warehouses, transportation of goods, escorting people in exhibitions and museums, etc. The paper describes a novel hybrid approach to mobile robot navigation, which integrates a priori knowledge of the environment with local perceptions in order to carry out the assigned tasks efficiently and safely. Moreover, the system is able to generate smooth trajectories which take into account the kinematic and dynamic constraints on the robot's motion due to its geometrical and physical characteristics.

  • Estimation of user's attention based on gaze and environment measurements for robotic wheelchair

    Page(s): 97 - 102

    In this paper, we describe a robotic wheelchair system that serves as a guide robot. The system detects the head pose and gaze direction of the user, and recognizes its own position and the surrounding environment using a range sensor and a map. Since the system can determine where the user is looking from these measurements, it can estimate the attention of the user on the wheelchair from the duration of gaze. Experimental results indicate the validity of speed-control assistance based on the estimated user attention.

  • Toward programming of assembly tasks by demonstration in virtual environments

    Page(s): 309 - 314

    Service robots require simple programming techniques that allow users with little or no technical expertise to integrate new tasks into a robotic platform. A promising solution for the automatic acquisition of robot behaviours is the programming by demonstration (PbD) paradigm, whose aim is to let robot systems learn new behaviours from a human operator's demonstration. This paper describes a PbD system able to deal with assembly operations in a 3D block world. The main objective of the research is to investigate the benefits of a virtual demonstration environment. By overcoming some difficulties of real-world demonstrations, a virtual environment can improve the effectiveness of the instruction phase. Moreover, the user can also supervise and validate the learned task by means of a simulation module, thereby reducing errors in the generation process. Experiments involving the whole set of system components demonstrate the viability and effectiveness of the approach.

  • Analysis of exoskeletal robotic orthoses concerning possibility of assistance and user's safety

    Page(s): 73 - 78

    We discuss an analytical method for judging whether an exoskeletal robotic orthosis can produce assisting forces while theoretically guaranteeing the user's safety. Our basic idea for making this judgment is described. A theoretical method to simultaneously evaluate the possibility of assistance and the user's safety is then proposed, and its validity is investigated with a numerical example.

  • Validating a skill transfer system based on reactive robots technology

    Page(s): 175 - 180

    Up to now, the skill transfer capabilities of haptics have not only been at an initial stage of development; their evolution has also been under-investigated in terms of impact on users and achievable results. The present paper is concerned with the concept of the reactive robot (RR) system. An RR is a bi-directional system capable of understanding the meaning of motions and transferring skills among users and interfaces according to the interpreted motions. In this paper, reactive robot control was used for replicating Japanese characters, and a recognition system based on a hidden Markov model was used for stochastically evaluating the user's performance. Such a system may help us to better understand learning processes. Using it, an application for learning Japanese handwriting has been developed and tested. We evaluated the skill transfer capabilities when haptic feedback is used to improve the learning process based on some of the RR system features. Findings from this study indicate that the user's performance is enhanced considerably when both visual and haptic information are provided.

  • A method for the coupling of belief systems through human-robot language interaction

    Page(s): 385 - 390

    This paper describes a method of multi-modal language processing that reflects experiences shared by the user and the robot. Through incremental online optimization in the process of interaction, the robot's system of beliefs, which is represented by a stochastic model, is formed in coupling with that of the user. The belief system of the robot consists of belief modules, a confidence that each belief is shared by the user (local confidence), and a confidence that all the belief modules and the local confidences are identical to those of the user (global confidence). Based on this system of beliefs, the robot can interpret even fragmentary and ambiguous utterances, and can act and generate utterances appropriate for a given situation.

  • Model-based walking support system with wearable walking helper

    Page(s): 61 - 66

    In this paper, a wearable walking support system is proposed for people who have difficulty walking because of weakened lower extremities. We propose a wearable walking support device, referred to as the wearable walking helper, for supporting the antigravity muscles of the lower extremities, together with a model-based control algorithm for the device that does not use biological signals: the supporting knee joint moment is calculated from the antigravity term of the necessary knee joint moment, which is estimated based on a human model. The control algorithm is implemented in the wearable walking helper, and experimental results illustrate the potential of the proposed system.

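The antigravity term this abstract mentions can be illustrated with a simple single-link model: the moment needed to hold a mass against gravity about a joint is m·g·l·sin(θ). A minimal sketch; the model, function name and numeric values are illustrative assumptions, not the paper's human model:

```python
import math

# Antigravity moment about a joint for a single-link model:
# a supported mass m at distance l from the joint, with the link
# inclined theta radians from vertical, requires a holding moment
# of m * g * l * sin(theta) to counteract gravity.
def antigravity_moment(mass_kg, com_distance_m, theta_rad, g=9.81):
    return mass_kg * g * com_distance_m * math.sin(theta_rad)

# Assumed example: 30 kg of supported body mass, center of mass
# 0.25 m from the knee, knee flexed 30 degrees from vertical.
support = antigravity_moment(30.0, 0.25, math.radians(30.0))
```

A model-based controller of this kind would command the actuator to supply some fraction of this moment, so the user's muscles provide the rest.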
  • Time lag effects of utterance to communicative actions on robot-human greeting interaction

    Page(s): 217 - 222

    The timing of communicative actions and utterances in face-to-face greeting interaction is analyzed by synthesis, for application to robot-human interaction support. The analysis by synthesis is performed using an embodied robot system and confirms that varying the pauses and lags of utterance relative to communicative actions produces different communicative effects: a lag of about 0.3 s is desirable for familiar greetings, and a longer lag for polite greetings. This result demonstrates the importance of timing in robot-human embodied interaction and its applicability to advanced human-robot communication.

  • On proposing the concept of robot anxiety and considering measurement of it

    Page(s): 373 - 378

    This paper proposes a conceptual definition of the anxiety that prevents humans from interacting with communication robots in daily life, named "robot anxiety", by taking into account computer anxiety and communication apprehension. It then discusses the construction of a psychological scale for measuring robot anxiety and reports the current state of our research on it.

  • Toward gaze-based proactive support for Web readers

    Page(s): 121 - 126

    In this paper, we describe ongoing work to create a gaze-based support application for Web readers; the support is a proactive translation of difficult words for non-native speakers. The basic idea of the application reflects the concept of interactive interfaces mediating the redistribution of cognitive tasks between people and machines. As the proactive principle advocates that computers monitor the user instead of waiting for his or her commands, we present a framework for realizing this system by analyzing gaze patterns with the help of time-series analysis techniques and by using a safe gaze tracking system that does not affect the user's eyes even when used for long hours.

  • Interactive experiments between creature and robot as a basic research for coexistence between human and robot

    Page(s): 347 - 352

    The goal of our research is to clarify the factors that are necessary for coexistence between humans and robots. However, evaluating communication between humans and robots is quite difficult due to the several uncertain factors present in human communication and the variety of personalities. Therefore, as a first step, an evolutionarily older species, the rat, was selected, and we conducted interactive experiments between rats and robots to investigate the basic factors necessary for a symbiotic relationship between living creatures and robots. To conduct these experiments, we developed a new rat-robot (WM-6) and experimental systems. Three experiments were conducted: "calling the robot", "recognizing behavior", and "choosing a reward". The results show that we succeeded in getting a rat to act on the robot by itself. Moreover, the rat could recognize two behavioral patterns of the robot ("translation" and "rotation"). Finally, the rat could change its interaction with the robot based on its needs. Consequently, we succeeded in creating various interactions between rat and robot.

  • KITARO: Kyutech intelligent and tender autonomous robot

    Page(s): 341 - 346

    The interaction between robots and humans is a research topic that has been given much attention recently. At our laboratory, we have developed an interactive robot that works in a human living space. This robot has been named KITARO (Kyutech intelligent and tender autonomous robot). KITARO is designed to collect and understand several types of human information by moving around in a human living space. Additionally, KITARO has the ability to imitate human behavior based on time-series information. In this paper, we describe the hardware and software that make up KITARO. To realize a system for collecting human visual information around KITARO's working area, we have integrated certain modules into KITARO. The first module is HeadFinder, which detects and tracks a human head. The second module is HeadClassifier, which estimates several categories, e.g., the gender and age of a human, based on head image information from HeadFinder. By using these modules, KITARO can understand and abstract the trajectory of a person walking in front of it.

  • Co-creation in man-machine interaction

    Page(s): 321 - 324

    The purpose of our research group is to realize a "co-creation system" [Miyake, Y., 1997] [Shimizu, H. et al., 2000]. Co-creation means the co-emergence of real-time coordination through sharing cognitive space and time. Human communication with emergent reality like this requires two kinds of processing at the same time [Miyake, Y. et al., 2001]: explicit communication, such as the exchange of messages, and implicit embodied interaction, such as sympathy and direct experience. Using this dual-processing complementarity, we are developing co-creative man-machine interfaces and interactive media [Miyake, Y. et al., 2001; Muto, T. and Miyake, Y., 2002; Yamamoto, T. and Miyake, Y., 2002; Takanashi, H. and Miyake, Y., 2003; Miyake, Y. and Miyagawa, T., 1999]. This new technology is effective for recovering the human linkage, social ethics and mutual reliability that have been lost in the IT society.

  • Attention coupling as a prerequisite for social interaction

    Page(s): 109 - 114

    This paper proposes "attention coupling", that is, the spatio-temporal coordination of each other's attention, as a prerequisite for human-robot social interaction in which the human interactant attributes mental states to the robot, and possibly vice versa. As a realization of attention coupling, we implemented on our robots the capabilities of eye contact (mutually looking into each other's eyes) and joint attention (looking at a shared target together). Observation of interactions with human babies/children showed that the robots with the attention coupling capability facilitated social behavior in the babies/children, including showing, giving, and verbal interactions such as asking questions.

  • Using augmented reality to improve walking in stroke survivors

    Page(s): 79 - 83

    In stroke survivors, walking ability is often compromised. In particular, gait velocity and walking distance are reduced as a result of a shorter stride length and slower cadence. The goal of this project is to evaluate the performance of poststroke subjects given the task of stepping over obstacles. Virtual objects were presented to subjects walking on a treadmill, and performance was compared with stepping over real obstacles in an overground-walking environment. The virtual object method is potentially more beneficial because it allows individuals to practice stepping over objects of any height and length combination in a safe and therapeutic environment.