
The 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009)

Date: Sept. 27 - Oct. 2, 2009


Displaying Results 1 - 25 of 206
  • Homotopy-based controller for physical human-robot interaction

    Page(s): 1 - 6

    This paper presents a model that describes physical interactions during dyadic collaborative tasks. The model is based on a homotopy between two controllers and defines the behavior of each partner as the result of a time-varying balance between two roles: the leader role, which consists in acting according to a plan without considering the other partner's intentions, and the follower role, which conversely consists in acting only on the intentions of the other partner. The continuous switch between these two attitudes is described by two variables whose time profiles can define a task signature. After a brief presentation of the model, two illustrative scenarios are detailed to give more insight into how the homotopy parameter can describe the different situations that can occur in collaborative tasks between two partners. We especially focus on how some recent results on human-human interaction can be encompassed by the proposed model. Experiments are performed to assess the usability of the model as a control scheme for implementing advanced collaborative behaviors on a robotic platform.

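    As a rough illustration of the homotopy idea described above (not the authors' implementation), the sketch below blends a leader command and a follower command through a time-varying parameter alpha; the plan velocity, admittance gain, and alpha profile are made-up placeholders.

    ```python
    # Hypothetical sketch: blending leader and follower roles with a homotopy
    # parameter alpha in [0, 1] (0 = pure leader, 1 = pure follower).
    # Plan trajectory, admittance gain, and alpha profile are illustrative only.

    def leader_command(t):
        """Follow a pre-planned velocity profile, ignoring the partner."""
        return 0.1  # m/s, placeholder plan

    def follower_command(partner_force, admittance=0.05):
        """Yield to the partner: velocity proportional to the sensed force."""
        return admittance * partner_force

    def homotopy_controller(t, partner_force, alpha):
        """Convex combination of the two roles."""
        return (1.0 - alpha) * leader_command(t) + alpha * follower_command(partner_force)

    # Example: the robot gradually hands leadership over to the human partner.
    for step in range(5):
        t = 0.1 * step
        alpha = min(1.0, t / 0.4)  # time-varying role balance
        v = homotopy_controller(t, partner_force=2.0, alpha=alpha)
        print(f"t={t:.1f}s  alpha={alpha:.2f}  commanded velocity={v:.3f} m/s")
    ```
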
  • Adding tactile reaction to hologram

    Page(s): 7 - 11

    In this paper, a hologram with tactile reactions is presented. The developed system consists of three components: a holographic display, a hand tracker, and a tactile display. The tactile display, which is our original device, produces force on the user's bare hand without any contact, using the radiation pressure of airborne ultrasound. It adds the sense of touch to optical images floating in mid-air. In order to represent the feeling of impact, several improvements were added to the tactile display. As a result, the tactile display can produce up to 4.8 gf without air flow.

  • Vib-Touch: Virtual Active Touch interface for handheld devices

    Page(s): 12 - 17

    Haptic interaction with handheld devices is limited by space and size constraints that inhibit free hand exploration. We developed a compact haptic interface called Vib-Touch, which is operated by the fingertip via a pointing-stick input device with tactile feedback. A cursor on the screen performs virtual exploration as a substitute for finger movement; we call this technology virtual active touch. We also propose a tactile stimulation method that represents not only tactile sensations but the whole touch experience, including kinesthetic sense and the sense of shape perceived by a fingertip. This study reports on the first prototype of the Vib-Touch interface for handheld devices. We confirmed that the prototype could provide friction sensation and geometric shape information using the proposed friction display method.

  • Eternal sharpener — A rotational haptic display that records and replays the sensation of sharpening a pencil

    Page(s): 18 - 21

    This paper proposes a simple 1-DOF rotational haptic display that achieves endless sharpening of a pencil. Many haptic applications have been proposed to present shapes, elasticity, viscosity, and other physical properties of the environment. While these are important for tasks such as teleoperation and computer-aided design, we focus on the use of haptic displays for amusement. We paid special attention to the feeling of "addictive comfortableness" induced by haptic stimulation, taking the pencil sharpener as a typical example because it is obviously comfortable and addictive. Moreover, the mechanism is relatively simple, because the sensation can be generated through a 1-DOF haptic display. In this paper, we recorded the force and sound of a real pencil sharpener and replayed the sharpening sensation through a haptic display.

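    The record-and-replay loop described above can be pictured with the toy sketch below; the sensor and motor I/O functions are hypothetical placeholders, and the purely time-indexed playback is an assumption (the real system also replays sound and may index the profile by handle rotation).

    ```python
    import time

    # Toy record-and-replay loop for a 1-DOF rotational haptic display. The sensor
    # and motor I/O calls are hypothetical placeholders; a real implementation
    # would also record/replay sound and likely index the profile by handle angle.
    SAMPLE_RATE_HZ = 1000

    def read_torque_sensor():
        """Hypothetical read of the resistive torque on the instrumented sharpener."""
        return 0.0  # N*m, placeholder

    def command_motor_torque(tau):
        """Hypothetical torque command to the haptic display's motor."""
        pass

    def record(duration_s):
        samples = []
        for _ in range(int(duration_s * SAMPLE_RATE_HZ)):
            samples.append(read_torque_sensor())
            time.sleep(1.0 / SAMPLE_RATE_HZ)
        return samples

    def replay(samples, cycles=3):
        """Replay the recorded torque profile to give 'endless' sharpening."""
        for _ in range(cycles):
            for tau in samples:
                command_motor_torque(tau)
                time.sleep(1.0 / SAMPLE_RATE_HZ)

    replay(record(duration_s=2.0))
    ```
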
  • Comparison of spatial and temporal characteristic between reflection-type tactile sensor and human cutaneous sensation

    Page(s): 22 - 27

    Many tactile sensors exist today, but few have characteristics similar to human cutaneous sensation. To measure the tactile information a human perceives, a human-like tactile sensor is needed. We have developed a reflection-type tactile sensor using an imaging device. The sensor exploits a simple total-reflection principle, which gives many degrees of freedom in selecting the sensing units and designing the layer structure. This paper describes the design of the sensing unit and the layer structure so that their characteristics resemble those of human skin. First, we compare the temporal frequency response of the proposed sensor with that of human skin. Second, we estimate the required spatial density of sensing units by simulation. We then compare the overall ability of the proposed reflection-type sensor with human cutaneous sensation.

  • Compact Braille display using SMA wire array

    Page(s): 28 - 33

    A novel micro-vibration actuator using a shape-memory alloy (SMA) wire is being developed for presenting tactile sensations to human skin. In this study, the SMA wires are arranged two-dimensionally to construct a compact Braille display for the blind. The display is driven by electric current pulses for mobile use to present Braille information. The display was evaluated in a school for the blind, and the results are presented in the paper.

  • Development of a contact width sensor for tactile tele-presentation of softness

    Page(s): 34 - 39

    This paper presents a tele-presentation system for tactile softness. Tactile softness is estimated at a remote site and then reproduced at a master site, focusing on the change of contact width between a fingertip and an object. To realize the tele-presentation system, a new contact width sensor was developed based on an optical principle; it was designed with a soft structure to resemble a human finger. The prototype sensor was evaluated on five different samples. The results showed that the sensor could discriminate the five samples from the contact width variation, although one pair of samples showed only a slight difference. A tele-presentation system integrating the developed contact width sensor and a newly designed softness display was then built and evaluated. The evaluation showed that a user can discriminate the softness of a remote object using this system, although the perceived softness did not perfectly match the real feeling of the remote objects, presumably due to insufficient optimization of the contact width sensor. This could be improved by giving the sensor dimensions and stiffness similar to those of a real human finger, which will be addressed in future work.

  • A multi-level collaborative driving framework for autonomous vehicles

    Page(s): 40 - 45

    This paper proposes a multi-level collaborative driving framework (MCDF) for human-autonomous vehicle interaction. MCDF has three components: the mission-behavior-motion block diagram, the functionality-module relationship, and the human participation level table. Through the integration of these three components, a human driver can cooperate with the vehicle's intelligence to achieve better driving performance, robustness, and safety. MCDF was implemented in TROCS, a real-time autonomous vehicle control system developed by the Tartan Racing Team for the 2007 DARPA Urban Challenge. The performance of MCDF is analyzed, and a preliminary test in TROCS simulation mode shows that MCDF is effective in improving driving performance.

  • Choregraphe: a graphical tool for humanoid robot programming

    Page(s): 46 - 51

    In this paper, we present Choregraphe: a graphical environment developed by Aldebaran Robotics for programming its humanoid robot, Nao. Choregraphe is a powerful tool that allows high-level behaviors to be connected at a macroscopic level, making it easy to develop complex software for this 25-degree-of-freedom robot, while also offering fine tuning of complex joint and Cartesian motions. At the lowest level, Choregraphe allows programming in Python.

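    Since the abstract notes that Choregraphe boxes bottom out in Python, a minimal sketch of the kind of script such a box might wrap is shown below; it assumes the NAOqi Python SDK and a robot or simulator reachable at the given address, and the IP, port, and motion values are placeholders.

    ```python
    # Minimal sketch of a Python-level Nao behavior, of the kind a Choregraphe box
    # can wrap. Assumes the NAOqi Python SDK (naoqi) and a robot or simulator
    # reachable at ROBOT_IP; the address and motion values are placeholders.
    from naoqi import ALProxy

    ROBOT_IP = "127.0.0.1"   # placeholder address
    PORT = 9559              # default NAOqi port

    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    motion = ALProxy("ALMotion", ROBOT_IP, PORT)

    motion.setStiffnesses("Head", 1.0)        # enable the head joints
    motion.setAngles("HeadYaw", 0.5, 0.2)     # turn the head (rad, speed fraction)
    tts.say("Hello from a Choregraphe-style Python box")
    ```
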
  • Effects of emotional synchronization in human-robot KANSEI communications

    Page(s): 52 - 57

    Human-robot communication is an important subject for housekeeping, elderly care, and entertainment robots. Emotion plays a vital role in establishing natural communicative entrainment between a human and a robot. From this viewpoint, we have developed a KANSEI communication system based on emotional synchronization. The robot's emotion is entrained to the human's emotion using a vector field of dynamics, and the robot then makes a facial expression that expresses its emotion. In this paper, we investigate the effect of emotional synchronization in human-robot KANSEI communication through experiments that evaluate the proposed system. In human-robot interaction experiments using emotional synchronization, we found that participants felt comfortable when the robot made facial expressions synchronized with their emotion, confirming that emotional synchronization in human-robot interaction can be effective for maintaining a comfortable state.

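    The entrainment of the robot's emotion to the human's through a vector field of dynamics can be pictured with the toy update below; the two-dimensional valence-arousal coordinates, gain, and step size are assumptions, not the paper's model.

    ```python
    import numpy as np

    # Toy entrainment of a robot emotion state toward an estimated human emotion
    # state in a 2-D (valence, arousal) space. The vector field used here,
    # de_r/dt = k * (e_h - e_r), is an illustrative stand-in for the paper's model.
    K = 1.5      # entrainment gain (assumed)
    DT = 0.05    # update period in seconds (assumed)

    def entrain_step(robot_emotion, human_emotion):
        """One Euler step of the entrainment vector field."""
        return robot_emotion + DT * K * (human_emotion - robot_emotion)

    robot = np.array([0.0, 0.0])    # neutral start
    human = np.array([0.8, 0.3])    # e.g., an estimated "pleased" state
    for _ in range(40):
        robot = entrain_step(robot, human)
    print("robot emotion after entrainment:", np.round(robot, 3))
    ```
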
  • Online object modeling method for occlusion-robust tracking

    Page(s): 58 - 63

    Object tracking is often disturbed by visual occlusion. To handle this problem, we previously proposed a particle-filter tracking method that switches tracking targets autonomously, enabling the tracker to follow an occluded target indirectly by switching its target to the occluder. However, the color-based target model used in that method often causes inaccurate tracking, because a model with only one color distribution is not always sufficient. In this paper, we propose a method for online object modeling using a set of color distributions and a set of SIFT features. Since the proposed model carries more color information and local texture information, it enables the tracker to recognize the target more robustly. Furthermore, the model can be created dynamically and updated online using the graph-cuts technique during tracking, so it can be combined with the previously proposed tracking method with autonomous switching of targets. Experimental results show the effectiveness of the proposed method.

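    The switching tracker itself is beyond an abstract-sized example, but the underlying particle-filter loop can be sketched as below; the constant-noise motion model and the stand-in appearance likelihood are assumptions standing in for the paper's combination of color distributions, SIFT features, and graph-cuts model updates.

    ```python
    import numpy as np

    # Minimal particle-filter skeleton for 2-D position tracking with a generic
    # appearance likelihood. Noise levels and the likelihood are illustrative only.
    rng = np.random.default_rng(0)
    N = 200
    particles = rng.normal([50.0, 50.0], 5.0, size=(N, 2))
    weights = np.full(N, 1.0 / N)

    def appearance_likelihood(pos, target_model):
        """Stand-in likelihood: higher when the particle is near the modeled target."""
        return np.exp(-0.01 * np.sum((pos - target_model) ** 2))

    def step(particles, weights, target_model):
        # Predict: diffuse particles with process noise.
        particles = particles + rng.normal(0.0, 2.0, size=particles.shape)
        # Update: reweight by appearance similarity.
        weights = np.array([appearance_likelihood(p, target_model) for p in particles])
        weights /= weights.sum()
        # Resample (multinomial here; systematic resampling is the usual choice).
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    target_model = np.array([60.0, 55.0])   # placeholder "appearance" (true position)
    for _ in range(20):
        particles, weights = step(particles, weights, target_model)
    print("estimated position:", np.round(particles.mean(axis=0), 1))
    ```
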
  • Effects of visual appearance on the attribution of applications in social robotics

    Page(s): 64 - 71

    This paper investigates the influence of the visual appearance of social robots on judgments about their potential applications. 183 participants rated the appropriateness of thirteen categories of applications for twelve social robots in an online study. The ratings were based on videos displaying the appearance of the robots combined with basic information about the robots' general functions. The results confirmed the hypothesis that the visual appearance of robots is a significant predictor for the estimation of applications in the eye of the beholder. Furthermore, the ratings showed an attractiveness bias: robots judged as more attractive by the users also received more positive evaluations (i.e., "liking").

  • Robots as animals: A framework for liability and responsibility in human-robot interactions

    Page(s): 72 - 77

    As robots become more common across society, there is a pressing need to deal with questions of moral responsibility and legal liability in accidents involving semi-autonomous and autonomous machines. Previous attempts to address these questions have assumed machines with either minimal autonomy or full intelligence, and thus have not adequately considered the current and likely future state of the art in robotics and artificial intelligence. In this paper, we offer general principles to make sense of these issues and propose a framework for addressing questions of responsibility and liability in human-robot interaction. This approach is based on the premise that robots can be analogized to animals for the purpose of assigning responsibility and liability when robots are involved in accidents. We provide justification for this approach, consider its implications, and discuss several of its advantages in analyzing human-robot interactions.

  • A joystick type car drive interface for wheelchair users

    Page(s): 78 - 83

    This paper presents a joystick-based car driving system for wheelchair users. The system enables a person with a disability to drive a car with a single hand: pushing the joystick forward and backward controls acceleration and deceleration, while moving it left and right turns the steering wheel. A person with leg disabilities, possibly combined with limited arm strength or range of motion, can therefore drive the car unaided. In addition, the driver does not have to transfer from the wheelchair to the car seat: the developed van is equipped with a rear lift, and the wheelchair user can reach the driver's position by propelling the wheelchair inside the van. To maintain driving safety in case of a system or battery fault, the gas and brake pedals are physically moved through a mechanical linkage connected to the joystick lever, so the van can always be stopped by manual operation. The steering wheel is driven by an electric motor controlled by a microcomputer; the steering angle is regulated by PID feedback to track the reference angle given by the joystick. To realize the joystick drive system, we designed a steering drive mechanism that includes a DC motor, a magnetic clutch, a potentiometer for steering angle detection, and transmission gears. Since most cars manufactured today are equipped with power steering, which reduces the steering effort required of a human driver, the developed system can exploit the power steering to realize the joystick drive with a motor of minimal capacity. The prototype mechanism was mounted on an actual van, and the feasibility of the proposed joystick car drive system was tested.

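    As a rough illustration of the steering-angle loop described above (not the authors' controller or gains), a discrete PID tracking sketch follows; the gains, limits, and first-order plant are placeholders.

    ```python
    # Illustrative discrete PID loop for tracking a joystick-commanded steering
    # angle. Gains, limits, and the plant model are made-up placeholders, not the
    # values used on the actual van.
    DT = 0.01            # control period [s]
    KP, KI, KD = 4.0, 1.0, 0.1
    integral = 0.0
    prev_error = 0.0

    def pid_step(reference_angle, measured_angle):
        """Return the steering motor command for one control period."""
        global integral, prev_error
        error = reference_angle - measured_angle
        integral += error * DT
        derivative = (error - prev_error) / DT
        prev_error = error
        command = KP * error + KI * integral + KD * derivative
        return max(-1.0, min(1.0, command))   # saturate to the motor driver range

    # Toy closed loop with a first-order steering plant, for demonstration only.
    angle = 0.0
    for _ in range(200):
        u = pid_step(reference_angle=0.3, measured_angle=angle)   # 0.3 rad target
        angle += DT * 2.0 * u                                      # crude plant model
    print(f"steering angle after 2 s: {angle:.3f} rad")
    ```
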
  • Control of a power assist robot for lifting objects based on human operator's perception of object weight

    Page(s): 84 - 90

    An object lifted with a power assist robot is always perceived as lighter than its actual weight. However, the human operator cannot distinguish the power-assisted weight from the actual weight and ends up applying a load force (vertical lifting force) appropriate to the actual weight of the object. This faulty force programming (excessive load force) produces faulty motions of the power assist robot and degrades its operability, maneuverability, ease of use, naturalness, human-friendliness, and safety. In this paper, we assume that these problems persist because human weight perception is not included in the design and control of conventional power assist robots. We hypothesize that the weight perceived due to inertial force may differ from the weight perceived due to gravitational force when lifting an object with a power assist robot. Based on this hypothesis, we built a 1-DOF power assist robot and established a psychophysical relationship between the actual weights and the power-assisted weights of objects lifted with the robot. We also measured the excess load forces that operators applied. We then modified the control system of the robot based on the psychophysical relationship and the load force characteristics. The modification significantly reduced the peak load forces applied by the operators and thereby enhanced the maneuverability, naturalness, ease of use, stability, and safety of the robot system. Finally, we propose using these findings to design human-friendly power assist robots for carrying heavy objects in various industries.

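    One way to picture the modified control idea is a 1-DOF admittance law in which the inertia and the gravity load presented to the operator are scaled separately; the masses, scaling factors, and forces below are illustrative placeholders, not the authors' identified psychophysical relationship.

    ```python
    # Toy 1-DOF admittance controller in which the apparent inertia and the
    # apparent gravity load felt by the operator are scaled separately,
    # reflecting the idea that inertial and gravitational weight may be
    # perceived differently. All masses, scales, and forces are placeholders.
    G = 9.81

    def admittance_step(f_hand, v, dt, m_display=0.5, g_scale=0.1, m_actual=10.0):
        """Return the updated vertical velocity of the assisted object.

        f_hand    : vertical force applied by the operator [N]
        m_display : apparent (displayed) inertia felt by the operator [kg]
        g_scale   : fraction of the real gravity load left for the operator
        """
        gravity_felt = g_scale * m_actual * G
        accel = (f_hand - gravity_felt) / m_display
        return v + accel * dt

    v = 0.0
    for _ in range(100):                        # operator applies a 12 N lifting force
        v = admittance_step(f_hand=12.0, v=v, dt=0.01)
    print(f"object velocity after 1 s: {v:.2f} m/s")
    ```
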
  • Development of wearable master-slave training device constructed with pneumatic rubber muscles

    Page(s): 91 - 96

    The target of this study is the rehabilitation of patients with elbow flexion contracture caused, for example, by cerebral infarction. In such rehabilitation, a trainer flexes the patient's elbow by hand until the patient feels slight pain and then holds the applied force for a while; it is important that the trainer regulates the applied force according to the patient's condition. In the developed device, the reaction torque of the slave device is estimated by a disturbance observer. By feeding the estimated torque back to the master device, the master user can feel the reaction torque. This paper discusses the structure of the developed device and describes its control system, and the validity of the proposed device is evaluated in experiments that assume elbow rehabilitation.

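    The reaction-torque estimation mentioned above can be illustrated with a textbook first-order disturbance observer; the nominal inertia, cutoff frequency, and test signals below are placeholders, not the developed device's parameters.

    ```python
    # Textbook-style discrete disturbance observer for a 1-DOF joint: the reaction
    # torque is estimated as a low-pass-filtered difference between the commanded
    # torque and the torque explained by the nominal inertia.
    J_N = 0.02     # nominal joint inertia [kg m^2] (assumed)
    G_CUT = 30.0   # observer cutoff [rad/s] (assumed)
    DT = 0.001

    tau_dis_hat = 0.0
    prev_omega = 0.0

    def dob_step(tau_cmd, omega):
        """Update and return the estimated disturbance (reaction) torque."""
        global tau_dis_hat, prev_omega
        omega_dot = (omega - prev_omega) / DT
        prev_omega = omega
        raw = tau_cmd - J_N * omega_dot                    # torque not explained by inertia
        tau_dis_hat += DT * G_CUT * (raw - tau_dis_hat)    # first-order low-pass filter
        return tau_dis_hat

    # Quick check against a simulated joint carrying a constant 0.5 N*m reaction torque.
    omega, tau_dis_true = 0.0, 0.5
    for _ in range(2000):
        tau_cmd = 1.0
        omega += DT * (tau_cmd - tau_dis_true) / J_N       # crude joint model
        est = dob_step(tau_cmd, omega)
    print(f"estimated reaction torque: {est:.3f} N*m (true value 0.5)")
    ```
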
  • Development of Intelligent Passive Cane controlled by servo brakes

    Page(s): 97 - 102

    In this paper, we propose an intelligent cane developed based on the passive robotics concept for supporting elderly and disabled people who have difficulty walking. The Intelligent Passive Cane (IP Cane) is controlled by servo brakes attached to its wheels and is intrinsically safe for humans because, having no driving actuators, it cannot move unintentionally. In addition, the IP Cane provides a variety of functions by appropriately controlling the wheel torques with the servo brakes. In this paper, we propose an environmentally adaptive motion control algorithm that provides a path-following function, and a human adaptive motion control algorithm that changes the motion characteristics of the IP Cane to adapt to the user's difficulties and state.

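    The path-following function mentioned above relies on steering with brakes only; the toy sketch below shows one way differential brake torques could be computed from a heading error, with non-negative commands reflecting the passive (dissipative-only) constraint. The gain and torque limit are assumptions, not the IP Cane's parameters.

    ```python
    # Illustrative differential-brake steering for a passive two-wheeled walking aid.
    # Brakes can only dissipate energy, so commands are clamped to be non-negative.
    MAX_BRAKE = 2.0   # brake torque limit [N*m] (assumed)
    K_HEADING = 1.5   # steering gain (assumed)

    def brake_commands(heading_error):
        """Return (left, right) brake torques that steer toward the reference path.

        heading_error > 0 means the cane should turn left, so the left wheel is
        braked harder than the right, and vice versa."""
        u = K_HEADING * heading_error
        left = min(MAX_BRAKE, max(0.0, u))
        right = min(MAX_BRAKE, max(0.0, -u))
        return left, right

    print(brake_commands(0.4))    # veering right of the path -> brake the left wheel
    print(brake_commands(-0.2))   # veering left of the path  -> brake the right wheel
    ```
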
  • The role of physical embodiment of a therapist robot for individuals with cognitive impairments

    Page(s): 103 - 107

    This research focuses on the possible role of a socially interactive robot as a tool for monitoring and encouraging cognitive activities of the elderly and/or individuals suffering from dementia. One aim of this work is to show the benefits of the robot's physical embodiment in human-robot social interaction. The social therapist robot provides customized cognitive stimulation by playing a music game with the user. The results of the 8-month pilot study show a more efficient, natural, and preferred interaction with the physical robot than with a simulated robot.

  • Therapeutic and educational objectives in robot assisted play for children with autism

    Page(s): 108 - 114

    This article is a methodological paper that describes the therapeutic and educational objectives identified during the design process of a robot aimed at robot-assisted play. The work described in this paper is part of the IROMEC project (Interactive Robotic Social Mediators as Companions), which recognizes the important role of play in child development and targets children who are prevented from or inhibited in playing. The project investigates the role of an interactive, autonomous robotic toy in therapy and education for children with special needs. This paper specifically addresses the therapeutic and educational objectives related to children with autism. In recent years, robots have already been used to teach basic social interaction skills to children with autism. The added value of the IROMEC robot is that play scenarios have been developed taking children's specific strengths and needs into consideration and covering a wide range of objectives across children's developmental areas (sensory, communication and interaction, motor, cognitive, and social-emotional). The paper describes these developmental areas and illustrates how different experiences and interactions with the IROMEC robot are designed to target objectives in each of them.

  • Designing modular robotic playware

    Page(s): 115 - 121

    In this paper, we explore the design of modular robotic objects that can enhance playful experiences. The approach builds upon modular robotics to create a kind of playware that is flexible in both set-up and activity building, allowing end-users to easily create games. Key features of this design approach are modularity, flexibility in construction, immediate feedback to stimulate engagement, activity design by end-users, and creative exploration of play activities. These features permit the use of such modular playware by a wide range of users, including disabled children, who are often prevented from using and benefiting from modern technologies. The objective is to get all children moving, exchanging, experimenting, and having fun, regardless of their cognitive or physical ability levels. The paper describes two prototype systems developed as modular robotic tiles and discusses the challenges and opportunities of this modular playware when used by children with different cognitive abilities.

  • Adaptive CPG based coordinated control of healthy and robotic lower limb movements

    Page(s): 122 - 127

    This paper proposes an adaptive CPG-based controller for a lower limb prosthesis, consisting of online trajectory generation and interlimb coordination. The adaptive CPG can produce multidimensional rhythmic patterns and modulate their frequency by tuning the relevant parameters autonomously, adapting to the changing periodicity of external signals. In addition, to increase the stability of the prosthesis, a spring-damper component is attached between the hip and ankle joints, allowing the absorption of impulsive ground reaction forces at landing. We verify the validity of the proposed controller on a simulated humanoid robot by investigating the self-coordination between the healthy and robotic legs.

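    One common way to realize the frequency adaptation described above is an adaptive-frequency Hopf oscillator; the sketch below integrates such an oscillator against a sinusoidal teaching signal. This is a generic formulation with assumed gains and time step, not necessarily the controller used in the paper.

    ```python
    import math

    # Adaptive-frequency Hopf oscillator: the state (x, y) settles on a limit cycle
    # while the intrinsic frequency omega drifts toward the frequency of the
    # external signal F(t). Convergence time depends on the coupling gain EPS.
    GAMMA, MU, EPS, DT = 8.0, 1.0, 2.0, 0.002

    def oscillator_step(x, y, omega, forcing):
        r2 = x * x + y * y
        dx = GAMMA * (MU - r2) * x - omega * y + EPS * forcing
        dy = GAMMA * (MU - r2) * y + omega * x
        domega = -EPS * forcing * y / max(math.sqrt(r2), 1e-6)
        return x + DT * dx, y + DT * dy, omega + DT * domega

    x, y, omega = 1.0, 0.0, 8.0          # start away from the target frequency
    target = 2.0 * math.pi * 1.5         # external gait signal at 1.5 Hz
    for k in range(250000):              # 500 s of simulated adaptation
        forcing = math.sin(target * k * DT)
        x, y, omega = oscillator_step(x, y, omega, forcing)
    print(f"adapted frequency: {omega:.2f} rad/s (teaching signal: {target:.2f} rad/s)")
    ```
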
  • Effects of social exploration mechanisms on robot learning

    Page(s): 128 - 134

    Social learning in robotics has largely focused on imitation learning. Here we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four social learning mechanisms on a robot: stimulus enhancement, emulation, mimicking, and imitation, and illustrate the computational benefits of each. In particular, we illustrate that some strategies are about directing the attention of the learner to objects and others are about actions. Taken together these strategies form a rich repertoire allowing social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot 'playmates'.

  • Helping robots imitate: Metrics and technological solutions inspired by human behaviour

    Page(s): 135 - 140

    In this paper we describe three lines of research related to the issue of helping robots imitate people. These studies are based on observed human behaviour, technical metrics, and implemented technical solutions. The three lines of research are: (a) a number of user studies that show how humans naturally tend to demonstrate a task for a robot to learn, (b) a formal approach to the problem of what a robot should imitate, and (c) a technology-driven conceptual framework and technique, inspired by social learning theories, that addresses how a robot can be taught. Bringing these lines together, we propose a way through this problem space, towards the design of a Human-Robot Interaction (HRI) system that can be taught by humans via demonstration.

  • Human to robot demonstrations of routine home tasks: Adaptation to the robot's preferred style of demonstration

    Page(s): 141 - 146

    This paper presents a follow-up to a previous user study that examined the participants' acknowledgment of feedback from a robot to which they had to demonstrate the task of laying a table. The question was whether (and how) they modified their teaching instructions when the robot stated a misunderstanding, and whether they remained consistent for the rest of the sub-tasks.

  • Teaching a humanoid: A user study on learning by demonstration with HOAP-3

    Page(s): 147 - 152

    This article reports on the results of a user study investigating the satisfaction of naïve users conducting two learning-by-demonstration tasks with the HOAP-3 robot. The main goal of this study was to gain insights into how to ensure a successful and satisfactory experience for naïve users. The participants performed two tasks: they taught the robot to (1) push a box and to (2) close a box. The user study was accompanied by three pre-structured questionnaires addressing the users' satisfaction with HOAP-3, the users' affect toward the robot caused by the interaction, and the users' attitude towards robots. Furthermore, a retrospective think-aloud was conducted to gain a better understanding of what influences users' satisfaction in learning-by-demonstration tasks. A high task completion and final satisfaction rate was observed. These results stress that learning by demonstration is a promising approach for naïve users to learn to interact with a robot. Moreover, the short-term interaction with HOAP-3 led to a positive affect, higher than the normative average, for half of the female users.
