ROMAN 2004: 13th IEEE International Workshop on Robot and Human Interactive Communication

Date: 20-22 Sept. 2004

Displaying Results 1 - 25 of 142
  • Robots we like to live with?! - a developmental perspective on a personalized, life-long robot companion

    Page(s): 17 - 22

    This work addresses different possible social relationships between robots and humans, drawing on animal-human relationships. I argue that humans have long lived in (generally peaceful) co-existence with a number of potentially dangerous species, such as some canines. Interestingly, dogs are not born 'pet dogs': whether they become friendly or dangerous is not completely 'predefined' in their genes. A critical period in a puppy's early life significantly shapes its socialization and behavioral conformation. I suggest that such a developmental model of socialization could be an interesting viewpoint on the design of future generations of robots that need to co-exist with humans, and that humans like to live with. I propose the challenge of developing 'personalized robot companions': machines that can serve as life-long companions. I argue that such individualized robots are necessary because of human nature: people have individual needs, likes and dislikes, preferences and personalities that a companion would have to adapt to, and one and the same robot will not fit all people. Above all, cognitive robot companions need to be socialized and personalized in order to meet the social, emotional and cognitive needs of the people they are 'living with'.

  • Temporal development of dual timing mechanism in synchronization tapping task

    Page(s): 181 - 186

    It is well known that sensory-motor coupling exhibits a negative asynchrony phenomenon, in which motion timing precedes the onset of the stimulus. In our previous research, the tapping task was investigated by spectrum analysis of the synchronization error (SE), and two frequency characteristics were discovered in the behavior. In this report, we improve the time-series analysis and show that asynchronous behavior in the synchronization tapping task is composed of two different dynamics: one has a self-similar structure, and the other is periodic.
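
    The abstract does not detail the improved time-series analysis; purely as an illustration of the general approach, the following Python sketch (all names hypothetical) computes the power spectrum of a synchronization-error series, the kind of step from which a self-similar (1/f-like) trend and a periodic peak would be read off.

    ```python
    import numpy as np

    def se_spectrum(sync_errors):
        """One-sided power spectrum of a synchronization-error (SE) series.

        sync_errors: tap onset minus stimulus onset, one value per tap,
        so the series is sampled once per tap.
        """
        se = np.asarray(sync_errors, dtype=float)
        se -= se.mean()                       # remove the constant (mean) asynchrony
        power = np.abs(np.fft.rfft(se)) ** 2
        freqs = np.fft.rfftfreq(se.size)      # cycles per tap
        return freqs, power

    # A self-similar component appears as a power-law trend in this
    # spectrum; a periodic component appears as a distinct peak.
    ```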

  • Informational support in distributed sensor environment sensing room

    Page(s): 353 - 358

    This work describes an informational support system based on a room-type behavior measurement environment, the 'sensing room', and an active image projector. The room contains a floor and furniture with embedded pressure sensors, electric appliances with usage sensors, and tag sensors that identify objects in the room. Using these embedded sensors, the room is able to recognize the state of its inhabitant. Based on this status and the inhabitant's accumulated personal profile, the room determines the timing and content of the informational support to provide. The information is displayed by a pan-tilt-enabled image projector using an image distortion correction method. Some typical support scenarios are shown as examples.

  • Robotic body-mind integration: next grand challenge in robotics

    Page(s): 23 - 28

    The field of robotics has evolved from industrial robots in the 1960s to entertainment and service robots in the 2000s. During the last decade, major progress has been made in integrating a robotic body with sensors and AI-based software. We describe our efforts to realize a next generation of intelligent robots called cognitive robots. Our work is embedded within a multiagent-based cognitive robot architecture with three distinctive memory systems: short-term and long-term memory structures for routine task execution, and a working memory system (WMS), which is closely tied to the learning and execution of tasks. The concept of a WMS is relatively new in robotics; it is expected to play a role similar to that of the prefrontal cortex (PFC), the part of the brain that performs cognitive tasks.

  • Drawing interface for creating embodied space in the real world

    Page(s): 199 - 204

    Drawing is an important bodily activity in daily human communication, since it can express abstract information that is difficult to convey by verbal means. However, this bodily expression is usually not fully exploited because of physical restrictions on the drawing range. We therefore propose a drawing interface system for supporting human communication in daily life: an augmented reality system, built around an optical see-through head-mounted display, for drawing in 3D in the air. To support smooth drawing in the real world, it embeds a direct-manipulation interface that can capture visual information around the user and use it as painting material. Using this system, various ways of drawing in the real world are demonstrated, such as body-sized 3D drawing of architecture, drawing directly on physical objects, and 3D drawing using physical objects as rulers. It is also confirmed that the volume and density of the drawn objects can be experienced in the immersive drawing space.

  • Sensor based utterance interpretation system SAR for communication robot

    Page(s): 425 - 430

    This study develops a dialog system named SAR for communication robots. SAR uses the robot's sensor information to interpret oral commands from users. Oral commands are difficult for robots to handle because humans frequently use anaphoric or abstract expressions. SAR succeeds in interpreting these expressions based on the obtained sensor information. For example, SAR can interpret Japanese anaphoric expressions such as "KOTTI MUITE" ("look at me"), as well as abstract expressions such as "MOUSUKOSHI SUSUME" ("move a little further").

  • A conversation robot using head gesture recognition as para-linguistic information

    Page(s): 159 - 164

    A conversation robot that recognizes a user's head gestures and uses the results as para-linguistic information is developed. In conversation, humans exchange linguistic information, which can be obtained by transcribing the utterances, and para-linguistic information, which helps the transmission of linguistic information. Para-linguistic information conveys nuances that cannot be transmitted by linguistic information alone, making conversation natural and effective. We recognize the user's head gestures as para-linguistic information in the visual channel, using the optical flow over the head region as the feature and modeling it with HMMs for recognition. In actual conversation, the robot may perform a gesture while the user does, so the image sequence captured by the camera mounted in the robot's eyes sways with the camera's movement. To solve this problem, we introduce two techniques. One concerns feature extraction: the optical flow of the body area is used to compensate for the swayed images. The other concerns the probability models: mode-dependent models are prepared with the MLLR model adaptation technique and switched according to the motion mode of the robot. Experimental results show the effectiveness of these techniques.
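
    As an illustration of the sway-compensation idea (not the authors' implementation), the following OpenCV sketch subtracts the mean optical flow of the body region, taken to reflect the robot's own motion, from the head-region flow before it would be fed to the HMMs; region coordinates and parameters are placeholders.

    ```python
    import cv2

    def compensated_head_flow(prev_gray, next_gray, head_roi, body_roi):
        """Per-frame head-motion feature with camera sway canceled.

        head_roi, body_roi: (x, y, w, h) rectangles in image coordinates.
        """
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        hx, hy, hw, hh = head_roi
        bx, by, bw, bh = body_roi
        head = flow[hy:hy + hh, hx:hx + hw].reshape(-1, 2)
        body = flow[by:by + bh, bx:bx + bw].reshape(-1, 2)
        ego = body.mean(axis=0)            # body flow attributed to camera motion
        return (head - ego).mean(axis=0)   # compensated 2-D motion feature
    ```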

  • An embodied computational model of simulating depression

    Page(s): 131 - 133

    To develop more accurate and realistic emotions in robots with facial expressions and in animal robots, a model of emotions is proposed. Following the formulated principles of artificial brain methodology (Noda), a prototype of the model was constructed and used to simulate depression.

  • Power assist method based on phase sequence driven by interaction between human and robot suit

    Page(s): 491 - 496

    We propose a power assist method for the leg based on autonomous motion driven by the interaction between the human and the robot suit HAL (hybrid assistive limb), and verify its effectiveness in walking experiments. To perform the walking task autonomously, we use phase sequence control, which generates a task by transiting through simple basic motions called phases. A task is divided into phases on the basis of the same task as performed by a normal person. The joint movement modes are categorized into active, passive and free modes according to the muscle force conditions, and the autonomous motion that HAL generates in each phase is designed according to one of these modes. The floor reaction force and joint angle are adopted as the conditions for transiting between phases. Power assist experiments were performed with a normal subject. The results showed that muscle activation levels in each phase were significantly reduced, confirming the effectiveness of the proposed assist method.
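
    A minimal sketch of what such a phase sequence controller could look like is given below; the phases, joint modes, and transition thresholds are invented for illustration and are not taken from HAL.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Phase:
        joint_modes: Dict[str, str]            # e.g. {"knee": "active", "hip": "free"}
        done: Callable[[float, float], bool]   # (floor reaction force, joint angle) -> bool
        next_phase: str

    # Hypothetical two-phase walking fragment: swing ends at touchdown
    # (reaction force rises), stance ends once the hip extends far enough.
    PHASES = {
        "swing":  Phase({"knee": "active", "hip": "free"},
                        lambda frf, ang: frf > 50.0, "stance"),
        "stance": Phase({"knee": "passive", "hip": "active"},
                        lambda frf, ang: ang > 0.3, "swing"),
    }

    def transit(current, frf, angle):
        """Advance the phase sequence from one sensor sample."""
        phase = PHASES[current]
        return phase.next_phase if phase.done(frf, angle) else current
    ```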

  • A surgical knowledge based interaction method for a laparoscopic assistant robot

    Page(s): 313 - 318

    This work presents a surgeon-robot interaction method for a laparoscopic assistant robot. The ultimate goal of our research is to develop an assistant robot that can replace a human assistant surgeon without placing an extra control burden on the operating surgeon. Unlike previous interaction methods, the proposed method generates the laparoscope's viewpoint by reference to surgical knowledge, including the type of surgery and the surgical procedure. As a first application, we focus on cholecystectomy, one of the simplest and most common surgeries. Task analysis shows that the steering method is highly related to the surgical procedure. To implement the method, we identify the surgical tool in the laparoscopic image, since the current surgical procedure can be estimated from the types of tools in use. The proposed interaction method automatically switches to the steering mode that corresponds to the surgical procedure. In addition, to give the surgeon the ability to adjust the laparoscopic view, we combine the surgeon's voice commands with the proposed interaction.
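
    A hedged sketch of the mode-switching logic might look as follows; the tool names, steering modes, and their mapping are hypothetical stand-ins for the paper's surgical knowledge base.

    ```python
    # Hypothetical mapping from the detected surgical tool (which indicates
    # the current procedural step) to a laparoscope steering mode.
    TOOL_TO_MODE = {
        "grasper":      "follow_tool",   # keep the working tool centered
        "dissector":    "zoom_region",   # close-up of the dissection site
        "clip_applier": "hold_still",    # stable view while clipping
    }

    def steering_mode(detected_tool, voice_command=None):
        """Voice commands from the surgeon override the automatic choice."""
        if voice_command is not None:
            return voice_command
        return TOOL_TO_MODE.get(detected_tool, "hold_still")
    ```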

  • A fast eye localization method for face recognition

    Page(s): 241 - 245

    We introduce a fast, robust and accurate eye localization algorithm. Detecting and normalizing human faces from live video streams is the first crucial step in a face verification/recognition system, and its accuracy and robustness affect the performance of the subsequent face registration and classification. To localize face regions properly, we detect eye corners using a corner detector and Gabor wavelets. First, by applying the corner detector only in skin color regions, we dramatically reduce the candidate regions for eye corners. Extracted features are represented in a semilocal manner to increase discrimination. Then, on the reduced candidate set, a robust feature decision algorithm based on Gabor response analysis gives accurate eye corner locations. Experimental results on real images are presented.
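
    A hedged sketch of this pipeline (not the authors' code): OpenCV is used to restrict a corner detector to skin-colored regions and to score candidates by Gabor-filter energy; the skin thresholds and filter parameters are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def eye_corner_candidates(bgr):
        """Corners restricted to skin regions (a common YCrCb threshold
        stands in for the paper's skin-color model)."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                      minDistance=5, mask=skin)
        return [] if pts is None else pts.reshape(-1, 2)

    def gabor_energy(gray, pt, ksize=21):
        """Gabor response energy in a patch around one candidate corner."""
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=0.0,
                                  lambd=10.0, gamma=0.5, psi=0.0)
        x, y = int(pt[0]), int(pt[1])
        h = ksize // 2
        patch = gray[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
        resp = cv2.filter2D(patch.astype(np.float32), -1, kern)
        return float(np.sum(resp ** 2))
    ```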

  • Development of a basic dance training system with mobile robots

    Page(s): 211 - 216

    We develop a basic dance training system with a mobile robot. The system targets beginners and enables them to learn the basics of traditional dances easily. Students can observe not only a fixed screen in front of them but also a display mounted on a mobile robot that moves according to the dance steps. The display shows the actions of a dance master alongside the students' movements captured in real time, and the robot always takes an appropriate position. This enables the students to notice flaws in their movements while they are dancing.

  • Robots as assistive technology - does appearance matter?

    Page(s): 277 - 282

    This work studies the effect of a robot's design (appearance) in facilitating and encouraging interaction between children with autism and a small humanoid robot. The paper compares the children's level of interaction with, and response to, the robot in two different scenarios: one where the robot was dressed like a human (with a 'pretty-girl' appearance) and an uncovered face, and the other where it appeared in plain clothing with a featureless, masked face. The results of these trials clearly indicate that, in their initial response, the children prefer interaction with the plain, featureless robot over interaction with the human-like robot.

  • Framework of distributed audition

    Page(s): 77 - 82

    We propose distributed audition for natural man-machine interfaces, for robots that act in the environment, and for continuous personal identification. The distributed audition system, consisting of a network of microphones and speakers, monitors the environment, maintains environment models, and provides information to agents in the environment. It can calibrate locations and parameters in a self-organizing manner by producing sounds and observing them. Concepts and fundamental problems of distributed audition are discussed, and a prototype system and an experimental result of auto-calibration are provided.
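
    The auto-calibration idea admits a minimal sketch: if each speaker emits a known sound at a known time, the time of flight to each microphone yields a speaker-microphone distance matrix, from which positions can be recovered (e.g. by multidimensional scaling). The function below assumes sound onsets have already been detected; it is an illustration, not the paper's method.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, room temperature

    def distance_matrix(emit_times, arrival_times):
        """emit_times[k]: when speaker k produced its calibration sound.
        arrival_times[k][i]: detected onset of that sound at microphone i.
        Returns the (speakers x microphones) distance matrix in meters."""
        emit = np.asarray(emit_times, dtype=float)[:, None]
        arrive = np.asarray(arrival_times, dtype=float)
        return (arrive - emit) * SPEED_OF_SOUND
    ```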

  • Co-creative walking support as music therapy

    Page(s): 289 - 294

    The mutual adaptation process in musical communication is often used for walking support and rehabilitation, but a supporting system of this kind has not yet been realized. The purpose of this research is therefore to construct a walking support system as a musical communication process. Our strategy is to extend the Walk-Mate system, which realizes a mutual adaptation process between a human and a virtual walker by exchanging footstep sounds. We replaced the step sound with a music performance and showed its effectiveness in a walking support experiment. The smoothness of the gait cycle improved in both the music performance condition and the step sound condition, but in the music condition the improvement in smoothness persisted after the supported walking. This result shows the effectiveness of the proposed system for walking support.
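
    The abstract does not specify the adaptation model; one common way to sketch mutual timing adaptation of this kind is a coupled phase oscillator, as below, in which the virtual walker's beat is pulled toward the human's step phase. The gain and emission rule are illustrative only.

    ```python
    import numpy as np

    def virtual_walker_step(phi_human, phi_virtual, omega, K=0.5, dt=0.01):
        """One integration step of a hedged coupled-oscillator model:
        the virtual walker's phase is attracted to the human's, and a
        step sound (or musical beat) is emitted on each phase wrap."""
        phi = phi_virtual + dt * (omega + K * np.sin(phi_human - phi_virtual))
        emit_beat = phi >= 2 * np.pi
        return phi % (2 * np.pi), emit_beat
    ```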

  • Operation assist control system of rotary crane using proposed haptic joystick as man-machine interface

    Page(s): 533 - 538

    The purpose of this work is to describe the development of a safe, semi-automatic man-machine control system in which the automatic control of a machine is combined with manual instructions from an operator. First, a haptic joystick that can provide suggestive information to a crane operator via haptic feedback was developed. Second, operational support was provided through the joystick by applying impedance control and gravity compensation. Third, restrictions on the crane's velocity were imposed by haptic control. Finally, the validity of the proposed haptic control system, which can easily and safely transfer a load to arbitrary positions without colliding with obstacles, was demonstrated experimentally.
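
    The two support terms named here, impedance control and gravity compensation, can be sketched for a single joystick axis as follows; the model and gains are illustrative, not the paper's.

    ```python
    import math

    def joystick_torque(q, qd, q_ref, K=2.0, B=0.3, m=0.2, l=0.1, g=9.81):
        """q, qd: joystick angle and angular velocity; q_ref: suggested angle.

        Impedance term: a spring-damper pull toward the suggested direction.
        Gravity term: cancels the stick's own weight (mass m, lever arm l)
        so the stick holds its position when released.
        """
        tau_impedance = K * (q_ref - q) - B * qd
        tau_gravity = m * g * l * math.sin(q)
        return tau_impedance + tau_gravity
    ```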

  • Pinching function with fingertips in humanoid robot hand

    Page(s): 383 - 388

    Stably pinching paper, a needle, or other small objects with the fingertips is difficult to achieve, yet it is one of the important functions to be realized by humanoid robot hands. The authors propose a new robot hand capable of properly realizing a pinching motion with the fingertips by adding the minimum required supplementary degree of freedom, one that can be realized only in a machine. Through this mechanism design, even the motion of writing characters with a pen is accomplished by our robot hand.

  • Interactive task generation for humanoid based on human motion strategy

    Page(s): 485 - 490

    The objective of this research is to propose an interactive method of generating 'tasks' for humanoid robots so that they can accomplish motions adaptable to their environments. Task generation that draws on the interaction between human motion strategies and humanoid robots is likely to be effective, because the structure of a humanoid resembles that of a human. To confirm the effectiveness of the proposed method, we first measured human motions such as walking and stair ascent with a motion capture system and analyzed the patterns of joint angles and floor reaction forces. Second, each measured motion was divided into meaningful components called 'phases', which contain the trajectories for the humanoid's end-effectors and together compose a task. Third, we constructed a task library of phases and tasks for the humanoid's motions. As a result, we found that tasks such as walking and stair ascent could be generated based on human motion strategy, and that the motion trajectories of these tasks could be modified with Bezier curves. A series of humanoid motions was carried out using the task library, and these motions were simulated in OpenHRP. In conclusion, it was verified that humanoid tasks could be generated for different environments by the proposed method.
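
    As an illustration of the Bezier-based trajectory modification mentioned above (control points invented), a cubic Bezier curve lets the same phase be reshaped for a new environment simply by moving its control points:

    ```python
    import numpy as np

    def bezier_trajectory(p0, p1, p2, p3, n=50):
        """Cubic Bezier curve sampled at n points; p0..p3 are 2-D or 3-D
        control points for a phase's end-effector trajectory."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

    # Example: a foot trajectory for one step; raising p1 and p2 would
    # adapt the same phase to a higher stair.
    step = bezier_trajectory(np.array([0.0, 0.0]), np.array([0.1, 0.15]),
                             np.array([0.3, 0.15]), np.array([0.4, 0.0]))
    ```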

  • Real-time multiple people detection using skin color, motion and appearance information

    Page(s): 331 - 334

    Recent research in video surveillance has shown the advantages of detecting and tracking multiple humans. This paper proposes a novel method for detecting multiple people using robust skin color, background subtraction, and human upper-body appearance information. We extract human candidate regions using a color transform together with background subtraction and updating. To distinguish humans from other objects that have similar skin-colored regions or motion, we combine geometric pixel-value structure with model-based image matching using the Hausdorff distance. Experimental results show that the proposed algorithm can detect humans under various conditions, such as skin color noise and complex backgrounds. The extracted human information can be applied to many natural interfaces and intelligent video surveillance systems.
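
    A hedged sketch of such a pipeline, combining background subtraction, a skin-color mask, and Hausdorff-distance verification against an upper-body edge template (the thresholds and template are assumptions, not the paper's values):

    ```python
    import cv2
    from scipy.spatial.distance import directed_hausdorff

    bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    def candidate_mask(bgr):
        """Fuse motion (background subtraction with running update) and skin color."""
        motion = bg_model.apply(bgr)
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        return cv2.bitwise_and(motion, skin)

    def is_upper_body(region_edges, model_edges, threshold=15.0):
        """Model-based check: symmetric Hausdorff distance between the
        candidate's edge points and an upper-body edge template,
        both given as (N, 2) point arrays."""
        d = max(directed_hausdorff(region_edges, model_edges)[0],
                directed_hausdorff(model_edges, region_edges)[0])
        return d < threshold
    ```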

  • Why robots need body for mind communication - an attempt of eye-contact between human and robot

    Page(s): 473 - 478

    An eye-contact robot, a robot that can establish eye contact with humans, is being developed. Most researchers agree that eye contact is not simply "simultaneously looking at each other's eyes" but the mutual recognition of shared attention by an intentional agent. Our argument is that eye contact is itself a procedure for confirming the partner's intentionality (its being an intentional agent). The aims of our research are explained, and the result of an interaction experiment is reported. It is found that eye contact from a robot has some effect on the stance a human takes toward the robot.

  • Social identification of embodied interactive agent

    Page(s): 449 - 454

    An embodied interactive agent has a virtual body that is generally drawn by CG animation. We intuitively assume that the agent's body primarily expresses non-verbal messages, or symbolizes its social characteristics through its appearance. However, the expressive competence of an agent's body has not been objectively elucidated beyond empirical and subjective intuition, so it is necessary to explore scientifically how users regard the functional competence of an agent's embodiment. We investigated how users physically interact with an agent, even though it is merely a virtual entity drawn by CG on a display: "showing" something to the agent's eyes, "hearing" something from the agent's mouth, and "speaking" something to the agent's ears. Such interaction, however, does not necessarily mean that users attribute intellectual processing functions to the agent, and this issue is explored through two psychological experiments.

  • A prototype dance training support system with motion capture and mixed reality technologies

    Page(s): 217 - 222

    Mixed reality technology, in which scenes of the real world and a CG-generated virtual world are merged in real time, has been drawing considerable attention in fields such as entertainment and manufacturing. This work presents a prototype support system for dance training and education built on mixed reality technology and motion capture. Several functions thought to be significant for dance education, training and learning are devised and evaluated, and some user interface functions for this new kind of interactive system are also proposed.

  • Facial action unit recognition using temporal templates

    Page(s): 253 - 258

    Automatic recognition of human facial expressions is a challenging problem with many applications in human-computer interaction. Most existing facial expression analyzers succeed only in recognizing a few emotional facial expressions, such as anger or happiness. Rather than being another approach to automatic detection of prototypic facial expressions of emotion, this work attempts to measure a large range of facial behavior by recognizing the facial action units (AUs, i.e. atomic facial signals) that produce expressions. The proposed system performs AU recognition using temporal templates as input data. Temporal templates are 2D images, constructed from image sequences, which show where and when motion has occurred in the sequence. A two-stage learning machine, combining a k-nearest-neighbor (kNN) algorithm and a rule-based system, performs the recognition of 15 AUs occurring alone or in combination in an input face image sequence. Each rule used to recognize a given AU (or AU combination) is based on the presence of a specific temporal template in a particular facial region, where the presence of facial muscle activity characterizes the AU (or AU combination) in question. When trained and tested on the Cohn-Kanade face image database, the proposed method achieved an average recognition rate of 76.2%.
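
    Temporal templates as used here correspond to motion history images; a minimal NumPy sketch of their construction (difference threshold and decay window chosen for illustration) follows.

    ```python
    import numpy as np

    def motion_history_image(frames, diff_thresh=30, duration=20):
        """Temporal template: pixel intensity encodes how recently motion
        occurred at that location across a grayscale image sequence."""
        mhi = np.zeros(frames[0].shape, dtype=np.float32)
        for t in range(1, len(frames)):
            moving = np.abs(frames[t].astype(np.int16)
                            - frames[t - 1].astype(np.int16)) > diff_thresh
            mhi[moving] = t                          # stamp newest motion
            mhi[~moving & (mhi < t - duration)] = 0  # forget old motion
        return mhi / max(len(frames) - 1, 1)         # normalize to [0, 1]
    ```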

  • An improved Kansei-based music retrieval system with a new distance in a Kansei space

    Page(s): 141 - 146

    The previously proposed music retrieval system is improved. To describe individual Kansei traits and to evaluate tunes, scores for 40 pairs of Kansei words (adjectives) were used. Fifteen subjects participated in an experiment in which they rated each of 12 tunes on a seven-point scale. The neural network built into the system learned to associate each person's Kansei traits with those of a standard person. After learning, the system could retrieve tunes from the built-in database requested by a user in terms of scores for the 40 word pairs. We show that the system can be improved by selecting a qualified standard evaluator and by introducing a new distance for retrieving tunes that match the user's request well.
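
    The new distance is not given in the abstract; as a stand-in, the sketch below ranks tunes by a weighted Euclidean distance in the 40-dimensional Kansei space, where the weights could, for instance, encode the standard evaluator's reliability per word pair.

    ```python
    import numpy as np

    def retrieve(query, tune_scores, weights=None, k=5):
        """query: 40 Kansei word-pair scores from the user.
        tune_scores: (num_tunes, 40) matrix of database scores.
        Returns indices of the k best-matching tunes."""
        w = np.ones_like(query, dtype=float) if weights is None else weights
        d = np.sqrt((w * (tune_scores - query) ** 2).sum(axis=1))
        return np.argsort(d)[:k]
    ```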

  • Remarks on SVM-based emotion recognition from multi-modal bio-potential signals

    Page(s): 95 - 100

    This work proposes an emotion recognition system based on multi-modal bio-potential signals. Support vector machines (SVMs) are applied to design the emotion classifier, and its characteristics are investigated. The classifier is trained and tested on data gathered in psychological emotion stimulation experiments. In experiments recognizing five emotions (joy, anger, sadness, happiness, and relaxation), a recognition rate of 41.1% is achieved. The experimental results show that using multi-modal bio-potential signals is feasible and that SVMs are well suited to emotion recognition tasks.
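
    A minimal modern sketch of such a classifier, using scikit-learn (which the 2004 paper naturally did not use); the feature matrix X and label vector y are assumed to be precomputed from the multi-modal bio-potential signals.

    ```python
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_emotion_classifier(X, y):
        """X: one feature row per trial; y: labels such as 'joy', 'anger',
        'sadness', 'happiness', 'relaxation'."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        rate = cross_val_score(clf, X, y, cv=5).mean()  # held-out recognition rate
        clf.fit(X, y)
        return clf, rate
    ```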
