
2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Date: 11-13 March 2009


Results 1-25 of 98
  • [Title pages]

    Publication Year: 2009, Page(s): i - xv
    PDF (1332 KB)
    Freely Available from IEEE
  • Bringing physical characters to life

    Publication Year: 2009, Page(s): 1
    PDF (348 KB)

    At Disney, we are storytellers, and all good stories are filled with compelling characters. One way to present these characters to audiences in immersive, 3D environments is through the use of entertainment robots, or Audio Animatronics Figures, as they have traditionally been known at Disney in attractions such as Pirates of the Caribbean. In this talk, I hope to give insight into the design and development of entertainment robots at Disney. In particular, I share, from the point of view of a robot builder, some of the guidelines distilled from Disney's tradition of hand-drawn animation as they are applied to these systems. As examples of characters that partake in two-way interactions with audiences via teleoperation, I discuss two newer characters. The first, Lucky the Dinosaur, was designed to roam freely through the Disney theme park environment while interacting with guests. The second, Wall-E, was developed in conjunction with Pixar Animation Studios to represent the character from the film, and has made appearances and given interviews at red carpet premieres, press events, and in television studios around the world. Ultimately, we hope that a further scientific study of the principles of animation and character development would be useful to anyone designing robots, autonomous or teleoperated, that must interact with humans.

  • Interacting with robots on Mars: operation of the Mars Exploration Rovers

    Publication Year: 2009, Page(s): 3
    PDF (343 KB)

    The rovers Spirit and Opportunity have been operating on the surface of Mars since January of 2004. Interaction with these robotic vehicles involves overcoming a number of operational challenges. The challenges include the distance between Mars and Earth (the one-way travel time for commands and data can be as long as 20 minutes), environmental factors (e.g., extreme temperatures, dust storms), and the need to respond quickly and effectively to unexpected events and scientific discoveries. In the five years since the rovers landed, the Mars Exploration Rover project team has developed operational procedures for interacting with the rovers that are both scientifically productive and sustainable for what has become a long-duration mission.

  • Robots with emotional intelligence

    Publication Year: 2009, Page(s): 5
    PDF (342 KB)

    This keynote talk will illustrate a basic set of skills of emotional intelligence, how they are important for robots and agents that interact with people, and how our research at MIT addresses part of the problem of giving robots such skills. One of the most important skills is the ability to perceive and understand expressions of emotion, which I will highlight by demonstrating new technologies developed to read joint facial-head movements in real time and associate these with complex affective-cognitive states, and technologies to read paralinguistic vocal cues from speech. I will also show some non-traditional ways robots might sense and learn about human emotion, and ways they can respond to what they sense that can help or hurt people. I will discuss social and ethical issues these technologies raise. Finally, I will present some new possibilities for robots to both learn from people and help teach skills of emotional intelligence to people, especially to those with nonverbal learning impairments who often want to learn these skills, including many people with diagnoses of autism spectrum disorders such as Asperger's Syndrome.

  • The Snackbot: Documenting the design of a robot for long-term Human-Robot Interaction

    Publication Year: 2009, Page(s): 7 - 14
    Cited by: Papers (3)
    PDF (4430 KB)

    We present the design of the Snackbot, a robot that will deliver snacks in our university buildings. The robot is intended to provide a useful, continuing service and to serve as a research platform for long-term Human-Robot Interaction. Our design process, which occurred over 24 months, is documented as a contribution for others in HRI who may be developing social robots that offer services. We describe the phases of the design project, and the design decisions and tradeoffs that led to the current version of the robot.

  • Learning about objects with human teachers

    Publication Year: 2009, Page(s): 15 - 22
    PDF (1320 KB)

    A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “non-social” data sets when training SVM classifiers.
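    A minimal sketch of the kind of comparison the last sentence describes, using scikit-learn; the features, labels, and the social/non-social split below are hypothetical stand-ins, not the authors' data:

        # Hypothetical sketch: compare an SVM trained on "social" examples
        # (chosen by a human teacher) against one trained on "non-social"
        # examples (collected without guidance).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Stand-in object features (e.g., shape/color descriptors) and labels.
        X_social, y_social = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
        X_nonsocial, y_nonsocial = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)

        for name, X, y in [("social", X_social, y_social),
                           ("non-social", X_nonsocial, y_nonsocial)]:
            acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
            print(f"{name} training set: mean CV accuracy = {acc:.2f}")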

  • How people talk when teaching a robot

    Publication Year: 2009, Page(s): 23 - 30
    Cited by: Papers (1)
    PDF (535 KB)

    We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naïvely and spontaneously use intensely affective vocalizations. Our findings suggest that modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the assumptions commonly made by machine learning algorithms (in particular, reinforcement learning) that the reward function is stationary and path-independent in social learning interactions. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
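    The stationarity point can be made concrete: standard reinforcement-learning updates assume the reward for an action is drawn from a fixed distribution, whereas the vocal feedback observed here drifts with the learner's performance history. A toy, hypothetical illustration (not the authors' model):

        # Toy illustration: a history-dependent human reward violates the
        # stationary, path-independent assumption of a standard Q-update.
        import random

        def human_reward(correct, recent_successes):
            # Hypothetical teacher: feedback is more intense when the
            # learner has been struggling (path-dependent intensity).
            intensity = 1.0 + max(0, 3 - recent_successes)
            return intensity if correct else -intensity

        q, alpha, streak = 0.0, 0.1, 0
        for step in range(20):
            correct = random.random() < 0.5 + 0.02 * step  # learner improves
            r = human_reward(correct, streak)
            q += alpha * (r - q)            # single-state Q-style update
            streak = streak + 1 if correct else 0
            print(f"step {step:2d}: reward={r:+.1f}  Q={q:+.2f}")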

  • I am my robot: The impact of robot-building and robot form on operators

    Publication Year: 2009, Page(s): 31 - 36
    Cited by: Papers (1)
    PDF (604 KB)

    As robots become more pervasive, operators will develop richer relationships with them. In a 2 (robot form: humanoid vs. car) × 2 (assembler: self vs. other) between-participants experiment (N=56), participants assembled either a humanoid or car robot. Participants then used, in the context of a game, either the robot they built or a different robot. Participants showed greater extension of their self-concept into the car robot and preferred the personality of the car robot over the humanoid robot. People showed greater self-extension into a robot and preferred the personality of the robot they assembled over a robot they believed to be assembled by another. Implications for the theory and design of robots and human-robot interaction are discussed.

  • Egocentric and exocentric teleoperation interface using real-time, 3D video projection

    Publication Year: 2009, Page(s): 37 - 44
    PDF (1845 KB)

    The user interface is the central element of a telepresence robotic system, and its visualization modalities greatly affect the operator's situation awareness, and thus the operator's performance. Depending on the task at hand and the operator's preferences, switching between ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system, which combines a 3D reconstruction of the environment from laser range finder readings with two video projection methods, allows the operator to easily switch between ego- and exocentric viewpoints. This paper presents the interface developed and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task.

  • Robots in the wild: Understanding long-term use

    Publication Year: 2009, Page(s): 45 - 52
    PDF (1492 KB)

    It has long been recognized that novelty effects exist in the interaction with technologies. Despite this recognition, we still know little about the novelty effects associated with domestic robotic appliances and, more importantly, what occurs after the novelty wears off. To address this gap, we undertook a longitudinal field study with 30 households, to which we gave Roomba vacuuming robots and then observed use over six months. During this study, which comprised 149 home visits, we encountered methodological challenges in understanding households' usage patterns. In this paper we report on our longitudinal research, focusing particularly on the methods that we used 1) to understand human-robot interaction over time despite the constraints of privacy and temporality in the home, and 2) to uncover information when routines had become less conscious to the participants themselves.

  • Providing route directions: Design of robot's utterance, gesture, and timing

    Publication Year: 2009, Page(s): 53 - 60
    Cited by: Papers (2)
    PDF (1454 KB)

    Providing route directions is a complicated interaction in which utterances are combined with gestures and delivered with appropriate timing. This study proposes a model for a robot that generates route directions by integrating three crucial elements: utterances, gestures, and timing. Two research questions must be answered in this modeling process. First, is it useful for the robot to gesture even when the information conveyed by the gesture is also given by the utterance? Second, is it useful to implement the timing with which humans speak? Many previous studies of natural behavior for computers and robots have learned from human speakers, for example their gestures and speech timing. Our approach differs from such previous studies in that we emphasize the listener's perspective. Gestures were designed for their usefulness to the listener, although we were influenced by the basic structure of human gestures. Timing was modeled not from how humans speak but from how they listen. The experimental results demonstrate the effectiveness of our approach, not only for task efficiency but also for perceived naturalness.

  • Footing in human-robot conversations: How robots might shape participant roles using gaze cues

    Publication Year: 2009, Page(s): 61 - 68
    Cited by: Papers (9) | Patents (1)
    PDF (3653 KB)

    During conversations, speakers establish their own and others' participant roles (who participates in the conversation and in what capacity), or “footing” as Goffman termed it, using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjects in 36 trials. In three conditions, the robot signaled to two subjects, only by means of gaze, the roles of (1) two addressees, (2) an addressee and a bystander, or (3) an addressee and an overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them. In subjective evaluations, significant differences were observed in feelings of groupness between addressees and others, and in liking between overhearers and others. Participation in the conversation did not affect task performance, measured by recall of information presented by the robot, but it did affect subjects' ratings of how much they attended to the task.
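    A purely illustrative sketch of how gaze might be allocated to signal these three roles; the role names come from the paper, but the time fractions and function below are invented, not the authors' design:

        # Illustrative only: divide a speaking robot's gaze among partners
        # according to the conversational role it intends each to occupy.
        GAZE_SHARE = {          # fraction of speaking time spent on partner
            "addressee": 0.7,   # sustained gaze: "I am talking to you"
            "bystander": 0.25,  # occasional acknowledging glances
            "overhearer": 0.0,  # no gaze: not a ratified participant
        }

        def gaze_schedule(partner_roles, utterance_secs):
            """Return (target, seconds) gaze segments for one utterance."""
            segments = [(p, GAZE_SHARE[role] * utterance_secs)
                        for p, role in partner_roles.items()]
            looked = sum(s for _, s in segments)
            segments.append(("away", utterance_secs - looked))  # gaze aversion
            return segments

        print(gaze_schedule({"A": "addressee", "B": "overhearer"}, 10.0))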

  • Nonverbal leakage in robots: Communication of intentions through seemingly unintentional behavior

    Publication Year: 2009, Page(s): 69 - 76
    Cited by: Papers (2)
    PDF (4548 KB)

    Human communication involves a number of nonverbal cues that are seemingly unintentional, unconscious, and automatic, both in their production and perception, and that convey rich information on the emotional state and intentions of an individual. One family of such cues is called “nonverbal leakage.” In this paper, we explore whether people can read nonverbal leakage cues, particularly gaze cues, in humanlike robots and make inferences about robots' intentions, and whether the physical design of the robot affects these inferences. We designed a gaze cue for Geminoid, a highly humanlike android, and for Robovie, a robot with stylized, abstract humanlike features, that allowed the robots to “leak” information on what they might have in mind. In a controlled laboratory experiment, we asked participants to play a guessing game with one of the robots and evaluated how the gaze cue affected participants' task performance. We found that the gaze cue did, in fact, lead to better performance, from which we infer that the cue led to attributions of mental states and intentionality. Our results have implications for robot design, particularly for designing expressions of intentionality, and for our understanding of how people respond to human social cues when they are enacted by robots.

  • Visual attention in spoken human-robot interaction

    Publication Year: 2009, Page(s): 77 - 84
    Cited by: Papers (1)
    PDF (1840 KB)

    Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and production. It has also been established that interlocutors monitor the gaze of their partners, a phenomenon called “joint attention”, as a further means of facilitating mutual understanding. We hypothesise that human-robot interaction will benefit when the robot's language-related gaze behaviour is similar to that of people, potentially providing the user with valuable non-verbal information concerning the robot's intended message or the robot's successful understanding. We report findings from two eye-tracking experiments demonstrating (1) that human gaze is modulated by both the robot's speech and gaze, and (2) that human comprehension of robot speech is improved when the robot's real-time gaze behaviour is similar to that of humans.

  • An information pipeline model of human-robot interaction

    Publication Year: 2009, Page(s): 85 - 92
    PDF (346 KB)

    This paper investigates the potential usefulness of viewing the system of human, robot, and environment as an “information pipeline” from environment to user and back again. Information theory provides tools for analyzing and maximizing the information rate of each stage of this pipeline, and could thus encompass several common HRI goals: “situational awareness” [6], which can be seen as maximizing the information content of the human's model of the situation; efficient robotic control, which can be seen as finding a good codebook and high throughput for the human-robot channel; and artificial intelligence, which can be assessed by how much it reduces the traffic on all four channels. Analysis of the information content of the four channels indicates that human-to-robot communication tends to be the bottleneck, suggesting the need for greater onboard intelligence and a command interface that can adapt to the situation.
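    As a minimal sketch of the kind of measurement this view invites (the joint distribution below is invented, not from the paper), the throughput of the human-to-robot channel can be estimated as the mutual information between commands sent and commands acted on:

        # Minimal sketch: mutual information I(X;Y) of a noisy human->robot
        # command channel, from an assumed joint distribution over
        # (command sent, command the robot acted on). Values are made up.
        import numpy as np

        joint = np.array([[0.40, 0.05, 0.05],   # rows: command sent
                          [0.05, 0.25, 0.00],   # cols: command executed
                          [0.05, 0.00, 0.15]])
        px, py = joint.sum(axis=1), joint.sum(axis=0)

        mi = sum(joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
                 for i in range(3) for j in range(3) if joint[i, j] > 0)
        print(f"I(sent; executed) = {mi:.3f} bits per command")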

  • Systemic Interaction Analysis (SInA) in HRI

    Publication Year: 2009, Page(s): 93 - 100
    PDF (540 KB)

    Recent developments in robotics enable advanced human-robot interaction. Interactions of novice users with robots, in particular, are often unpredictable and therefore demand novel methods for analyzing the interaction in systemic ways. We propose Systemic Interaction Analysis (SInA) as a method to jointly analyze the system level and the interaction level in an integrated manner using one tool. The approach allows us to trace patterns that deviate from prototypical interaction sequences back to the distinct system components of our autonomous robot. In this paper, we apply the method, as an example, to the analysis of the follow behavior of our domestic robot BIRON. This analysis is the basis for achieving our goal of improving human-robot interaction iteratively.

  • The Oz of Wizard: Simulating the human for interaction research

    Publication Year: 2009, Page(s): 101 - 107
    Cited by: Papers (1)
    PDF (947 KB)

    The Wizard of Oz experiment method has a long tradition of acceptance and use within the field of human-robot interaction. The community has traditionally downplayed the importance of interaction evaluations run with the inverse model: the human simulated to evaluate robot behavior, or “Oz of Wizard”. We argue that such studies play an important role in the field of human-robot interaction. We differentiate between methodologically rigorous human modeling and placeholder simulations using simplified human models. Guidelines are proposed for when Oz of Wizard results should be considered acceptable. This paper also provides a framework for describing the various permutations of Wizard and Oz states.
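    One plausible reading of those permutations, laid out directly in code; the condition labels are our gloss, not necessarily the authors' exact terminology:

        # The four permutations of real vs. simulated human and robot.
        # Labels are one plausible gloss of the framework, not verbatim.
        PERMUTATIONS = {
            ("real human",      "real robot"):      "full system study",
            ("real human",      "simulated robot"): "Wizard of Oz",
            ("simulated human", "real robot"):      "Oz of Wizard",
            ("simulated human", "simulated robot"): "pure simulation",
        }
        for (human, robot), label in PERMUTATIONS.items():
            print(f"{human:15s} x {robot:15s} -> {label}")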

  • How to approach humans? Strategies for social robots to initiate interaction

    Publication Year: 2009, Page(s): 109 - 116
    Cited by: Papers (11)
    PDF (878 KB)

    This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures of a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approach path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.

  • ShadowPlay: A generative model for nonverbal human-robot interaction

    Publication Year: 2009, Page(s): 117 - 124
    PDF (935 KB)

    Humans rely on a finely tuned ability to recognize and adapt to socially relevant patterns in their everyday face-to-face interactions. This allows them to anticipate the actions of others, coordinate their behaviors, and create shared meaning: to communicate. Social robots must likewise be able to recognize and perform relevant social patterns, including interactional synchrony, imitation, and particular sequences of behaviors. We use existing empirical work in the social sciences and observations of human interaction to develop nonverbal interactive capabilities for a robot in the context of shadow puppet play, where people interact through shadows of hands cast against a wall. We show how information-theoretic quantities can be used to model interaction between humans and to generate interactive controllers for a robot. Finally, we evaluate the resulting model in an embodied human-robot interaction study. We show the benefit of modeling interaction as a joint process rather than modeling individual agents.

  • Creating and using matrix representations of social interaction

    Publication Year: 2009, Page(s): 125 - 132
    Cited by: Papers (2)
    PDF (1927 KB)

    This paper explores the use of an outcome matrix as a computational representation of social interaction suitable for implementation on a robot. An outcome matrix expresses the reward afforded to each interacting individual with respect to pairs of potential behaviors. We detail the use of the outcome matrix as a representation of interaction in social psychology and game theory, discuss the need for modeling the robot's interactive partner, and contribute an algorithm for creating outcome matrices from perceptual information. Experimental results explore the use of the algorithm with different types of partners and in different environments.
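    A minimal sketch of the data structure just described; the behaviors and payoff numbers are hypothetical, and the maximin selection rule is our illustration rather than the paper's algorithm:

        # Minimal sketch of an outcome matrix: each (robot action, partner
        # action) pair maps to (robot reward, partner reward). Entries are
        # hypothetical.
        actions_robot = ["guide", "follow"]
        actions_partner = ["lead", "wait"]
        outcome = {
            ("guide", "lead"):  (1, 1),   # both push forward: mild conflict
            ("guide", "wait"):  (3, 2),   # robot guides a waiting partner
            ("follow", "lead"): (2, 3),   # robot defers to the partner
            ("follow", "wait"): (0, 0),   # deadlock: nobody moves
        }

        # Illustration: choose the robot action with the best worst-case
        # reward for the robot (maximin), absent a model of the partner.
        best = max(actions_robot,
                   key=lambda a: min(outcome[(a, b)][0] for b in actions_partner))
        print("maximin robot action:", best)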

  • Developing a model of robot behavior to identify and appropriately respond to implicit attention-shifting

    Publication Year: 2009, Page(s): 133 - 140
    PDF (861 KB)

    In this paper, we present our current research on developing a model of robot behavior that leads to feelings of “being together”, using the robot's body position and orientation. Creating feelings of “being together” will be an essential skill for robots that live with humans and adapt to daily human activities such as walking together or establishing joint attention to information in the environment. We observe people's proxemic behavior in joint attention situations and develop a model of behavior for robots to detect a partner's attention shift and appropriately adjust their body position and orientation when establishing joint attention with the partner. An experimental evaluation demonstrates the model's effectiveness.

  • How search and its subtasks scale in N robots

    Publication Year: 2009, Page(s): 141 - 147
    Cited by: Papers (1)
    PDF (2216 KB)

    The present study investigates the effect of the number of controlled robots on performance of an urban search and rescue (USAR) task using a realistic simulation. Participants controlled either 4, 8, or 12 robots. In the full-task control condition, participants both dictated the robots' paths and controlled their cameras to search for victims. In the exploration condition, participants directed the team of robots to explore as wide an area as possible. In the perceptual search condition, participants searched for victims by controlling cameras mounted on robots following predetermined paths selected to match characteristics of paths generated under the other two conditions. By decomposing the search and rescue task into exploration and perceptual search subtasks, the experiment allows the determination of their scaling characteristics, providing a basis for tentative task allocations between humans and automation for controlling larger robot teams. In the full-task control condition, task performance increased in going from four to eight controlled robots but deteriorated in moving from eight to twelve. Workload increased monotonically with the number of robots. Performance per robot decreased with increases in team size. The results are consistent with earlier studies suggesting a limit of 8 to 12 robots for direct human control.

  • Field trial for simultaneous teleoperation of mobile social robots

    Publication Year: 2009, Page(s): 149 - 156
    Cited by: Papers (2)
    PDF (2176 KB)

    Simultaneous teleoperation of mobile, social robots presents unique challenges, combining the real-time demands of conversation with the prioritized scheduling of navigational tasks. We have developed a system in which a single operator can effectively control four mobile robots performing both conversation and navigation. We compare the teleoperation requirements for mobile, social robots with those of traditional robot systems, and we identify metrics for evaluating task difficulty and operator performance for teleoperation of mobile social robots. As a proof of concept, we present an integrated priority model combining real-time conversational demands and non-real-time navigational demands for operator attention, and in a pioneering study, we apply the model and metrics in a demonstration of our multi-robot system through real-world field trials in a shopping arcade.
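    A hedged sketch of what such an integrated priority model might look like, with conversational requests preempting navigational ones for the single operator's attention; the queue discipline and names below are our invention, not the paper's implementation:

        # Illustrative scheduler: one operator, several robots, with
        # real-time conversational requests outranking non-real-time
        # navigation requests. FIFO order within each priority class.
        import heapq, itertools

        CONVERSATION, NAVIGATION = 0, 1   # lower value = higher priority
        ticket = itertools.count()        # tie-breaker within a class

        queue = []
        def request(robot, kind, note):
            heapq.heappush(queue, (kind, next(ticket), robot, note))

        request("robot-2", NAVIGATION,   "confirm detour around a crowd")
        request("robot-4", CONVERSATION, "customer asked a question")
        request("robot-1", CONVERSATION, "greeting stalled, needs operator")

        while queue:
            kind, _, robot, note = heapq.heappop(queue)
            label = "TALK" if kind == CONVERSATION else "NAV"
            print(f"[{label:4s}] attend to {robot}: {note}")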

  • Mobile human-robot teaming with environmental tolerance

    Publication Year: 2009, Page(s): 157 - 163
    Cited by: Papers (1)
    PDF (2879 KB)

    We demonstrate that structured-light-based depth sensing with standard perception algorithms can enable mobile peer-to-peer interaction between humans and robots. We posit that recent emerging devices for depth-based imaging can enable robot perception of non-verbal cues in human movement in the face of lighting and minor terrain variations. Toward this end, we have developed an integrated robotic system capable of person following and responding to verbal and non-verbal commands under varying lighting conditions and on uneven terrain. The feasibility of our system for peer-to-peer HRI is demonstrated through two trials in indoor and outdoor environments.

  • On using mixed-initiative control: A perspective for managing large-scale robotic teams

    Publication Year: 2009, Page(s): 165 - 172
    Cited by: Papers (1)
    PDF (3537 KB)

    Prior work suggests that the potential benefits of mixed-initiative management of multiple robots are mitigated by situational factors, including workload and operator expertise. In this paper, we present an experiment in which allowing a supervisor and a group of searchers to jointly decide the appropriate level of autonomy for a given situation (“mixed initiative”) results in better overall performance than giving an agent exclusive control over its level of autonomy (“adaptive autonomy”) or giving a supervisor exclusive control over the agent's level of autonomy (“adjustable autonomy”), regardless of the supervisor's expertise or workload. In light of prior work, we identify two elements of our experiment that appear to be requirements for effective mixed-initiative control of large-scale robotic teams: (a) agents must be capable of making progress toward a goal without having to wait for human input in most circumstances, and (b) the operator control interface must help the human rapidly understand and modify the progress and intent of several agents.
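    The three policies compared here reduce to the question of who may set an agent's autonomy level. A sketch under that reading, with invented level names and an arbitration rule of our own choosing for the mixed-initiative case:

        # Illustrative comparison: who may change an agent's autonomy level.
        LEVELS = ["teleoperated", "waypoint", "fully autonomous"]  # low -> high

        def adjustable(agent_pref, supervisor_pref):
            return supervisor_pref          # supervisor has exclusive control

        def adaptive(agent_pref, supervisor_pref):
            return agent_pref               # agent has exclusive control

        def mixed_initiative(agent_pref, supervisor_pref):
            # One plausible joint rule: either party may pull autonomy down
            # (intervene or ask for help); raising it takes agreement.
            return min(agent_pref, supervisor_pref, key=LEVELS.index)

        agent, supervisor = "fully autonomous", "waypoint"
        for policy in (adjustable, adaptive, mixed_initiative):
            print(f"{policy.__name__:16s} -> {policy(agent, supervisor)}")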
