Human-Robot Interaction (HRI), 2012 7th ACM/IEEE International Conference on

Date 5-8 March 2012

Displaying Results 1 - 25 of 147
  • [Front matter]

    Page(s): i - xviii
  • Strategies for human-in-the-loop robotic grasping

    Page(s): 1 - 8

    Human-in-the-loop robotic systems have the potential to handle complex tasks in unstructured environments, by combining the cognitive skills of a human operator with autonomous tools and behaviors. Along these lines, we present a system for remote human-in-the-loop grasp execution. An operator uses a computer interface to visualize a physical robot and its surroundings, and a point-and-click mouse interface to command the robot. We implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose, to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator. Our controlled experiment (N=48) results indicate that people were able to successfully grasp more objects and caused fewer unwanted collisions when using the strategies with more autonomous assistance. We used an untethered robot over wireless communications, making our strategies applicable for remote, human-in-the-loop robotic applications.

  • Grip forces and load forces in handovers: Implications for designing human-robot handover controllers

    Page(s): 9 - 16

    In this study, we investigate and characterize haptic interaction in human-to-human handovers and identify key features that facilitate safe and efficient object transfer. Eighteen participants worked in pairs and transferred weighted objects to each other while we measured their grip forces and load forces. Our data show that during object transfer, both the giver and receiver employ a similar strategy for controlling their grip forces in response to changes in load forces. In addition, an implicit social contract appears to exist in which the giver is responsible for ensuring object safety in the handover and the receiver is responsible for maintaining the efficiency of the handover. Compared with prior studies, our analysis of experimental data shows that there are important differences between the strategies humans use for picking up/placing objects on a table and those used for handing over objects, indicating the need for specific robot handover strategies as well. The results of this study will be used to develop a controller for enabling robots to perform object handovers with humans safely, efficiently, and intuitively.

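    The grip-force/load-force coupling described in the abstract above suggests a simple controller rule: command grip force in proportion to the sensed load force, and have the giver release only once the receiver has taken up the load. A minimal sketch follows; the gain `k`, safety margin `f_min`, and release criterion are illustrative assumptions, not values reported in the paper.

```python
# Sketch of a load-force-coupled grip controller for robot handovers.
# Gain k, safety margin f_min, and the release criterion are assumed
# values for illustration, not parameters reported in the study.

def grip_force_command(load_force, k=1.5, f_min=2.0):
    """Command a grip force proportional to the sensed load force,
    with a lower bound so the object is never held too loosely."""
    return max(k * load_force, f_min)

def release_fraction(initial_load, current_load):
    """Fraction of the object's weight already taken up by the
    receiver; the giver can release once this approaches 1."""
    if initial_load <= 0:
        return 0.0
    return min(max(1.0 - current_load / initial_load, 0.0), 1.0)
```

    For example, if the giver initially bears a 10 N load and the sensed load drops to 2.5 N during the transfer, the receiver is carrying 75% of the weight and the giver's grip command scales down accordingly.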
  • Designing robot learners that ask good questions

    Page(s): 17 - 24

    Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently attracted a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether or not there is a general preference for one type of question over another. Based on our findings from both experiments, we provide guidelines for designing question-asking behaviors on a robot learner.

  • Robot behavior toolkit: Generating effective social behaviors for robots

    Page(s): 25 - 32

    Social interaction involves a large number of patterned behaviors that people employ to achieve particular communicative goals. To achieve fluent and effective humanlike communication, robots must seamlessly integrate the necessary social behaviors for a given interaction context. However, very little is known about how robots might be equipped with a collection of such behaviors and how they might employ these behaviors in social interaction. In this paper, we propose a framework that guides the generation of social behavior for humanlike robots by systematically using specifications of social behavior from the social sciences and contextualizing these specifications in an Activity-Theory-based interaction model. We present the Robot Behavior Toolkit, an open-source implementation of this framework as a Robot Operating System (ROS) module and a community-based repository for behavioral specifications, and an evaluation of the effectiveness of the Toolkit in using these specifications to generate social behavior in a human-robot interaction study, focusing particularly on gaze behavior. The results show that specifications from this knowledge base enabled the Toolkit to achieve positive social, cognitive, and task outcomes, such as improved information recall, collaborative work, and perceptions of the robot.

  • Do people hold a humanoid robot morally accountable for the harm it causes?

    Page(s): 33 - 40

    Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.

  • Social facilitation with social robots?

    Page(s): 41 - 47

    Regarding the future use of social robots in workplace scenarios, we addressed the question of whether mere robotic presence affects human performance. Applying the experimental social facilitation paradigm in social robotics, we compared the task performance of 106 participants on easy and complex cognitive and motor tasks across three presence groups (alone vs. human present vs. robot present). Results revealed significant evidence for the predicted social facilitation effects for both human and robotic presence compared to an alone condition. Implications of these findings are discussed with regard to the interaction of robotic presence and task difficulty in modeling robotic assistance systems.

  • New measurement of psychological safety for humanoid

    Page(s): 49 - 56

    In this article, we aim to discover the important factors that determine the psychological safety of humanoids and to develop a new psychological scale to measure the degree of safety quantitatively. To discover these factors from an ordinary person's perspective, we studied 919 Japanese participants, who watched videos of 11 humanoids and then freely described their impressions of what the safety of each humanoid meant for them. Five psychologists categorized all of the obtained descriptions into several categories and then used the categories to compose a new psychological scale. Then, 2,624 different Japanese participants evaluated the same 11 humanoids using the new scale. Factor analysis on the obtained quantitative data revealed six factors of psychological safety: Performance, Humanness, Acceptance, Harmlessness, Toughness, and Agency. Additional analysis revealed that Performance, Acceptance, Harmlessness, and Toughness were the most important factors for determining the psychological safety of general humanoids. The usability of the new scale is discussed.

  • Consistency in physical and on-screen action improves perceptions of telepresence robots

    Page(s): 57 - 64

    Does augmented movement capability improve people's experiences with telepresent meeting participants? We performed two web-based studies featuring videos of a telepresence robot. In the first study (N=164), participants observed clips of typical conversational gestures performed a) on a stationary screen only, b) with an actuated screen moving in physical space, or c) both on-screen and in-space. In the second study (N=103), participants viewed scenario videos depicting two people interacting with a remote collaborator through a telepresence robot, whose distant actions were a) visible on the screen only, or b) accompanied by local physical motion. These studies suggest that synchronized on-screen and in-space gestures significantly improved viewers' interpretation of the action compared to on-screen or in-space gestures alone, and that in-space gestures positively influenced perceptions of both local and remote participants.

  • Real world haptic exploration for telepresence of the visually impaired

    Page(s): 65 - 72

    Robotic assistance through telepresence technology is an emerging area in aiding the visually impaired. By integrating the robotic perception of a remote environment and transferring it to a human user through haptic environmental feedback, the user can increase their capability to interact with remote environments through the telepresence robot. This paper presents a framework that integrates visual perception from heterogeneous vision sensors and enables real-time interactive haptic representation of the real world through a mobile manipulation robotic system. Specifically, a set of multi-disciplinary algorithms such as stereovision processes, three-dimensional map building algorithms, and virtual-proxy haptic rendering processes are integrated into a unified framework to accomplish the goal of real-world haptic exploration. Results of our framework in an indoor environment are presented, and its performance is analyzed. Quantitative results are provided along with qualitative results from a set of human subject tests. Our future work includes real-time haptic fusion of multi-modal environmental perception and more extensive human subject testing in a prolonged experimental design.

  • Effects of changing reliability on trust of robot systems

    Page(s): 73 - 80

    Prior work in human-autonomy interaction has focused on plant systems that operate in highly structured environments. In contrast, many human-robot interaction (HRI) tasks are dynamic and unstructured, occurring in the open world. It is our belief that methods developed for the measurement and modeling of trust in traditional automation need alteration in order to be useful for HRI. Therefore, it is important to characterize the factors in HRI that influence trust. This study focused on the influence of changing autonomy reliability. Participants experienced a set of challenging robot handling scenarios that forced autonomy use and kept them focused on autonomy performance. The counterbalanced experiment included scenarios with different low reliability windows so that we could examine how drops in reliability altered trust and use of autonomy. Drops in reliability were shown to affect trust, the frequency and timing of autonomy mode switching, as well as participants' self-assessments of performance. A regression analysis on a number of robot, personal, and scenario factors revealed that participants tie trust more strongly to their own actions rather than robot performance.

  • Teamwork in controlling multiple robots

    Page(s): 81 - 88

    Simultaneously controlling increasing numbers of robots requires multiple operators working together as a team. Helping operators allocate attention among different robots and determining how to construct the human-robot team to promote performance and reduce workload are critical questions that must be answered in these settings. To this end, we investigated the effect of team structure and search guidance on operators' performance, subjective workload, work processes and communication. To investigate team structure in an urban search and rescue setting, we compared a pooled condition, in which team members shared control of 24 robots, with a sector condition, in which each team member controlled half of the robots. For search guidance, a notification was given when an operator spent too much time on one robot and either suggested or forced the operator to switch to another robot. A total of 48 participants completed the experiment, with two persons forming one team. The results demonstrate that automated search guidance neither increased nor decreased performance. However, suggested search guidance decreased average task completion time in sector teams. Search guidance also influenced operators' teleoperation behaviors. For team structure, pooled teams experienced lower subjective workload than sector teams. Pooled teams communicated more than sector teams, but sector teams teleoperated more than pooled teams.

  • Towards human control of robot swarms

    Page(s): 89 - 96

    In this paper we investigate principles of swarm control that enable a human operator to exert influence on and control large swarms of robots. We present two principles, coined selection and beacon control, that differ with respect to their temporal and spatial persistence. The former requires active selection of groups of robots while the latter exerts a passive influence on nearby robots. Both principles are implemented in a testbed in which operators exert influence on a robot swarm by switching between a set of behaviors ranging from trivial behaviors up to distributed autonomous algorithms. Performance is tested in a series of complex foraging tasks in environments with different obstacles ranging from open to cluttered and structured. The robotic swarm has only local communication and sensing capabilities with the number of robots ranging from 50 to 200. Experiments with human operators utilizing either selection or beacon control are compared with each other and to a simple autonomous swarm with regard to performance, adaptation to complex environments, and scalability to larger swarms. Our results show superior performance of autonomous swarms in open environments, of selection control in complex environments, and indicate a potential for scaling beacon control to larger swarms.

  • Designing interfaces for multi-user, multi-robot systems

    Page(s): 97 - 104

    The use of autonomous robots in organizations is expected to increase steadily over the next few decades. Although some empirical work exists that examines how people collaborate with robots, little is known about how to best design interfaces to support operators in understanding aspects of the task or tasks at hand. This paper presents a design investigation to understand how interfaces should be designed to support multi-user, multi-robot teams. Through contextual inquiry, concept generation, and concept evaluation, we determine what operators should see, and with what salience different types of information should be presented. We present our findings through a series of design questions that development teams can use to help define interaction and design interfaces for these systems.

  • A touchscreen-based ‘Sandtray’ to facilitate, mediate and contextualise human-robot social interaction

    Page(s): 105 - 106

    In the development of companion robots capable of any-depth, long-term interaction, social scenarios enable exploration of the robot's capacity to engage a human interactant. These scenarios are typically constrained to structured task-based interactions, to enable the quantification of results for the comparison of differing experimental conditions. This paper introduces a hardware setup to facilitate and mediate human-robot social interaction, simplifying the robot control task while enabling an equalised degree of environmental manipulation for the human and robot, but without implicitly imposing an a priori interaction structure.

  • Children's knowledge and expectations about robots: A survey for future user-centered design of social robots

    Page(s): 107 - 108

    This paper seeks to establish a precedent for future development and design of social robots by considering the knowledge and expectations about robots of a group of 296 children. Human-robot interaction experiments were conducted with a teleoperated anthropomorphic robot, and surveys were taken before and after the experiments. Children were also asked to draw a robot. An image analysis algorithm was developed to classify drawings into four types: Anthropomorphic Mechanic (AM), Anthropomorphic Non-Mechanic (AnM), Non-Anthropomorphic Mechanic (nAM), and Non-Anthropomorphic Non-Mechanic (nAnM). The image analysis algorithm was used in combination with human classification in a 2oo3 (two-out-of-three) voting scheme to find children's strongest stereotype about robots. Survey and image analysis results suggest that children in general have some knowledge about robots, and some children even have a deep understanding of and expectations for future robots. Moreover, children's strongest stereotype is directed towards mechanical anthropomorphic systems.

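    The two-out-of-three vote used in the abstract above to combine the image-analysis algorithm with human raters can be sketched as follows; the label names match the abstract's drawing types, while the tie-breaking rule (fall back to the first rater) is an assumption.

```python
from collections import Counter

# Two-out-of-three (2oo3) majority vote over three classification
# labels. The fallback when all three raters disagree is an assumed
# rule, not one described in the paper.

def vote_2oo3(a, b, c):
    """Return the label chosen by at least two of the three raters;
    if all three disagree, fall back to the first rater's label."""
    label, count = Counter([a, b, c]).most_common(1)[0]
    return label if count >= 2 else a
```

    For example, if the algorithm says "AM" and both human raters say "nAM", the 2oo3 vote assigns the drawing to "nAM".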
  • Human-robot interaction: Developing trust in robots

    Page(s): 109 - 110

    In all human-robot interaction, trust is an important element to consider because the presence or absence of trust certainly impacts the ultimate outcome of that interaction. Limited research exists that delineates the development and maintenance of this trust in various operational contexts. Our own prior research has investigated theoretical and empirically supported antecedents of human-robot trust. Here, we describe progress to date relating to the development of a comprehensive human-robot trust model based on our ongoing program of research.

  • Dynamic gesture vocabulary design for intuitive human-robot dialog

    Page(s): 111 - 112

    This paper presents a generalized method for the design of a gesture vocabulary (GV) for intuitive and natural two-way human-robot dialog. Two GV design methodologies are proposed: one for a robot GV (RGV) and a second for a human GV (HGV). The design is based on motion gestures elicited from a cohort of subjects in response to a set of tasks needed to execute several robot waiter (RW)-customer dialogs. Using an RW setting as a case study, preliminary experimental results indicate the unique nature of the HGV obtained.

  • Design of a haptic joystick for shared robot control

    Page(s): 113 - 114

    Autonomous mobile robots are often equipped with sophisticated sensors designed to provide the system with a model of its surrounding environment. This information can then be used for making task-related decisions and conveying information back to the operator. To date, autonomous systems tend to excel at well-defined tasks such as navigation, planning, and obstacle avoidance, usually in fairly structured environments. However, for many current mobile robotic systems, teleoperated control is still largely favored, in part due to a human operator's sophisticated ability to reason about unstructured environments [6]. Introducing varying levels of autonomy into a teleoperated system allows a human operator to make high-level decisions while leaving other tasks to the autonomy [5]. With this technique, problems can arise when the human operator does not understand why a part of the system they do not have direct control over is behaving in a particular manner (see Figure 1), usually due to poor situation awareness [1]. Attempts have been made to correct these issues by displaying additional sensor and system state information in the operator control unit (e.g., [8]).

  • User experience of industrial robots over time

    Page(s): 115 - 116

    This paper reports on a User Experience (UX) study of industrial robotic arms in the context of a semiconductor factory cleanroom. The goal was to find out (1) if there is a difference in UX between a robot used for years with a strict security perimeter (robot A) and a newly installed robot without a security perimeter (robot B), and (2) if the UX ratings of the new robot change over time. To this end, a UX questionnaire was developed and handed out to the operators working with these robots. The first survey was conducted one week after the deployment of robot B (n=23), the second survey (n=21) six months later. We found that time is crucial for experiencing human-robot interaction. Our results showed an improvement between the first and second measurement of UX regarding robot B. Although robot A was rated significantly better than robot B in terms of usability, general UX, cooperation, and stress, we assume that the differences in UX will decrease gradually with prolonged interaction.

  • Visual cues-based anticipation for percussionist-robot interaction

    Page(s): 117 - 118

    Visual cues-based anticipation is a fundamental aspect of human-human interaction, and it plays an especially important role in the time-demanding medium of group performance. In this work we explore the importance of visual gesture anticipation in music performance involving a human and a robot. We study the case in which a human percussionist is playing a four-piece percussion set, and a robot musician is playing either the marimba or a three-piece percussion set. Computer vision is used to embed anticipation in the robotic response to the human gestures. We developed two algorithms for anticipation, predicting the strike location about 10 milliseconds or about 100 milliseconds before it occurs. Using the second algorithm, we show that the robot outperforms, on average, a group of human subjects in synchronizing its gesture with a reference strike. We also show that, in the tested group of users, having some time in advance is important for a human to synchronize the strike with a reference player, but beyond a certain lead time this benefit stops increasing.

  • Socially constrained management of power resources for social mobile robots

    Page(s): 119 - 120

    Autonomous robots acting as companions or assistants in real social environments should be able to sustain themselves and operate over an extended period of time. Generally, autonomous mobile robots draw power from batteries to operate various sensors and actuators and perform tasks. Batteries have a limited power life and take a long time to recharge via a power source, which may impede human-robot interaction and task performance. Thus, it is important for social robots to manage their energy. This paper discusses an approach to managing power resources on a mobile robot with regard to social aspects for creating life-like autonomous social robots.

  • Sensorless collision detection and control by physical interaction for wheeled mobile robots

    Page(s): 121 - 122

    In this paper, we present the adaptation of a sensorless (in De Luca's sense [1], i.e., without the use of extra sensors) collision detection approach previously used on robotic arms to wheeled mobile robots. The method is based on detecting the torque disturbance and does not require a model of the robot's dynamics. We then consider the feasibility of developing control-by-physical-interaction strategies using the described adapted technique.

  • Assistive teleoperation for manipulation tasks

    Page(s): 123 - 124

    How should a human user and a robot collaborate during teleoperation? The user understands the full semantics of the task: they know, for example, what the robot should search for in a cupboard, or that it should be more careful when moving near a glass of water than near a box of tissues. Since the robot might not have this knowledge, allowing it to operate fully autonomously may be risky; its model is incomplete and its policy might be wrong. On the other hand, teleoperating the robot through every motion is slow and tiresome, especially on difficult tasks. Between these two extremes lies a spectrum, from almost no assistance at all (very timid) to full autonomy (very aggressive). So what is the appropriate level of assistance? And how do factors like task difficulty and policy correctness affect this decision?

  • ‘If you sound like me, you must be more human’: On the interplay of robot and user features on human-robot acceptance and anthropomorphism

    Page(s): 125 - 126

    In an experiment we manipulated a robot's voice in two ways: First, we varied robot gender; second, we equipped the robot with a human-like or a robot-like synthesized voice. Moreover, we took into account user gender and tested effects of these factors on human-robot acceptance, psychological closeness and psychological anthropomorphism. When participants formed an impression of a same-gender robot, the robot was perceived more positively. Participants also felt more psychological closeness to the same-gender robot. Similarly, the same-gender robot was anthropomorphized more strongly, but only when it utilized a human-like voice. Results indicate that a projection mechanism could underlie these effects.
