
2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Date: 9-11 March 2007


Displaying Results 1 - 25 of 50
  • [Title pages]

    Publication Year: 2007 , Page(s): i - x
    Freely Available from IEEE
  • Effects of anticipatory action on human-robot teamwork: Efficiency, fluency, and perception of team

    Publication Year: 2007 , Page(s): 1 - 8
    Cited by:  Papers (3)

    A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic team-mates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups.
  • Human control for cooperating robot teams

    Publication Year: 2007 , Page(s): 9 - 16

    Human control of multiple robots has been characterized by the average demand of single robots on human attention or the distribution of demands from multiple robots. When robots are allowed to cooperate autonomously, however, demands on the operator should be reduced by the amount previously required to coordinate their actions. The present experiment compares control of small robot teams in which cooperating robots explored autonomously, were controlled independently by an operator or through mixed initiative as a cooperating team. Mixed initiative teams found more victims and searched wider areas than either fully autonomous or manually controlled teams. Operators who switched attention between robots more frequently were found to perform better in both manual and mixed initiative conditions.
  • Natural person-following behavior for social robots

    Publication Year: 2007 , Page(s): 17 - 24
    Cited by:  Papers (3)

    We are developing robots with socially appropriate spatial skills not only to travel around or near people, but also to accompany people side-by-side. As a step toward this goal, we are investigating the social perceptions of a robot's movement as it follows behind a person. This paper discusses our laser-based person-tracking method and two different approaches to person-following: direction-following and path-following. While both algorithms have similar characteristics in terms of tracking performance and following distances, participants in a pilot study rated the direction-following behavior as significantly more human-like and natural than the path-following behavior. We argue that the path-following method may still be more appropriate in some situations, and we propose that the ideal person-following behavior may be a hybrid approach, with the robot automatically selecting which method to use.
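As a rough illustration of the two person-following strategies this abstract contrasts: direction-following steers toward the person's current position, while path-following retraces the person's recorded path. The function names, the 2-D point representation, and the `reach` threshold are our assumptions for illustration, not the authors' code.

```python
from collections import deque

def direction_following_target(robot, person_path):
    # Direction-following: head straight for where the person is now,
    # i.e. the most recent point on the recorded path.
    return person_path[-1]

def path_following_target(robot, person_path, reach=0.5):
    # Path-following: aim at the earliest recorded point that the robot
    # has not yet reached (farther than `reach` away), so the robot
    # retraces the person's actual trajectory.
    for p in person_path:
        if ((p[0] - robot[0]) ** 2 + (p[1] - robot[1]) ** 2) ** 0.5 > reach:
            return p
    return person_path[-1]

# A short recorded path: the person walked right, then up, then right.
path = deque([(0, 0), (1, 0), (1, 1), (2, 1)], maxlen=100)
print(direction_following_target((0, 0), path))  # (2, 1)
print(path_following_target((0, 0), path))       # (1, 0)
```

The printed targets show the behavioral difference: from the start position, direction-following cuts the corner toward the person, while path-following first heads to the corner point the person actually passed through.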
  • Managing autonomy in robot teams: Observations from four experiments

    Publication Year: 2007 , Page(s): 25 - 32
    Cited by:  Papers (1)

    It is often desirable for a human to manage multiple robots. Autonomy is required to keep workload within tolerable ranges, and dynamically adapting the type of autonomy may be useful for responding to environment and workload changes. We identify two management styles for managing multiple robots and present results from four experiments that have relevance to dynamic autonomy within these two management styles. These experiments, which involved 80 subjects, suggest that individual and team autonomy benefit from attention management aids, adaptive autonomy, and proper information abstraction.
  • Developing performance metrics for the supervisory control of multiple robots

    Publication Year: 2007 , Page(s): 33 - 40

    Efforts are underway to make it possible for a single operator to effectively control multiple robots. In these high workload situations, many questions arise including how many robots should be in the team (Fan-out), what level of autonomy should the robots have, and when should this level of autonomy change (i.e., dynamic autonomy). We propose that a set of metric classes should be identified that can adequately answer these questions. Toward this end, we present a potential set of metric classes for human-robot teams consisting of a single human operator and multiple robots. To test the usefulness and appropriateness of this set of metric classes, we conducted a user study with simulated robots. Using the data obtained from this study, we explore the ability of this set of metric classes to answer these questions.
  • Adapting GOMS to model human-robot interaction

    Publication Year: 2007 , Page(s): 41 - 48

    A formal interaction modeling technique known as Goals, Operators, Methods, and Selection rules (GOMS) is well-established in human-computer interaction as a cost-effective way of evaluating designs without the participation of end users. This paper explores the use of GOMS for evaluating human-robot interaction. We provide a case study in the urban search-and-rescue domain and raise issues for developing GOMS models that have not been previously addressed. Further, we provide rationale for selecting different types of GOMS modeling techniques to help the analyst model human-robot interfaces.
  • Interactive robot task training through dialog and demonstration

    Publication Year: 2007 , Page(s): 49 - 56
    Cited by:  Papers (1)

    Effective human/robot interfaces which mimic how humans interact with one another could ultimately lead to robots being accepted in a wider domain of applications. We present a framework for interactive task training of a mobile robot where the robot learns how to do various tasks while observing a human. In addition to observation, the robot listens to the human's speech and interprets the speech as behaviors that are required to be executed. This is especially important where individual steps of a given task may have contingencies that have to be dealt with depending on the situation. Finally, the context of the location where the task takes place and the people present factor heavily into the robot's interpretation of how to execute the task. In this paper, we describe the task training framework, describe how environmental context and communicative dialog with the human help the robot learn the task, and illustrate the utility of this approach with several experimental case studies.
  • Learning by demonstration with critique from a human teacher

    Publication Year: 2007 , Page(s): 57 - 64

    Learning by demonstration can be a powerful and natural tool for developing robot control policies. That is, instead of tedious hand-coding, a robot may learn a control policy by interacting with a teacher. In this work we present an algorithm for learning by demonstration in which the teacher operates in two phases. The teacher first demonstrates the task to the learner. The teacher next critiques learner performance of the task. This critique is used by the learner to update its control policy. In our implementation we utilize a 1-Nearest Neighbor technique which incorporates both training dataset and teacher critique. Since the teacher critiques performance only, they do not need to guess at an effective critique for the underlying algorithm. We argue that this method is particularly well-suited to human teachers, who are generally better at assigning credit to performances than to algorithms. We have applied this algorithm to the simulated task of a robot intercepting a ball. Our results demonstrate improved performance with teacher critiquing, where performance is measured by both execution success and efficiency.
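A minimal sketch of the two-phase idea this abstract describes: demonstrations stored as (state, action) pairs, with teacher critique folded into a 1-Nearest-Neighbor policy as a per-example credit score. The class name, the credit-scaled distance, and the 1.5/0.5 adjustment factors are illustrative assumptions, not the paper's algorithm.

```python
import math

class CritiquedNN:
    def __init__(self):
        self.examples = []  # list of [state, action, credit]

    def demonstrate(self, state, action):
        # Phase 1: the teacher demonstrates; store with neutral credit.
        self.examples.append([state, action, 1.0])

    def act(self, state):
        # Choose the action of the best nearest neighbor: distance is
        # divided by credit, so poorly rated examples are effectively
        # pushed away and well rated ones pulled closer.
        best = min(self.examples,
                   key=lambda ex: math.dist(state, ex[0]) / ex[2])
        return best[1]

    def critique(self, state, good):
        # Phase 2: the teacher rates a performance; only the credit of
        # the example that produced it is adjusted, so the teacher never
        # needs to reason about the underlying algorithm.
        nearest = min(self.examples, key=lambda ex: math.dist(state, ex[0]))
        nearest[2] *= 1.5 if good else 0.5

policy = CritiquedNN()
policy.demonstrate((0.0, 0.0), "wait")
policy.demonstrate((1.0, 1.0), "intercept")
print(policy.act((0.9, 0.8)))  # → intercept
```

Repeated negative critique of the "intercept" demonstration shrinks its credit until the policy falls back to the other example, which is the credit-assignment behavior the abstract argues suits human teachers.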
  • Efficient model learning for dialog management

    Publication Year: 2007 , Page(s): 65 - 72
    Cited by:  Papers (1)

    Intelligent planning algorithms such as the Partially Observable Markov Decision Process (POMDP) have succeeded in dialog management applications [10, 11, 12] because they are robust to the inherent uncertainty of human interaction. Like all dialog planning systems, however, POMDPs require an accurate model of the user (e.g., what the user might say or want). POMDPs are generally specified using a large probabilistic model with many parameters. These parameters are difficult to specify from domain knowledge, and gathering enough data to estimate the parameters accurately a priori is expensive. In this paper, we take a Bayesian approach to learning the user model simultaneously with dialog manager policy. At the heart of our approach is an efficient incremental update algorithm that allows the dialog manager to replan just long enough to improve the current dialog policy given data from recent interactions. The update process has a relatively small computational cost, preventing long delays in the interaction. We are able to demonstrate a robust dialog manager that learns from interaction data, out-performing a hand-coded model in simulation and in a robotic wheelchair application.
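The core idea of learning the user model incrementally from interaction data can be sketched with a much simpler stand-in than a full POMDP: a categorical model of user utterances under a symmetric Dirichlet prior, updated by counting after each interaction. All names and the prior value here are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter

class DirichletUserModel:
    def __init__(self, utterances, prior=1.0):
        # Symmetric Dirichlet prior: every utterance starts with the same
        # pseudo-count, encoding uncertainty before any data is seen.
        self.counts = Counter({u: prior for u in utterances})

    def observe(self, utterance):
        # Incremental Bayesian update: one observed interaction adds one
        # count, so a planner can replan against the refreshed posterior
        # immediately, without an expensive batch re-estimation step.
        self.counts[utterance] += 1

    def prob(self, utterance):
        # Posterior predictive probability of the utterance.
        total = sum(self.counts.values())
        return self.counts[utterance] / total

model = DirichletUserModel(["yes", "no", "go left"])
for heard in ["yes", "yes", "no"]:
    model.observe(heard)
print(round(model.prob("yes"), 2))  # → 0.5
```

The cheapness of this count-based update is the point: the expensive part of the authors' approach is replanning the dialog policy against the updated model, not the model update itself.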
  • Using vision, acoustics, and natural language for disambiguation

    Publication Year: 2007 , Page(s): 73 - 80
    Cited by:  Patents (3)

    Creating a human-robot interface is a daunting experience. Capabilities and functionalities of the interface are dependent on the robustness of many different sensor and input modalities. For example, object recognition poses problems for state-of-the-art vision systems. Speech recognition in noisy environments remains problematic for acoustic systems. Natural language understanding and dialog are often limited to specific domains and baffled by ambiguous or novel utterances. Plans based on domain-specific tasks limit the applicability of dialog managers. The types of sensors used limit spatial knowledge and understanding, and constrain cognitive issues, such as perspective-taking. In this research, we are integrating several modalities, such as vision, audition, and natural language understanding to leverage the existing strengths of each modality and overcome individual weaknesses. We are using visual, acoustic, and linguistic inputs in various combinations to solve such problems as the disambiguation of referents (objects in the environment), localization of human speakers, and determination of the source of utterances and appropriateness of responses when humans and robots interact. For this research, we limit our consideration to the interaction of two humans and one robot in a retrieval scenario. This paper will describe the system and integration of the various modules prior to future testing.
  • To kill a mockingbird robot

    Publication Year: 2007 , Page(s): 81 - 87

    Robots are being introduced in our society but their social status is still unclear. A critical issue is whether the robot's exhibition of intelligent life-like behavior leads to the users' perception of animacy. The ultimate test for the life-likeness of a robot is to kill it. We therefore conducted an experiment in which the robot's intelligence and the participants' gender were the independent variables and the users' destructive behavior toward the robot was the dependent variable. Several practical and methodological problems compromised the acquired data, but we can conclude that the robot's intelligence had a significant influence on the users' destructive behavior. We discuss the encountered problems and the possible application of this animacy measuring method.
  • A dancing robot for rhythmic social interaction

    Publication Year: 2007 , Page(s): 89 - 96
    Cited by:  Patents (2)

    This paper describes a robotic system that uses dance as a form of social interaction to explore the properties and importance of rhythmic movement in general social interaction. The system consists of a small creature-like robot whose movement is controlled by a rhythm-based software system. Environmental rhythms can be extracted from auditory or visual sensory stimuli, and the robot synchronizes its movement to a dominant rhythm. The system was demonstrated, and an exploratory study conducted, with children interacting with the robot in a generalized dance task. Through a behavioral analysis of videotaped interactions, we found that the robot's synchronization with the background music had an effect on children's interactive involvement with the robot. Furthermore, we observed a number of expected and unexpected styles and modalities of interactive exploration and play that inform our discussion on the next steps in the design of a socially rhythmic robotic system.
  • The interactive robotic percussionist - new developments in form, mechanics, perception and interaction design

    Publication Year: 2007 , Page(s): 97 - 104
    Cited by:  Papers (2)

    We present new developments in the improvisational robotic percussionist project, aimed at improving human-robot interaction through design, mechanics, and perceptual modeling. Our robot, named Haile, listens to live human players, analyzes perceptual aspects of their playing in real-time, and uses the product of this analysis to play along in a collaborative and improvisatory manner. It is designed to combine the benefits of computational power in algorithmic music with the expression and visual interactivity of acoustic playing. Haile's new features include an anthropomorphic form, a linear-motor based robotic arm, a novel perceptual modeling implementation, and a number of new interaction schemes. The paper begins with an overview of related work and a presentation of goals and challenges based on Haile's original design. We then describe new developments in physical design, mechanics, perceptual implementation, and interaction design, aimed at improving human-robot interactions with Haile. The paper concludes with a description of a user study, conducted in an effort to evaluate the new functionalities and their effectiveness in facilitating expressive musical human-robot interaction. The results of the study show correlation between humans' and Haile's rhythmic perception, as well as user satisfaction regarding Haile's perceptual and mechanical abilities. The study also indicates areas for improvement, such as the need for better timbre and loudness control and more advanced and responsive interaction schemes.
  • Using proprioceptive sensors for categorizing human-robot interactions

    Publication Year: 2007 , Page(s): 105 - 112
    Cited by:  Papers (1)

    Increasingly, researchers are looking outside of normal communication channels (such as video and audio) to provide additional forms of communication or interaction between a human and a robot, or a robot and its environment. Amongst the new channels being investigated is the detection of touch using infrared, proprioceptive and temperature sensors. Our work aims at developing a system that can detect natural touch or interaction coming from children playing with a robot, and adapt to this interaction. This paper reports trials carried out using Roball, a spherical mobile robot, demonstrating how sensory data patterns can be identified in human-robot interaction and exploited for achieving behavioral adaptation. The experimental methodology used for these trials is reported, which validated the hypothesis that human interaction can not only be perceived through proprioceptive sensors on board a robotic platform, but that this perception can also drive behavioral adaptation.
  • Improving human-robot interaction through adaptation to the auditory scene

    Publication Year: 2007 , Page(s): 113 - 120
    Cited by:  Papers (2)

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot ego-noise, echoes, and human interference are all common sources of decreased intelligibility. In real-world environments, however, these common problems are supplemented with many different types of background noise sources. For instance, military scenarios might be punctuated by high decibel plane noise and bursts from weaponry that mask parts of the speech output from the robot. Even in non-military settings, however, fans, computers, alarms, and transportation noise can cause enough interference that they might render a traditional speech interface unintelligible. In this work, we seek to overcome these problems by applying robotic advantages of sensing and mobility to a text-to-speech interface. Using perspective taking skills to predict how the human user is being affected by new sound sources, a robot can adjust its speaking patterns and/or reposition itself within the environment to limit the negative impact on intelligibility, making a speech interface easier to use.
  • Group attention control for communication robots with Wizard of OZ approach

    Publication Year: 2007 , Page(s): 121 - 128

    This paper describes a group attention control (GAC) system that enables a communication robot to simultaneously interact with many people. GAC is based on controlling social situations and indicating explicit control to unify all purposes of attention. We implemented a semi-autonomous GAC system into a communication robot that guides visitors to exhibits in a science museum and engages in free-play interactions with them. The GAC system's effectiveness was demonstrated in a two-week experiment in the museum. We believe these results will allow us to develop interactive humanoid robots that can interact effectively with groups of people.
  • How robotic products become social products: An ethnographic study of cleaning in the home

    Publication Year: 2007 , Page(s): 129 - 136
    Cited by:  Papers (6)

    Robots that work with people foster social relationships between people and systems. The home is an interesting place to study the adoption and use of these systems. The home provides challenges from both technical and interaction perspectives. In addition, the home is a seat for many specialized human behaviors and needs, and has a long history of what is collected and used to functionally, aesthetically, and symbolically fit the home. To understand the social impact of robotic technologies, this paper presents an ethnographic study of consumer robots in the home. Six families' experience of floor cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a handheld upright) was studied. While the Flair had little impact, the Roomba changed people, cleaning activities, and other product use. In addition, people described the Roomba in aesthetic and social terms. The results of this study, while initial, generate implications for how robots should be designed for the home.
  • Humanoid robots as a passive-social medium - a field experiment at a train station

    Publication Year: 2007 , Page(s): 137 - 144
    Cited by:  Papers (10)

    This paper reports a method that uses humanoid robots as a communication medium. There are many interactive robots under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information that robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium.
  • Comparing a computer agent with a humanoid robot

    Publication Year: 2007 , Page(s): 145 - 152

    HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people's responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots.
  • Experiments with a robotic computer: Body, affect and cognition interactions

    Publication Year: 2007 , Page(s): 153 - 160
    Cited by:  Patents (1)

    We present RoCo, the first robotic computer designed with the ability to move its monitor in subtly expressive ways that respond to and encourage its user's own postural movement. We use RoCo in a novel user study to explore whether a computer's “posture” can influence its user's subsequent posture, and if the interaction of the user's body state with their affective state during a task leads to improved task measures such as persistence in problem solving. We believe this is possible in light of new theories that link physical posture and its influence on affect and cognition. Initial results with 71 subjects support the hypothesis that RoCo's posture not only manipulates the user's posture, but also is associated with hypothesized posture-affect interactions. Specifically, we found effects on increased persistence on a subsequent cognitive task, and effects on perceived level of comfort.
  • RSVP: An investigation of remote shared visual presence as common ground for human-robot teams

    Publication Year: 2007 , Page(s): 161 - 168

    This study presents mobile robots as a way of augmenting communication in distributed teams through a remote shared visual presence (RSVP) consisting of the robot's view. By giving all team members access to the shared visual display provided by a robot situated in a remote workspace, the robot can serve as a source of common ground for the distributed team. In a field study examining the effects of remote shared visual presence on team performance in collocated and distributed Urban Search & Rescue technical search teams, data were collected from 25 dyadic teams comprised of US&R task force personnel drawn from high-fidelity training exercises held in California (2004) and New Jersey (2005). They performed a 2 × 2 repeated measures search task entailing robot-assisted search in a confined space rubble pile. Multilevel regression analyses were used to predict team performance based upon use of RSVP (RSVP or no-RSVP) and whether or not team members had visual access to other team members. Results indicated that the use of RSVP technology predicted team performance (β = -1.24, p < .05). No significant differences emerged in performance between teams with and without visual access to their team members. Findings suggest RSVP may enable distributed teams to perform as effectively as collocated teams. However, differences detected between sites suggest the efficiency of RSVP may depend on the user's domain experience and team cohesion.
  • A field experiment of autonomous mobility: Operator workload for one and two robots

    Publication Year: 2007 , Page(s): 169 - 176
    Cited by:  Papers (1)

    An experiment was conducted on aspects of human-robot interaction in a field environment using the U.S. Army's Experimental Unmanned Vehicle (XUV). Goals of this experiment were to examine the use of scalable interfaces and to examine operator span of control when controlling one versus two autonomous unmanned ground vehicles. We collected workload ratings from two Soldiers after they had performed missions that included monitoring, downloading and reporting on simulated reconnaissance, surveillance, and target acquisition (RSTA) images, and responding to unplanned operator intervention requests from the XUV. Several observations are made based on workload data, experimenter notes, and informal interviews with operators.
  • HRI caught on film

    Publication Year: 2007 , Page(s): 177 - 183

    The Human Robot Interaction 2007 conference hosted a video session, in which movies of interesting, important, illustrative, or humorous HRI research moments are shown. This paper summarizes the abstracts of the presented videos. Robots and humans do not always behave as expected, and the results can be entertaining and even enlightening; instances of failure have therefore also been included in the video session. Besides the importance of the lessons learned and the novelty of the situations, the videos also have entertainment value.
  • A cognitive robotics approach to comprehending human language and behaviors

    Publication Year: 2007 , Page(s): 185 - 192

    The ADAPT project is a collaboration of researchers in linguistics, robotics and artificial intelligence at three universities. We are building a complete robotic cognitive architecture for a mobile robot designed to interact with humans in a range of environments, and which uses natural language and models human behavior. This paper concentrates on the HRI aspects of ADAPT, and especially on how ADAPT models and interacts with humans.