2011 IEEE International Conference on Development and Learning (ICDL)

24-27 Aug. 2011

Displaying Results 1 - 25 of 69
  • Reinforcement learning of impedance control in stochastic force fields

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (2)

    Variable impedance control is essential for ensuring robust and safe physical interaction with the environment. As demonstrated in numerous force field experiments, humans combine two strategies to adapt their impedance to external perturbations: 1) if perturbations are unpredictable, subjects increase their impedance through co-contraction; 2) if perturbations are predictable, subjects learn a feed-forward command to counter the known perturbation. In this paper, we apply the force field paradigm to a simulated 7-DOF robot, by exerting stochastic forces on the robot's end-effector. The robot `subject' uses our model-free reinforcement learning algorithm PI2 to simultaneously learn the end-effector trajectories and variable impedance schedules. We demonstrate how the robot learns the same two-fold strategy for perturbation rejection as humans do, resulting in qualitatively similar behavior. Our results provide a biologically plausible approach to learning appropriate impedances purely from experience, without requiring a model of either body or environment dynamics.

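    The cost-weighted averaging at the heart of PI2 can be pictured with the following minimal sketch of one parameter update. This is an illustrative simplification under assumed names and a made-up cost, not the paper's actual implementation or impedance parameterization:

```python
import numpy as np

def pi2_update(theta, rollout_costs, eps, h=10.0):
    """One PI2-style update: average the exploration noise eps
    (K rollouts x D parameters), weighting low-cost rollouts
    exponentially more than high-cost ones."""
    S = np.asarray(rollout_costs, dtype=float)
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)  # normalize costs to [0, 1]
    w = np.exp(-h * S)
    w /= w.sum()                                     # softmax over negative cost
    return theta + w @ eps

# Toy example: two perturbed rollouts of a 3-D parameter vector,
# with a hypothetical cost that prefers parameters near [1, 1, 1].
theta = np.zeros(3)
eps = np.array([[0.5, 0.5, 0.5],
                [-0.5, -0.5, -0.5]])
costs = [float(np.sum((theta + e - 1.0) ** 2)) for e in eps]
theta_new = pi2_update(theta, costs, eps)
```

    The update pulls the parameters toward the low-cost rollout; in the paper this mechanism is applied jointly to trajectory and impedance-schedule parameters.
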
  • Uncertain semantics, representation nuisances, and necessary invariance properties of bootstrapping agents

    Publication Year: 2011 , Page(s): 1 - 8
    Cited by:  Papers (3)

    In the problem of bootstrapping, an agent must learn to use an unknown body, in an unknown world, starting from zero information about the world, its sensors, and its actuators. So far, this fascinating problem has not been given a proper formalization. In this paper, we provide a possible rigorous definition of one of the key aspects of bootstrapping, namely the fact that an agent must be able to use “uninterpreted” observations and commands. We show that this can be formalized by positing the existence of representation nuisances that act on the data, and which must be tolerated by an agent. The classes of nuisances tolerated indirectly encode the assumptions needed about the world, and therefore the agent's ability to solve smaller or larger classes of bootstrapping problem instances. Moreover, we argue that the behavior of an agent that claims optimality must actually be invariant to the representation nuisances, and we discuss several design principles to obtain such invariance.

  • Familiarity-to-novelty shift driven by learning: A conceptual and computational model

    Publication Year: 2011 , Page(s): 1 - 6

    We propose a new theory explaining the familiarity-to-novelty shift in infant habituation. In our account, infants' interest in a stimulus is related to their learning progress, i.e. the improvement of an internal model of the stimulus. Specifically, we propose that infants prefer the stimulus for which their current learning progress is maximal. We also propose a new algorithm, the Selective Learning Self-Organizing Map (SL-SOM), a biologically inspired modification of the SOM that exhibits the familiarity-to-novelty shift. Using this algorithm we present experiments on a robotic platform.

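    The SOM building block and the learning-progress signal the account ties interest to can be sketched as follows. The grid size, learning rate and neighborhood width are illustrative assumptions; the SL-SOM's selective-learning rule itself is not reproduced here:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU) and pull
    each node toward x, scaled by a Gaussian grid neighborhood."""
    grid = np.array([(i, j) for i in range(weights.shape[0])
                            for j in range(weights.shape[1])])
    flat = weights.reshape(-1, weights.shape[-1])     # view onto weights
    bmu = int(np.argmin(np.linalg.norm(flat - x, axis=1)))
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)      # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))[:, None]       # neighborhood function
    flat += lr * h * (x - flat)                       # in-place update
    return weights, bmu

rng = np.random.default_rng(1)
W = rng.random((4, 4, 2))            # 4x4 map of 2-D prototypes
x = np.array([0.9, 0.1])             # one stimulus
err_before = np.min(np.linalg.norm(W.reshape(-1, 2) - x, axis=1))
W, bmu = som_step(W, x)
err_after = np.min(np.linalg.norm(W.reshape(-1, 2) - x, axis=1))
# "Learning progress" in the sense of the model: err_before - err_after,
# the improvement of the internal model of the stimulus.
```
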
  • The interaction of maturational constraints and intrinsic motivations in active motor development

    Publication Year: 2011 , Page(s): 1 - 8
    Cited by:  Papers (6)

    This paper studies computational models of the coupling of intrinsic motivations and physiological maturational constraints, and argues that the two mechanisms may have complex bidirectional interactions that allow active control of the growth of complexity in motor development, directing an efficient learning and exploration process. First, we outline the Self-Adaptive Goal Generation - Robust Intelligent Adaptive Curiosity algorithm (SAGG-RIAC), which instantiates an intrinsically motivated goal exploration mechanism for motor learning of inverse models. Then, we introduce a functional model of maturational constraints inspired by the myelination process in humans, and show how it can be coupled with the SAGG-RIAC algorithm, forming a new system called McSAGG-RIAC2. We then present experiments to evaluate qualitative and, more importantly, quantitative properties of these systems when applied to a 12-DOF quadruped controlled with 24-dimensional motor synergies.

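    A maturational constraint of this kind can be pictured as a clock that progressively releases the usable range of each motor dimension, loosely mimicking myelination. The saturating-exponential form, time constant, and joint ranges below are invented for illustration, not the paper's exact model:

```python
import numpy as np

def maturational_range(t, full_range, tau=100.0):
    """Fraction of each joint's full range released at maturational
    time t: starts near zero and saturates toward the full range."""
    return full_range * (1.0 - np.exp(-t / tau))

full = np.array([1.0, 2.0, 0.5])    # hypothetical joint ranges (rad)
early = maturational_range(5, full)     # early in development: tiny ranges
late = maturational_range(500, full)    # later: nearly the full ranges
```

    Goal exploration then samples goals only inside the currently released ranges, so the reachable-space complexity grows with maturation.
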
  • Unsupervised discovery of phoneme boundaries in multi-speaker continuous speech

    Publication Year: 2011 , Page(s): 1 - 5

    Children rapidly learn the inventory of phonemes used in their native tongues. Computational approaches to learning phoneme boundaries from speech data do not yet reach the level of human performance. We present an algorithm that operates on data qualitatively similar to what children receive: natural-language utterances from multiple speakers. Our algorithm is unsupervised and discovers phoneme boundary positions in speech. The approach draws inspiration from the word and text segmentation literature. To demonstrate the efficacy of our algorithm on speech data, we present empirical results of our method on the TIMIT data set. Our method achieves F-measure scores in the 0.68-0.73 range for locating phoneme boundary positions.

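    An F-measure for boundary placement can be computed as below. The tolerance window (here ±20 ms) and greedy one-to-one matching are common conventions in the segmentation literature, assumed here rather than taken from the paper:

```python
def boundary_f1(pred, ref, tol=0.02):
    """Precision/recall/F-measure for boundary detection: a predicted
    boundary (seconds) is a hit if it lies within +/-tol of a not yet
    matched reference boundary (greedy one-to-one matching)."""
    ref = sorted(ref)
    used = [False] * len(ref)
    hits = 0
    for p in sorted(pred):
        for i, r in enumerate(ref):
            if not used[i] and abs(p - r) <= tol:
                used[i] = True
                hits += 1
                break
    prec = hits / len(pred) if pred else 0.0
    rec = hits / len(ref) if ref else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Toy example: 3 of 4 predictions land within 20 ms of a reference boundary.
f = boundary_f1([0.10, 0.31, 0.55, 0.90], [0.11, 0.30, 0.56])
```
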
  • Development of joint attention and social referencing

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (3)

    In this work, we are interested in understanding how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing, facial expression recognition and joint attention can emerge from a simple sensori-motor architecture. Without knowing that the other is an agent, we show that our robot is able to learn complex tasks if the human partner exhibits low-level emotional resonance with the robot head. Hence we advocate the idea that social referencing can be bootstrapped from a simple sensori-motor system not dedicated to social interactions.

  • Modelling early infant walking: Testing a generic CPG architecture on the NAO humanoid

    Publication Year: 2011 , Page(s): 1 - 6

    In this article, a simple CPG network is shown to model early infant walking, in particular the onset of independent walking. The difference between early infant walking and early adult walking is addressed with respect to the underlying neurophysiology and evaluated according to gait attributes. On this basis, we successfully model the early infant walking gait on the NAO robot and compare its motion dynamics and performance to those of infants. Our model is able to capture the core properties of early infant walking. We identify differences in morphology between the robot and the infant, and the effect of these differences on their respective performance. In conclusion, early infant walking can be seen to develop as a function of the CPG network and morphological characteristics.

  • Is talking to a simulated robot like talking to a child?

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    Previous research has found that people transfer behaviors from social interaction among humans to interactions with computers or robots. These findings suggest that people will talk to a robot that looks like a child in a similar way to how they talk to a child. However, in a previous study in which we compared speech to a simulated robot with speech to preverbal, 10-month-old infants, we did not find the expected similarities. One possibility is that people were targeting an older child than a 10-month-old. In the current study, we address the similarities and differences between speech to four different age groups of children and to a simulated robot. The results shed light on how people talk to robots in general.

  • Towards grounding concepts for transfer in goal learning from demonstration

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (6)

    We aim to build robots that frame the task learning problem as goal inference so that they are natural to teach and meet people's expectations for a learning partner. The focus of this work is the scenario of a social robot that learns task goals from human demonstrations without prior knowledge of high-level concepts. In the system that we present, these discrete concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned on them using Bayesian inference. The grounded concepts are derived from the structure of the Learning from Demonstration (LfD) problem and exhibit degrees of prototypicality. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. Using sensor data taken during demonstrations to our robot from five human teachers, we show the expressivity of using grounded concepts when learning new tasks from demonstration. We then show how the learning curve improves when transferring the knowledge of grounded concepts to future tasks.

  • Robots as social mediators for children with Autism - A preliminary analysis comparing two different robotic platforms

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (5)

    Robots can be very helpful therapeutic tools, especially for children with special needs. In the present paper we describe the application of two robotic platforms with different design parameters in interaction with children with autism and other cognitive impairments. IROMEC is a mobile robotic platform designed to encourage children with different levels of disability to engage in social interactions. KASPAR is a humanoid child-sized robot designed for social interaction, which has been used extensively in studies with children with autism. The aim of this study is to examine how KASPAR and IROMEC can support social interaction and facilitate the cognitive and social development of children with special needs via play activities. Natural engagement in social play behaviour is often a problem in the development of children with disabilities, who, due to the nature of their disabilities, are often excluded from such activities. As part of a long-term study we carried out different play scenarios based on imitation, turn-taking and cause-and-effect games, according to the main educational and therapeutic objectives considered important for child development. In this paper we focus on the turn-taking and imitation game scenarios. A preliminary analysis of the data showed encouraging results. The level of improvement depended on the level and nature of each child's disabilities.

  • A cognitive basis for theories of intrinsic motivation

    Publication Year: 2011 , Page(s): 1 - 6

    Since intelligent agents make choices based on both external rewards and intrinsic motivations, the structure of a realistic decision theory should also serve as an indirect model of intrinsic motivation. We have recently proposed a model of sequential choice-making that is grounded in well-articulated cognitive principles. In this paper, we show how our model of choice selection predicts behavior that matches the predictions of state-of-the-art intrinsic motivation models, providing both a clear causal mechanism for explaining their effects and testable predictions for situations where our model's predictions differ from those of existing models. Our results provide a unified, cognitively grounded explanation for phenomena that are currently explained using different theories of motivation, creativity and attention.

  • The two-dimensional organization of behavior

    Publication Year: 2011 , Page(s): 1 - 8
    Cited by:  Papers (2)

    This paper addresses the problem of continual learning [1] in a new way, combining multi-modular reinforcement learning with inspiration from the motor cortex to produce a unique perspective on hierarchical behavior. Most reinforcement-learning agents represent policies monolithically using a single table or function approximator. In those cases where the policies are split among a few different modules, these modules are related to each other only in that they work together to produce the agent's overall policy. In contrast, the brain appears to organize motor behavior in a two-dimensional map, where nearby locations represent similar behaviors. This representation allows the brain to build hierarchies of motor behavior that correspond not to hierarchies of subroutines but to regions of the map, such that larger regions correspond to more general behaviors. Inspired by the benefits of the brain's representation, the system presented here is a first step toward the two-dimensional organization of learned policies according to behavioral similarity. We demonstrate a fully autonomous multi-modular system designed for the constant accumulation of ever more sophisticated skills (the continual-learning problem). The system can split up a complex task among a large number of simple modules such that nearby modules correspond to similar policies. The eventual goal is to develop and use the resulting organization hierarchically, accessing behaviors by their location and extent in the map.

  • Towards using prosody to scaffold lexical meaning in robots

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (2)

    We present a case-study analysing the prosodic contours and salient word markers of a small corpus of robot-directed speech in which the human participants had been asked to talk to a socially interactive robot as if it were a child. We assess whether such contours and salience characteristics could be used to extract relevant information for the subsequent learning and scaffolding of meaning in robots. The study uses measures of pitch, energy and word duration from the participants' speech and exploits Pierrehumbert and Hirschberg's theory of the meaning of intonational contours, which may provide information on shared belief between speaker and listener. The results indicate that 1) participants use a high number of contours that provide new-information markers to the robot, 2) prosodic question contours decrease as the interactions proceed, 3) pitch, energy and duration features can provide strong markers for relevant words, and 4) there is little evidence that participants altered their prosodic contours in recognition of shared belief. We also describe and verify our software, which allows the semi-automatic marking of prosodic phrases.

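    One simple way pitch, energy and duration can jointly mark relevant words is to z-score each feature across the utterance and flag outliers. The toy utterance, feature values and threshold below are hypothetical, not the paper's method:

```python
import numpy as np

def salient_words(words, feats):
    """Flag words whose prosodic features stand out: z-score each
    feature column across the utterance and keep words whose mean
    z-score exceeds 1."""
    F = np.asarray(feats, dtype=float)
    z = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
    return [w for w, s in zip(words, z.mean(axis=1)) if s > 1.0]

# Columns: pitch (Hz), energy (dB), duration (s) -- invented values.
words = ["look", "at", "the", "BALL"]
feats = [(200, 60, 0.3), (180, 55, 0.1), (175, 54, 0.1), (320, 75, 0.5)]
marked = salient_words(words, feats)
```
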
  • Emergence of mirror neuron system: Immature vision leads to self-other correspondence

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (5)

    The question of how the mirror neuron system (MNS) develops has attracted increasing attention from researchers. Among various hypotheses, a widely accepted model is associative sequence learning, which acquires the MNS as a by-product of sensorimotor learning. The model, however, cannot discriminate self from others since it adopts overly simplified sensory representations. We propose a computational model for early development of the MNS that originates in immature vision. The model gradually increases the spatiotemporal resolution of a robot's vision while the robot learns sensorimotor mapping through primal interactions with others. In the early stage of development, the robot interprets all observed actions as equivalent due to the low resolution, and thus associates the non-differentiated observation with motor commands. As vision develops, the robot starts discriminating actions generated by itself from those generated by others. The initially acquired association is, however, maintained through development, which results in two types of associations: one between motor commands and self-observation, and the other between motor commands and other-observation (i.e., what the MNS does). Our experiments demonstrate that the model achieves early development of the MNS, which enables a robot to imitate others' actions.

  • Learning of audiovisual integration

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (3)

    We present a system for learning audiovisual integration based on temporal and spatial coincidence. We also consider the situation in which the current sound is related to a visual signal that has not yet been seen. Our learning algorithm is tested in online adaptation of audio-motor maps. Since audio-motor maps are not reliable at the beginning of the experiment, learning is bootstrapped using temporal coincidence when there is only one auditory and one visual stimulus. In the course of time, the system can automatically decide to use both spatial and temporal coincidence, depending on the quality of the maps and the number of visual sources. We show that this audiovisual integration works even when more than one visual source appears. The integration performance does not decrease when the related visual source has not yet been spotted. The experiment is executed on a humanoid robot head.

  • Emerging social awareness: Exploring intrinsic motivation in multiagent learning

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (3)

    Recently, a novel framework has been proposed for intrinsically motivated reinforcement learning (IMRL) in which a learning agent is driven by rewards that include not only information about what the agent must accomplish in order to “survive”, but also additional reward signals that drive the agent to engage in other activities, such as playing or exploring, because they are “inherently enjoyable”. In this paper, we investigate the impact of intrinsic motivation mechanisms in multiagent learning scenarios, by considering how such a motivational system may drive an agent to engage in behaviors that are “socially aware”. We show that, using this approach, it is possible for agents to individually learn socially aware behaviors that trade off individual welfare for social acknowledgment, leading to more successful performance of the population as a whole.

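    The IMRL reward combination can be sketched generically as an extrinsic "survival" reward plus a scaled intrinsic bonus. The count-based novelty bonus and weight below are illustrative stand-ins; the paper's socially aware intrinsic signals are more elaborate:

```python
def imrl_reward(extrinsic, intrinsic, beta=0.5):
    """Total reward = task ('survival') reward + scaled intrinsic bonus."""
    return extrinsic + beta * intrinsic

class NoveltyBonus:
    """Count-based novelty: the bonus decays as a state is revisited."""
    def __init__(self):
        self.counts = {}

    def __call__(self, state):
        self.counts[state] = self.counts.get(state, 0) + 1
        return 1.0 / self.counts[state]

bonus = NoveltyBonus()
r1 = imrl_reward(0.0, bonus("roomA"))   # first visit: full bonus
r2 = imrl_reward(0.0, bonus("roomA"))   # revisit: bonus halved
```
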
  • Intrinsically motivated neuroevolution for vision-based reinforcement learning

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (1)

    Neuroevolution, the artificial evolution of neural networks, has shown great promise on continuous reinforcement learning tasks that require memory. However, it is not yet directly applicable to realistic embedded agents using high-dimensional inputs (e.g. raw video images), which require very large networks. In this paper, neuroevolution is combined with an unsupervised sensory pre-processor, or compressor, that is trained on images generated from the environment by the population of evolving recurrent neural network controllers. The compressor not only reduces the input cardinality of the controllers, but also biases the search toward novel controllers by rewarding those that discover images it reconstructs poorly. The method is successfully demonstrated on a vision-based version of the well-known mountain car benchmark, where controllers receive only single high-dimensional visual images of the environment, from a third-person perspective, instead of the standard two-dimensional state vector, which includes velocity information.

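    The compressor's role as a novelty detector can be sketched with a rank-k linear (PCA) compressor: images it reconstructs poorly are "novel" and earn an exploration bonus. The paper's actual compressor and fitness shaping differ; this is a toy under assumed dimensions and data:

```python
import numpy as np

class Compressor:
    """Rank-k linear compressor (PCA via SVD) fitted to images the
    population has already generated; reconstruction error measures
    how novel a new image is."""
    def __init__(self, k=2):
        self.k, self.mean, self.comp = k, None, None

    def fit(self, X):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.comp = vt[: self.k]                  # top-k principal axes

    def error(self, x):
        code = (x - self.mean) @ self.comp.T      # compress
        recon = self.mean + code @ self.comp      # reconstruct
        return float(np.sum((x - recon) ** 2))    # novelty score

rng = np.random.default_rng(2)
# "Seen" images live mostly in a 2-D subspace of a 16-D pixel space.
seen = rng.normal(size=(50, 16)) @ np.diag([5] * 2 + [0.1] * 14)
c = Compressor(k=2)
c.fit(seen)
familiar = seen[0]                     # reconstructed well -> low bonus
novel = rng.normal(size=16) * 5.0      # energy in all 16 dims -> high bonus
```
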
  • Bootstrapping intrinsically motivated learning with human demonstration

    Publication Year: 2011 , Page(s): 1 - 8
    Cited by:  Papers (8)

    This paper studies the coupling of internally guided learning and social interaction, and more specifically the improvement that demonstrations bring to intrinsically motivated learning. We present Socially Guided Intrinsic Motivation by Demonstration (SGIM-D), an algorithm for learning in continuous, unbounded and non-preset environments. After introducing social learning and intrinsic motivation, we describe the design of our algorithm, before showing through a fishing experiment that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation to gain a wide repertoire while being specialised in specific subspaces.

  • People-aware navigation for goal-oriented behavior involving a human partner

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    To facilitate effective autonomous behavior for human-robot interaction, the robot should be able to execute goal-oriented behavior while reacting to sensor feedback related to the people with whom it is interacting. Prior work has demonstrated that autonomously sensed distance-based features can be used to correctly detect user state. We wish to demonstrate that such models can be used to weight action selection as well. This paper considers the problem of moving to a goal along with a partner, demonstrating that a learned model can be used to weight the trajectories of a navigation system for autonomous movement. This paper presents a realization of a people-aware navigation system that requires no ad-hoc parameter tuning and no input other than a small set of training examples. The system is validated in an in-lab demonstration of people-aware navigation.

  • Towards incremental learning of task-dependent action sequences using probabilistic parsing

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    We study an incremental learning process in which a set of generic basic actions is used to learn higher-level, task-dependent action sequences. A task-dependent action sequence is learned by associating the goal given by a human demonstrator with the task-independent, general-purpose actions in the action repertoire. This process of contextualization is done using probabilistic parsing. We propose stochastic context-free grammars as the representational framework due to their robustness to noise, structural flexibility, and the ease of defining task-independent actions. We demonstrate our implementation in a real-world scenario using a humanoid robot and report the implementation issues we encountered.

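    To make the stochastic context-free grammar machinery concrete, here is a best-parse (Viterbi) scorer for a grammar in Chomsky normal form, applied to a hypothetical pick-and-place action sequence. The grammar symbols, rules and probabilities are invented for illustration, not taken from the paper:

```python
from collections import defaultdict

def viterbi_cyk(tokens, unary, binary, start="TASK"):
    """Probability of the best parse of `tokens` under an SCFG in
    Chomsky normal form: unary[(A, word)] = p for A -> word,
    binary[(A, B, C)] = p for A -> B C."""
    n = len(tokens)
    best = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(tokens):                    # length-1 spans
        for (A, word), p in unary.items():
            if word == w:
                best[i][i + 1][A] = max(best[i][i + 1][A], p)
    for span in range(2, n + 1):                      # longer spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                 # split point
                for (A, B, C), p in binary.items():
                    q = p * best[i][k][B] * best[k][j][C]
                    best[i][j][A] = max(best[i][j][A], q)
    return best[0][n][start]

# Hypothetical task grammar built on generic basic actions.
binary = {("TASK", "PICK", "PLACE"): 1.0,
          ("PICK", "REACH", "GRASP"): 1.0,
          ("PLACE", "MOVE", "RELEASE"): 1.0}
unary = {("REACH", "reach"): 0.9, ("GRASP", "grasp"): 0.8,
         ("MOVE", "move"): 0.9, ("RELEASE", "release"): 0.7}
p = viterbi_cyk(["reach", "grasp", "move", "release"], unary, binary)
```
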
  • Trying anyways: How ignoring the errors may help in learning new skills

    Publication Year: 2011 , Page(s): 1 - 6

    The traditional view stresses the role of errors in the learning process. The results of our experiment with older infants suggest that omitting errors during learning can also be beneficial. We propose that a temporary decrease in learning from negative feedback could be an efficient mechanism behind infants learning new skills. Herein, we claim that disregarding errors is tightly connected to the sense of control, and results from an extremely high level of self-efficacy (overconfidence). Our preliminary results with a robot simulator serve as a proof of concept for our approach, and suggest a possible new route for constraints balancing exploration and exploitation in intrinsically motivated reinforcement learning.

  • It's the child's body: The role of toddler and parent in selecting toddler's visual experience

    Publication Year: 2011 , Page(s): 1 - 6

    Human visual experience is tightly coupled to action - to the perceiver's eye, head, hand and body movements. Social interactions and joint attention are also tied to action, to the mutually influencing and coupled eye, head, hand and body movements of the participants. This study considers the role of the child's own sensory-motor dynamics and those of the social partner in structuring the visual experiences of the toddler. To capture the first-person visual experience, a mini head-mounted camera was placed on the participants' forehead. Two social contexts were studied: (1) parent-child play wherein children and parents jointly played with toys; and (2) child play alone wherein parents were asked to read a document while letting the child play by himself. Visual information from the child's first person view and manual actions from both participants were processed and analyzed. The main finding is that the dynamics of the toddler's visual experience did not differ significantly between the two conditions, showing in both conditions highly selective views that largely reduced noise perceived by the child. These views were strongly related to the child's own head and hand actions. Although the dynamics of children's visual experience appear dependent mainly on their own body dynamics, parents also play a complementary role in selecting the targets for the child's momentary attention.

  • On-line learning and planning in a pick-and-place task demonstrated through body manipulation

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (3)

    When a robot is brought into a new environment, it has very limited knowledge of what surrounds it and what it can do. One way to build up that knowledge is through exploration, but that is a slow process. Programming by demonstration is an efficient way to learn new things from interaction. A robot can imitate gestures it was shown through passive manipulation. Depending on the representation of the task, the robot may also be able to plan its actions and even adapt its representation when further interactions change its knowledge about the task to be done. In this paper we present a bio-inspired neural network used by a robot to learn arm gestures demonstrated through passive manipulation. It also allows the robot to plan arm movements according to activated goals. The model is applied to learning a pick-and-place task. The robot learns how to pick up objects at a specific location and drop them in two different boxes depending on their color. As our system learns continuously, the behavior of the robot can always be adapted by the human interacting with it. This ability is demonstrated by teaching the robot to switch the goals for the two types of objects.

  • Sequential pattern mining of multimodal data streams in dyadic interactions

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (2)

    In this paper we propose a sequential pattern mining method to analyze multimodal data streams using a quantitative temporal approach. While existing algorithms can only find the sequential order of temporal events, this paper presents a new temporal data mining method focused on extracting the exact timings and durations of sequential patterns extracted from multiple temporal event streams. We present our method with its application to the detection and extraction of human sequential behavioral patterns over multiple multimodal data streams in human-robot interactions. Experimental results confirm the feasibility and quality of our proposed pattern mining algorithm, and suggest a quantitative, data-driven way to ground social interactions in a manner not previously achieved.

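    A minimal version of the timing-aware pattern idea: rather than recording only the order of events, also record the gap between nearby event occurrences and summarize it per event pair. The window, event names and summary statistics below are illustrative, not the paper's algorithm:

```python
from collections import defaultdict
from statistics import mean

def pair_timings(stream, window=2.0):
    """For each ordered pair of event types (a -> b) occurring within
    `window` seconds in a time-sorted stream of (time, event) tuples,
    collect the gaps and report (count, mean gap) per pair."""
    gaps = defaultdict(list)
    for i, (t1, a) in enumerate(stream):
        for t2, b in stream[i + 1:]:
            if t2 - t1 > window:
                break                     # stream is time-sorted
            gaps[(a, b)].append(t2 - t1)
    return {k: (len(v), mean(v)) for k, v in gaps.items()}

# Toy dyadic stream: gaze reliably precedes pointing by ~0.5 s.
stream = [(0.0, "gaze"), (0.6, "point"), (3.0, "gaze"), (3.5, "point"),
          (8.0, "speak")]
stats = pair_timings(stream)
```
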
  • Ignorance is bliss: A complexity perspective on adapting reactive architectures

    Publication Year: 2011 , Page(s): 1 - 5

    We study the computational complexity of adapting a reactive architecture to meet task constraints. This computational problem has applications in a wide variety of fields, including cognitive and evolutionary robotics and cognitive neuroscience. We show that, even for a rather simple world and a simple task, adapting a reactive architecture to perform a given task in the given world is NP-hard. This result implies that adapting reactive architectures is computationally intractable regardless of the nature of the adaptation process (e.g., engineering, development, evolution, learning, etc.) unless very special conditions apply. In order to find such special conditions for tractability, we have performed parameterized complexity analyses. One of our main findings is that architectures with limited sensory and perceptual abilities are efficiently adaptable.