
IEEE Transactions on Autonomous Mental Development

Issue 2 • June 2012

  • Table of contents

    Publication Year: 2012, Page(s): C1
    PDF (163 KB)
    Freely Available from IEEE
  • IEEE Transactions on Autonomous Mental Development publication information

    Publication Year: 2012, Page(s): C2
    PDF (129 KB)
    Freely Available from IEEE
  • The “Interaction Engine”: A Common Pragmatic Competence Across Linguistic and Nonlinguistic Interactions

    Publication Year: 2012, Page(s): 105 - 123
    PDF (882 KB) | HTML

    Recent research in cognitive psychology, neuroscience, and robotics has widely explored the tight relations between language and action systems in primates. However, the link between the pragmatics of linguistic and nonlinguistic interactions has received less attention up to now. In this paper, we argue that cognitive agents exploit the same cognitive processes and neural substrate, a general pragmatic competence, across linguistic and nonlinguistic interactive contexts. Elaborating on Levinson's idea of an “interaction engine” that makes it possible to convey and recognize communicative intentions in both linguistic and nonlinguistic interactions, we offer a computationally guided analysis of pragmatic competence, suggesting that the core abilities required for successful linguistic interactions could derive from more primitive architectures for action control, nonlinguistic interactions, and joint actions. Furthermore, we make the case for a novel, embodied approach to human-robot interaction and communication, in which the ability to carry on face-to-face communication develops in coordination with the pragmatic competence required for joint action.

  • Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Action-Based Soft Partitioning and Learning

    Publication Year: 2012, Page(s): 124 - 138
    PDF (3210 KB) | HTML

    A probabilistic framework for interactive learning in continuous and multimodal perceptual spaces is proposed. In this framework, the agent learns the task along with an adaptive partitioning of its multimodal perceptual space. The learning process is formulated in a Bayesian reinforcement learning setting to facilitate the adaptive partitioning. The partitioning is done gradually and softly using Gaussian distributions, whose parameters are adapted based on the agent's estimate of its actions' expected values. The probabilistic nature of the method results in experience generalization in addition to robustness against uncertainty and noise. To benefit from the diversity of experience generalization across different perceptual subspaces, learning is performed in multiple perceptual subspaces (including the original space) in parallel. In every learning step, the policies learned in the subspaces are fused to select the final action. This concurrent learning in multiple spaces and the decision fusion result in faster learning; the possibility of adding and/or removing sensors, i.e., gradual expansion or contraction of the perceptual space; and appropriate robustness against probable failure of, or ambiguity in, sensor data. Results of two sets of simulations, in addition to some experiments, are reported to demonstrate the key properties of the framework.

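To make the mechanism above concrete, here is a minimal, hypothetical Python sketch of action-value learning over a soft Gaussian partitioning, with decision fusion across perceptual subspaces. All class and function names, and the simple reward-only update, are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: soft Gaussian partitioning of a continuous
# perceptual space with responsibility-weighted value updates.
import numpy as np

class SoftPartitionLearner:
    def __init__(self, centers, widths, n_actions, lr=0.1):
        self.centers = np.asarray(centers, dtype=float)  # (K, D) Gaussian means
        self.widths = np.asarray(widths, dtype=float)    # (K,) std deviations
        self.q = np.zeros((len(self.centers), n_actions))  # per-partition values
        self.lr = lr

    def responsibilities(self, x):
        """Soft membership of observation x in each Gaussian partition."""
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / self.widths ** 2)
        return w / (w.sum() + 1e-12)

    def act_values(self, x):
        """Expected action values: responsibility-weighted mix over partitions."""
        return self.responsibilities(x) @ self.q

    def update(self, x, action, reward):
        """Spread the value error over partitions by responsibility, so one
        experience generalizes to nearby regions of the space."""
        r = self.responsibilities(x)
        err = reward - self.act_values(x)[action]
        self.q[:, action] += self.lr * r * err

def fuse_policies(learners, observations):
    """Decision fusion: sum action values from learners that each see a
    different perceptual subspace, then pick the best joint action."""
    total = sum(l.act_values(x) for l, x in zip(learners, observations))
    return int(np.argmax(total))
```

Fusing value estimates rather than actions lets a learner over one subspace keep contributing even if another sensor degrades or drops out, which is one way to read the abstract's claim about gradual expansion or contraction of the perceptual space.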
  • Tool–Body Assimilation of Humanoid Robot Using a Neurodynamical System

    Publication Year: 2012, Page(s): 139 - 149
    Cited by: Papers (3)
    PDF (1429 KB) | HTML

    Research in brain science has uncovered the human capability to use tools as if they were part of the body (known as tool-body assimilation) through trial and experience. This paper presents a method that applies a robot's active sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool-body assimilation module. A self-organizing map (SOM) is used for the feature extraction module to extract object features from raw images. A multiple timescales recurrent neural network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate the behavior of the MTRNN based on the properties of the tool. The generalization capability of neural networks provides the model with the ability to deal with unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values shows that the model has learned that the robot's dynamic properties change when it holds a tool. Motion generation experiments show that the tool-body assimilation model can be applied to unknown tools to generate goal-oriented motions.

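A minimal sketch of the parametric-bias idea described above, assuming a toy recurrent network rather than a full MTRNN: a small PB vector is appended to each input so the same shared weights produce tool-specific dynamics, and an unknown tool is "recognized" by searching for PB values that best reproduce an observed trajectory. The names and the grid search stand in for the gradient-based PB inference typically used with MTRNNs; they are illustrative, not the authors' method.

```python
# Hypothetical sketch: parametric-bias (PB) modulation of a tiny
# recurrent network, plus PB recognition for an unknown tool.
import numpy as np

rng = np.random.default_rng(0)

class PBRecurrentNet:
    def __init__(self, n_in, n_hidden, n_pb):
        self.w_in = rng.normal(0, 0.3, (n_hidden, n_in + n_pb))
        self.w_rec = rng.normal(0, 0.3, (n_hidden, n_hidden))
        self.w_out = rng.normal(0, 0.3, (n_in, n_hidden))

    def rollout(self, x0, pb, steps):
        """Generate a motion trajectory from state x0 under PB values pb;
        the PB vector is appended to every input, modulating the dynamics."""
        h = np.zeros(self.w_rec.shape[0])
        x, traj = x0, []
        for _ in range(steps):
            h = np.tanh(self.w_in @ np.concatenate([x, pb]) + self.w_rec @ h)
            x = np.tanh(self.w_out @ h)
            traj.append(x)
        return np.array(traj)

def recognize_pb(net, x0, observed, candidates):
    """Treat an unknown tool's PB as whichever candidate vector makes the
    generated trajectory match the observed one (grid search for brevity)."""
    errors = [np.mean((net.rollout(x0, pb, len(observed)) - observed) ** 2)
              for pb in candidates]
    return candidates[int(np.argmin(errors))]
```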
  • Are Robots Appropriate for Troublesome and Communicative Tasks in a City Environment?

    Publication Year: 2012, Page(s): 150 - 160
    PDF (1456 KB) | HTML

    We studied people's acceptance of robots that perform tasks in a city. Three different beings (a human, a human wearing a mascot costume, and a robot) performed tasks in three different scenarios: endless guidance, responding to irrational complaints, and removing an accidentally discarded key from the trash. All of these tasks involved interacting with visitors in troublesome situations: dull, stressful, and dirty. For this paper, 30 participants watched nine videos (three tasks performed by three beings) and evaluated each being's appropriateness for the task and its human-likeness. The results indicate that people prefer that a robot rather than a human perform these troublesome tasks, even though they require much interaction with people. In addition, comparisons with the costumed human suggest that people's judgments of appropriateness are explained by whether they believe a being deserves human rights, rather than by its human-like appearance, behavior, or cognitive capability.

  • Brain-Like Emergent Spatial Processing

    Publication Year: 2012, Page(s): 161 - 185
    Cited by: Papers (3)
    PDF (2355 KB) | HTML

    This is a theoretical, modeling, and algorithmic paper about the spatial aspect of brain-like information processing, modeled by the developmental network (DN) model. The new brain architecture allows the external environment (including teachers) to interact with the sensory ends and the motor ends of the skull-closed brain through development. It does not allow the human programmer to hand-pick extra-body concepts or to handcraft the concept boundaries inside the brain. Mathematically, the brain's spatial processing performs a real-time mapping from its sensory ends to its motor ends through network updates, where the contents all emerge from experience. Using its limited resources, the brain does increasingly better through experience. A new principle is that the effector ends serve as hubs for concept learning and abstraction. The effector ends also serve as input, and the sensory ends also serve as output. As DN embodiments, the Where-What Networks (WWNs) present three major functional novelties: new concept abstraction, concepts as emergent goals, and goal-directed perception. The WWN series appears to be the first general-purpose emergent system for detecting and recognizing multiple objects in complex backgrounds. Among others, the most significant new mechanism is general-purpose top-down attention.

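As a loose illustration of the DN-style update sketched in the abstract (bottom-up sensory input and top-down motor input converging on an internal layer, winner-take-all competition, and Hebbian incremental averaging), here is a toy Python sketch. All sizes, names, and rules are simplifying assumptions, not the paper's algorithm.

```python
# Toy sketch, loosely in the spirit of a DN update: an internal layer Y
# matches sensory input X and motor (goal) input Z jointly, so the motor
# end acts as a hub and as top-down attention over perception.
import numpy as np

rng = np.random.default_rng(1)
n_x, n_z, n_y = 16, 4, 32
w_xy = rng.random((n_y, n_x))   # bottom-up weights from sensory input X
w_zy = rng.random((n_y, n_z))   # top-down weights from motor input Z
ages = np.ones(n_y)             # per-neuron firing ages for averaging

def dn_step(x, z):
    """One update: pick the best-matching Y neuron given both inputs,
    then adapt its weights by incremental (age-weighted) averaging."""
    response = w_xy @ x + w_zy @ z   # combined bottom-up + top-down match
    j = int(np.argmax(response))     # winner-take-all competition
    lr = 1.0 / ages[j]               # learning rate decays with neuron age
    w_xy[j] = (1 - lr) * w_xy[j] + lr * x
    w_zy[j] = (1 - lr) * w_zy[j] + lr * z
    ages[j] += 1
    return j
```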
  • IEEE Xplore Digital Library [advertisement]

    Publication Year: 2012, Page(s): 186
    PDF (1346 KB)
    Freely Available from IEEE
  • IEEE Foundation [advertisement]

    Publication Year: 2012, Page(s): 187
    PDF (320 KB)
    Freely Available from IEEE
  • Quality without compromise [advertisement]

    Publication Year: 2012, Page(s): 188
    PDF (324 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2012, Page(s): C3
    PDF (38 KB)
    Freely Available from IEEE
  • IEEE Transactions on Autonomous Mental Development information for authors

    Publication Year: 2012, Page(s): C4
    PDF (28 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Autonomous Mental Development (TAMD) covers computational modeling of mental development, including mental architecture, theories, algorithms, properties, and experiments.


Meet Our Editors

Editor-in-Chief
Zhengyou Zhang
Microsoft Research