
IEEE Transactions on Autonomous Mental Development

Issue 2 • June 2013

  • Table of contents

    Publication Year: 2013, Page(s): C1
    PDF (169 KB)
    Freely Available from IEEE
  • IEEE Transactions on Autonomous Mental Development publication information

    Publication Year: 2013, Page(s): C2
    PDF (132 KB)
    Freely Available from IEEE
  • Brain-Like Emergent Temporal Processing: Emergent Open States

    Publication Year: 2013, Page(s): 89 - 116
    PDF (3100 KB) | HTML

    Informed by brain anatomical studies, we present the developmental network (DN) theory of brain-like temporal information processing. The states of the brain are at its effector end: emergent and open. A finite automaton (FA) is considered an external symbolic model of the brain's temporal behaviors, but the FA uses handcrafted states and has no “internal” representations, where “internal” means inside the network “skull.” Using action-based state equivalence and emergent state representations, the time-driven processing of the DN performs state-based abstraction and state-based skill transfer. Each state of the DN, as a set of actions, is openly observable by the external environment (including teachers), so the environment can teach the state at every frame time. Through incremental learning and autonomous practice, the DN lumps (abstracts) infinitely many temporal context sequences into a single equivalent state. Through this state equivalence, a skill learned under one sequence transfers automatically to the infinitely many other state-equivalent sequences that may occur in the future, without explicit learning. Two experiments are shown as examples: the video-processing experiments achieved nearly perfect recognition rates in disjoint tests, and the text-language experiment, using corpora from the Wall Street Journal, treated semantics and syntax in a unified, interactive way.

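    The action-based state equivalence described in the abstract resembles classic state lumping in a Moore machine: states that emit the same action and transition into equivalent states can be merged by partition refinement. The sketch below is a hedged illustration of that lumping idea on a symbolic FA; the toy automaton, its state names, and its actions are invented, not taken from the paper (whose states are emergent action patterns, not handcrafted symbols).

    ```python
    def lump_states(states, inputs, delta, action):
        """Partition `states` so two states share a block iff they emit the
        same action and, for every input, transition into the same block
        (Moore-machine minimization by partition refinement)."""
        # Initial partition: group states by the action they emit.
        blocks = {}
        for s in states:
            blocks.setdefault(action[s], set()).add(s)
        partition = list(blocks.values())

        changed = True
        while changed:
            changed = False
            new_partition = []
            for block in partition:
                # Split the block by each state's successor-block signature.
                sig = {}
                for s in block:
                    key = tuple(
                        next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                        for a in inputs
                    )
                    sig.setdefault(key, set()).add(s)
                new_partition.extend(sig.values())
                if len(sig) > 1:
                    changed = True
            partition = new_partition
        return partition

    # Toy automaton: q1 and q2 behave identically, so they are lumped together.
    states = ["q0", "q1", "q2"]
    inputs = ["x"]
    delta = {("q0", "x"): "q1", ("q1", "x"): "q2", ("q2", "x"): "q1"}
    action = {"q0": "idle", "q1": "grasp", "q2": "grasp"}
    print(sorted(sorted(b) for b in lump_states(states, inputs, delta, action)))
    # [['q0'], ['q1', 'q2']]
    ```

    A skill learned in state q1 would then apply unchanged whenever the context sequence lands in q2, which is the transfer effect the abstract describes.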
  • A Simple Ontology of Manipulation Actions Based on Hand-Object Relations

    Publication Year: 2013, Page(s): 117 - 134
    Cited by: Papers (1)
    PDF (2943 KB) | HTML

    Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, there have so far been few attempts to represent manipulation types in a way that reveals the underlying principles. Here we first discuss how manipulation actions are structured in space and time. As temporal anchor points we use the moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that in this way one can define a relatively small, tree-like manipulation ontology, with fewer than 30 fundamental manipulations. The temporal anchors also indicate when to attend to additional important information, for example trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.

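    The touch/un-touch anchor points above can be sketched as contact-change events extracted from a frame sequence. This is only an illustration of the anchoring idea; the frame data and object names are invented, not from the paper.

    ```python
    def contact_anchors(frames):
        """Return (frame_index, pair, 'touch'/'untouch') events from a
        sequence of per-frame contact sets."""
        events = []
        prev = set()
        for t, current in enumerate(frames):
            for pair in current - prev:          # contact appeared
                events.append((t, pair, "touch"))
            for pair in prev - current:          # contact disappeared
                events.append((t, pair, "untouch"))
            prev = current
        return events

    # Toy sequence: the hand touches a cup, then releases it.
    frames = [set(), {("hand", "cup")}, {("hand", "cup")}, set()]
    print(contact_anchors(frames))
    # [(1, ('hand', 'cup'), 'touch'), (3, ('hand', 'cup'), 'untouch')]
    ```

    The sequence of such events, rather than the raw frames, is what a condensed manipulation representation of this kind would operate on.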
  • An Autonomous Social Robot in Fear

    Publication Year: 2013, Page(s): 135 - 151
    Cited by: Papers (1)
    PDF (1675 KB) | HTML

    Artificial emotions are now widely used in robots, mostly to display affective states; using them to drive the robot's behavior is less common, and that is the approach the authors follow in this work. In this research, emotions are treated not in general but individually. Several emotions have been implemented in a real robot, but this paper focuses on the emotion of fear as an adaptive mechanism for avoiding dangerous situations: fear acts as a motivation that guides behavior in specific circumstances. Appraisal of fear is one of the cornerstones of this work. A novel mechanism learns to identify the harmful circumstances that damage the robot; these circumstances then elicit fear and are known as fear releasers. To demonstrate the advantages of including fear in our decision-making system, the robot's performance with and without fear is compared and the resulting behaviors are analyzed. The fear-related behaviors the robot exhibits are natural, i.e., the same kinds of behaviors can be observed in animals. Moreover, they were not preprogrammed, but learned through real interactions in the real world. All these ideas have been implemented in a real robot living in a laboratory and interacting with several objects and people.

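    The fear-releaser idea above can be sketched as a learned association between stimuli and received damage, with fear overriding the default motivation once the association is strong enough. This is a hedged toy sketch, not the authors' implementation; the class, stimuli, gains, and threshold are all invented.

    ```python
    class FearModule:
        def __init__(self, threshold=0.5, lr=0.3):
            self.fear_of = {}            # stimulus -> learned fear intensity
            self.threshold = threshold
            self.lr = lr

        def observe(self, stimulus, damage):
            """Move the learned fear for a stimulus toward the damage it caused."""
            old = self.fear_of.get(stimulus, 0.0)
            self.fear_of[stimulus] = old + self.lr * (damage - old)

        def select_behavior(self, stimulus, default_behavior):
            if self.fear_of.get(stimulus, 0.0) > self.threshold:
                return "avoid"           # fear becomes the dominant motivation
            return default_behavior

    fear = FearModule()
    for _ in range(4):                   # repeated harmful encounters
        fear.observe("loud_noise", damage=1.0)
    print(fear.select_behavior("loud_noise", "approach"))  # avoid
    print(fear.select_behavior("toy", "approach"))         # approach
    ```

    The stimulus here plays the role of a fear releaser: once learned, it changes which behavior wins the selection, without the avoidance response being preprogrammed for that stimulus.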
  • Adaptability of Tacit Learning in Bipedal Locomotion

    Publication Year: 2013, Page(s): 152 - 161
    Cited by: Papers (2)
    PDF (1978 KB) | HTML

    The capability of adapting to unknown environmental situations is one of the most salient features of biological regulation. This capability is ascribed to learning mechanisms of biological regulatory systems that differ fundamentally from the current artificial machine-learning paradigm. We consider that all computation in biological regulatory systems results from the spatial and temporal integration of simple, homogeneous computational media, such as the activities of neurons in the brain and protein-protein interactions in intracellular regulation; adaptation is the outcome of the local activities of these distributed computational media. To investigate the learning mechanism behind this computational scheme, we previously proposed a learning method that embodies these features of biological systems, termed tacit learning. In this paper, we elaborate this notion further and apply it to bipedal locomotion of a 36-DOF humanoid robot, in order to compare the adaptation capability of tacit learning with that of conventional control architectures and that of human beings. Walking experiments revealed a remarkably high adaptation capability of tacit learning in terms of gait generation, power consumption, and robustness.

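    The "integration of simple, homogeneous computational media" described above can be loosely illustrated by a controller in which each joint locally accumulates its own error signal, so the steady-state command adapts without any global model. This is a hedged sketch of that local-integration idea only, not the authors' controller; the gains and the one-joint toy "plant" are invented.

    ```python
    class LocalIntegrator:
        def __init__(self, gain=0.05):
            self.accum = 0.0
            self.gain = gain

        def command(self, error):
            self.accum += self.gain * error   # local temporal integration
            return self.accum

    # Toy plant: a joint whose position responds to the command minus a
    # constant disturbance (e.g., an unmodeled gravity load).
    joint = LocalIntegrator()
    position, target, disturbance = 0.0, 1.0, 0.4
    for _ in range(500):
        u = joint.command(target - position)
        position += 0.1 * (u - disturbance - position)
    print(round(position, 3))  # → 1.0, despite the disturbance
    ```

    Because the integrator absorbs the disturbance into its accumulated command, the joint settles at the target with no explicit disturbance model, which is the flavor of adaptation the abstract attributes to distributed local computation.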
  • Reaching for the Unreachable: Reorganization of Reaching with Walking

    Publication Year: 2013, Page(s): 162 - 172
    PDF (1121 KB) | HTML

    Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence of an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Nonwalkers, however, reached less on subsequent trials, clearly adjusting their reaching decisions after failures, whereas walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main result of our study: an increase in reaching attempts to nonreachable distances after the onset of walking.

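    A reward-mediated reaching decision of the general kind the abstract describes can be sketched as a per-distance propensity to reach, updated from the reward of each attempt. This is a hedged toy illustration, not the model that runs on the NAO robot; the distances, learning rate, and episode count are invented.

    ```python
    import random

    def train(distances, reachable_limit, episodes=1000, lr=0.1, seed=1):
        random.seed(seed)
        value = {d: 0.5 for d in distances}   # initial propensity to attempt a reach
        for _ in range(episodes):
            d = random.choice(distances)
            if random.random() < value[d]:    # attempt the reach with this propensity
                reward = 1.0 if d <= reachable_limit else 0.0
                value[d] += lr * (reward - value[d])
        return value

    # Failed reaches at far targets drive the propensity down; successes keep it up.
    v = train(distances=[20, 40, 60], reachable_limit=40)
    print({d: round(p, 2) for d, p in sorted(v.items())})
    ```

    In such a model, adding a separate reward for locomotion after the onset of walking could offset the failure signal at far targets, producing the increase in reaches to nonreachable distances that the study reports.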
  • Redundant Neural Vision Systems—Competing for Collision Recognition Roles

    Publication Year: 2013, Page(s): 173 - 186
    Cited by: Papers (1)
    PDF (3330 KB) | HTML

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs can each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role, or how the two types of specialized visual neurons could function together. In this modeling study, we compared the competence of the LGMD and the DSNs, and investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system combining the two. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD built up its collision recognition ability quickly and robustly, reducing the chance for the other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to realize in hardware for collision recognition.

    Open Access
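    The switch-gene mechanism above can be sketched as an evolutionary loop in which each agent's genome carries a gene selecting one of the redundant subsystems, and selection favors genomes whose subsystem scores well. This is a hedged toy sketch: the fitness values are invented placeholders (chosen to reflect the abstract's finding that the LGMD tunes up fastest), not the paper's results.

    ```python
    import random

    SUBSYSTEMS = ["LGMD", "DSN", "hybrid"]

    def fitness(agent):
        # Placeholder scores: assume the LGMD subsystem performs best.
        base = {"LGMD": 0.9, "DSN": 0.6, "hybrid": 0.7}[agent["switch"]]
        return base + random.uniform(-0.05, 0.05)

    def evolve(pop_size=30, generations=20, seed=0):
        random.seed(seed)
        pop = [{"switch": random.choice(SUBSYSTEMS)} for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            # Offspring copy a survivor's switch gene, with a small mutation rate.
            children = []
            for parent in survivors:
                child = {"switch": parent["switch"]}
                if random.random() < 0.1:
                    child["switch"] = random.choice(SUBSYSTEMS)
                children.append(child)
            pop = survivors + children
        return max(SUBSYSTEMS, key=lambda s: sum(a["switch"] == s for a in pop))

    print(evolve())  # the LGMD gene comes to dominate under these assumptions
    ```

    Under these assumed fitness values, the switch gene selecting the LGMD spreads through the population, mirroring the competition-for-roles outcome the study reports.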
  • Open Access

    Publication Year: 2013, Page(s): 187
    PDF (1156 KB)
    Freely Available from IEEE
  • IEEE Xplore Digital Library

    Publication Year: 2013, Page(s): 188
    PDF (1372 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Publication Year: 2013, Page(s): C3
    PDF (125 KB)
    Freely Available from IEEE
  • IEEE Transactions on Autonomous Mental Development information for authors

    Publication Year: 2013, Page(s): C4
    PDF (89 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Autonomous Mental Development (TAMD) covers computational modeling of mental development, including mental architecture, theories, algorithms, properties, and experiments.


Meet Our Editors

Editor-in-Chief
Zhengyou Zhang
Microsoft Research