2013 12th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

Date: 16-18 July 2013

Displaying Results 1 - 25 of 82
  • Brain dump: How publicly available fMRI can help inform neuronal network architecture

    Page(s): 1

    Summary form only given. Connectomics is an emergent discipline of Neuroinformatics that studies how the brain is connected, both anatomically and functionally. A number of projects throughout the neuroscience community have tackled the problem of determining interconnectivity from the nano scale - such as those enabled by laser-scanning light microscopy and semi-automated electron microscopy - to the micro scale of neurons and neuron clusters, to the macro scale of fMRI. The amount of data required to map out the macro scale is in the manageable multi-terabyte range, and largely exists today, while gathering one millimeter cubed of synaptic-level data already approaches the multi-petabyte scale and is a few years away. Between these two extremes most likely lies the fastest and most representative path to a useful map of human neuronal connectivity. Although extremely informative on many topics concerning neuronal connectivity, the smaller-scale approaches have two main limitations: 1) the considerable amount of time and energy needed to obtain one sample, which will likely not be widely representative of a typical human brain, and 2) their highly invasive or destructive nature, which renders them poorly adaptable to living humans.

  • Watson: The Jeopardy! Challenge and beyond

    Page(s): 2

    Summary form only given. Watson, named after IBM founder Thomas J. Watson, was built by a team of IBM researchers who set out to accomplish a grand challenge - build a computing system that rivals a human's ability to answer questions posed in natural language with speed, accuracy and confidence. The quiz show Jeopardy! provided the ultimate test of this technology because the game's clues involve analyzing subtle meaning, irony, riddles and other complexities of natural language in which humans excel and computers traditionally fail. Watson passed its first test on Jeopardy!, beating the show's two greatest champions in a televised exhibition match, but the real test will be in applying the underlying natural language processing and analytics technology in business and across industries. In this talk I will introduce the Jeopardy! grand challenge, present an overview of Watson and the DeepQA technology upon which Watson is built, and explore future applications of this technology.

  • Basic theories for neuroinformatics and neurocomputing

    Page(s): 3 - 4

    Summary form only given. A fundamental challenge for almost all scientific disciplines is to explain how natural intelligence is generated by physiological organs and what the logical model of the brain is beyond its neural architectures. According to cognitive informatics and abstract intelligence, the exploration of the brain is a complicated recursive problem where contemporary denotational mathematics is needed to deal with it efficiently. Cognitive psychology and medical science have been used to explain that the brain works in a certain way based on empirical observations of related activities in usually overlapping brain areas. However, the lack of precise models and rigorous causality in brain studies has not satisfied the formal expectations of researchers in computational intelligence and mathematics, because a computer, the logical counterpart of the brain, could not be explained by such a vague and empirical approach without the support of formal models and rigorous means. In order to formally explain the architectures and functions of the brain, as well as their intricate relations and interactions, systematic models of the brain are sought for revealing the principles and mechanisms of the brain at the neural, physiological, cognitive, and logical (abstract) levels. Cognitive and brain informatics investigate the brain via not only inductive syntheses through these four cognitive levels from the bottom up, in order to form theories based on empirical observations, but also deductive analyses from the top down, in order to explain various functional and behavioral instances according to the abstract intelligence theory. This keynote lecture presents systematic models of the brain from the facets of cognitive informatics, abstract intelligence, brain informatics, neuroinformatics, and cognitive psychology. A logical model of the brain is introduced that maps the cognitive functions of the brain onto its neural and physiological architectures. This work leads to a coherent abstract intelligence theory based on both denotational mathematical models and cognitive psychology observations, which rigorously explains the underpinning principles and mechanisms of the brain. On the basis of the abstract intelligence theories and the logical models of the brain, a comprehensive set of cognitive behaviors as identified in the Layered Reference Model of the Brain (LRMB), such as perception, inference and learning, can be rigorously explained and simulated. The logical model of the brain and the abstract intelligence theory of natural intelligence will enable the development of cognitive computers that perceive, think and learn. The functional and theoretical difference between cognitive computers and classic computers is that the latter are data processors based on Boolean algebra and its logical counterparts, while the former are knowledge processors based on contemporary denotational mathematics. A wide range of applications of cognitive computers have been developed at ICIC and in my laboratory, such as, inter alia, cognitive robots, cognitive learning engines, the cognitive Internet, cognitive agents, cognitive search engines, cognitive translators, cognitive control systems, and cognitive automobiles.

  • The measurement and analysis of cortical networks

    Page(s): 5

    Summary form only given. Technological advances over the past five years have led to an unprecedented level of volume and detail in the acquisition of neuroscientific data relating to the mammalian brain. However, this creates significant challenges in the processing and interpretation of the data. We will adopt a network-centric approach to tackle this, as it matches the physical structure of the brain. We present methods to extract functional brain networks from spatio-temporal time series that describe neural activity, such as in functional magnetic resonance imaging (fMRI). These networks capture intrinsic brain dynamics. We describe computational methods to extract topological regularities in such networks, including motifs and cycles. We analyze the relationship between the structure of the network, as represented by its motifs, and its function. For instance, hub neurons in the hippocampus promote synchrony, and shortest loops act as pacemakers of neural activity. We demonstrate the relevance of the network analysis techniques in understanding specific brain-related disorders such as schizophrenia and autism. For instance, the disruption of cortical networks involved in synchronization may be a contributor to autism, and schizophrenic patients have been shown to have higher connectivity within the default mode network.
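
    As a hedged illustration of the network-centric idea above (not the authors' pipeline), the sketch below builds a functional brain network by thresholding pairwise correlations between region time series; the synthetic data, threshold value, and function name are assumptions for illustration.

```python
import numpy as np

def functional_network(timeseries, threshold=0.5):
    """Build a binary functional-connectivity network from region time series.

    timeseries: array of shape (n_regions, n_timepoints), e.g. region-averaged fMRI signals.
    Regions whose activity correlates above `threshold` (in absolute value) are linked.
    """
    corr = np.corrcoef(timeseries)       # pairwise Pearson correlations
    adj = np.abs(corr) >= threshold      # keep strong (anti-)correlations
    np.fill_diagonal(adj, False)         # no self-loops
    return adj

# Example with synthetic data: 10 regions, 200 time points
rng = np.random.default_rng(0)
ts = rng.standard_normal((10, 200))
A = functional_network(ts)
print("edges:", int(A.sum()) // 2)       # each undirected edge is counted twice
```

    Once such a network is available, motifs and cycles like those discussed in the abstract can be enumerated with standard graph-analysis tooling.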

  • Cognitive diversity in perceptive informatics and affective computing

    Page(s): 6 - 7

    The advent of sensor technologies and imaging modalities has greatly increased our ability to map the brain structure and understand its cognitive function. In order for the acquired Big Data (with large volume, wide variety, and high velocity) to be valuable, innovative data-centric algorithms and systems in machine learning, data mining and artificial intelligence have been developed, designed and implemented. Due to the complexity of the brain system and its cognitive processes, a new data-driven paradigm is needed to recognize patterns in Big Data, to fuse information from different sources (systems and sensors), and to extract useful knowledge for actionable decisions.

  • Modeling communicative virtual agents based on joint activity theory

    Page(s): 8 - 16

    In order to produce agents which are effective social actors, behavior must be modeled in an appropriate way. Models exist for a wide range of agent components, but this paper focuses on communication through body expression. Additionally, rather than formulating communication models from scratch, this paper discusses modeling of agents based on existing communication theory in the real world. In this case, Herbert Clark's joint activity theory is used, where each collaborative act is regarded as a joint project between one or more parties. The generalized model is defined and its first implementation in the form of a virtual basketball game is also described. The use of a game provides an ideal testbed for the analysis of communication using Clark's theory.

  • A semantic algebra for cognitive linguistics and cognitive computing

    Page(s): 17 - 25

    Semantics is the meaning of a language unit at the levels of word, phrase, sentence, paragraph, and essay. Cognitive linguistics focuses on cognitive semantics of sentences and its interaction with syntactic structures. A denotational mathematical framework of language semantics known as semantic algebra is developed in this paper. Semantic algebra reveals the nature of semantics by a general mathematical model. On the basis of the formal semantic structure, language semantics can be deductively manipulated by a set of algebraic operations at different levels of language units. According to semantic algebra, semantic interpretation and comprehension can be embodied as a process of formal semantic aggregation in cognitive linguistics from the bottom up. Applications of semantic algebra are illustrated in computational linguistics, computing with words, cognitive informatics, and cognitive computing.

  • Chaotic simulated annealing for task allocation in a multiprocessing system

    Page(s): 26 - 35

    Two different variations of chaotic simulated annealing were applied to combinatorial optimization problems in multiprocessor task allocation. Chaotic walks in the solution space were taken to search for the global optimum or “good enough” task-to-processor allocation solutions. Chaotic variables were generated to set the number of perturbations made in each iteration of a chaotic simulated annealing algorithm. In addition, parameters of a chaotic variable generator were adjusted to create different chaotic distributions with which to search the solution space. The results show a faster convergence time than conventional simulated annealing when the solutions are far apart in the solution space.
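
    A minimal sketch of the idea, assuming a logistic map supplies the chaotic variable that sets the number of perturbations per iteration; the makespan cost function, parameter values, and names are illustrative assumptions rather than the paper's exact algorithms.

```python
import math
import random

def logistic(x, r=4.0):
    """Logistic map; with r = 4 it behaves chaotically on (0, 1)."""
    return r * x * (1.0 - x)

def makespan(assign, task_cost, n_procs):
    """Completion time of the most loaded processor (lower is better)."""
    load = [0.0] * n_procs
    for task, proc in enumerate(assign):
        load[proc] += task_cost[task]
    return max(load)

def chaotic_sa(task_cost, n_procs, iters=5000, t0=10.0, alpha=0.999, max_moves=5):
    assign = [random.randrange(n_procs) for _ in task_cost]
    best = list(assign)
    temp, x = t0, 0.3141                       # x drives the chaotic walk
    for _ in range(iters):
        x = logistic(x)
        n_moves = 1 + int(x * max_moves)       # chaotic variable sets the perturbation count
        cand = list(assign)
        for _ in range(n_moves):
            cand[random.randrange(len(cand))] = random.randrange(n_procs)
        delta = makespan(cand, task_cost, n_procs) - makespan(assign, task_cost, n_procs)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            assign = cand                      # accept downhill and occasional uphill moves
            if makespan(assign, task_cost, n_procs) < makespan(best, task_cost, n_procs):
                best = list(assign)
        temp *= alpha                          # geometric cooling schedule
    return best

tasks = [random.uniform(1, 10) for _ in range(40)]
print(makespan(chaotic_sa(tasks, n_procs=4), tasks, 4))
```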

  • Cognition as a part of computational creativity

    Page(s): 36 - 43

    Computational creativity and cognitive computing are distinct fields that have developed in a parallel fashion. In this paper, we examine the relationship between the two, concluding that the two fields overlap in one precise way: the evaluation or assessment of artifacts with respect to creativity. Furthermore, we discuss a particular instance of computational creativity, culinary recipe design, and how cognitive informatics and cognitive computation enter into the domain.

  • Exploring human dynamics in global information system implementations: Culture, attitudes and cognitive elements

    Page(s): 44 - 50

    Global information systems (IS) are often designed and implemented without due consideration or management of the human aspects of information systems. The lack of acknowledgement of human factors generates cost overruns and time delays, and ultimately could lead to a partial failure of the system or even an aborted implementation. In this paper we present the concept of the information system implementation transformation (ISIT) cloud, which covers the dynamics of global information system implementations. We depict these dynamics as interpretative readiness curves in relation to IS implementation phases. We argue that human elements impact the overall level of implementation readiness. We support our argument by discussing the role of attitudes towards IS implementations, after which we break it down into the role of culture and link our ISIT concept to the layered reference model of the brain (LRMB) to understand the role of cognitive elements within IS implementations. The related charts that we present serve as the framework for our research. The results of our approach provide an improved understanding of the human elements of global information system implementations and their organizational readiness.

  • Technicians and their learning styles preferences and cognitive processes of formal inferences

    Page(s): 51 - 60

    During the last seven years, several Argentinian national universities have offered (as part of their academic studies and programs) different undergraduate degrees under the denomination of technical degrees. The personal characteristics of their students are radically different from those of students in traditional academic programs. These students learn by practicing, and they comprehend information best by actively doing something with it. We ran an experiment with students of a technical degree in web development, in an introductory programming course using Python. Preliminary findings revealed that their learning styles are mainly active and visual, and that learners who are more verbal or have stronger concrete experience obtained higher scores in their tests. They perceive inductive tasks as easier than deductive and abductive tasks. We also found that subjects who are more efficient in solving formal inference tasks obtained higher marks in their exams. The findings can be useful not only for didactic transposition in teaching courses that take into account the balance of students' preferences, but also for developing new instructional methods and software that focus on the cognitive preferences and cognitive processes of technicians.

  • Inconsistencies in big data

    Page(s): 61 - 67

    We are faced with a torrent of data generated and captured in digital form as a result of the advancement of sciences, engineering and technologies, and various social, economic and human activities. This big data phenomenon ushers in a new era where human endeavors and scientific pursuits will be aided not only by human capital and physical and financial assets, but also by data assets. Research issues in big data and big data analysis are embedded in multi-dimensional scientific and technological spaces. In this paper, we first take a close look at the dimensions of big data and big data analysis, and then focus our attention on the issue of inconsistencies in big data and their impact on big data analysis. We offer a classification of four types of inconsistencies in big data and point out the utility of inconsistency-induced learning as a tool for big data analysis.

  • Natural language cognition of humor by humans and computers: A computational semantic approach

    Page(s): 68 - 75

    This paper deals with a contribution of computational analysis of verbal humor to natural language cognition. After a brief introduction to the growing area of computational humor and of its roots in humor theories, it describes and compares the results of a human-subject and computer experiment. The specific interest is to compare how well the computer, equipped with the resources and methodologies of the Ontological Semantic Technology, a comprehensive meaning access approach to natural language processing, can model several aspects of the cognitive behaviors of humans processing jokes from the Internet.

  • Novel multimodal template generation algorithm

    Page(s): 76 - 82

    Multimodal biometric systems have emerged as a highly successful new approach to combating problems of unimodal biometric systems such as intraclass variability, interclass similarity, data quality, non-universality, and sensitivity to noise. The idea behind cancelable biometrics, or cancelability, is to transform biometric data or features into a new form so that the stored biometric template can be easily changed in a biometric security system. In this paper, we present a novel architecture for template generation within the context of a cancelable multimodal system. We develop a novel cancelable biometric template generation algorithm using random projection and transformation-based feature extraction and selection. We further validate the performance of the proposed algorithm on a virtual multimodal face and ear database.
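
    A minimal sketch of the random-projection step commonly used for cancelable templates, assuming a key-seeded Gaussian projection of fused face and ear features; the function, keys, and dimensions are hypothetical and not the paper's algorithm.

```python
import numpy as np

def cancelable_template(feature_vec, user_key, out_dim=64):
    """Project a biometric feature vector through a key-seeded random matrix.

    Revoking a compromised template only requires issuing a new user_key,
    which yields a completely different projection of the same biometric.
    """
    rng = np.random.default_rng(user_key)                  # key-specific projection
    proj = rng.standard_normal((out_dim, feature_vec.size))
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)    # unit-norm rows
    return proj @ feature_vec

# Fuse hypothetical face and ear feature vectors, then transform
face, ear = np.random.rand(128), np.random.rand(96)
fused = np.concatenate([face, ear])
template_v1 = cancelable_template(fused, user_key=12345)
template_v2 = cancelable_template(fused, user_key=67890)   # re-issued after compromise
```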

  • Survey of measures for the structural dimension of ontologies

    Page(s): 83 - 92

    Different authors in the literature have argued that measures for ontologies can help to select a suitable ontology for the user's needs, to improve dynamic web service composition, and to predict the completed system's overall quality. However, the majority of ontology measures go no further than their definitions. We have compared a set of 51 measures according to minimal criteria that a measure must fulfill. In order to make a coherent comparison of their definitions and their intents, we have formalized the measures using the Object Constraint Language (OCL) upon the Ontology Definition Model (ODM). The formalization of the measures helps to avoid the misunderstanding and misinterpretation introduced when measures are informally defined in natural language. The formal definitions upon the ODM metamodel ensure that the measures capture the concepts they are intended to capture and could facilitate the implementation of measure extraction tools.
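
    As an informal illustration only (the survey formalizes its measures in OCL over the ODM metamodel), the hypothetical helper below computes one common structural-dimension quantity, the maximum depth of a subclass hierarchy:

```python
from collections import defaultdict

def depth_of_inheritance(subclass_of):
    """Maximum depth of a class hierarchy given (child, parent) pairs."""
    children = defaultdict(list)
    all_classes, all_children = set(), set()
    for child, parent in subclass_of:
        children[parent].append(child)
        all_classes.update((child, parent))
        all_children.add(child)
    roots = all_classes - all_children            # classes with no declared parent

    def depth(cls):
        return 1 + max((depth(c) for c in children[cls]), default=0)

    return max((depth(r) for r in roots), default=0)

print(depth_of_inheritance([("Dog", "Mammal"), ("Cat", "Mammal"), ("Mammal", "Animal")]))  # 3
```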

  • Comparison of selected cryptosystems using single-scale and poly-scale measures

    Page(s): 93 - 102

    This paper presents useful measures for the comparison of distinct cryptosystems, including (i) the public-key cryptography RSA algorithm, (ii) the elliptic-curve cryptography ElGamal algorithm, (iii) a cryptosystem based on radio background noise (RBN), and (iv) a new cryptosystem based on chaos phenomena in cellular automata. The comparison is based on (i) a single-scale measure (i.e., the marginal probability mass function, mpmf) and (ii) a poly-scale measure (i.e., the finite-sense stationarity, FSS10). Both comparison approaches use the same plaintext and computational power when testing the four cryptosystems. This paper shows experimentally that the chaos-based modular dynamical cryptosystem is (i) resistant to single-scale statistical cryptanalysis by leaving no patterns in the ciphertexts, (ii) resistant to poly-scale cryptanalysis by having a smaller stationarity window than the alternative cryptosystems, and (iii) faster than the selected algorithms from RSA, ElGamal, and natural sources of randomness (RBN).
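
    The single-scale measure named above, the marginal probability mass function, can be sketched as a byte-frequency histogram of the ciphertext together with a Shannon-entropy flatness check; the stand-in random ciphertext and function names below are assumptions for illustration.

```python
import math
import os
from collections import Counter

def byte_pmf(ciphertext: bytes):
    """Marginal probability mass function over the byte values of a ciphertext."""
    counts = Counter(ciphertext)
    n = len(ciphertext)
    return {b: c / n for b, c in counts.items()}

def entropy_bits(pmf):
    """Shannon entropy; values near 8 bits/byte indicate a nearly flat distribution."""
    return -sum(p * math.log2(p) for p in pmf.values())

ct = os.urandom(1 << 16)                  # stand-in for a real ciphertext
print(f"entropy = {entropy_bits(byte_pmf(ct)):.3f} bits/byte")
```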

  • An analysis of Koch and Minkowski fractal antennas for cognitive systems

    Page(s): 103 - 110

    Cognitive systems call for wireless communications with antennas that must meet stringent requirements. For example, software-defined radio, cognitive radio, and cognitive sensor networks operate over very wide bandwidths and require small dimensions, high gain, and omnidirectional radiation. A candidate capable of addressing such requirements is the fractal antenna. This paper describes selected simulation results for two fractal antennas: the Koch antenna and the Minkowski antenna. The variation trends of the voltage standing-wave ratio, the reflection coefficient, and the S11 parameters of the antennas have been studied for several successive iterations of the fractal shapes.
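
    For orientation, the voltage standing-wave ratio follows from the reflection coefficient via the standard relations |Γ| = 10^(S11[dB]/20) and VSWR = (1 + |Γ|) / (1 − |Γ|); the small helper below applies them (the function name and example value are illustrative, not taken from the paper's simulations).

```python
def vswr_from_s11_db(s11_db: float) -> float:
    """Convert an S11 value in dB to VSWR for a passive antenna."""
    gamma = 10 ** (s11_db / 20.0)          # |reflection coefficient|
    if gamma >= 1.0:
        raise ValueError("S11 must be below 0 dB for a passive antenna")
    return (1.0 + gamma) / (1.0 - gamma)

print(round(vswr_from_s11_db(-10.0), 2))   # a -10 dB match corresponds to VSWR of about 1.92
```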

  • A biologically inspired computational model of Moral Decision Making for autonomous agents

    Page(s): 111 - 117

    In areas such as psychology and neuroscience a common approach to study human behavior has been the development of theoretical models of cognition. In fields such as artificial intelligence, these cognitive models are usually translated into computational implementations and incorporated into the architectures of intelligent autonomous agents (AAs). The main assumption is that this design approach contributes to the development of intelligent systems capable of displaying very believable and human-like behaviors. Decision Making is one of the most investigated and computationally implemented cognitive functions. The literature reports several computational models designed to allow AAs to make decisions that help achieve their personal goals and needs. However, most models disregard crucial aspects of human decision making such as other agents' needs, ethical values, and social norms. In this paper, we propose a biologically inspired computational model of Moral Decision Making (MDM). This model is designed to enable AAs to make decisions based on ethical and moral judgment. The simulation results demonstrate that the model helps to improve the believability of virtual agents when facing moral dilemmas.

  • SS5 design of a P2P information sharing system and its application to communication support in natural disaster

    Page(s): 118 - 125

    There are many obstacles to overcome when using network services during a large-scale disaster, such as poor communication over unstable networks. In such a network environment, computers need to save network resources and balance loads among nodes without user intervention. We propose the P2P Safety Confirmation System, in which each node achieves autonomous dynamic load balancing. This system is based on our proposed structured P2P network called the Well-distribution Algorithm for an Overlay Network (Waon), which does not require any network restriction and does not incur additional maintenance costs during load balancing. We implemented the system and evaluated node behavior while nodes autonomously balance loads. Moreover, we applied this framework to build a communication support system for natural disasters.

  • Design and implementation of the teleoperation platform based on augmented reality

    Page(s): 126 - 132

    Predictive display based on virtual environment models is an effective method of solving the problem of time delay in teleoperation. However, this method does not work well without a precise virtual environment model. Thus, it is of great significance to introduce augmented reality with video feedback into teleoperation in place of virtual environment models. A teleoperation system platform based on augmented reality was developed to improve system stability and enhance system telepresence, facilitating the operator's observation and operation. An improved ARToolkit-based algorithm made the system adaptable to many types of lighting environments. This paper introduces the system structure and the realization of its key modules. Experiments such as pressing a button and pulling a drawer were also conducted to evaluate the system performance. The simulation results indicate that the proposed system can compensate for the defects of prediction and improve teleoperation system reliability.

  • A phrase-based approach based on morphological information for Japanese-Uighur statistical machine translation system

    Page(s): 133 - 136

    Japanese and the Uighur language share many similarities. Uighur belongs to the Turkic branch of the Altaic language family, and statistical machine translation between Japanese and Uighur remains an unexplored area. This paper analyzes approaches to statistical machine translation for the Uighur language; discusses how to establish a dictionary, a parallel corpus, and a phrase-based statistical machine translation system based on linguistic rules for Uighur; and presents a method for statistical machine translation based on the morphological information of Uighur, the rule base, and the dictionary.

  • Learning through overcoming incompatible and anti-subsumption inconsistencies

    Page(s): 137 - 142

    It is a grand challenge to build intelligent agent systems that can improve their problem-solving performance through perpetual learning. In our previous work, we have proposed a special type of perpetual learning paradigm called inconsistency-induced learning, or i2Learning, along with several inconsistency-specific learning algorithms. i2Learning is a step toward meeting the challenge. The work reported in this paper is a continuation of the ongoing research with i2Learning. We describe two more learning algorithms for incompatible inconsistency and anti-subsumption inconsistency in the context of i2Learning. The results will be incorporated into empirical studies as part of future work.

  • Understanding human learning using a multi-agent simulation of the unified learning model

    Page(s): 143 - 152

    Within cognitive science, computational modeling based on cognitive architectures has been an important approach to addressing questions of human cognition and learning. This paper reports on a multi-agent computational model based on the principles of the Unified Learning Model (ULM). Derived from a synthesis of neuroscience, cognitive science, psychology, and education, the ULM merges a statistical learning mechanism with a general learning architecture. A description of the single-agent model and the multi-agent environment, which translate the principles of the ULM into an integrated computational model, is provided. Validation results from simulations with respect to human learning are presented. Simulation suitability for cognitive learning investigations is discussed. Multi-agent system performance results are presented. Findings support the ULM theory by documenting a viable computational simulation of the core ULM components of long-term memory, motivation, and working memory and the processes taking place among them. Implications for research into human learning and intelligent agents are presented.

  • IDEAL: Interactive design environment for agent system with learning mechanism

    Page(s): 153 - 160

    Agent-oriented computing is a technique for generating agents that operate autonomously according to behavior knowledge. Moreover, an agent can have a characteristic called a “learning” skill, and more efficient operation of agents can be expected by realizing this skill. In this research, our aim is to support agent designers who design and develop intelligent agent systems equipped with a learning skill. We propose an interactive design environment for agent systems with a learning mechanism, using a repository-based agent framework called the DASH framework. The proposed framework enables agent designers to design and implement learning agents without a high level of expertise, and therefore reduces the designer's burden. In this paper, we explain the DASH framework, Q-learning, Profit Sharing, and the proposed design environment. Moreover, we show the effectiveness of the proposed method through several experiments.
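
    Since Q-learning is named as one of the supported learning mechanisms, a minimal tabular Q-learning sketch is included below for orientation; it follows the standard textbook formulation and is not the DASH framework's actual implementation.

```python
import random
from collections import defaultdict

class QLearner:
    """Minimal tabular Q-learning agent with epsilon-greedy action selection."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)         # maps (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:  # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

learner = QLearner(actions=[0, 1])
learner.update("s0", learner.choose("s0"), reward=1.0, next_state="s1")
```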

  • Functional Mesh Learning for pattern analysis of cognitive processes

    Page(s): 161 - 167

    We propose a statistical learning model for classifying cognitive processes based on distributed patterns of neural activation in the brain, acquired via functional magnetic resonance imaging (fMRI). In the proposed learning machine, local meshes are formed around each voxel. The distance between voxels in the mesh is determined using a functional neighborhood concept. In order to define the functional neighborhood, the similarities between the time series recorded for voxels are measured and functional connectivity matrices are constructed. Then, the local mesh for each voxel is formed by including the functionally closest neighboring voxels in the mesh. The relationship between the voxels within a mesh is estimated using a linear regression model. These relationship vectors, called Functional Connectivity aware Local Relational Features (FC-LRF), are then used to train a statistical learning machine. The proposed method was tested on a recognition memory experiment, including data pertaining to the encoding and retrieval of words belonging to ten different semantic categories. Two popular classifiers, namely k-Nearest Neighbor and Support Vector Machine, are trained to predict the semantic category of the item being retrieved, based on activation patterns during encoding. The classification performance of the Functional Mesh Learning model, which ranges between 62% and 68%, is superior to that of classical multi-voxel pattern analysis (MVPA) methods, which ranges between 40% and 48%, for ten semantic categories.
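
    A hedged sketch of the mesh-forming step described above (not the authors' implementation): for each voxel, select its k most correlated neighbors and keep the linear-regression coefficients as its relationship vector; the data shapes, k, and names are assumptions.

```python
import numpy as np

def fc_lrf_features(voxels, k=5):
    """For each voxel, regress its time series on its k functionally closest
    neighbors and keep the regression coefficients as a relationship vector.

    voxels: array of shape (n_voxels, n_timepoints).
    Returns an (n_voxels, k) feature matrix.
    """
    corr = np.corrcoef(voxels)
    np.fill_diagonal(corr, -np.inf)                   # exclude self from the neighbor search
    features = np.zeros((voxels.shape[0], k))
    for v in range(voxels.shape[0]):
        nbrs = np.argsort(corr[v])[-k:]               # k most correlated voxels
        X, y = voxels[nbrs].T, voxels[v]              # (time, k) design matrix and target
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear regression weights
        features[v] = coef
    return features

rng = np.random.default_rng(1)
data = rng.standard_normal((50, 120))                 # 50 voxels, 120 time points
print(fc_lrf_features(data).shape)                    # (50, 5)
```

    Such relationship vectors could then be fed to k-NN or SVM classifiers, as the paper does.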
