2003 International Conference on Integration of Knowledge Intensive Multi-Agent Systems

Date: 30 Sept. - 4 Oct. 2003

Displaying results 1-25 of 125
  • Multiple knowledge intelligent system

    Publication Year: 2003, Page(s): 71-76

    Knowledge is an ambiguous term, yet pinning down what it means is essential to building artificial intelligence. We therefore first discuss and define "knowledge". Within this definition, we then examine several problems that arise in constructing knowledge, and study a mechanism and structure needed to overcome them. Finally, we introduce an intelligent knowledge system (the multiple knowledge intelligent system) that permits easy construction of knowledge and is able to behave intelligently.

  • Advanced core design using multi-agents algorithm

    Publication Year: 2003, Page(s): 196-202

    We deal with the application of a multiagent algorithm to information systems in the nuclear industry, developing an original solution algorithm for the automatic core design of a boiling water reactor. The characteristics of this algorithm are that a coupling structure and coupling operation suitable for the assigned problem are assumed, and an optimal solution is obtained through the mutual interference of multiple state transitions driven by the agents. We had previously proposed an integrated optimization algorithm using a two-stage genetic algorithm for automatic core design; the objective of the present approach is to improve the convergence performance of that optimization. Comparing the multiagent algorithm with the earlier two-stage genetic algorithm shows that the proposed technique is effective in reducing the number of iterations in the search process.

  • Fast, feature-based wavelet shrinkage algorithm for image denoising

    Publication Year: 2003, Page(s): 722-728
    Cited by: Papers (3)

    We present a selective wavelet shrinkage algorithm for digital image denoising that improves upon other methods proposed in the literature while remaining algorithmically simple, yielding large computational savings. The method applies a two-threshold validation process for real-time selection of wavelet coefficients: coefficients are selected based on their absolute value, spatial regularity, and regularity across multiresolution scales. The algorithm exploits the observation that most images have regular features producing connected subband coefficients, so the subbands of wavelet-transformed images largely do not contain isolated coefficients. After coefficients are selected by magnitude, image features expressed as spatial regularity are used to further reduce the number of coefficients kept for reconstruction. The technique is unique in that it improves upon several established wavelet denoising techniques while being computationally efficient enough for real-time image processing, as we demonstrate experimentally against established methods.
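
    For illustration only, here is a minimal Python sketch of such a two-threshold selection step, assuming PyWavelets (pywt) for the transform. The threshold values and the neighbour-count test (a crude stand-in for the paper's spatial-regularity and cross-scale criteria) are our placeholder assumptions, not the authors' algorithm.

        import numpy as np
        import pywt  # PyWavelets

        def two_threshold_denoise(img, wavelet='db4', level=3, t_mag=20.0, t_support=3):
            # t_mag, t_support: placeholder thresholds (not from the paper)
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            out = [coeffs[0]]                        # keep the approximation band
            for detail in coeffs[1:]:
                kept = []
                for band in detail:                  # horizontal, vertical, diagonal
                    mask = np.abs(band) > t_mag      # threshold 1: magnitude
                    p = np.pad(mask, 1)
                    h, w = mask.shape
                    # count 8-neighbours that also pass the magnitude test
                    support = sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - mask
                    keep = mask & (support >= t_support)   # threshold 2: connectedness
                    kept.append(np.where(keep, band, 0.0))
                out.append(tuple(kept))
            return pywt.waverec2(out, wavelet)

    Keeping only magnitude-passing coefficients that have supporting neighbours is what discards the isolated coefficients the abstract attributes to noise.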

  • Higher spiritual abilities (prolegomena to a physical theory)

    Publication Year: 2003, Page(s): 406-411

    Mathematical mechanisms of mind operations are described, including concepts, understanding, imagination, thinking, learning, instincts, consciousness, the unconscious, intuitions, and emotions, including aesthetic emotions. A few basic mathematical principles ("physical laws of mind") are elucidated that describe the multiplicity of mind phenomena in correspondence with intuition and with psychological and neural data. Predictions and testing of the theory are briefly discussed.

  • A family of agent based models

    Publication Year: 2003, Page(s): 312-317

    With the increasing need to thwart global terrorism and for the United States to effectively protect its citizens and infrastructure from attack, analysts must be provided with an improved set of tools and capabilities to evaluate and predict potential threats. We describe preliminary efforts in developing an approach based on a family of agent-based models to represent, evaluate, and predict potential terrorist activity. This system of models will be employed to search a large information repository (Infosphere) of intelligence, public-record, financial, and other data to identify suspicious (or potentially suspicious) activity. The approach relies heavily on concepts from modeling field theory (MFT) to identify data relationships. As part of this effort, a simple example is provided to illustrate the general approach and to aid in evaluating its feasibility for the large-scale counter-terrorism problem.

  • Coordinating activity in knowledge-intensive dynamic systems

    Publication Year: 2003, Page(s): 105-108

    We consider the problem of coordinating activity in knowledge-intensive dynamic systems (KIDS) - large-scale, multiagent systems in which agents with significant individual capabilities work together to accomplish complex, knowledge-intensive tasks. Given the time pressure and resource limitations under which a KIDS typically operates, the establishment of plans and schedules can significantly improve organizational performance. However, there are several complicating factors: (1) there is diversity and novelty in the structure of processes that must be executed over time, requiring tight coupling of action selection with resource allocation, (2) processes are unpredictable in their outcomes and require continual dynamic adjustment and revision, (3) the collective capabilities of the agents of a KIDS are its primary asset and, to minimize future resource limitations, task allocation should consider the side effects of acquired expertise, and (4) KIDS are large-scale enterprises, requiring an ability to effectively distribute decision-making. Our previous research has developed constraint-based search techniques for continuous, dynamic scheduling that have been successfully applied to complex, large-scale transportation and manufacturing domains. We outline current work aimed at extending these models to address the above issues and provide an effective basis for managing KIDS.

  • SUO communicator: agent-based support for small unit operations

    Publication Year: 2003, Page(s): 77-82
    Cited by: Papers (2)

    In small unit operations (SUO), people and equipment with distinct roles and information needs work together to meet a mission objective, often while geographically distributed. We describe the SUO Communicator, a combat situation awareness, information management, and communication system for use in small unit operations. The design is agent-based. Key ideas include email-based communication that is scalable and supports disconnected operation over tethered and wireless LANs and WANs; XML-based messaging, including init messages that provision a generic agent (egent) into a role-enabled egent; and an explicit representation of scenarios as a bcc log of all messages sent, which can also be used to drive simulations and for after-action analysis.

  • The virtual design team (VDT): a multi-agent analysis framework for designing project organizations

    Publication Year: 2003, Page(s): 115-120
    Cited by: Papers (3)

    The virtual design team (VDT) is a multiagent modeling and simulation framework developed over the past 15 years to help project managers design work processes and organizations for highly concurrent, "fast-track" project work. VDT has been extensively validated as an analysis tool for project organizations engaged in routine - albeit complex and fast-track - product development efforts. Three important limitations of VDT are: (1) it models only routine projects for which all tasks, agents, and the relationships between and among them can be prespecified and held constant; (2) it assumes that all exceptions are handled hierarchically; and (3) it ignores any goal incongruency among project participants. We describe VDT, highlight its limitations, and present ongoing research that attempts to address them.

  • User-centric service brokerage in a personal multi-agent environment

    Publication Year: 2003, Page(s): 729-734

    Service brokerage is a central issue in multi-agent systems. Existing approaches often address brokering among agents in terms of cooperative problem solving. Our work, however, deals with brokering the services offered to users in a multiagent system: the user should be able to choose the desired services and assign concrete tasks to the agents offering those services. To this end, we base our work on the "semantic Web services" approach. After pointing out the problems users and agents face, we discuss some related approaches and introduce the result of our work: a broker agent provided as a user service of the agent platform.

  • On a computational model of the Peircean semiosis

    Publication Year: 2003, Page(s): 703-708
    Cited by: Papers (2)

    We propose a computational approach to the Peircean triadic model of semiosis (meaning processes). We investigate several theoretical constraints on the feasibility of simulated semiosis within digital computers. These constraints, which are basic requirements for the simulation of semiosis, concern the synthesis of irreducible triadic relations (sign-object-interpretant). We examine the internal organization of the triad - the relative position of its elements and how they relate to each other through determinative relations - and suggest a computational approach based on self-organization principles, in which relations of determination are described as emergent properties of the system.

  • Identifying sets of key players in a network

    Publication Year: 2003, Page(s): 127-131
    Cited by: Papers (5)

    Two problems are considered: finding a set of nodes that is maximally connected to all other nodes (KPP-Pos), and finding a set of nodes whose removal would result in a residual network of minimum cohesion. The problems are solved by the introduction of connectedness and fragmentation metrics, which are incorporated in a combinatorial optimization procedure.
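
    As a rough illustration of the second problem only - our sketch, using one standard distance-free fragmentation measure and a greedy search, not necessarily the paper's exact metrics or optimization procedure:

        def fragmentation(adj, removed=frozenset()):
            # F = 1 - sum_k s_k(s_k - 1) / (n(n - 1)), where the s_k are the
            # component sizes of the residual graph; F = 1 is total fragmentation.
            nodes = [v for v in adj if v not in removed]
            n, seen, acc = len(nodes), set(), 0
            for v in nodes:
                if v in seen:
                    continue
                stack, size = [v], 0
                while stack:
                    u = stack.pop()
                    if u in seen:
                        continue
                    seen.add(u)
                    size += 1
                    stack.extend(w for w in adj[u] if w not in seen and w not in removed)
                acc += size * (size - 1)
            return 1.0 - acc / (n * (n - 1)) if n > 1 else 1.0

        def key_players_neg(adj, k):
            # Greedy placeholder for the combinatorial search: repeatedly remove
            # the node whose deletion most fragments the graph.
            removed = set()
            for _ in range(k):
                best = max((v for v in adj if v not in removed),
                           key=lambda v: fragmentation(adj, frozenset(removed | {v})))
                removed.add(best)
            return removed

    On the path graph {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}, key_players_neg(..., 1) removes the cut vertex 'b', which drives fragmentation to 1.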

  • Agent-based dynamic information security model

    Publication Year: 2003, Page(s): 19-24

    From a decision-support point of view, we propose an agent-based dynamic information security model (ADISM) that automatically drives intrusion inspection around the clock by dynamically integrating all of the distributed information security systems, executing sound security strategies, and recovering from the damage when systems come under attack. Because operation is distributed, hackers are unlikely to wreck the whole system, so the model is expected to yield cost-effective information security solutions.

  • A multiple objectives optimization approach to robotic teams' analysis, design and control

    Publication Year: 2003, Page(s): 31-34
    Cited by: Papers (1)

    An optimization approach to distributed intelligent system design and control is presented. It is expected to enhance the autonomous decision-making capabilities of subsystems. It is applicable to homogeneous or heterogeneous clusters of autonomous agents that must collaborate to achieve a common goal, acting in a coordinated manner that provides situation awareness, collision avoidance, and operation in complex environments under degraded communications or sensor failures.

  • Artificial intelligence and sensor fusion

    Publication Year: 2003, Page(s): 591-595

    Future US Air Force sensor systems must be able to adapt to changing environments in real time. A capabilities-based modeling approach is a new method being promoted for building next-generation weapon systems. To accommodate this modeling approach, the Department of Defense (DoD) is promoting the use of waveform diversity for radar systems. Building a weapon system that includes one or more waveform-diverse radar systems will require artificial intelligence (AI) tools and techniques. We investigate leveraging the AI tools being developed by the Semantic Web and DARPA's DAML program - specifically, the building of ontologies and Resource Description Framework (RDF) descriptions for sensor systems - so that sensors can efficiently communicate and share their data.

  • A toolkit for the search of the most general interpretable hypotheses

    Publication Year: 2003, Page(s): 318-323

    We apply first-order logic (FOL) to formalize the problem of "meaningful generalization": finding the most general and easily interpretable hypotheses. Our software toolkit, LogicMill, is designed to solve this meaningful generalization problem as well as two related problems: searching for a maximal subsystem of mutually independent attributes, and aggregating the hypotheses into concise rules. We describe all three algorithms used for these purposes. Application of the toolkit to data from various public domains demonstrates that LogicMill not only produces concise, interpretable hypotheses and decision rules, but also competes in prognostic power with traditional predictive learning algorithms.

  • Market-based task allocation for dynamic processing environments

    Publication Year: 2003, Page(s): 109-114

    Flexible and large-scale information processing across enterprises entails dynamic and decentralized control of workflow through adaptive allocation of knowledge and processing resources. Markets comprise a well-understood class of mechanisms for decentralized resource allocation, where agents interacting through a price system direct resources toward their most valued uses as indicated by these prices. The information-processing domain presents several challenges for market-based approaches, including (1) representing knowledge-intensive tasks and capabilities, (2) propagating price signals across multiple levels of information processing, (3) handling dynamic task arrival and changing priorities, and (4) accommodating the increasing-returns and public-good characteristics of information products. A market gaming environment provides a methodology for testing alternative market structures and agent strategies, and evaluating proposed solutions in a realistic decentralized manner.
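
    A toy sketch of price-directed allocation in this spirit (our illustration; the function, its data layout, and the simple tatonnement rule are assumptions, not the paper's mechanism):

        def allocate(tasks, agents, values, capacity, step=0.1, rounds=500):
            # values[(a, t)]: agent a's private value for task t.
            # Prices rise on over-demanded tasks until demand is feasible.
            prices = {t: 0.0 for t in tasks}
            demand = {t: [] for t in tasks}
            for _ in range(rounds):
                demand = {t: [] for t in tasks}
                for a in agents:
                    # each agent demands its best tasks at current prices
                    ranked = sorted(tasks, key=lambda t: values[a, t] - prices[t],
                                    reverse=True)
                    for t in ranked[:capacity[a]]:
                        if values[a, t] - prices[t] > 0:
                            demand[t].append(a)
                over = [t for t in tasks if len(demand[t]) > 1]
                if not over:
                    break
                for t in over:
                    prices[t] += step        # excess demand raises the price
            return {t: d[0] for t, d in demand.items() if d}, prices

    When two agents want the same task, its price rises until the lower-value agent drops out - the decentralized steering of resources toward their most valued uses that the abstract describes.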

  • The assessment of knowledge, in theory and in practice

    Publication Year: 2003, Page(s): 609-615
    Cited by: Papers (4)

    This presentation adapts material from a book and many scholarly articles. It reviews the main ideas of a novel theory for the assessment of a student's knowledge in a topic and gives details of a practical implementation in the form of a software system available on the Internet. The basic concept of the theory is the 'knowledge state': the complete set of problems that an individual is capable of solving in a particular topic, such as elementary algebra. The task of the assessor consists in uncovering the particular state of the student being assessed. Even though the number of knowledge states for a topic may exceed several hundred thousand, such numbers are well within the capacity of current home or school computers. The result of an assessment consists of two short lists of problems, which may be labelled 'what the student can do' and 'what the student is ready to learn'. In the most important applications of the theory, these two lists specify the exact knowledge state of the individual being assessed.
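
    For illustration (our sketch, using the fringe notions of knowledge space theory, on which this line of work builds; variable names are ours): once the family of feasible knowledge states is known, the two lists can be read directly off a student's state as its inner and outer fringes.

        def fringes(state, structure):
            # structure: family of feasible knowledge states (frozensets of items);
            # state: the assessed student's knowledge state.
            domain = set().union(*structure)
            inner = {q for q in state if state - {q} in structure}           # "can do"
            outer = {q for q in domain - state if state | {q} in structure}  # "ready to learn"
            return inner, outer

        # Tiny example structure over items a, b, c:
        states = {frozenset(), frozenset('a'), frozenset('ab'),
                  frozenset('ac'), frozenset('abc')}
        print(fringes(frozenset('ab'), states))  # ({'b'}, {'c'})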

  • A proposal on a model of an autonomous agent using the meta-level architecture

    Publication Year: 2003, Page(s): 83-87

    We propose a model of the structure of an autonomous agent using the meta-level architecture, which consists of two levels: a base level and a meta level. At the base level of an agent, the usual computations are performed. At the meta level, several supporting computations are executed, such as load balancing and communication with other agents. Multiagent applications based on our model can therefore not only carry out their usual computations but also adjust their computational environment dynamically. A computer that is a component of a cluster can be regarded as an agent, so a cluster of personal computers and/or workstations is one such application. To make such applications easy to implement, we also propose constructing systems based on our model using features of MPI (the message passing interface).
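
    A minimal sketch of the two-level separation in plain Python (class names and the balancing policy are our illustration; the authors' actual proposal targets MPI-based clusters):

        class BaseLevel:
            # base level: the agent's usual domain computation
            def step(self, work):
                return sum(x * x for x in work)      # stand-in for a real task

        class MetaLevel:
            # meta level: reflective concerns such as load balancing and
            # communication with other agents
            def __init__(self, peers):
                self.peers = peers
            def balance(self, queue):
                if len(queue) > 1 and self.peers:    # placeholder policy: shed
                    target = min(self.peers, key=lambda p: len(p.queue))
                    if len(target.queue) < len(queue) - 1:
                        target.queue.append(queue.pop())

        class Agent:
            def __init__(self):
                self.queue = []
                self.base, self.meta = BaseLevel(), MetaLevel(peers=[])
            def run_once(self):
                self.meta.balance(self.queue)        # meta level acts first,
                if self.queue:                       # then the base level computes
                    return self.base.step(self.queue.pop(0))

    The point of the architecture is that balance() can change where work runs without the base-level computation knowing anything about it.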

  • Contextual vocabulary acquisition: from algorithm to curriculum

    Publication Year: 2003, Page(s): 306-311

    We are developing algorithms for computational contextual vocabulary acquisition (CVA; computing a meaning for an unknown word from context) and applying the computational CVA system to build an educational curriculum for enhancing students' abilities to use CVA strategies in their reading. The knowledge gained from case studies of students using our CVA techniques feeds back into further development of our computational theory.

  • Threat response management system (TRMS)

    Publication Year: 2003, Page(s): 547-554

    We describe the motivations, concept of operation, and architecture of a threat response management system (TRMS): an information-integrated, dynamically reconfigurable, simulation-based decision support system that is deployable as (i) a dynamic planning and threat readiness training tool and (ii) an emergency response execution decision support tool. TRMS functions include (i) automated threat detection, (ii) collaborative (multinational and multiagency) plan development, (iii) social network representation and analysis, and (iv) agent-based simulation and optimization. Components of the TRMS solution are currently being developed and tested through an ongoing Navy-funded research project.

  • Software tool for agent-based distributed data mining

    Publication Year: 2003, Page(s): 710-715
    Cited by: Papers (1)

    We describe a multi-agent technology and software tool for the joint engineering, implementation, deployment, and use of applied multi-agent distributed data mining and distributed decision making systems. The core problem of distributed data mining and decision making technology is not the particular data mining techniques, since the respective library of classes can be extended when necessary; rather, it is the development of an infrastructure and protocols supporting the coherent collaborative work of the distributed software components (agents) responsible for data mining and decision making. We focus on the architecture of a multiagent distributed data mining and decision making system, on its design technology and software tool, and on the protocols of the software tool agents' interaction, mainly in distributed data mining and decision making processes. The presented software tool has been implemented and validated on several case studies from the data fusion domain.

  • Multisensor-multitarget sensor management using geometric objective functions

    Publication Year: 2003, Page(s): 349-354
    Cited by: Papers (1)

    Multisensor-multitarget sensor management is at root a problem in nonlinear control theory. We apply newly developed theories for sensor management based on a Bayesian control-theoretic foundation. Finite-set statistics (FISST) and the Bayes recursive filter for the entire multisensor-multitarget system are used with information-theoretic objective functions in developing the sensor management algorithms. The theoretical analysis indicates that some of these objective functions are geometric and lead to potentially tractable sensor management algorithms when used in conjunction with MHC (multihypothesis correlator)-like algorithms. We show examples of such algorithms and present a preliminary evaluation of their performance against simulated scenarios.
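
    For orientation, one common information-theoretic formulation of the sensor-management step (background from the literature, shown as our illustration; the paper's geometric objective functions are variants within this framework, not this exact form) selects the sensor action $u$ maximizing the expected gain of the multitarget posterior over the prediction:

        u^* = \arg\max_u \; \mathbb{E}_{Z \mid u}\!\left[ D\!\left( f_{k+1|k+1}(\cdot \mid Z, u) \,\middle\|\, f_{k+1|k} \right) \right],

    where $D$ is, for example, the Kullback-Leibler divergence and the $f$'s are FISST multitarget densities.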

  • A k-partition, graph theoretic approach to perceptual organization

    Publication Year: 2003, Page(s): 336-342
    Cited by: Papers (1)

    We present a k-partition, graph-theoretic approach to perceptual organization. Principal results include a generalization of the bipartition normalized cut to a k-partition measure, and the derivation of a suboptimal, polynomial-time solution to the NP-hard k-partition problem. The solution is obtained by first relaxing to an eigenvalue problem, followed by a heuristic procedure to enforce feasible solutions. This approach departs from the standard graph k-partitioning literature in that the partition measure used is nonquadratic, and from the image segmentation literature in that k-partitioning is used in place of recursive bipartition. We apply the approach to segmentation of infrared (IR) images and show representative segmentation results; initial results show promise for further investigation.
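
    As background, a sketch of the standard spectral pipeline this family of methods follows (illustrative only: it uses the usual quadratic normalized-cut relaxation plus a k-means-style rounding, not the paper's nonquadratic measure or its specific heuristic):

        import numpy as np

        def spectral_k_partition(W, k, iters=20, seed=0):
            # W: symmetric nonnegative affinity matrix of the image graph.
            d = W.sum(axis=1)
            D_isqrt = np.diag(1.0 / np.sqrt(d))
            L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt   # normalized Laplacian
            _, vecs = np.linalg.eigh(L_sym)                  # ascending eigenvalues
            U = vecs[:, :k]                                  # relaxed solution
            U = U / np.linalg.norm(U, axis=1, keepdims=True)
            rng = np.random.default_rng(seed)
            centers = U[rng.choice(len(U), size=k, replace=False)]
            for _ in range(iters):                           # rounding: snap the
                labels = np.argmax(U @ centers.T, axis=1)    # relaxation to a
                for j in range(k):                           # feasible partition
                    if np.any(labels == j):
                        centers[j] = U[labels == j].mean(axis=0)
            return labels

    The eigenvalue step is the relaxation the abstract mentions; the rounding loop plays the role of the heuristic feasibility-enforcement procedure.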

  • Object pattern recognition below clutter in images

    Publication Year: 2003, Page(s): 385-390
    Cited by: Papers (4)

    We are developing a technique for recognizing patterns below clutter based on modeling field theory. The presentation briefly summarizes the difficulties related to the combinatorial complexity of the computations and analyzes the fundamental limitations of existing algorithms such as multiple hypothesis testing. A new concept, dynamic logic, is introduced along with an algorithm suitable for pattern recognition in images with intense clutter. This mathematical technique is inspired by the analysis of biological systems such as the human brain, which combine conceptual understanding with emotional evaluation and overcome the combinatorial complexity of model-based techniques. The presentation provides examples of object pattern recognition below clutter.
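
    As a sketch of the flavour of dynamic logic (our summary of the modeling-field-theory literature, not necessarily this paper's exact equations): fuzzy association weights between datum $n$ and model $m$,

        f(m \mid n) = \frac{r_m \, \ell(\mathbf{x}_n \mid m)}{\sum_{m'} r_{m'} \, \ell(\mathbf{x}_n \mid m')},

    are iterated together with likelihood-gradient updates of the model parameters while the models' fuzziness is gradually reduced, which avoids the combinatorial enumeration of data-to-model assignment hypotheses.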

  • Unified Bayes multitarget fusion of ambiguous data sources

    Publication Year: 2003, Page(s): 343-348
    Cited by: Papers (2)

    The fact that evidence can take highly disparate forms has been a major stumbling block in multisource-multitarget data fusion. Evidence can have at least three forms: unambiguous data (easily amenable to probabilistic analysis); ambiguously-generated data (difficult to characterize probabilistically); and ambiguous data (difficult to even model mathematically). We summarize a unified, systematic, and fully probabilistic methodology for fusing all three data types with the aim of detecting, tracking, and identifying multiple targets. The basic tool is the generalized likelihood function, which hedges against the inherent uncertainties associated with ambiguous and ambiguously-generated data.
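
    For background (a standard form from the FISST literature, not a quotation from this paper), the generalized likelihood $\tilde{f}(Z \mid X)$ simply takes the place of the conventional likelihood in the multitarget Bayes filter update:

        f_{k+1|k+1}(X \mid Z^{(k+1)}) \;\propto\; \tilde{f}(Z_{k+1} \mid X)\, f_{k+1|k}(X \mid Z^{(k)}),

    so ambiguous and ambiguously-generated data enter the recursion through $\tilde{f}$ rather than through an ad hoc fusion rule.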
