
Proceedings of the 13th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2001)

Date: 7-9 November 2001


Displaying Results 1 - 25 of 46
  • Proceedings 13th IEEE International Conference on Tools with Artificial Intelligence. ICTAI 2001

  • Author index

    Page(s): 351
  • Inconsistent requirements: an argumentation view

    Page(s): 79 - 86

    We present a logical framework for reasoning about inconsistent requirements in the context of the multi-viewpoint requirements engineering process. To analyse the sources of inconsistencies and to reason with inconsistent requirements, we present an argumentation view of the requirements. Intuitively, argumentation is a tool for reasoning with inconsistent knowledge: requirements are defined in terms of arguments (a conclusion with its support); then a class of acceptable arguments is built (arguments with no counterarguments). We propose to characterize different classes of requirements, ordered from weakly confident to strongly confident (i.e. consistent). We present inference rules, derived from these classes of requirements, to support both intra- and inter-viewpoint reasoning. We show how this work helps requirements engineers analyse inconsistent fragments of requirements.

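The abstract's notion of acceptable arguments (arguments with no counterarguments) can be sketched directly. The flat attack relation and the requirement names below are our illustration, not the paper's formalism:

```python
def acceptable_arguments(arguments, attacks):
    """An argument is acceptable here iff no other argument attacks it.
    This is a deliberate simplification: full argumentation semantics
    (e.g. defence and reinstatement) are richer than this one-step test.
    arguments: set of argument names; attacks: set of (attacker, target) pairs."""
    attacked = {target for _attacker, target in attacks}
    return {a for a in arguments if a not in attacked}
```

With requirements r1..r3 and a single counterargument (r1, r2), only r1 and r3 would be acceptable under this simplification.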
  • Interleaved backtracking in distributed constraint networks

    Page(s): 33 - 41

    The adaptation of software technology to distributed environments is an important challenge today. In this work we combine parallel and distributed search, adding the potential speed-up of parallel exploration to the processing of distributed problems. This paper extends DIBT, a distributed search procedure operating in distributed constraint networks. The extension is twofold. First, the procedure is updated to cope with the delayed-information problems that arise in heterogeneous systems. Second, the search is extended to simultaneously explore independent parts of a distributed search tree. In this way we introduce parallelism into distributed search, which leads to interleaved distributed intelligent backtracking (IDIBT). Our results show that 1) insoluble problems do not greatly degrade performance over DIBT and 2) superlinear speed-up can be achieved when the distribution of solutions is nonuniform.

  • Resource coordination in single agent and multiagent systems

    Page(s): 18 - 24

    An intelligent multiagent system includes a number of software agents interacting to solve a problem. Various agents are responsible for planning different tasks, and these tasks must be coordinated to solve complex problems. Coordination among the agents is necessary for a number of reasons, and numerous issues have to be tackled to achieve it efficiently. The focus of this paper is resource coordination. When an agent lacks a resource to achieve a goal, it may choose one of two options: it can acquire another resource, or it can change the goal so that it can be achieved with the resource at hand. We present a cost-benefit analysis that determines whether the requesting agent receives the requested resource. We have implemented resource coordination in a multiagent system called COMAS, and we present empirical results showing the relative goal satisfaction achieved in the PRODIGY planner with and without resource coordination.

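A minimal sketch of the cost-benefit decision described above, assuming a simple scalar model; the function name and the inequality are hypothetical illustrations, not COMAS's actual analysis:

```python
def should_transfer(requester_gain, owner_loss, transfer_cost):
    """Grant the resource only if the requester's expected gain in goal
    value exceeds what the owner loses plus the cost of the transfer
    itself (a toy scalar model of the cost-benefit trade-off)."""
    return requester_gain > owner_loss + transfer_cost
```

In a refusal case the requester would fall back on the abstract's second option: revising the goal so it is achievable with the resource at hand.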
  • Dynamic load-balancing via a genetic algorithm

    Page(s): 121 - 128

    We present a genetic algorithm (GA) scheduling routine that finds well-balanced schedules, often at relatively low cost. Incoming tasks (of varying durations) accumulate and are then periodically scheduled, in small batches, to the available processors. Two important priorities for our scheduling work are that loads on the processors be well balanced and that scheduling itself remain cheap in comparison to the actual productive work of the processors. We include experimental results, exploring a variety of distributions of task durations, which show that our scheduler consistently produces well-balanced schedules, and quite often does so at relatively low cost.

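A GA batch scheduler in the spirit of this abstract can be sketched as follows. The chromosome encoding (one processor index per task), the operators, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import random

def ga_schedule(durations, n_procs, pop_size=30, generations=60, seed=0):
    """Toy GA scheduler: assign a batch of tasks (>= 2) to processors so the
    load of the busiest processor (the makespan) is minimized, i.e. the
    loads are balanced."""
    rng = random.Random(seed)
    n = len(durations)

    def makespan(chrom):
        loads = [0.0] * n_procs
        for task, proc in enumerate(chrom):
            loads[proc] += durations[task]
        return max(loads)

    # Random initial population of task->processor assignments.
    pop = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=makespan)]                  # elitism: keep the best
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 2), key=makespan)   # tournament selection
            b = min(rng.sample(pop, 2), key=makespan)
            cut = rng.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                      # point mutation
                child[rng.randrange(n)] = rng.randrange(n_procs)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=makespan)
    return best, makespan(best)
```

For four tasks of duration 5 on two processors, a balanced schedule has makespan 10, which this sketch finds almost immediately.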
  • An on-line repository for embedded software

    Page(s): 314 - 321

    The use of commercial off-the-shelf (COTS) components can significantly reduce the time and cost of developing large-scale software systems. However, the component-based approach poses some difficult problems. First, developers have to be able to retrieve components effectively, which requires extensive knowledge of the available components and how to retrieve them. After identifying the components, developers also face a steep learning curve to master their use. We are developing an On-line Repository for Embedded Software (ORES) to facilitate component management and retrieval. In this paper, we address the issues of designing software repository systems that assist users in obtaining appropriate components and in learning to understand and use them efficiently. We use an ontology to construct an abstract view of the organization of the components in ORES. The ontology structure facilitates repository browsing and effective search. We also develop a set of tools to assist with component comprehension, including a tutorial manager and a component explorer.

  • Towards ontological reconciliation for agents

    Page(s): 3 - 10

    This paper addresses issues faced by agents operating in large-scale multi-cultural environments. We argue for systems that are tolerant of heterogeneity. The discussion is illustrated with a running example of researching and comparing university web sites, a realistic scenario representative of many current knowledge management tasks that would benefit from agent assistance. We discuss efforts of the Intelligent Agent Laboratory toward designing such tolerant systems, giving a detailed presentation of the results of several implementations.

  • Developing collaborative Golog agents by reinforcement learning

    Page(s): 195 - 202

    We consider applications where agents have to cooperate without any communication taking place between them, apart from the fact that they can see part of the environment in which they act. We present a multi-agent system, defined in Golog, that needs to service tasks whose value degrades over time. Initial plans, reflecting prior knowledge about the environment, are expressed as Golog procedures and provided to the agents. The agents are then trained using reinforcement learning to ensure coordination both at the action level and at the plan level. This yields better scalability and increased performance of the system.

  • Visualization support for user-centered model selection in knowledge discovery in databases

    Page(s): 228 - 235

    The process of knowledge discovery in databases inherently consists of several steps that are necessarily iterative and interactive. In each application, to go through this process the user has to exploit different algorithms and settings, which usually yield different discovered models. The selection of appropriate discovered models, or of the algorithms that produce them (referred to as model selection), requires meta-knowledge of algorithms, models, and model performance metrics, and is generally a difficult task for the user. Given this difficulty, we consider the ease of model selection crucial to the success of real-life knowledge discovery activities. Unlike most related work, which aims at automatic model selection, in our view model selection should be a semiautomatic process requiring an effective collaboration between the user and the discovery system. For such a collaboration, our solution is to let the user easily try various alternatives and compare competing models quantitatively, by performance metrics, and qualitatively, by effective visualization. This paper presents our research on such model selection and visualization in the development of a knowledge discovery system called D2MS.

  • Smart cars as autonomous intelligent agents

    Page(s): 25 - 32

    This paper presents a study on the behavior of smart cars considered as autonomous intelligent agents. In particular, a smart car can behave as an autonomous agent by extracting information from the surrounding environment (road, highway) and determining its position in it, detecting the motion and tracking the behavioral patterns of other moving objects (automobiles) in its own surrounding space, exchanging information via the Internet with other moving objects (if possible), and negotiating its safety during travel with the other moving objects.

  • SHAMASH: an AI tool for modelling and optimizing business processes

    Page(s): 306 - 313

    In this paper we describe SHAMASH, a tool for modelling and automatically optimizing business processes. The main features that differentiate it from most current related tools are its ability to define and use organisation standards and functional structure, and to perform automatic simulation and optimisation of models. SHAMASH is a knowledge-based system, and we include a discussion of how knowledge acquisition takes place. Furthermore, we give a high-level description of the architecture, the conceptual model, and other important modules of the system.

  • New hybrid method for solving constraint optimization problems in anytime contexts

    Page(s): 325 - 332

    This paper describes a new hybrid method for solving constraint optimization problems in anytime contexts. We use the valued constraint satisfaction problem (VCSP) framework to model numerous discrete optimization problems. Our method (VNS/LDS+CP) combines a variable neighborhood search (VNS) scheme with limited discrepancy search (LDS), using constraint propagation (CP) to evaluate the cost and legality of moves made by VNS. Experiments on real-world problem instances demonstrate that our method clearly outperforms both LNS/CP/GR and standard local search methods such as simulated annealing.

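The LDS component can be illustrated on binary choice trees. This is a textbook-style sketch, without the VNS and constraint-propagation machinery of VNS/LDS+CP, and the function names are ours:

```python
def lds(n, heuristic, feasible, max_disc):
    """Limited discrepancy search over n binary choices: explore assignments
    that deviate from the value heuristic at most d times, for d = 0..max_disc.
    heuristic(depth, partial) -> preferred value (0/1); the other value counts
    as one discrepancy. feasible(partial) prunes dead-end prefixes.
    Note: this classic formulation revisits low-discrepancy leaves at higher d."""
    def probe(partial, d):
        depth = len(partial)
        if depth == n:
            yield list(partial)
            return
        preferred = heuristic(depth, partial)
        for value, cost in ((preferred, 0), (1 - preferred, 1)):
            if cost <= d:
                partial.append(value)
                if feasible(partial):
                    yield from probe(partial, d - cost)
                partial.pop()
    for d in range(max_disc + 1):
        yield from probe([], d)
```

With max_disc=0 only the heuristic's preferred path is probed; each extra unit of discrepancy admits one more deviation from it.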
  • Artificial neural networks in hydrological watershed modeling: surface flow contribution from the ungaged parts of a catchment

    Page(s): 367 - 374

    Watershed modeling is often faced with the difficulty of determining the flow contribution from the ungaged sections of the catchment. Where the main concern is making accurate streamflow forecasts at specific watershed locations, it is cost-effective and efficient to implement a simple system-theoretic model. In this paper artificial neural networks (ANNs) are used as system-theoretic models of the ungaged flows. Using data from the Kafue River sub-catchment in Zambia and a simple reservoir routing model, an estimate of the flow contribution from the ungaged sections is derived. Rainfall, evaporation, and previous-time-step flow are fed as inputs to a series of feedforward backpropagation ANNs, with the current derived flow as the target output. The best-performing ANNs are compared with autoregressive moving average models with exogenous inputs (ARMAX); they give accurate and more robust long-term forecasts than the best-performing ARMAX models, making ANNs a viable alternative for forecasting.

  • Successive search method for valued constraint satisfaction and optimization problems

    Page(s): 341 - 347

    In this paper we introduce a new method based on Russian Doll Search (RDS) for solving optimization problems expressed as valued constraint satisfaction problems (VCSPs). The RDS method solves problems of size n (where n is the number of variables) by replacing one search with n successive searches on nested subproblems, using the results of each search to produce a better lower bound. The main idea of our method is to introduce the variables through the successive searches not one by one but in sets of k variables. We present two variants of our method: the first, denoted kfRDS, where the number k is fixed; the second, kuRDS, where k can vary. Finally, we show that our method improves on RDS for the daily management of an Earth observation satellite.

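The nested-subproblem idea of plain RDS (i.e. k = 1, not the paper's kfRDS/kuRDS variants) can be sketched for a binary VCSP; the encoding below is our simplification:

```python
def rds(domains, pair_cost):
    """Russian Doll Search sketch for a binary VCSP.
    domains: list of per-variable domains, in a fixed ordering.
    pair_cost(i, vi, j, vj): non-negative cost of assigning vi, vj to
    variables i, j. Returns the optimal total cost.
    Solves the nested subproblems on variables i..n-1 for i = n-1 down to 0;
    opt[j] (the optimum of the inner 'doll') is a valid lower bound on the
    cost of completing a partial assignment at depth j."""
    n = len(domains)
    opt = [0.0] * (n + 1)          # opt[i] = optimum on variables i..n-1
    for i in range(n - 1, -1, -1):
        best = float('inf')

        def bb(j, assignment, acc):
            nonlocal best
            if j == n:
                best = min(best, acc)
                return
            for v in domains[j]:
                # Cost of v against the variables already assigned in this doll.
                c = sum(pair_cost(k, assignment[k], j, v) for k in range(i, j))
                # Prune with accumulated cost + optimum of the inner doll.
                if acc + c + opt[j + 1] < best:
                    assignment[j] = v
                    bb(j + 1, assignment, acc + c)

        bb(i, {}, 0.0)
        opt[i] = best
    return opt[0]
```

With three variables over {0, 1} and a unit cost for each equal pair (a soft 2-coloring of a triangle), the optimum is 1: some pair must coincide.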
  • Combinatorial optimization through statistical instance-based learning

    Page(s): 203 - 209

    Different successful heuristic approaches have been proposed for solving combinatorial optimization problems. Commonly, each of them is specialized to serve a different purpose or address specific difficulties. However, most combinatorial problems that model real-world applications have a priori well-known, measurable properties. Embedded machine learning methods may aid the recognition and utilization of these properties in the pursuit of satisfactory solutions. In this paper, we present a heuristic methodology which employs the instance-based machine learning paradigm. This methodology can be configured for several types of optimization problems which are known to have certain properties. Experimental results are discussed concerning two well-known problems, namely the knapsack problem and the set partitioning problem. These results show that the proposed approach finds significantly better solutions than the intuitive heuristic search methods usually applied to these specific problems.

  • Generation of propagation rules for intentionally defined constraints

    Page(s): 236 - 243

    A general approach to implementing propagation and simplification of constraints consists of applying rules over these constraints. However, a difficulty that arises frequently when writing a constraint solver is determining the constraint propagation algorithm. In previous work, different methods for the automatic generation of propagation rules for constraints defined over finite domains have been proposed. We present a method for generating propagation rules for constraint predicates defined by means of a constraint logic program.

  • Data flow coherence criteria in ILP tools

    Page(s): 179 - 186

    In this paper we present a new method that uses data flow coherence criteria in definite logic program generation. We outline three main advantages of these criteria, supported by our results: (i) drastic pruning of the search space (around 90%), (ii) reduction of the set of positive examples and reduction or even removal of the need for a set of negative examples, and (iii) induction of predicates that are difficult or even impossible to generate by other methods. Besides these criteria, the approach takes into consideration the program termination condition for recursive predicates. The paper outlines some theoretical issues and implementation aspects of our system for automatic logic program induction.

  • A robust model for intelligent text classification

    Page(s): 265 - 272

    Methods for incorporating linguistic content into text retrieval are receiving growing attention. Text categorization is an interesting area for evaluating and quantifying the impact of linguistic information. Work on text retrieval through the Internet suggests that embedding linguistic information at a suitable level within traditional quantitative approaches is the crucial step in bringing experimental work to operational results. This representational problem is studied in this paper, where traditional methods for statistical text categorization are augmented by a systematic use of linguistic information. The addition of NLP capabilities also suggested a different application of existing methods in revised forms. This paper presents an extension of the Rocchio formula as a feature weighting and selection model used as a basis for multilingual information extraction. It allows effective exploitation of the available linguistic information, with significant data compression and accuracy. The result is an original statistical classifier fed with linguistic features and characterized by the novel feature selection and weighting model. It outperforms existing systems while keeping most of their interesting properties. Extensive tests of the model suggest its application as a viable and robust tool for large-scale text classification and filtering, as well as a basic module for more complex scenarios.

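Rocchio-style feature weighting and selection has this general shape. The β/γ values and the zero-clipping (which doubles as feature selection) follow the common formulation, not necessarily this paper's exact extension:

```python
def rocchio_weights(docs, labels, category, beta=16.0, gamma=4.0):
    """Rocchio feature weighting for one category (a sketch; beta/gamma are
    illustrative). docs: list of term->frequency dicts; labels: category
    label per doc. Returns term -> weight for `category`: beta times the
    average frequency in positive docs minus gamma times the average in
    negative docs, clipped at zero so negative evidence prunes features."""
    pos = [d for d, l in zip(docs, labels) if l == category]
    neg = [d for d, l in zip(docs, labels) if l != category]
    terms = {t for d in docs for t in d}
    weights = {}
    for t in terms:
        pos_avg = sum(d.get(t, 0) for d in pos) / max(len(pos), 1)
        neg_avg = sum(d.get(t, 0) for d in neg) / max(len(neg), 1)
        weights[t] = max(0.0, beta * pos_avg - gamma * neg_avg)
    return weights
```

A classifier would then score a document for the category by accumulating these weights over its terms; terms clipped to zero are effectively deselected.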
  • New hybrid genetic algorithms for the frequency assignment problem

    Page(s): 136 - 142

    This paper presents a new hybrid genetic algorithm used to solve a frequency assignment problem. The algorithm uses two original mutation operators: the first is based on a greedy algorithm and the second on an original probabilistic tabu search. The results obtained by our algorithm improve on the best known results obtained by other methods such as tabu search and hybrid genetic algorithms. Our results are validated in the field of radio broadcasting and compared to the best existing solutions in this domain.

  • A decentralized model-based diagnostic tool for complex systems

    Page(s): 95 - 102

    We address the problem of diagnosing complex discrete-event systems such as telecommunication networks. Given a flow of observations from the system, the goal is to explain those observations by identifying and localizing possible faults. Several model-based diagnosis approaches deal with this problem, but they require the computation of a global model, which is not feasible for complex systems like telecommunication networks. Our contribution is a decentralized approach that makes it possible to carry out on-line diagnosis without computing the global model. This paper describes the implementation of a tool based on this approach. Given a decentralized model of the system and a flow of observations, the program analyzes the flow and computes the diagnosis in a decentralized way. We also present experimental results based on a real system.

  • Word semantics for information retrieval: moving one step closer to the Semantic Web

    Page(s): 280 - 287

    The goal of the Semantic Web is to create a new form of Web content meaningful to computers. The Semantic Web aims to provide greater functionality via intelligent tools such as information extractors, brokers, reasoning services, and question answering systems. Semantics can be addressed at several levels. In this paper, we focus on the lowest level, word semantics, on which higher levels such as the concept, paragraph, or document level can be based. This model, which we call Word Semantics (WS), does not include the rich set of tags proposed by the XML/RDF standards. Nevertheless, this simpler WS format comes with a big advantage: it is achievable with existing technologies and resources. Practically, the model relies on understanding word meanings, identifying important named entities such as persons and organizations, and linking all this information via an external general-purpose ontology, namely WordNet. With these features, we regard the WS model as a short but strong step toward the long-term goal of a Semantic Web.

  • COREFDRAW: a tool for annotation and visualization of coreference data

    Page(s): 273 - 279

    In natural language processing, coreference resolution involves finding the antecedents of referential expressions (e.g. pronouns or some definite nominals). The resolution of coreference depends on a combination of salience, syntactic, semantic, and discourse constraints. The acquisition of such knowledge is difficult and could certainly benefit from a visualization tool enabling the linguist to find examples of coreference relations. Furthermore, an alternative knowledge-minimalist technique for resolving coreference can be developed by relying on text corpora annotated with coreference data. In this paper we present COREFDRAW, a tool that enables both the annotation of coreference data and its visualization over large text corpora. The resulting annotations enable enhanced coreference resolution methods.

  • Mining first-order knowledge bases for association rules

    Page(s): 218 - 227

    Data mining from relational databases has recently become a popular way of discovering hidden knowledge. Methods such as association rules, chi-square rules, ratio rules, implication rules, etc., proposed in several contexts, offer complementary choices for rule induction in this model. Other than inductive and abductive logic programming, research into data mining from knowledge bases has been almost non-existent, because contemporary methods involve an inherent procedurality that is difficult to cast into the declarativity of knowledge base systems. In this paper, we propose a logic-based technique for association rule mining from declarative knowledge which does not rely on procedural concepts such as candidate generation. This development is significant because it empowers users to explore knowledge bases by mining association rules in a declarative and ad hoc fashion.

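The declarative flavour the abstract describes (no Apriori-style candidate generation) can be illustrated on plain transactions. The pairwise restriction, thresholds, and names are our simplification, not the paper's logic-based technique:

```python
from itertools import permutations

def pairwise_rules(transactions, min_support, min_conf):
    """Declarative sketch of association-rule mining: enumerate candidate
    rules {x} -> {y} directly as a filtered comprehension, rather than via
    iterative candidate generation. transactions: list of item sets.
    Returns (x, y, support, confidence) tuples."""
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(s):
        # Fraction of transactions containing every item of s.
        return sum(1 for t in transactions if s <= t) / n

    return [
        (x, y, support({x, y}), support({x, y}) / support({x}))
        for x, y in permutations(items, 2)
        if support({x, y}) >= min_support
        and support({x, y}) / support({x}) >= min_conf
    ]
```

This is quadratic in the number of items and re-scans the data per candidate, so it only suits small examples; the point is the declarative, ad hoc query style rather than efficiency.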
  • Maintaining credible dialogs in a VideoBot system with special audio techniques

    Page(s): 351 - 358

    An approach to using actual audio and video clips of real humans to construct artificial conversational agents and bots is presented. This approach differs from other schemes focusing on believable, emotional, intelligent agents and bots in that it begins with real human subjects and constructs artificial behaviors and interactions with human users, as opposed to beginning with artificial characters and trying to construct real interactions. The approach presents several production challenges during the filming, postproduction, and scripting phases of bot creation that make it difficult for human users to sustain suspension of disbelief while interacting with the bot. Various approaches to solving these problems are presented and described.
