
Proceedings of the 11th IEEE International Conference on Tools with Artificial Intelligence, 1999

Date: 9-11 November 1999


Displaying Results 1 - 25 of 66
  • Proceedings 11th International Conference on Tools with Artificial Intelligence

  • Table of contents

    Page(s): v - xi
  • Author index

    Page(s): 445
  • SAMON: communication, cooperation and learning of mobile autonomous robotic agents

    Page(s): 229 - 236

    The Applied Research Laboratory at Penn State University's “Ocean SAmpling MObile network” (SAMON) project is developing a simulation testbed for the oceanographic community's interactions through a Web interface, and for the simulation-based design of Autonomous Ocean Sampling Program missions. In this paper, the current implementation of SAMON is presented, and a formal model based on interactive automata is described. The basic model is extended with process algebra constructs to handle mobility, evolution and learning. To allow cooperation of heterogeneous vehicles, a generic behavior message-passing language is presented.

  • Two-sided hypotheses generation for abductive analogical reasoning

    Page(s): 145 - 152

    In general, if a knowledge base lacks the necessary knowledge, abductive reasoning cannot explain an observation; it is therefore necessary to generate the missing hypotheses. CMS can generate missing hypotheses, but only short-cut hypotheses or hypotheses that will not be placed on real leaves. That is, the inference path is incomplete (truncated), so abduction is not complete. The proposed inference tries to generate missing hypotheses placed in the middle of the inference path, using both abductive inference and deductive inference with analogical mapping. As a result, it can generate missing hypotheses even in the middle of the inference path.

  • GraGA: a graph based genetic algorithm for airline crew scheduling

    Page(s): 27 - 28

    Crew scheduling is an NP-hard constrained combinatorial optimization problem that is very important to the airline industry. We propose a genetic algorithm, GraGA, to solve this problem. A new graph-based representation utilizes memory effectively and provides a framework in which various genetic operators can easily be developed.

  • A knowledge representation and reasoning support for modeling and querying video data

    Page(s): 167 - 174

    This paper describes a knowledge-based approach for modelling and retrieving video data. Selected objects of interest in a video sequence are described and stored in a database, which forms the object layer. On top of this layer, we define the schema layer, used to capture the structured abstractions of the objects stored in the object layer. We propose two abstract languages on the basis of description logics: one for describing the contents of these layers, and the other, more expressive, for making queries. The query language provides possibilities for navigating the schema through forward and backward traversal of links, sub-setting of attributes, and constraints on links.

  • A practical student model in an intelligent tutoring system

    Page(s): 13 - 18

    We consider two questions related to student modeling in an intelligent tutoring system: 1) what kind of student model should we build when we design a new system, and 2) should we divide the student model into different components depending on the information involved? We consider these two questions in the context of a conversational intelligent tutoring system, CIRCSIM-Tutor. We first analyze the range of decisions that the system needs to make and define the information needed to support these decisions. We then describe four distinct models that provide different aspects of this information, taking into consideration the nature of the domain and the constraints imposed by the tutoring system. Finally, we briefly discuss our experiments with enhancing the student model in CIRCSIM-Tutor and some general problems in building and evaluating different student models.

  • HOT: heuristics for oblique trees

    Page(s): 91 - 98

    This paper presents a new method, HOT, for generating oblique decision trees. Oblique trees have been shown to be useful tools for classification in some problem domains, producing accurate and intuitive solutions. The method can be incorporated into a variety of existing decision tree tools, and the paper illustrates this with two very distinct tree generators. The key idea is a method of learning oblique vectors and using the corresponding families of hyperplanes, orthogonal to these vectors, to separate regions with distinct dominant classes. Experimental results indicate that the learnt oblique hyperplanes lead to compact and accurate oblique trees. HOT is simple and easy to incorporate into most decision tree packages, yet its results compare well with those of much more complex schemes for generating oblique trees.
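The core operation of an oblique tree can be sketched as follows: project samples onto a direction vector w and choose a threshold t so that the hyperplane w·x = t separates the classes. This is an illustrative sketch, not HOT's actual learning procedure; all names and the toy data are invented.

```python
# Sketch of an oblique (hyperplane) split: project samples onto a direction
# vector w, then scan thresholds along w for the split with fewest errors.

def project(w, x):
    """Dot product of direction w with sample x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def best_threshold(w, samples, labels):
    """Scan candidate thresholds along w; return (threshold, error count)."""
    scores = sorted(zip((project(w, x) for x in samples), labels))
    best_t, best_err = None, len(labels) + 1
    for i in range(1, len(scores)):
        t = (scores[i - 1][0] + scores[i][0]) / 2.0
        # Misclassifications if class 1 is predicted above the threshold.
        err = sum(1 for s, y in scores if (s > t) != (y == 1))
        err = min(err, len(labels) - err)  # either side may hold class 1
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Toy data separable by the oblique line x + y = 1, but by no axis-parallel cut.
samples = [(0.1, 0.2), (0.3, 0.3), (0.2, 0.5), (0.9, 0.8), (0.6, 0.7), (0.8, 0.4)]
labels  = [0, 0, 0, 1, 1, 1]
t, err = best_threshold((1.0, 1.0), samples, labels)
```

An axis-parallel tree would need several splits for this data; the single oblique hyperplane separates it perfectly, which is the compactness advantage the abstract describes.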

  • Structuring requirements specifications through goals interactions

    Page(s): 61 - 68

    One focus of recent developments in requirements engineering has been the study of conflicting requirements. However, existing approaches offer no systematic way to handle the interactions among requirements and their impact on the structuring of requirements specifications. In this paper, a new approach is proposed for formulating requirements specifications based on goal interactions (i.e., cooperative, conflicting, counterbalanced and irrelevant).

  • Capturing user access patterns in the Web for data mining

    Page(s): 345 - 348

    Existing methods for knowledge discovery in the Web are mostly server-oriented, and the approaches taken are affected by the use of proxy servers. As a result, it is difficult to capture individual Web user behavior from current log mechanisms. To remedy this problem, we develop methods for the design and implementation of an access pattern collection server for conducting data mining in the Web. We also devise an innovative method, called page conversion, which converts original Web pages into enciphered ones so that the data collection mechanism cannot be deliberately bypassed. Building on page conversion, the proposed methods involve a software-downloading mechanism that resolves the difficulty imposed by proxy servers and effectively captures Web user behavior. Using the devised mechanism, traversal patterns are generated and compared with those produced by ordinary Web servers to validate our results. The traversal patterns resulting from the devised system are shown to be not only more informative but also more accurate than those generated by ordinary Web servers, demonstrating the importance and usefulness of the mechanism.

  • Autonomous filter engine based on knowledge acquisition from the Web

    Page(s): 343 - 344

    Filtering data has long been an important problem, and with the growth of the Internet the question has become ever more pressing. Finding information is not always easy, given the large amount of data the network contains. More generally, our concern here is not only to find what is worth downloading but also to avoid what is unsuitable. One recently explored way to address the filtering and rating of information is to apply work from the fields of linguistics and artificial intelligence to the Web. In this context, we show that it is possible to build an easy-to-use, low-cost tool for comparing information. The tool uses automatically extracted and selected Web content to build a specialized knowledge database that can easily be adapted to any subject in any language.

  • A new study on using HTML structures to improve retrieval

    Page(s): 406 - 409

    Locating useful information effectively from the World Wide Web (WWW) is of wide interest. This paper presents new results on a methodology that uses the structures and hyperlinks of HTML documents to improve the effectiveness of retrieving them. The methodology partitions the occurrences of terms in a document collection into classes according to the tags in which a particular term appears (such as Title, H1-H6 and Anchor). The rationale is that terms appearing in different structures of a document may have different significance in identifying the document. The weighting schemes of traditional information retrieval are extended to include class importance values, and a genetic algorithm determines a “best so far” combination of class importance factors. Our experiments indicate that this technique improves retrieval effectiveness by 39.6% or more.
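The tag-class weighting idea can be sketched in a few lines: a term's occurrences are partitioned by the HTML structure they appear in, and each class carries an importance factor. The factor values below are illustrative placeholders; in the paper they are tuned by a genetic algorithm.

```python
# Sketch of class-importance term weighting for HTML retrieval.
# CLASS_IMPORTANCE values are invented for the example, not from the paper.

CLASS_IMPORTANCE = {"title": 5.0, "h1": 3.0, "anchor": 2.0, "body": 1.0}

def term_weight(occurrences):
    """occurrences maps a tag class -> raw term frequency in that class."""
    return sum(CLASS_IMPORTANCE.get(cls, 1.0) * tf
               for cls, tf in occurrences.items())

def score(query_terms, doc_index):
    """doc_index maps term -> {tag class: frequency}; sum weighted frequencies."""
    return sum(term_weight(doc_index.get(t, {})) for t in query_terms)

doc = {"retrieval": {"title": 1, "body": 4},
       "html":      {"h1": 1, "anchor": 2, "body": 3}}
s = score(["retrieval", "html"], doc)
```

A term in the title thus counts five times as much as the same term in the body, capturing the intuition that structural position signals significance.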

  • Cognitive packet networks

    Page(s): 47 - 54

    We propose cognitive packet networks (CPN), in which intelligent capabilities for routing and flow control are concentrated in the packets rather than in the nodes and protocols. Cognitive packets within a CPN route themselves. They are assigned goals before entering the network and pursue these goals adaptively. Cognitive packets learn from their own observations about the network and from the experience of other packets, with whom they exchange information via mailboxes. They rely minimally on routers. This paper describes CPN and shows how learning can support the intelligent behavior of cognitive packets.

  • An expert system for multiple emotional classification of facial expressions

    Page(s): 113 - 120

    This paper discusses the Integrated System for Facial Expression Recognition (ISFER), which performs facial expression analysis from a still dual-facial-view image. The system consists of three major parts: a facial data generator, a facial data evaluator and a facial data analyser. While the facial data generator applies fairly conventional techniques for facial feature extraction, the rest of the system represents a novel way of reliably identifying 30 different face actions and performing a multiple classification of expressions into the six basic emotion categories. An expert system is used to convert low-level face geometry into high-level face actions, and these in turn into highest-level weighted emotion labels. Evaluation results demonstrate rather high concurrent validity with human coding of facial expressions using FACS and formal instructions in emotion signals.
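The expert-system stage (face actions to weighted emotion labels) can be sketched as a small rule base. The rules, weights and action names below are invented for illustration; they are not ISFER's actual rule base.

```python
# Illustrative sketch of rule-based multiple emotional classification:
# detected face actions accumulate weighted votes over emotion categories.
# RULES is invented for the example, not taken from ISFER.

RULES = {
    "brow_raise":    {"surprise": 0.6, "fear": 0.3},
    "lip_corner_up": {"happiness": 0.8},
    "brow_lower":    {"anger": 0.5, "sadness": 0.2},
    "jaw_drop":      {"surprise": 0.4},
}

def classify(actions):
    """Accumulate weighted emotion labels from the detected face actions."""
    totals = {}
    for a in actions:
        for emotion, w in RULES.get(a, {}).items():
            totals[emotion] = totals.get(emotion, 0.0) + w
    # Normalise so the weights over detected emotions sum to 1.
    z = sum(totals.values()) or 1.0
    return {e: w / z for e, w in totals.items()}

labels = classify(["brow_raise", "jaw_drop"])
```

The output is a weighted labelling rather than a single class, which is what "multiple emotional classification" refers to: one expression can carry evidence for several emotions at once.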

  • Spatial color histograms for content-based image retrieval

    Page(s): 183 - 186

    The color histogram is an important technique for color image database indexing and retrieval. In this paper, the traditional color histogram is modified to capture the spatial layout of each color, and three types of spatial color histograms are introduced: annular, angular and hybrid. Experiments show that, with a proper trade-off between granularity in the color and spatial dimensions, these histograms outperform both the traditional color histogram and existing histogram refinements such as the color coherence vector.
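The annular variant can be sketched directly: for each color, count occurrences per concentric ring around the image centre, so the histogram records where a color sits rather than only how often it occurs. The ring count and the toy "image" below are illustrative, not the paper's experimental setup.

```python
# Sketch of an annular spatial color histogram: per-color counts binned by
# distance ring from the image centre.

import math

def annular_histogram(image, n_rings):
    """image: 2-D list of color indices. Returns {color: [count per ring]}."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = math.hypot(cy, cx) or 1.0
    hist = {}
    for y in range(h):
        for x in range(w):
            ring = min(int(math.hypot(y - cy, x - cx) / r_max * n_rings),
                       n_rings - 1)
            hist.setdefault(image[y][x], [0] * n_rings)[ring] += 1
    return hist

# 3x3 toy image: color 0 at the centre, color 1 surrounding it.
img = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
hist = annular_histogram(img, 2)
```

A plain histogram would see only "one pixel of color 0, eight of color 1"; the annular version additionally records that color 0 is central and color 1 peripheral, which is the spatial layout information the abstract refers to.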

  • Overcoming the Christmas tree syndrome

    Page(s): 425 - 430

    We propose a new computational approach to logic-based systems that must reason quickly, but in a logically sound and complete manner, about large-scale complex critical devices and systems that can exhibit unexpected faulty behaviors. The approach is original in at least two respects. First, it makes use of local search techniques while preserving logical deductive completeness. Second, it proves experimentally efficient for very large knowledge bases, thanks to new heuristics in the use of local search techniques.

  • A formal basis for consistency, evolution and rationale management in requirements engineering

    Page(s): 77 - 84

    This paper presents a formal framework that addresses the twin problems of inconsistency in requirements specifications and requirements evolution. It presents techniques (building on results from default reasoning and belief revision) for identifying maximal consistent subsets of a specification rendered inconsistent by a change step, with provision for retaining requirements that would otherwise be discarded, in anticipation of their future reuse. The paper identifies the need for consistent application of requirements rationale and supports this in the framework. While the problem of requirements evolution is intractable in the general case, tractable special cases exist within the framework. The paper also provides pointers for designing tools based on the framework.

  • An engine for cursive handwriting interpretation

    Page(s): 271 - 278

    This paper describes an engine for on-line cursive handwriting interpretation that requires very little initial training and rapidly learns, and thus adapts to, the handwriting style of a user. Key features are a shape analysis algorithm that efficiently determines shapes in the handwritten word, a linear segmentation algorithm that optimally matches characters identified in the handwritten word to characters of candidate words, and a learning algorithm that adds, adjusts or replaces character templates to adapt to the user's writing style. In tests, the system was trained on four samples of each character of the alphabet, written in isolation by a single writer. Using a lexicon of 10,000 words, the system achieved, for four additional writers, an average recognition rate of 81.3% for the top choice and 91.7% for the top three choices. The average response time was 1.2 seconds per handwritten word on a Sun SPARC 10 (42 MIPS).
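The idea of optimally matching recognized characters against candidate lexicon words can be sketched with a classic dynamic-programming alignment. The paper's linear segmentation algorithm is more refined than this; the sketch shows only the underlying lexicon-matching step, with invented data.

```python
# Sketch of lexicon-driven matching: align the character sequence hypothesised
# from the handwritten shapes against each lexicon word by edit distance, and
# keep the cheapest match.

def edit_distance(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete
                           cur[j - 1] + 1,              # insert
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def best_match(recognized, lexicon):
    """Return the lexicon word with the cheapest alignment to the hypothesis."""
    return min(lexicon, key=lambda w: edit_distance(recognized, w))

word = best_match("hella", ["hello", "world", "help"])
```

A noisy character hypothesis like "hella" still resolves to the intended word, because the alignment cost tolerates a small number of mis-recognized or missing characters.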

  • Monitoring of aircraft operation using statistics and machine learning

    Page(s): 279 - 286

    This paper describes the use of statistics and machine learning techniques to monitor the performance of commercial aircraft in operation. The purpose of this research is to develop methods for generating reliable and timely alerts, so that engineers and fleet specialists become aware of abnormal situations in the large fleets of commercial aircraft they manage. We introduce three approaches that we have used for monitoring engines and generating alerts. We also explain how additional information can be generated from machine learning experiments, so that the parameters influencing a particular abnormal situation, and their ranges, are identified and reported. Various benefits of fleet monitoring are also explained.

  • A new GA approach for the vehicle routing problem

    Page(s): 307 - 310

    This paper studies a hybrid of two search heuristics, tabu search (TS) and genetic algorithms (GA), applied to the vehicle routing problem with time windows (VRPTW). TS is a local search technique that has been successfully applied to many NP-complete problems. A GA, on the other hand, is capable of searching multiple areas of a search space and is therefore good for diversification. We investigate whether a hybrid of the two heuristics outperforms each heuristic on its own.
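The TS half of the hybrid can be sketched on a toy single-vehicle tour (no time windows): repeatedly take the best non-tabu swap of two tour positions, remembering recent moves so the search does not immediately undo them. The distance matrix and parameters are illustrative, not from the paper.

```python
# Minimal tabu-search sketch for a toy routing instance.

def tour_cost(tour, dist):
    """Total length of a closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(tour, dist, iters=10, tenure=2):
    best, cur, tabu = list(tour), list(tour), []
    for _ in range(iters):
        candidates = []
        for i in range(1, len(cur)):            # keep the depot fixed at position 0
            for j in range(i + 1, len(cur)):
                if (i, j) in tabu:
                    continue
                nxt = list(cur)
                nxt[i], nxt[j] = nxt[j], nxt[i]
                candidates.append((tour_cost(nxt, dist), (i, j), nxt))
        if not candidates:                      # everything tabu: stop early
            break
        cost, move, cur = min(candidates)
        tabu = (tabu + [move])[-tenure:]        # tenure must stay below the move count
        if cost < tour_cost(best, dist):
            best = cur
    return best

# Symmetric distances over four cities; the cheapest tour visits them in ring order.
dist = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
best = tabu_search([0, 2, 1, 3], dist)
```

In the hybrid the abstract describes, a GA would maintain a population of such tours for diversification while TS intensifies the search around each one.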

  • Using parallel techniques to improve the computational efficiency of evidential combination

    Page(s): 55 - 58

    This paper presents a method of partitioning a Markov tree of belief functions into clusters so as to efficiently implement parallel belief function propagation on the basis of the local computation technique. Our method first represents the computations that combine evidence at all nodes of a Markov tree as parallelism instances, then balances the computation load among these instances, and finally partitions them into clusters that can be mapped onto a set of processors in a PowerPC network. The advantage of our method is that maximum parallelization can still be achieved even with limited processor availability.
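The local operation being distributed over the Markov tree is the combination of belief functions; its core step is Dempster's rule, sketched here for two mass functions over a tiny frame of discernment. The frame and mass values are invented for the example.

```python
# Sketch of Dempster's rule of combination: multiply masses of intersecting
# focal sets and renormalise away the conflicting (empty-intersection) mass.
# Frozensets denote focal sets.

def dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass)."""
    joint, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                joint[c] = joint.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in joint.items()}

A, B = frozenset({"a"}), frozenset({"b"})
AB = A | B
m1 = {A: 0.6, AB: 0.4}   # evidence leaning towards "a"
m2 = {B: 0.3, AB: 0.7}   # weaker evidence towards "b"
m = dempster(m1, m2)
```

Because this pairwise product grows quickly with the number of focal sets, distributing many such combinations across processors, as the paper does, is the natural way to scale it.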

  • Constrained simulated annealing with applications in nonlinear continuous constrained global optimization

    Page(s): 381 - 388

    This paper improves constrained simulated annealing (CSA), a discrete global minimization algorithm that converges asymptotically to discrete constrained global minima with probability one. The algorithm is based on the necessary and sufficient conditions for discrete constrained local minima in the theory of discrete Lagrange multipliers. We extend CSA to solve nonlinear continuous constrained optimization problems whose variables take continuous values, and evaluate heuristics, such as dynamic neighborhoods, gradual resolution of nonlinear equality constraints and reannealing, that greatly improve the efficiency of solving continuous problems. We report solutions much better than the best-known solutions in the literature on two sets of continuous optimization benchmarks.
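The discrete-Lagrangian idea behind CSA can be sketched as follows: perform annealing moves on the variable while performing ascent on a Lagrange multiplier that weights the constraint violation, so the search is driven towards feasible minima. The toy problem, schedule and constants below are illustrative, not the paper's formulation.

```python
# Sketch of constrained simulated annealing with a discrete Lagrange
# multiplier: probabilistic descent in x, deterministic ascent in lambda.

import math
import random

def csa_sketch(f, h, x0, steps=4000, seed=0):
    """Minimise f(x) subject to h(x) == 0 over the integers, starting at x0."""
    rng = random.Random(seed)
    x, lam, temp = x0, 1.0, 5.0
    for _ in range(steps):
        y = x + rng.choice([-1, 1])
        # Discrete Lagrangian: objective plus multiplier-weighted violation.
        dL = (f(y) + lam * abs(h(y))) - (f(x) + lam * abs(h(x)))
        if dL <= 0 or rng.random() < math.exp(-dL / temp):
            x = y
        lam += 0.01 * abs(h(x))   # ascent on the multiplier while infeasible
        temp *= 0.999             # cooling schedule
    return x

# Minimise (x - 3)^2 subject to "x is even": the constrained optima are 2 and 4.
x = csa_sketch(lambda v: (v - 3) ** 2, lambda v: v % 2, x0=10)
```

As the temperature falls and the multiplier grows, infeasible points become prohibitively expensive, so the walk settles at a feasible local minimum rather than at the unconstrained optimum x = 3.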

  • Exemplar-based prototype selection for a multi-strategy learning system

    Page(s): 37 - 40

    Multistrategy learning (MSL) combines at least two different learning strategies to produce a powerful system in which the drawbacks of the basic algorithms are avoided. In this scope, instance-based learning (IBL) techniques are often used as the basic component. However, one of the major drawbacks of IBL is the prototype selection problem, which consists of selecting a subset of representative instances in order to reduce the cost of classification. This paper presents a novel approach consisting of three steps. The first builds a set of lattice-based hypotheses that characterize the training data set. Given an unseen example, the second step selects the subset of training instances that verify the same hypotheses as the unseen example. Finally, the last step uses this subset of training instances as the prototypes for classifying the unseen example. Our experimental results show the effectiveness of the approach compared to standard ML techniques on different datasets.
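The three steps can be sketched with plain predicates standing in for the lattice-based hypotheses: keep as prototypes only the training instances that verify the same hypotheses as the query, then classify with nearest-neighbour over that reduced set. All hypotheses and data below are invented for illustration.

```python
# Sketch of hypothesis-filtered prototype selection followed by 1-NN.

HYPOTHESES = [
    lambda x: x[0] > 0.5,   # stands in for one learned hypothesis
    lambda x: x[1] > 0.5,   # stands in for another
]

def signature(x):
    """Which hypotheses the instance verifies."""
    return tuple(h(x) for h in HYPOTHESES)

def classify(query, training, labels):
    sig = signature(query)
    # Step 2: keep only instances that verify the same hypotheses as the query
    # (fall back to all instances if none match).
    prototypes = [(x, y) for x, y in zip(training, labels)
                  if signature(x) == sig] or list(zip(training, labels))
    # Step 3: 1-NN among the prototypes (squared Euclidean distance).
    def d(a):
        return sum((ai - qi) ** 2 for ai, qi in zip(a, query))
    return min(prototypes, key=lambda p: d(p[0]))[1]

training = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1), (0.9, 0.1)]
labels   = ["pos", "pos", "neg", "neg", "mixed"]
label = classify((0.7, 0.6), training, labels)
```

The distance computation runs over two prototypes instead of five here, which is exactly the cost reduction prototype selection aims for.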

  • On the sufficiency of limited testing for knowledge based systems

    Page(s): 431 - 440

    Knowledge-based engineering and computational intelligence are expected to become core technologies in design and manufacturing for the next generation of space exploration missions. Yet, where the reliability of knowledge-based systems is concerned, studies indicate significant disagreement regarding the amount of testing needed for system assessment. The sizes of standard black-box test suites are impracticably large, since the black-box approach neglects the internal structure of knowledge-based systems. On the contrary, practical results repeatedly indicate that only a few tests are needed to sample the range of behaviors of a knowledge-based program. In this paper, we model testing as a search process over the internal state space of the knowledge-based system. When comparing different test suites, the suite that examines a larger portion of the state space is considered more complete. Our goal is to investigate the trade-off between this completeness criterion and the size of test suites. The results of testing experiments on tens of thousands of mutants of real-world knowledge-based systems indicate that only a very limited gain in completeness can be achieved through prolonged testing; simple (or random) search strategies for testing appear to be as powerful as testing by more thorough search algorithms.
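The diminishing-returns effect the paper measures can be sketched with a coupon-collector-style simulation: random tests sample a program's internal state space, and the number of newly covered states per test falls off sharply, so prolonged testing buys little extra completeness. The state space and sampler below are illustrative, not the paper's experimental setup.

```python
# Sketch of the coverage-versus-effort trade-off under random testing.

import random

def coverage_curve(n_states, n_tests, seed=0):
    """Fraction of states covered after each test (each test hits one random state)."""
    rng = random.Random(seed)
    seen, curve = set(), []
    for _ in range(n_tests):
        seen.add(rng.randrange(n_states))
        curve.append(len(seen) / n_states)
    return curve

curve = coverage_curve(n_states=100, n_tests=300)
early = curve[49]                    # coverage gained by the first 50 tests
late_gain = curve[-1] - curve[149]   # coverage gained by tests 151..300
```

Even though the late phase runs three times as many tests as the early phase, it covers markedly less new state space, mirroring the paper's finding that limited testing already samples most reachable behaviors.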
