
Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002)

4-6 November 2002

Displaying results 1-25 of 72
  • Proceedings 14th IEEE International Conference on Tools with Artificial Intelligence

  • Error-based pruning of decision trees grown on very large data sets can work!

    Page(s): 233 - 238

    It has been asserted that, using traditional pruning methods, growing decision trees with increasingly larger amounts of training data will result in larger tree sizes even when accuracy does not increase. With regard to error-based pruning, the experimental data used to illustrate this assertion have apparently been obtained using the default setting for pruning strength; in particular, using the default certainty factor of 25 in the C4.5 decision tree implementation. We show that, in general, an appropriate setting of the certainty factor for error-based pruning will cause decision tree size to plateau when accuracy is not increasing with more training data.

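    The plateau effect can be reproduced in miniature. Below is a hedged sketch using scikit-learn, whose cost-complexity parameter ccp_alpha serves only as a rough stand-in for C4.5's certainty factor (scikit-learn does not implement error-based pruning, and the data is synthetic):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=50000, n_features=20, random_state=0)
        for n in (1000, 5000, 20000, 50000):
            for alpha in (0.0, 0.001):  # 0.0 ~ no pruning; 0.001 ~ stronger pruning
                tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
                tree.fit(X[:n], y[:n])
                print(n, alpha, tree.tree_.node_count)
        # With alpha=0.0 the node count keeps growing with n; a suitable alpha
        # lets tree size plateau once extra data stops improving accuracy.
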
  • Author index

    Page(s): 547 - 548
  • About adaptive state knowledge extraction for septic shock mortality prediction

    Page(s): 3 - 8

    The early prediction of mortality is one of the unresolved tasks in intensive care medicine. This paper models medical symptoms as observations caused by transitions between hidden Markov states. Learning the underlying state transition probabilities results in a prediction success rate of about 91%. The results are discussed and put in relation to the model used. Finally, the rationale for using the model is reflected upon: are there states in the septic shock data?

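    A minimal sketch of the modeling idea, assuming the hmmlearn package and synthetic stand-in observations (the paper's septic-shock measurements are not available here):

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)
        obs = rng.normal(size=(200, 3))   # one multivariate "symptom" vector per step

        # Learn hidden-state transition probabilities from the observations.
        model = hmm.GaussianHMM(n_components=2, n_iter=50, random_state=0)
        model.fit(obs)
        print("learned transition matrix:\n", model.transmat_)

        # The most likely hidden-state path could then feed a mortality predictor.
        states = model.predict(obs)
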
  • Data mining using cultural algorithms and regional schemata

    Page(s): 33 - 40

    In the paper we demonstrate how evolutionary search for functional optima can be used as a vehicle for data mining, that is, in the process of searching for optima in a multi-dimensional space we can keep track of the constraints that must be placed on related variables in order to move towards the optima. Thus, a side effect of evolutionary search can be the mining of constraints for related variables. We use a cultural algorithm framework to embed the search and store the results in regional schemata. An application to a large-scale real world archaeological data set is presented.

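    A much-simplified sketch of the side effect described above: a hand-rolled evolutionary search that, as a by-product, records per-variable bounds spanned by accepted elite solutions. Real regional schemata are considerably richer; everything below is a toy stand-in:

        import random

        def fitness(x):                    # toy objective to maximize
            return -sum(v * v for v in x)

        dim, pop_size = 3, 20
        best = [random.uniform(-5, 5) for _ in range(dim)]
        belief = [[v, v] for v in best]    # mined per-variable constraints

        for gen in range(100):
            children = [[v + random.gauss(0, 0.5) for v in best] for _ in range(pop_size)]
            elite = max(children, key=fitness)
            if fitness(elite) >= fitness(best):
                best = elite
                for i, v in enumerate(best):   # widen bounds to cover the elite path
                    belief[i][0] = min(belief[i][0], v)
                    belief[i][1] = max(belief[i][1], v)

        print("optimum ~", [round(v, 2) for v in best])
        print("mined constraints:", [[round(lo, 2), round(hi, 2)] for lo, hi in belief])
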
  • Mining association rules in text databases using multipass with inverted hashing and pruning

    Page(s): 49 - 56

    In this paper, we propose a new algorithm named multipass with inverted hashing and pruning (MIHP) for mining association rules between words in text databases. The characteristics of text databases are quite different from those of retail transaction databases, and existing mining algorithms cannot handle text databases efficiently because of the large number of itemsets (i.e., words) that need to be counted. Two well-known mining algorithms, the Apriori algorithm and the direct hashing and pruning (DHP) algorithm, are evaluated in the context of mining text databases, and are compared with the proposed MIHP algorithm. It has been shown that the MIHP algorithm performs better for large text databases.

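    A toy sketch of the setting: word-set transactions mined with an inverted index for support counting. The hashing and pruning details of MIHP itself are not reproduced here:

        from itertools import combinations

        docs = ["stock price rises", "stock price falls", "price of oil rises"]
        transactions = [set(d.split()) for d in docs]
        min_support = 2

        # Inverted index: word -> set of documents containing it.
        index = {}
        for i, t in enumerate(transactions):
            for w in t:
                index.setdefault(w, set()).add(i)

        # Frequent single words, then frequent pairs via index intersection.
        frequent = {w for w, ds in index.items() if len(ds) >= min_support}
        for a, b in combinations(sorted(frequent), 2):
            support = len(index[a] & index[b])
            if support >= min_support:
                print(f"{{{a}, {b}}} support={support}")
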
  • Reinforcement learning in multiagent systems: a modular fuzzy approach with internal model capabilities

    Page(s): 469 - 474

    Most of the methods proposed to improve learning in multiagent systems are not appropriate for more complex multiagent learning problems, because the state space of each learning agent grows exponentially with the number of partners present in the environment. We propose a novel and robust multiagent architecture to handle these problems. The architecture is based on a learning fuzzy controller whose rule base is partitioned into several modules, each dealing with a particular agent in the environment. The fuzzy controller maps input fuzzy sets, which represent the state space of each learning module, to output fuzzy sets, which represent the action space. Each module also uses an internal model table to estimate the actions of the other agents. Experimental results show the robustness and effectiveness of the proposed approach.

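    A minimal sketch of the modular idea only: one learning module per partner agent (here a plain Q-table rather than a fuzzy rule base), with the modules' Q-values summed to choose an action. The fuzzy controller and internal model table of the paper are not reproduced; the environment is a random toy:

        import random

        n_states, n_actions, partners = 4, 2, 2
        # One Q-table per partner instead of one joint table, avoiding
        # exponential growth of the state space in the number of partners.
        modules = [[[0.0] * n_actions for _ in range(n_states)] for _ in range(partners)]

        def act(states):
            totals = [sum(modules[m][states[m]][a] for m in range(partners))
                      for a in range(n_actions)]
            return totals.index(max(totals))

        alpha, gamma = 0.1, 0.9
        for step in range(1000):
            states = [random.randrange(n_states) for _ in range(partners)]
            a = act(states) if random.random() > 0.1 else random.randrange(n_actions)
            reward = random.random()               # stand-in reward signal
            nxt = [random.randrange(n_states) for _ in range(partners)]
            for m in range(partners):              # update each module separately
                q = modules[m][states[m]][a]
                best_next = max(modules[m][nxt[m]])
                modules[m][states[m]][a] = q + alpha * (reward + gamma * best_next - q)
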
  • A TMO based approach to structuring real-time agents

    Page(s): 165 - 172

    Mobile agent structuring is an increasingly practiced branch of distributed computing software engineering. In this paper we discuss the major issues encountered in producing real-time (RT) agents which are designed to perform output actions in a manner meeting specified timing requirements. An approach to structuring RT agents, which is an extension of a high-level real-time distributed object programming approach called the time-triggered message-triggered object (TMO) programming scheme, is also presented. The TMO based approach is promising because it provides a sound framework in which timeliness issues can be resolved in a cost-effective manner.

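    A minimal sketch of the time-triggered half of the TMO idea: a method the agent re-executes at a fixed real-time period. Python's threading.Timer stands in for the real-time runtime the actual TMO scheme requires; names and numbers are illustrative only:

        import threading, time

        class TimeTriggeredAgent:
            def __init__(self, period_s, ticks):
                self.period_s, self.remaining = period_s, ticks

            def spontaneous_method(self):
                # Output action bound to a timing requirement.
                print("output action at", round(time.monotonic(), 2))
                self.remaining -= 1
                if self.remaining > 0:       # reschedule until the budget is spent
                    threading.Timer(self.period_s, self.spontaneous_method).start()

        agent = TimeTriggeredAgent(period_s=0.1, ticks=5)
        agent.spontaneous_method()           # fires every ~100 ms, five times
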
  • Updating a hybrid rule base with new empirical source knowledge

    Page(s): 9 - 15

    Neurules are a kind of hybrid rule that combines a symbolic (production rules) and a connectionist (adaline unit) representation. Each neurule is represented as an adaline unit. One way that neurules can be produced is from training examples (empirical source knowledge). However, in certain application fields not all of the training examples are available a priori; a number of them become available over time. In these cases, updating the corresponding neurules is necessary. In this paper, methods for updating a hybrid rule base, consisting of neurules, to reflect the availability of new training examples are presented. The methods are efficient, since they require the least possible retraining effort, and the number of produced neurules is kept as small as possible.

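    A minimal sketch of the adaline unit at the core of a neurule, trained with the least-mean-squares rule on bipolar examples. The neurule construction and updating methods of the paper are not reproduced:

        # Bipolar training examples for an AND-like concept.
        X = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
        y = [1, -1, -1, -1]

        w, bias, rate = [0.0, 0.0], 0.0, 0.1
        for epoch in range(50):
            for xi, target in zip(X, y):
                net = bias + sum(wj * xj for wj, xj in zip(w, xi))
                err = target - net            # LMS (Widrow-Hoff) error
                bias += rate * err
                w = [wj + rate * err * xj for wj, xj in zip(w, xi)]

        # The trained unit behaves like a rule: fire when the weighted sum is positive.
        for xi in X:
            net = bias + sum(wj * xj for wj, xj in zip(w, xi))
            print(xi, "->", 1 if net > 0 else -1)
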
  • A lazy divide and conquer approach to constraint solving

    Page(s): 91 - 98

    A divide and conquer strategy enables a problem to be divided into subproblems, which are solved independently and later combined to form solutions of the original problem. For solving constraint satisfaction problems, however, the divide and conquer technique has not been shown to be effective. This is because it is not possible to cleanly divide a problem into independent subproblems in the presence of constraints that involve variables belonging to different subproblems. Consequently, solutions of one subproblem may prune solutions of another subproblem, making those solutions of the latter subproblem redundant. In this paper we propose a divide and conquer approach to constraint solving in a lazy evaluation framework. In this framework, a subproblem is solved on demand, which eliminates redundant consistency checks. Moreover, once solved, the solutions of a subproblem can be reused in the satisfaction of various global constraints connecting this subproblem with others, thus reducing the search space. We also demonstrate the effectiveness of our algorithm in solving a practical problem: finding all instances of a user-defined pattern in stock market price charts.

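    A toy sketch of the lazy evaluation idea: each subproblem's solutions are produced by a generator only on demand and cached for reuse across global constraints. The paper's framework is far more general:

        def solve(domain, constraint):
            # Lazily enumerate values satisfying a subproblem's local constraint.
            for v in domain:
                if constraint(v):
                    yield v

        class LazySub:
            def __init__(self, gen):
                self.gen, self.cache = gen, []
            def solutions(self):
                yield from self.cache         # reuse solutions already computed
                for v in self.gen:            # compute more only on demand
                    self.cache.append(v)
                    yield v

        a = LazySub(solve(range(10), lambda v: v % 2 == 0))
        b = LazySub(solve(range(10), lambda v: v > 5))
        # Global constraint linking the two subproblems: a + b == 10.
        print([(x, y) for x in a.solutions() for y in b.solutions() if x + y == 10])
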
  • Detecting similarities and differences in images using the PFF and LGG approaches

    Page(s): 355 - 362

    This paper presents two methods for comparing images and evaluating the visibility of artifacts due to hidden information, changes or noise. The first method is based on pixel flow functions (PFF), which detect changes in images by projecting the pixel values vertically, horizontally and diagonally. These projections create "functions" related to the average values of pixels summed horizontally, vertically and diagonally; these functions represent image signatures, and comparing the signatures reveals differences between images. The second method is based on a heuristic graph model, known as the local-global graph (LGG), for evaluating the visibility of modifications in digital images. The LGG is based on segmenting the images and comparing the segments while thresholding the differences in their attributes. The methods have been implemented in C++ and their performance is presented.

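    A minimal NumPy sketch of the first method: row, column and diagonal projections as an image signature, compared to localize a small change. The images are synthetic and the paper's thresholds are omitted:

        import numpy as np

        def signature(img):
            h, w = img.shape
            diag = np.array([img.diagonal(k).mean() for k in range(-h + 1, w)])
            anti = np.array([np.fliplr(img).diagonal(k).mean() for k in range(-h + 1, w)])
            return img.mean(axis=1), img.mean(axis=0), diag, anti

        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, size=(32, 32)).astype(float)
        modified = original.copy()
        modified[10:12, 10:12] += 40          # small hidden modification

        for name, s1, s2 in zip(("rows", "cols", "diag", "anti"),
                                signature(original), signature(modified)):
            print(name, "max deviation:", round(np.abs(s1 - s2).max(), 2))
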
  • A neural-network approach to modeling and analysis

    Page(s): 489 - 493

    A backpropagation network can always be used in modeling. This study is concerned with the stability problem of a neural network (NN) system which consists of a few subsystems represented by NN models. In this paper, the dynamics of each NN model is converted into a linear inclusion representation. Subsequently, based on these representations, stability conditions in terms of Lyapunov's direct method are derived to guarantee the asymptotic stability of NN systems.

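    A worked miniature of the stability test behind this approach: for a discrete-time linear model x(k+1) = A x(k), asymptotic stability follows if some P > 0 satisfies A^T P A - P < 0. SciPy can solve the corresponding Lyapunov equation; the matrix below is a toy stand-in for an NN model's linear inclusion:

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov

        A = np.array([[0.5, 0.1],
                      [0.0, 0.8]])           # toy system matrix (spectral radius < 1)

        # Solve A^T P A - P = -I for P.
        P = solve_discrete_lyapunov(A.T, np.eye(2))
        print("P =\n", P)
        print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
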
  • DSatz: a directional SAT solver for planning

    Page(s): 199 - 208

    SAT-based planners have been characterized as disjunctive planners that maintain a compact representation of the search space of action sequences. Several ideas from refinement planners (conjunctive planners) have been used to improve the performance of SAT-based planners or to better understand planning as SAT. One important lesson from refinement planning is that backward search, being goal directed, can be more efficient than forward search. Another lesson is that bidirectional search is generally not efficient, because the forward and backward searches can miss each other. Though the effect of the direction of plan refinement (forward, backward, bidirectional, etc.) on the efficiency of plan synthesis has been investigated in depth in refinement planning, the effect of directional solving of SAT encodings has not. We solved several propositional encodings of benchmark planning problems with a modified form (DSatz) of the systematic SAT solver Satz. DSatz offers 21 options for solving a SAT encoding of a planning problem, where the options concern assigning truth values to action and/or fluent variables in forward or backward or both directions, in an intermittent or non-intermittent style. Our investigation shows that backward search on plan encodings (assigning values to fluent variables first, starting with the goal) is very inferior, and that bidirectional and forward solving options turn out to be far more efficient than the other options. Our empirical results show that the efficient systematic solver Satz, which exploits variable dependencies, can be significantly enhanced with our variable ordering heuristics, which are also computationally very cheap to apply. Our main results are that directionality does matter in solving SAT encodings of planning problems and that certain directional solving options are superior to others.

  • Ontologies for knowledge representation in a computer-based patient record

    Page(s): 114 - 121

    In contrast to existing patient-record systems, which merely offer static applications for storage and presentation, a helpful patient-record system is a problem-oriented, knowledge-based system which provides clinicians with situation-dependent information. We propose a practical approach to extend the current data model with (1) means to recognize and interpret situations, (2) knowledge of how clinicians work and what information they need, and (3) means to rank information according to its relevance in a given care situation. Following the methodology of second-generation knowledge-based systems, which use ontologies to define fundamental concepts, their properties, and interrelationships within a particular domain, we present an ontology that supports three prerequisite features of a future helpful patient-record system: a family-care workflow process, a problem-oriented patient record, and means to identify information relevant to the care process and medical problems.

  • Software quality classification modeling using the SPRINT decision tree algorithm

    Page(s): 365 - 374

    Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white-box quality estimation models with good accuracy, and are simple and easy to interpret. This paper presents an in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm. Many classification algorithms have memory limitations, including the requirement that data sets be memory resident. SPRINT removes these limitations and provides fast and scalable analysis. It is an extension of a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the minimum description length (MDL) principle. Combining the MDL pruning technique and the modified classification algorithm, SPRINT yields classification trees with useful prediction accuracy. The case study comprises software metrics and fault data collected over four releases of a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability than those built by CART.

  • A tool for extracting XML association rules

    Page(s): 57 - 64

    The recent success of XML as a standard to represent semi-structured data, and the increasing amount of available XML data, pose new challenges to the data mining community. In this paper we present the XMINE operator, a tool developed to extract XML association rules from XML documents. The operator, based on XPath and inspired by the syntax of XQuery, allows us to express complex mining tasks compactly and intuitively. XMINE can be used to specify mining tasks, separately or simultaneously, on both the content and the structure of the data, since the distinction between the two in XML is slight.

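    A toy sketch of the extraction step: XPath-style selection turns XML fragments into transactions, which are then mined for frequent pairs. The XMINE operator's actual syntax and semantics are not reproduced, and the document below is invented:

        import xml.etree.ElementTree as ET
        from itertools import combinations
        from collections import Counter

        xml = """<orders>
          <order><item>book</item><item>pen</item></order>
          <order><item>book</item><item>pen</item><item>ink</item></order>
          <order><item>book</item><item>ink</item></order>
        </orders>"""

        root = ET.fromstring(xml)
        # XPath-like selection: each <order> becomes one transaction.
        transactions = [{i.text for i in order.findall("item")}
                        for order in root.findall("order")]

        counts = Counter(pair for t in transactions
                         for pair in combinations(sorted(t), 2))
        for pair, support in counts.items():
            if support >= 2:
                print(pair, "=> support", support)
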
  • Distributed Graphplan

    Page(s): 138 - 145

    Significant advances in plan synthesis under classical assumptions have occurred in the last seven years. Such efficient planners are all centralized planners. One major development among these is the Graphplan planner; its popularity is clear from its several efficient adaptations and extensions. Since several practical planning problems are solved in a distributed manner, it is important to adapt Graphplan to distributed planning. This involves significant challenges, such as decomposing the goal and the set of actions without losing completeness. We report two sound two-agent planners, DGP (distributed Graphplan) and IG-DGP (interaction graph-based DGP). Decomposition of the goal and action set is carried out manually in DGP and automatically in IG-DGP, based on a new representation called interaction graphs. Our empirical evaluation shows that both distributed planners are faster than Graphplan; IG-DGP is orders of magnitude faster. IG-DGP benefits significantly from interaction graphs, which under certain conditions allow a problem to be decomposed into fully independent subproblems. IG-DGP is a hybrid planner in which a centralized planner processes a problem until it becomes separable into two independent subproblems that are passed to a distributed planner. This paper also shows that advances in centralized planning can significantly benefit distributed planners.

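    A minimal sketch of the decomposition principle behind interaction graphs: if the goal-interaction graph falls into disconnected components, each component is an independent subproblem that can go to a separate planner. The graph below is invented, and the real IG-DGP construction is richer:

        # Goals are nodes; an edge means two goals share actions or resources.
        interactions = {"g1": {"g2"}, "g2": {"g1"}, "g3": {"g4"}, "g4": {"g3"}, "g5": set()}

        def components(graph):
            seen, comps = set(), []
            for node in graph:
                if node in seen:
                    continue
                comp, stack = set(), [node]
                while stack:
                    n = stack.pop()
                    if n not in comp:
                        comp.add(n)
                        stack.extend(graph[n] - comp)
                seen |= comp
                comps.append(comp)
            return comps

        # Each component can be handed to its own (distributed) planner.
        print(components(interactions))
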
  • Evolution and evaluation of software quality models

    Page(s): 543 - 545

    Complex systems, services and enterprises are heavily dependent on computer-communication technology, and in particular on their supporting and controlling software systems. This paper describes aspects of software system quality from different points of view. It provides a historical overview of the evolution of software quality and of the means and methods used to achieve and enhance it. Traditional software quality models trace their origins to the well-established manufacturing industries. Since the quality of a product consists of many attributes, we identify the commonly accepted entities that conform to our vision of good quality. Notions and meanings of quality vary across disciplines and application domains.

  • Machine learning and software engineering

    Page(s): 22 - 29

    Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development and maintenance tasks could be formulated as learning problems and approached in terms of learning algorithms. This paper deals with the subject of applying machine learning methods to software engineering. We first describe the characteristics and applicability of some frequently used machine learning algorithms. We then summarize and analyze the existing work and discuss some general issues in this niche area. Finally, we offer some guidelines on applying machine learning methods to software engineering tasks.

  • A genetic testing framework for digital integrated circuits

    Page(s): 521 - 526

    In order to reduce the time-to-market and simplify gate-level test generation for digital integrated circuits, GA-based functional test generation techniques are proposed for behavioral and register transfer level designs. The functional tests generated can be used for design verification, and they can also be reused at lower levels (i.e. register transfer and logic gate levels) for testability analysis and development. Experimental results demonstrate the effectiveness of the method in reducing the overall test generation time and increasing the gate-level fault coverage.

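    A toy sketch of GA-based functional test generation: evolve small test sets that maximize the distinct behaviors exercised by a stand-in combinational circuit. The behavioral/RTL fitness measures of the paper are not reproduced:

        import random

        def circuit(a, b, c):                 # toy design under test
            return (a and b) or (not c)

        def coverage(tests):
            # Fitness: number of distinct (inputs -> output) behaviors exercised.
            return len({(a, b, c, circuit(a, b, c)) for a, b, c in tests})

        pop = [[tuple(random.randint(0, 1) for _ in range(3)) for _ in range(4)]
               for _ in range(20)]            # population of 4-vector test sets

        for gen in range(30):
            pop.sort(key=coverage, reverse=True)
            parents, children = pop[:10], []
            for _ in range(10):
                p1, p2 = random.sample(parents, 2)
                cut = random.randrange(1, 4)
                child = p1[:cut] + p2[cut:]   # one-point crossover on test sets
                if random.random() < 0.3:     # mutation: replace one test vector
                    child[random.randrange(4)] = tuple(random.randint(0, 1) for _ in range(3))
                children.append(child)
            pop = parents + children

        best = max(pop, key=coverage)
        print("best test set:", best, "coverage:", coverage(best))
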
  • Reasoning on aspectual-temporal information in French within conceptual graphs

    Page(s): 315 - 322

    This paper presents a modeling of time, aspect and verbal meanings in natural language processing within Simple Conceptual Graphs (SCG), by way of Semantico-Cognitive Schemes (SCS) and aspectual-temporal theory. Our system translates a semantico-cognitive representation into SCGs. The SCS allow us to build a representation of a text that takes into account fine subtleties of natural language, such as information about time and aspect. The Conceptual Graphs formalism provides a powerful inferential mechanism that makes it possible to reason from texts. Our work bears on French texts. A text is represented by two different structures, both represented within the SCG model: the first models the semantico-cognitive representation, while the second is the temporal diagram representing the temporal constraints between the situations described in the text. Linking these structures entails an expansion of the original SCG model.

  • Calculus of variations in discrete space for constrained nonlinear dynamic optimization

    Page(s): 67 - 74

    We propose new dominance relations that can significantly speed up the solution process of nonlinear constrained dynamic optimization problems in discrete time and space. We first show that path dominance in dynamic programming cannot be applied when there are general constraints that span multiple stages, and that node dominance, in the form of the Euler-Lagrange conditions developed in optimal control theory in continuous space, cannot be extended to discrete space. This paper is the first to propose efficient dominance relations, in the form of local saddle-point conditions in each stage of a problem, for pruning states that will not lead to locally optimal paths. By utilizing these dominance relations, we develop efficient search algorithms whose complexity, though still exponential, has a much smaller base than that of search without the relations. Finally, we demonstrate the performance of our algorithms on spacecraft planning and scheduling benchmarks and show significant improvements in CPU time and solution quality over the existing ASPEN planner.

  • Maintenance scheduling of oil storage tanks using tabu-based genetic algorithm

    Page(s): 209 - 215

    With Taiwan's entry into the WTO and the recently liberalized Petroleum Management Law, the oil market in Taiwan has been opened up and is becoming more competitive. However, space limitations and residents' increasing awareness of environmental protection issues on the island leave international vendors little choice but to rent tanks from domestic oil companies. To help the leaseholder maximize revenue by increasing the availability of tanks, efficient maintenance scheduling is needed. This paper introduces a tabu-based genetic algorithm (TGA) and its implementation for solving a real-world maintenance scheduling problem for oil storage tanks. TGA incorporates a tabu list to prevent inbreeding and utilizes an aspiration criterion to supply moderate selection pressure, so that selection efficiency is improved and population diversity is maintained. The experimental results validate that TGA outperforms a standard GA in terms of solution quality and convergence efficiency. Keywords: tabu-based genetic algorithm, maintenance scheduling, tabu search, genetic algorithm.

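    A minimal sketch of the TGA ingredient named above: a GA whose tabu list rejects previously seen offspring unless they beat the best so far (the aspiration criterion). The bit-string objective is a toy; the tank-maintenance encoding is not reproduced:

        import random

        def fitness(x):                       # toy objective: maximize the number of 1s
            return sum(x)

        n, tabu, best = 12, set(), None
        pop = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(20)]

        for gen in range(200):
            p1, p2 = random.sample(sorted(pop, key=fitness, reverse=True)[:8], 2)
            cut = random.randrange(1, n)
            child = list(p1[:cut] + p2[cut:])
            child[random.randrange(n)] ^= 1   # bit-flip mutation
            child = tuple(child)
            best_f = fitness(best) if best is not None else -1
            # The tabu list prevents inbreeding; aspiration overrides it for a new best.
            if child in tabu and fitness(child) <= best_f:
                continue
            tabu.add(child)
            pop[pop.index(min(pop, key=fitness))] = child   # replace the worst
            if fitness(child) > best_f:
                best = child

        print("best schedule:", best, "fitness:", fitness(best))
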
  • An agent-based approach to inference prevention in distributed database systems

    Page(s): 413 - 422

    We propose an inference prevention agent as a tool that enables each of the databases in a distributed system to keep track of probabilistic dependencies with other databases and then use that information to help preserve the confidentiality of sensitive data. This is accomplished with minimal sacrifice of the performance and survivability gains that are associated with distributed database systems.

  • Diagnosis of component failures in the Space Shuttle main engines using Bayesian belief networks: a feasibility study

    Page(s): 181 - 188

    Although the Space Shuttle is a high-reliability system, its condition must be accurately diagnosed in real time. Two problems plague the system: false alarms, which may be costly, and missed alarms, which may be not only expensive but also dangerous to the crew. This paper describes the results of a feasibility study in which a multivariate state estimation technique is coupled with a Bayesian belief network to provide both fault detection and fault diagnostic capabilities for the Space Shuttle main engines (SSME). Five component failure modes and several single-sensor failures are simulated in our study and correctly diagnosed. The results indicate that this is a feasible fault detection and diagnosis technique and that detection and diagnosis can be made earlier than standard redline methods allow.

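    A worked miniature of the diagnostic reasoning: a single fault node observed through anomaly detectors, with posteriors computed by Bayes' rule. All probabilities are invented for illustration and are not from the study:

        # P(fault) and detector behavior for one toy component.
        p_fault = 0.01
        p_anom_given_fault = 0.95
        p_anom_given_ok = 0.05                # detector false-alarm rate

        # Posterior after one anomalous reading.
        p_anom = p_anom_given_fault * p_fault + p_anom_given_ok * (1 - p_fault)
        print(f"P(fault | one anomaly)   = {p_anom_given_fault * p_fault / p_anom:.3f}")

        # Two independent sensors flagging anomalies sharpen the diagnosis,
        # which is how a belief network helps reduce costly false alarms.
        num = p_anom_given_fault ** 2 * p_fault
        den = num + p_anom_given_ok ** 2 * (1 - p_fault)
        print(f"P(fault | two anomalies) = {num / den:.3f}")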