Proceedings of the Tenth Conference on Artificial Intelligence for Applications, 1994

Date: 1-4 March 1994

Displaying Results 1 - 25 of 82
  • Proceedings of the Tenth Conference on Artificial Intelligence for Applications

    Freely Available from IEEE
  • Redesign of local area networks using similarity-based adaptation

    Page(s): 284 - 290

    This paper describes the design of a case-based reasoning system for the redesign of local area networks. It introduces a mechanism for solution adaptation based on a hierarchy of possible actions, each of which is associated with background knowledge about its suitability. When an action cannot be applied in the current problem context, a novel similarity measure is used to rank the alternative actions found for it. The measure uses a heuristic weighting function between the degree of abstraction and the degree of specificity. It is shown how other measures of closeness may be derived as specializations of the one presented.

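The ranking idea in the abstract above, a heuristic weight trading off degree of abstraction against degree of specificity when scoring substitute actions, can be sketched as follows; the function and its inputs are hypothetical illustrations, not the paper's actual measure:

```python
def rank_actions(candidates, w=0.5):
    """Rank substitute actions for an action that cannot be applied.

    candidates: list of (name, abstraction, specificity) tuples, each
    degree in [0, 1]. `w` is an assumed heuristic weight trading the
    two degrees off against each other; higher combined score ranks first.
    """
    scored = [(w * a + (1 - w) * s, name) for name, a, s in candidates]
    return [name for _, name in sorted(scored, reverse=True)]
```

Setting `w` to 0 or 1 collapses the measure to specificity-only or abstraction-only ranking, one plausible reading of the "specializations" the abstract alludes to.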
  • AMMP: an Automated Maintenance Manual Production system

    Page(s): 477 - 478

    Generating maintenance documentation is traditionally a manual process. This is true for most industries, although products and customer documentation requirements are frequently revised. The result is frequent documentation revision and significant labor costs. This paper describes a system to automate the generation of maintenance documentation for mechanical assemblies, the Automated Maintenance Manual Production (AMMP) system. Product and customer requirement revisions are accessed from a central database. For a given maintenance task, a spatial planner module derives a sequence of operations to carry out the task based on the product CAD definition. A presentation manager module converts this sequence into formatted text and illustrations.

  • Knowledge modeling environment for job-shop scheduling problem

    Page(s): 456 - 457

    The scheduling model and method must be designed to be application-domain dependent, so as to reflect the constraints, objectives and preferences that reside in the target problem. We analyzed the scheduling process of human experts at the knowledge-base level and have developed a task-specific shell named ARES/SCH. ARES/SCH possesses a primitive-task library, a collection of domain-independent, generic components of scheduling mechanisms. The whole scheduling method can be described as a combinational flow-chart of primitive tasks. Memory module mounting shop (MMS) scheduling is shown as an example ARES/SCH application, which made it apparent that ARES/SCH contributes to the rapid development of scheduling systems and supports a wide range of scheduling domains.

  • Optimizing genetic algorithm parameters for multiple fault diagnosis applications

    Page(s): 434 - 440

    Multiple fault diagnosis (MFD) is the process of determining the correct fault or faults that are responsible for a given set of symptoms. Exhaustive searches or statistical analyses are usually too computationally expensive to solve these types of problems in real-time. We use a simple genetic algorithm to significantly reduce the time required to evolve a satisfactory solution. We show that when using genetic algorithms to solve these kinds of applications, best results are achieved with higher than “normal” mutation rates. Schemata theory is used to analyze this data and show that even though schema length increases, the Hamming distance between binary representations of best-fit chromosomes is quite small. Hamming distance is then related to schema length to show why mutation rate becomes important in this type of application.

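The Hamming-distance observation in the abstract can be made concrete with a minimal, generic sketch of bit-string chromosomes (standard textbook GA machinery, not the authors' code):

```python
import random

def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def mutate(bits, rate, rng=None):
    """Flip each bit independently with probability `rate`; a higher
    rate explores further from the current chromosome in Hamming space."""
    rng = rng or random.Random(0)
    return [b ^ (rng.random() < rate) for b in bits]
```

When best-fit chromosomes sit a small Hamming distance apart, a higher mutation rate raises the chance of jumping between them in one step, which is the effect the paper analyzes.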
  • MeteoAssert: generation and organization of weather assertions from gridded data

    Page(s): 275 - 281

    MeteoAssert, a system developed at the Forecast System Laboratory, analyzes gridded data sets and produces descriptions which are organized sets of assertions representing the content of weather messages. Each assertion conveys a single weather characteristic with a certain spatial and temporal scope. The assertions in a description are linked by discourse relations that predetermine the structure of the weather message: a natural language text, a piece of graphics, a table, or a mixture of these elements. The descriptions are generated in response to queries representing the information needs of the user. Three models drive the system: territory, time, and parameter. Each model defines the objects in terms of which the descriptions are created. MeteoAssert works as a server to several systems dealing with different applications and preparing various weather displays.

  • Automatic generation of explanations for spreadsheet applications

    Page(s): 268 - 274

    Applications developed by end-users using spreadsheets cannot be effectively distributed to other users without adequate information on how the applications themselves function. In fact, building an explanation facility to support an application is a time-consuming task. This paper illustrates the realization of a tool for the automatic generation of explanations in conventional spreadsheet applications. The system works in two stages: the first is the construction of a knowledge base containing the information on the mathematical relations coded into a programmed spreadsheet; the second is the generation of explanations (concerning the quantities used in the spreadsheet and their relationships) from the representation previously built.

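The two stages described, extracting the mathematical relations from spreadsheet formulas into a knowledge base and then generating explanations from it, might look like this in miniature; the cell-reference parsing and the wording templates are assumptions for illustration, not the paper's implementation:

```python
import re

def build_kb(formulas):
    """Stage 1: map each cell to the cells its formula refers to."""
    return {cell: re.findall(r"[A-Z]+[0-9]+", expr)
            for cell, expr in formulas.items()}

def explain(cell, kb):
    """Stage 2: generate a textual explanation from the representation."""
    deps = kb.get(cell)
    if not deps:
        return f"{cell} is an input value."
    return f"{cell} is computed from {', '.join(deps)}."
```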
  • Optimization of rule-based expert systems via state transition system construction

    Page(s): 320 - 326

    Embedded rule-based expert systems must satisfy stringent timing constraints when applied to real-time environments. This paper describes a novel approach to reduce the response time of rule-based expert systems. Our optimization method is based on a construction of the reduced cycle-free finite state transition system corresponding to the input rule-based system. The method makes use of rule-base system decomposition, concurrency and state equivalency. The new and optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system (1) has fewer rule firings to reach the fixed point, (2) is inherently stable and (3) has no redundant rules. The synthesis method also determines the tight response time bound of the new system. The optimized system is guaranteed to compute correct results, independent of the scheduling strategy and execution environment.

  • Using causal reasoning to validate stochastic models

    Page(s): 366 - 372

    The important problem of validating stochastic models is addressed; such validation is necessary for accurately modeling high-performance and highly dependable computers. This paper develops a model validation methodology using causal reasoning. More specifically, this technique uses the structural and behavioral knowledge derived from the system specification and a causal reasoning mechanism for validation purposes. The scope of this research is limited to the conceptual validation of Markov models. Conceptual validation, as opposed to empirical validation, does not require the use of data. The validation process primarily involves generating a reference object, translating the given model into a common format, and comparing the two objects to identify holes and inconsistencies. Event trees are used as the common format. The effectiveness of this methodology is tested by validating models of five example systems. For testing purposes, errors are introduced into the models of these systems.

  • Component ontological representation of function for diagnosis

    Page(s): 448 - 454

    Using function instead of fault probabilities for candidate discrimination during model-based diagnosis has the advantages that function is more readily available and facilitates explanation generation. However, current representations of function have been context dependent and state based, making them inefficient and time consuming. We propose a scheme for representing function for diagnosis based on component-ontology principles, i.e., we define component functions (called classes) with respect to their ports. The scheme is linear in both space and time complexity, and hence efficient. It is also domain-independent and scalable to the representation of complex devices. We demonstrate the utility of the representation for the diagnosis of a printer buffer board.

  • Using hybrid knowledge bases for missile siting problems

    Page(s): 141 - 148

    Hybrid knowledge bases (HKBs) are a formalism for integrating multiple representations of knowledge and data. HKBs provide a uniform framework for integrating uncertain information (as is often the case in terrain reasoning), temporal information (needed for weather effects, etc.), and numeric constraint solving capabilities (for situation assessment). We show how the HKB formalism may be applied to solve the problem of placing Patriot and Hawk missile batteries in a specified terrain, subject to the requirement that various existing assets be afforded maximal protection. We formalize this problem in a clear, mathematical framework, using the HKB paradigm, and show how the problem is solved. This provides a mathematically sound, as well as a practically viable, scalable solution to the important problem of missile siting.

  • Memory-based parsing with parallel marker-passing

    Page(s): 202 - 207

    Presents a parallel memory-based parser called PARALLEL, which is implemented on a marker-passing parallel AI computer called the Semantic Network Array Processor (SNAP). In the PARALLEL memory-based parser, the parallelism in natural language processing is utilized by a memory search model of parsing. Linguistic information is stored as phrasal patterns in a semantic network knowledge base that is distributed over the memory of the parallel computer. Parsing is performed by recognizing and linking linguistic patterns that reflect a sentence interpretation. This is achieved via propagating markers over the distributed network. We have developed a system capable of processing newswire articles about terrorism with a large knowledge base of 12,000 semantic network nodes. This paper presents the structure of the system, the memory-based parsing method used and the performance results obtained.

  • Genetic algorithms for partitioning air space

    Page(s): 291 - 297

    In this paper, we show how genetic algorithms can be used to automatically compute a balanced sectoring of airspace, increasing air traffic control capacity in high-density areas.

  • Probabilistic diagnostic reasoning: towards improving diagnostic efficiency

    Page(s): 441 - 447

    The author describes a new approximation method which can significantly improve the computational efficiency of Bayesian networks. He applies this technique to the diagnosis of acute abdominal pain, with good results. This approach is based on using a reduced set of the model parameters for diagnostic reasoning. The tradeoffs in diagnostic accuracy required to obtain increased computational efficiency (due to the smaller models) are carefully specified using a variety of statistical metrics.

  • A tool for the acquisition of Japanese-English machine translation rules using inductive learning techniques

    Page(s): 194 - 201

    Addresses the problem of constructing translation rules for ALT-J/E, a knowledge-based Japanese-English translation system developed at NTT. We introduce the system ATRACT, which is a semi-automatic knowledge acquisition tool designed to facilitate the construction of the desired translation rules through the use of inductive machine learning techniques. Rather than building rules by hand from scratch, a user of ATRACT can obtain good candidate rules by providing the system with a collection of examples of Japanese sentences along with their English translations. This learning task is characterized by two factors: (i) it involves exploiting a huge amount of semantic information as background knowledge; (ii) training examples are “ambiguous”. Currently, two learning methods are available in ATRACT. Experiments show that these methods lead to rules that are very close to those composed manually by human experts, given only a reasonable number of examples. These results suggest that ATRACT will significantly contribute to reducing the cost and improving the quality of ALT-J/E translation rules.

  • Routing heuristics for Cayley graph topologies

    Page(s): 474 - 476

    In general, a routing algorithm has to map virtual paths to sequences of physical data transfer operations. The number of physical transmission steps needed to transfer a particular data volume is proportional to the resulting transmission time. In the context of a corresponding optimization process, the Cayley graph model is used to generate and evaluate a large number of different interconnection topologies. Candidates are further evaluated with respect to fast and efficient routing heuristics using A* traversals. Simulated annealing techniques are used to find accurate traversal heuristics for each candidate. The results justify the application of these techniques to a large extent. In fact, the resulting heuristics provide a significant reduction in the number of expanded search nodes during the path-finding process at run-time.

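The payoff claimed at the end, fewer expanded search nodes under a better traversal heuristic, is easy to demonstrate with a generic A* implementation (a textbook formulation; the Cayley-graph topologies and annealed heuristics of the paper are not reproduced here):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search; returns (path_cost, nodes_expanded).

    `h` estimates the remaining cost; h = lambda n: 0 degrades the
    search to uniform-cost, expanding more nodes for the same result.
    """
    open_heap = [(h(start), 0, start)]
    best = {start: 0}
    expanded = 0
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if g > best.get(node, float("inf")):
            continue  # stale queue entry, a better path was found
        expanded += 1
        if node == goal:
            return g, expanded
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None, expanded
```

On a small grid with a Manhattan-distance heuristic, the informed search finds the same shortest path while expanding noticeably fewer nodes than the uninformed one.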
  • COCOS: a tool for constraint-based, dynamic configuration

    Page(s): 373 - 380

    The COCOS (COnfiguration through COnstraint Satisfaction) project was aimed at producing a tool that could be used for a variety of configuration applications. Traditionally, representation methods for technical configuration have focused either on reasoning about the structure of systems or on the quantity of components, which is not satisfactory in many target areas that need both. Starting from general requirements on configuration systems, we have developed a language based on an extension of the constraint satisfaction problem (CSP) model. The constraint-based approach allows a simple system architecture and a declarative description of the different types of configuration knowledge. We briefly discuss the current implementation and the experience obtained with a real-world knowledge base.

  • Qualitative spatial reasoning about objects in motion: application to physics problem solving

    Page(s): 238 - 245

    Describes an ongoing project to develop a theory of qualitative spatial reasoning which merges a simple, intuitive description of the spatial extent, relative position, and orientation of objects with existing methods for qualitative reasoning about dynamically changing worlds. We are applying our theories within a system for problem solving in the magnetic fields domain. We describe methods for integrating diagram and text input to a problem solver, methods of abstraction for modeling the spatial extents of objects, and a method for modeling spatial relations between objects through inequalities on extremal points which directly allows reasoning about the effects of translational motion.

  • Automatic classification of planktonic foraminifera by a knowledge-based system

    Page(s): 358 - 364

    The identification of foraminifera is an important task in oil exploration. However, this task is tedious and time-consuming. In this work, a knowledge-based system is developed for the identification of planktonic foraminifera. The identification process is made automatic by means of computer vision techniques. Currently, though still a prototype at this stage of its development, the knowledge-based system is able to identify several important species of planktonic foraminifera based on the parameters obtained by the image analysis algorithms. An overview of our method and the main components of the knowledge-based system are discussed.

  • Protein structure prediction using hybrid AI methods

    Page(s): 471 - 473

    Describes a new approach for predicting protein structures based on artificial intelligence methods and genetic algorithms. We combine nearest-neighbor searching algorithms, neural networks, heuristic rules and genetic algorithms to form an integrated system that predicts protein structures from their primary amino acid sequences. First, we describe our methods and how they are integrated, and then apply them to several protein sequences. The results are very close to the real structures obtained by crystallography. Parallel genetic algorithms are also implemented.

  • A neural network expert system shell

    Page(s): 502 - 508

    Presents the architecture of a hybrid neural network expert system shell. The system, structured around the concept of a “network element”, is aimed at preserving the semantic structure of the expert system rules whilst incorporating the learning capability of neural networks into the inferencing mechanism. Using this architecture, every rule of the knowledge base is represented by a one- or two-layer neural network element. These network elements are dynamically linked up to form a rule-tree during the inferencing process. The system is also able to adjust its inferencing strategy according to different users and situations. A rule editor is also provided to enable easy maintenance of the neural network rule elements.

  • Generating programs from connections of physical models

    Page(s): 224 - 230

    Describes a system that constructs a computer program from a graphical specification provided by the user. The specification consists of diagrams that represent physical and mathematical models; connections between diagram ports signify that corresponding quantities must be equal. A program (in Lisp or C) is generated from the graphical specification by data flow analysis and algebraic manipulation of equations associated with the physical models. Equations, algebraic manipulations, and unit conversions are hidden from the user and are performed automatically. This system allows more rapid generation of programs than would be possible with hand coding.

  • OaSiS: integrating safety reasoning for decision support in oncology

    Page(s): 185 - 191

    Oncologists manage the treatment of cancer patients under a variety of protocol-based clinical trials. Each trial requires data to be collected for monitoring efficacy and toxicity. The life-threatening nature of cancer and the toxicity of therapy emphasise the safety-critical nature of oncology. OaSiS provides decision support for protocol-based treatment of cancer and contributes to better data management and safer, more consistent application of protocols. It offers a highly graphical interface, employs logic-based problem-solving and is implemented in PROLOG.

  • Using background knowledge to improve inductive learning of DNA sequences

    Page(s): 351 - 357

    Successful inductive learning requires that training data be expressed in a form where underlying regularities can be recognized by the learning system. Unfortunately, many applications of inductive learning, especially in the domain of molecular biology, have assumed that data are provided in a form already suitable for learning, whether or not such an assumption is actually justified. This paper describes the use of background knowledge of molecular biology to re-express data into a form more appropriate for learning. Our results show dramatic improvements in classification accuracy for two very different classes of DNA sequences using traditional “off-the-shelf” decision-tree and neural-network inductive-learning methods.

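As a toy illustration of the re-expression idea: background knowledge (here, the codon reading frame of coding DNA, a stand-in chosen for brevity rather than the paper's actual transformation) turns a raw base string into features a learner can split on:

```python
START_CODONS = {"ATG"}  # assumed background knowledge for this sketch

def codons(dna):
    """Re-express a raw base string as successive 3-base codons."""
    return [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]

def reexpress(dna):
    """Per-codon features (here: start-codon flags) that a decision
    tree or neural network can exploit more easily than raw bases."""
    return [int(c in START_CODONS) for c in codons(dna)]
```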
  • Auto-MPS: an automated master production scheduling system for large volume manufacturing

    Page(s): 26 - 32

    The Automated Master Production Scheduler (Auto-MPS) is a hybrid expert scheduling system which performs production scheduling of thousands of assemblies in a high-volume manufacturing environment. It generates schedules based on a set of rules and constraint satisfaction algorithms which reflect the scheduling strategies created by management to meet customer demand while still controlling inventory and shipping costs. Auto-MPS also identifies significant situations which need to be analyzed by management. A graphical user interface that includes sophisticated graphical displays and hypertext-based editors allows the user to easily understand the status of the current production schedules and to rapidly identify and analyze potential problems. Auto-MPS has been in production for nearly two years and has significantly improved the scheduling processes at AlliedSignal Safety Restraint Systems.
