Logic Programming: Proceedings of the 1996 Joint International Conference and Symposium on Logic Programming

Copyright Year: 1996
Author(s): Michael Maher
Publisher: MIT Press
Content Type: Books & eBooks
Topics: Computing & Processing

Abstract

September 2-6, 1996, Bonn, Germany. Every four years, the two major international scientific conferences on logic programming merge in one joint event. JICSLP'96 is the thirteenth in the two series of annual conferences sponsored by The Association for Logic Programming. It includes tutorials, invited lectures, and refereed papers on all aspects of logic programming including: Constraints, Concurrency and Parallelism, Deductive Databases, Implementations, Meta and Higher-order Programming, Theory, and Semantic Analysis. The contributors are international, with strong contingents from the United States, United Kingdom, France, and Japan. Logic Programming series, Research Reports and Notes.

  • Table of Contents

    • Front Matter

      Page(s): i - xix
      Copyright Year: 1996

      MIT Press eBook Chapters

      This chapter contains sections titled: Half Title, Title, Copyright, Contents, Program Committee, The Association for Logic Programming, Series Foreword, Preface, Referees

    • Invited Talks

      Page(s): 1
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Constraint Programming

      Page(s): 3
      Copyright Year: 1996

      MIT Press eBook Chapters

      Constraint Logic Programming is one of the most successful parts of Logic Programming when it comes to real-world applications. In order to broaden the acceptance of CLP, we implemented CLP ideas in the C++ library ILOG Solver. We discuss here some of the tradeoffs we had to face, and show that the clear semantics of CLP is largely preserved. The bottom line of our work is to show that we can implement a CLP type of system which handles constraints on Booleans, integers, floating point intervals and finite sets, but not Herbrand terms. The availability of such a system in C++ leads to good acceptance, as shown by the real-world applications developed with it.
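
      As a hedged illustration (ours, not the chapter's), the declarative CLP style the chapter argues is preserved in the C++ library can be seen in a small finite-domain program; SWI-Prolog's library(clpfd) is assumed purely for concreteness:

        :- use_module(library(clpfd)).

        % Integer and Boolean (0/1) constraints stated declaratively.
        demo(X, Y, B) :-
            [X, Y] ins 0..9,      % finite integer domains
            X + Y #= 10,          % arithmetic constraint
            X #> Y #<==> B,       % B is the 0/1 truth value of X > Y
            label([X, Y]).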

    • Contributed Papers

      Page(s): 1
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Inferring Left-terminating Classes of Queries for Constraint Logic Programs

      Page(s): 7 - 21
      Copyright Year: 1996

      MIT Press eBook Chapters

      This paper presents an approach for universal left-termination of constraint logic programs, based on approximations. An approximation is basically an algebraic morphism between two constraint structures. By moving from the original domain to natural numbers, we compute inter-argument relations and some control information about a program. By moving from the natural numbers to the booleans, we compute a boolean term called a termination condition such that if the boolean approximation of a goal entails the termination condition, then the Prolog computation tree for that goal is finite.
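
      A hedged illustration of the kind of result such an analysis infers (the example is ours, not the paper's), using append/3 under a list-length norm:

        % The standard append/3 program.
        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :-
            append(Xs, Ys, Zs).

        % Under a list-length norm, a termination condition of the form
        % b1 \/ b3 can be inferred: a query append(A1, A2, A3)
        % left-terminates whenever A1 or A3 is bound to a finite list.
        %   ?- append([1,2], Ys, Zs).     % terminates (first argument bounded)
        %   ?- append(Xs, Ys, [1,2,3]).   % terminates (third argument bounded)
        %   ?- append(Xs, [1], Zs).       % does not terminate universally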

    • CLP(Rlin) Revised

      Page(s): 22 - 36
      Copyright Year: 1996

      MIT Press eBook Chapters

      This article presents a novel implementation of a constraint logic programming language over linear real constraints. Contrary to most existing implementations, which use the tableau method and trailing, the new system, called Athena, is based on a revised simplex algorithm over bounded variables supporting both constraint addition and constraint removal. Athena is the first implementation of CLP(Rlin) whose space requirement for the numerical solver is independent of the number of choice points. In addition, on standard CLP(Rlin) benchmarks, Athena produces significant time speed-ups (up to a factor of 7) and memory reductions (up to a factor of 23) compared to existing implementations. These speed-ups can be even more substantial on large sparse problems. The main technical contributions underlying these results are a number of implementation techniques to obtain an efficient dynamic revised simplex method.
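
      For readers unfamiliar with CLP over linear real constraints, the mortgage relation is a standard example of the kind of program such a solver executes; the sketch below assumes SWI-Prolog's or SICStus' library(clpr), not the Athena system itself:

        :- use_module(library(clpr)).

        % mortgage(Principal, Months, MonthlyInterest, Balance, Payment)
        mortgage(P, 0, _, P, _).
        mortgage(P, T, I, B, MP) :-
            { T >= 1,
              P1 = P + P * I - MP,
              T1 = T - 1 },
            mortgage(P1, T1, I, B, MP).

        % ?- mortgage(100000, 360, 0.01, 0, MP).
        % binds MP to the monthly payment by solving the accumulated
        % linear constraints.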

    • Effectiveness of Optimizing Compilation for CLP(R)

      Page(s): 37 - 51
      Copyright Year: 1996

      MIT Press eBook Chapters

      Constraint Logic Programming (CLP) languages extend logic programming by allowing constraints from different domains such as real numbers or Boolean functions. They have proved to be ideal for expressing problems that require interactive mathematical modelling and complex combinatorial optimization problems. However, CLP languages have mainly been considered as research systems, useful for rapid prototyping, but not really competitive with more conventional programming languages when performance is crucial. One promising approach to improving the performance of CLP systems is the use of powerful program optimizations to reduce the cost of constraint solving. We extend work in this area by describing a new optimizing compiler for the CLP language CLP(R). The compiler implements six powerful optimizations: reordering of constraints, bypass of the constraint solver, splitting and dead code elimination, removal of redundant constraints, removal of redundant variables, and specialization of constraints which cannot fail. We systematically evaluate the effectiveness of each optimization in isolation and also in combination. Our empirical evaluation of the compiler verifies that optimizing compilation can be made efficient enough to allow compilation of real-world programs and that it is worth performing such compilation because it gives significant time and space performance improvements.
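
      A hedged sketch of one of the listed optimizations, bypass of the constraint solver: roughly, when analysis shows that a constraint is always reached with certain arguments ground, it can be compiled to ordinary arithmetic instead of being shipped to the solver. The clauses below are our own illustration of the idea, not the compiler's output format:

        % Before optimization: every constraint goes through the CLP(R) solver.
        fib(0, 1).
        fib(1, 1).
        fib(N, F) :-
            { N > 1, F = F1 + F2, N1 = N - 1, N2 = N - 2 },
            fib(N1, F1),
            fib(N2, F2).

        % After solver bypass (valid only for calls where N is ground and
        % F is to be computed): the same relation with plain arithmetic.
        fib_bypass(0, 1).
        fib_bypass(1, 1).
        fib_bypass(N, F) :-
            N > 1,
            N1 is N - 1, N2 is N - 2,
            fib_bypass(N1, F1), fib_bypass(N2, F2),
            F is F1 + F2.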

    • A Framework for Modal Logic Programming

      Page(s): 52 - 66
      Copyright Year: 1996

      MIT Press eBook Chapters

      In this paper we present a framework for developing modal extensions of logic programming, which are parametric with respect to the properties chosen for the modalities and which allow sequences of modalities of the form [t], where t is a term of the language, to occur in front of clauses, goals and clause heads. The properties of modalities are specified by a set A of inclusion axioms of the form [t1]...[tn]α ⊂ [s1]...[sm]α. The language can deal with many of the well-known modal systems and several examples are provided. Due to its features, it is particularly suitable for performing epistemic reasoning, defining parametric and nested modules, describing inheritance in a hierarchy of classes and reasoning about actions. A goal-directed proof procedure for the language is presented, which is modular with respect to the properties of modalities. Moreover, we define a fixpoint semantics, by generalizing the standard construction for Horn clauses, which is used to prove soundness and completeness of the operational semantics with respect to the model-theoretic semantics, and it works for the whole class of logics identified by the inclusion axioms.

    • A Linear Logic Calculus of Objects

      Page(s): 67 - 81
      Copyright Year: 1996

      MIT Press eBook Chapters

      This paper presents a linear logic programming language, called O⊸, that gives a complete account of an object-oriented calculus with inheritance and override. This language is best understood as a logical counterpart of the object and record extensions of functional programming that have recently been proposed in the literature. From these proposals, O⊸ inherits the representation of objects as composite data structures, with attribute and method fields, as well as their interpretation as first-class values. O⊸ also gives a direct logical modeling of the self-application semantics of method invocation that justifies the view of objects as elements of recursive types. As such, the design of O⊸ appears interesting, in perspective, as a basis for developing flexible and powerful type systems for logical object-based languages.

    • Representing Priorities in Logic Programs

      Page(s): 82 - 96
      Copyright Year: 1996

      MIT Press eBook Chapters

      Reasoning with priorities is a central topic in knowledge representation. A number of techniques for prioritized reasoning have been developed in the field of AI, but existing logic programming lacks a mechanism for the explicit representation of priorities in a program. In this paper, we introduce a framework for representing priorities in logic programming. Prioritized logic programming represents preference knowledge more naturally than stratified programs, and is used to reduce non-determinism in logic programming. Moreover, it can realize various forms of commonsense reasoning such as abduction, default reasoning, and prioritized circumscription. The proposed framework increases the expressive power of logic programming and exploits new applications in knowledge representation.

    • A Novel Implementation Method of Delay

      Page(s): 97 - 111
      Copyright Year: 1996

      MIT Press eBook Chapters

      The efficiency of delay depends to a large extent on the following four basic operations: delay, wakeup, interrupt, and resume. Traditional implementations of delay in the WAM are slow because three out of the four basic operations need to save or restore the argument registers. In this paper, we present a novel method for implementing delay in a Prolog machine called ATOAM. The main idea is to store delayed calls as frames, called suspension frames, on the control stack rather than as records on the heap. Since delayed calls, after being woken, can be executed directly by using their suspension frames, the four basic operations become very simple. This method had been predicted to cost a large amount of control stack space. However, with tail-recursion elimination, the control stack space requirement can be reduced dramatically. This method has been implemented in the B-Prolog system. For several benchmark programs where delay is used, the experimental results show that B-Prolog is significantly faster and sometimes consumes much less total space than SICStus, a WAM-based Prolog system.
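
      At the source level, the delay mechanism being implemented corresponds to coroutining primitives such as freeze/2 (available in SICStus, B-Prolog and SWI-Prolog); a small sketch, ours rather than the paper's:

        % freeze(Var, Goal) delays Goal until Var is bound; binding Var
        % later "wakes up" the delayed goal.
        ?- freeze(X, format("woken with X = ~w~n", [X])),
           write(before), nl,
           X = 42.
        % prints:  before
        %          woken with X = 42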

    • A Thread in Time Saves Tabling Time

      Page(s): 112 - 126
      Copyright Year: 1996

      MIT Press eBook Chapters

      The use of tabling in logic programming has been recognized as a powerful evaluation technique. Currently available tabling systems are mostly based on variant checks and hence are limited in their ability to recognize reusable subcomputations. Tabling systems based on call subsumption can yield superior performance by recognizing a larger set of reusable computations. However, a straightforward adaptation of the mechanisms used in variant-based systems to reuse these computations can in fact result in considerable performance degradation. In this paper we propose a novel organization of tables using Dynamic Threaded Sequential Automata (DTSA) which permits efficient reuse of previously computed results in a subsumptive system. We describe an implementation of the tables using DTSA and the associated access mechanisms. We also report experimental results which show that a subsumptive tabling system implemented by extending the XSB logic programming system with our table access techniques can perform significantly better than the original variant-based system.
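
      For readers new to tabling, the sketch below shows the basic facility the paper builds on, using the variant-style :- table directive of XSB or SWI-Prolog; call subsumption and the DTSA organization refine how such tables are stored and reused:

        :- table path/2.

        % Without tabling this program loops on the cyclic edge relation;
        % with tabling, repeated subgoals reuse stored answers instead.
        path(X, Y) :- edge(X, Y).
        path(X, Y) :- edge(X, Z), path(Z, Y).

        edge(a, b).
        edge(b, a).      % cycle
        edge(b, c).

        % ?- path(a, Q).  enumerates Q in {a, b, c} and terminates.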

    • Interprocedural register allocation for the WAM based on source-to-source transformations

      Page(s): 127 - 141
      Copyright Year: 1996

      MIT Press eBook Chapters

      An approach to interprocedural register allocation for the WAM is presented which is based on source-to-source transformations of an intermediate language called Continuation Prolog. Continuation Prolog fills the conceptual gap between Prolog source code and the underlying abstract machine. Our approach does not require an analysis of the whole program. Only the definition of a predicate must be analyzed, but not its use. For certain kinds of predicates like DCGs no analysis is required at all. An implementation of our approach has been integrated into the WAM code generation of SICStus Prolog. Our approach yields speedups of up to 30% for existing benchmarks.
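
      The role an intermediate form plays between source and WAM can be glimpsed in the familiar source-to-source translation of DCGs, where an implicit argument pair is threaded through each clause; this is a generic illustration, not the paper's Continuation Prolog representation:

        % A DCG rule ...
        greeting --> [hello], name.
        name     --> [world].

        % ... is translated, source-to-source, into plain clauses that
        % thread a difference-list pair through the body (essentially):
        %   greeting(S0, S) :- S0 = [hello|S1], name(S1, S).
        %   name(S0, S)     :- S0 = [world|S].
        % ?- phrase(greeting, [hello, world]).   % succeeds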

    • Concurrency and Communication in Transaction Logic

      Page(s): 142 - 156
      Copyright Year: 1996

      MIT Press eBook Chapters

      In previous work, we developed Transaction Logic (or TR), which deals with state changes in deductive databases. TR provides a logical framework in which elementary database updates and queries can be combined into complex database transactions. TR accounts not only for the updates themselves, but also for important related problems, such as the order of update operations, non-determinism, and transaction failure and rollback. In the present paper, we propose Concurrent Transaction Logic (or CTR), which extends Transaction Logic with connectives for modeling the concurrent execution of complex processes. Concurrent processes in CTR execute in an interleaved fashion and can communicate and synchronize themselves. Like classical logic, CTR has a “Horn” fragment that has both a procedural and a declarative semantics, in which users can program and execute database transactions. CTR is thus a deductive database language that integrates concurrency, communication, and updates. All this is accomplished in a completely logical framework, including a natural model theory and a proof theory. Moreover, this framework is flexible enough to accommodate many different semantics for updates and deductive databases. For instance, not only can updates insert and delete tuples, they can also insert and delete null values, rules, or arbitrary logical formulas. Likewise, not only can databases have a classical semantics, they can also have the well-founded semantics, the stable-model semantics, etc. Finally, the proof theory for CTR has an efficient SLD-style proof procedure. As in the sequential version of the logic, this proof procedure not only finds proofs, it also executes concurrent transactions, finds their execution schedules, and updates the database. A main result is that the proof theory is sound and complete for the model theory.

    • An Extension of SLD by Abduction and Integrity Maintenance for View Updating in Deductive Databases

      Page(s): 157 - 169
      Copyright Year: 1996

      MIT Press eBook Chapters

      We present SLDAI, an SLD-based proof procedure extended by Abduction and Integrity maintenance. Its purpose is to solve the view update problem in deductive databases. For an update request in a database which satisfies its integrity theory, SLDAI computes updates (hypothetical insertions or deletions of facts about base predicates) which satisfy the request while maintaining integrity. We argue that this improves on the state of the art.

    • A Realistic Experiment in Knowledge Representation in Open Event Calculus: Protocol Specification

      Page(s): 170 - 184
      Copyright Year: 1996

      MIT Press eBook Chapters

      This paper presents one of the first realistic experiments in the use of Event Calculus in Open Logic Programming: the specification of a process protocol. The specification task involves most of the common complications of temporal reasoning: the representation of context-dependent actions, of preconditions and ramifications of actions, object creation, the modelling of system faults, and most of all, the representation of uncertainty of actions. As the underlying language, the Open Logic Programming formalism, an extension of Logic Programming, is used. The experiment shows that Event Calculus is a promising candidate for the specification of dynamic systems. A comparison between the specification of process protocols in Event Calculus and in the more commonly used process algebras shows fundamental differences between the two approaches.
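
      For context, the core of a simplified Event Calculus can be written directly as a logic program; the clauses and narrative below are the textbook formulation with a hypothetical example, not the paper's open-logic-programming version:

        % A fluent F holds at time T if some earlier event initiated it
        % and no intervening event terminated ("clipped") it.
        holds_at(F, T) :-
            happens(E, T0),
            initiates(E, F, T0),
            T0 < T,
            \+ clipped(T0, F, T).

        clipped(T0, F, T) :-
            happens(E1, T1),
            terminates(E1, F, T1),
            T0 < T1,
            T1 < T.

        % Hypothetical narrative:
        happens(switch_on, 1).
        initiates(switch_on, light_on, 1).
        happens(switch_off, 7).
        terminates(switch_off, light_on, 7).

        % ?- holds_at(light_on, 5).   % succeeds
        % ?- holds_at(light_on, 9).   % fails (clipped at 7)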

    • A declarative view of modes

      Page(s): 185 - 199
      Copyright Year: 1996

      MIT Press eBook Chapters

      Mode information in logic programming is concerned with such things as inputs and outputs of procedures, producers and consumers of variable bindings, instantiation states of calls during execution and the order of execution. Modes seem inextricably tied to the procedural rather than the declarative view of logic programs. Despite this, we argue that purely declarative information can actually express the essence of modes remarkably well. The declarative view allows a high level notion of correctness which is independent of how the information is used. We start from a framework which includes types and show how a set of ground atoms related to the success set can be used to express mode information. We introduce constrained regular trees to define such sets and show how they can be used as the basis for a polymorphic mode system. The mode system can express directional types, back communication and linearity and can be used to infer lower level mode information for languages such as Mercury.

    • Type Synthesis for Logic Programs

      Page(s): 200 - 214
      Copyright Year: 1996

      MIT Press eBook Chapters

      We present an efficient framework of type inference for logic programs by generalizing the idea of type synthesis for Ground Prolog. Given a program and its top-level goals to be analyzed, this dataflow analysis can efficiently approximate the datatypes of the program variables in O(n²) time where n is the number of the variables. If the mode information is available by declarations or automatic inference, the framework can be specialized to improve the precision of the analysis and has time complexity of O(n³). In particular we show this specialization for Prolog programs. When the mode and type information is as specific as that for Ground Prolog, this framework reduces to the original type synthesis. To evaluate this idea, we have implemented a prototype system for inference of mode, dereferencing and type information of Prolog programs. Based on the results of our evaluation, it appears that the generalized type synthesis is a promising approach to practical type inference for logic programs.

    • Diagnosing Non-Well-Moded Concurrent Logic Programs

      Page(s): 215 - 229
      Copyright Year: 1996

      MIT Press eBook Chapters

      Strong moding and constraint-based mode analysis are expected to play fundamental roles in debugging concurrent logic/constraint programs as well as in establishing the consistency of communication protocols and in optimization. Mode analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode constraints, and can be solved efficiently by unification over feature graphs. In practice, however, it is important to be able to analyze non-well-moded programs (programs whose mode constraints are inconsistent) and present plausible “reasons” of inconsistency to the programmers in the absence of mode declarations. This paper discusses the application of strong moding to systematic and efficient static program debugging. The basic idea, which turned out to work well at least for small programs, is to find a minimal inconsistent subset from an inconsistent set of mode constraints and indicate the symbol (occurrence)s in the program text that imposed those constraints. A bug can be pinpointed better by finding more than one overlapping minimal subset. These ideas can be readily extended to finding multiple bugs at once. For large programs, stratification of predicates narrows search space and produces more intuitive explanations. Stratification plays a fundamental role in introducing mode polymorphism as well.

    • Declarative Logic Programming with Primitive Recursive Relations on Lists

      Page(s): 230 - 243
      Copyright Year: 1996

      MIT Press eBook Chapters

      In a previous paper we introduced a system of recursion operators for formulating pure logic programs, dispensing with explicit recursions. The recursion operators, some of which are similar to higher-order functions known from functional programming, take the form of quasi-higher-order predicates. In this paper we identify a comprehensive class of logic programs called primitive recursive relations over lists (including primitive recursive functions) using the so-called fold recursion operators. We formulate and prove a duality theorem connecting our relational fold operators. We show how correct well-moded procedural interpretations using any fixed computation rule can be obtained from a declarative logic program. This is accomplished in a principled manner by a simplified data flow analysis enabled by the recursion operator formulation and the duality theorem. The recursion operators are handled in ordinary clauses by means of established metalogic programming techniques.
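
      A hedged illustration of what a relational fold operator looks like (our rendering, not the authors' exact operator set); plus3/3 is a hypothetical helper:

        % foldr(P, E, Xs, R): R is the right fold of list Xs with the
        % ternary relation P and base value E.
        foldr(_, E, [], E).
        foldr(P, E, [X|Xs], R) :-
            foldr(P, E, Xs, R0),
            call(P, X, R0, R).

        % Summation defined without explicit recursion:
        plus3(X, A, S) :- S is X + A.
        sum(Xs, S) :- foldr(plus3, 0, Xs, S).

        % ?- sum([1,2,3,4], S).   % S = 10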

    • Engineering Transformations of Attributed Grammars in λProlog

      Page(s): 244 - 258
      Copyright Year: 1996

      MIT Press eBook Chapters

      An abstract representation for grammar rules that permits an easy implementation of several attributed grammar transformations is presented. It clearly separates the actions that contribute to evaluating attribute values from the circulation of these values, and it makes it easy to combine the representations of several rules in order to build the representation of new rules. This abstract form applies well to such transforms as elimination of left-recursion, elimination of empty derivation, unfolding and factorization. Finally, the technique is applied to DCGs and a λProlog implementation of the abstract form and of the transforms is described.
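
      One of the named transforms, elimination of left recursion, can be pictured on an ordinary DCG; this is a generic textbook example, not the paper's abstract representation:

        % Left-recursive grammar: loops under the standard top-down
        % DCG execution strategy.
        %   expr --> expr, [+], term.
        %   expr --> term.

        % After eliminating left recursion:
        expr      --> term, expr_rest.
        expr_rest --> [+], term, expr_rest.
        expr_rest --> [].

        term --> [n].   % a toy terminal standing for a number

        % ?- phrase(expr, [n, +, n, +, n]).   % succeeds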

    • Unification via Explicit Substitutions: The Case of Higher-Order Patterns

      Page(s): 259 - 273
      Copyright Year: 1996

      MIT Press eBook Chapters

      Following the general method and related completeness results on using explicit substitutions to perform higher-order unification proposed in [5], we investigate in this paper the case of higher-order patterns as introduced by Miller. We show that our general algorithm specializes in a very convenient way to patterns. We also sketch an efficient implementation of the abstract algorithm and its generalization to constraint simplification.

    • An Abstract Machine for Computing the Well-Founded Semantics

      Page(s): 274 - 288
      Copyright Year: 1996

      MIT Press eBook Chapters

      The well-founded semantics has gained wide acceptance partly because it is a skeptical semantics. That is, the well-founded model posits as unknown atoms which are deemed true or false in other formalisms such as stable models. This skepticism makes the well-founded model not only useful in itself, but also suitable as a basis for other forms of non-monotonic reasoning. For instance, since algorithms to compute stable models are intractable, the atoms relevant to such algorithms can be limited to those undefined in the well-founded model. This paper presents an implementation of the well-founded semantics in the SLG-WAM of XSB. To compute the well-founded semantics, the SLG-WAM adds three operations to its tabling engine — negative loop detection, delay and simplification — which serve to detect, to break and to resolve cycles through negation that may arise in evaluating normal programs. We describe fully the addition of these operations to our tabling engine, and demonstrate the efficiency of our implementation in two ways. First, we present a theorem that bounds the need for delay to those literals which are not dynamically stratified for a fixed-order computation. Secondly, we present performance results that indicate that the overhead of delay and simplification to Prolog or tabled evaluations is minimal.
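
      A minimal example of the behaviour being implemented, written with the tabled negation (tnot/1) of XSB and SWI-Prolog; the program is ours, for illustration only:

        :- table p/0, q/0.

        % A loop through negation: in the well-founded model neither p nor q
        % is true or false, so a query ?- p. answers "undefined" rather than
        % looping; detecting and breaking such negative loops is what the
        % delay and simplification operations are for.
        p :- tnot(q).
        q :- tnot(p).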

    • Efficient Implementation of the Well-founded and Stable Model Semantics

      Page(s): 289 - 303
      Copyright Year: 1996

      MIT Press eBook Chapters

      An implementation of the well-founded and stable model semantics for range-restricted function-free normal programs is presented. It includes two modules: an algorithm for implementing the two semantics for ground programs and an algorithm for computing a grounded version of a range-restricted function-free normal program. The latter algorithm does not produce the whole set of ground instances of the program but a subset which is sufficient in the sense that no stable models are lost. The implementation of the stable model semantics for ground programs is based on bottom-up backtracking search. It works in linear space and employs a powerful pruning method based on an approximation technique for stable models which is closely related to the well-founded semantics. The implementation includes an efficient algorithm for computing the well-founded model of a ground program. The implementation has been tested extensively and compared with a state-of-the-art implementation of the stable model semantics, the SLG system. In tests involving ground programs it clearly outperforms SLG.
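
      For orientation, a naive Prolog check of the stable-model condition (a model is stable if it equals the least model of its Gelfond-Lifschitz reduct) is sketched below; SWI-Prolog built-ins are assumed, and real systems such as the one described use far more refined search and pruning:

        % Ground rules as rule(Head, PositiveBody, NegativeBody) facts:
        %   p :- not q.      q :- not p.
        rule(p, [], [q]).
        rule(q, [], [p]).

        % M is stable iff M equals the least model of the reduct w.r.t. M.
        stable(M) :-
            findall(H-B,
                    ( rule(H, B, Neg),
                      \+ (member(N, Neg), member(N, M)) ),
                    Reduct),
            least_model(Reduct, [], LM),
            msort(M, S), msort(LM, S).

        least_model(Rules, Acc, LM) :-
            (   member(H-B, Rules),
                \+ member(H, Acc),
                \+ (member(A, B), \+ member(A, Acc))   % body already derived
            ->  least_model(Rules, [H|Acc], LM)
            ;   LM = Acc
            ).

        % ?- stable([p]).     % succeeds (so does stable([q]))
        % ?- stable([p,q]).   % fails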

    • Adding Flexibility to Query Evaluation for Modularly Stratified Databases

      Page(s): 304 - 318
      Copyright Year: 1996

      MIT Press eBook Chapters

      An efficient and flexible evaluation method for modularly stratified databases is presented which avoids complicated synchronization schemes. By simple parameterization, any evaluation strategy in the range from pure top-down to pure bottom-up can be specified on a predicate by predicate basis. The flexibility obtained this way is particularly advantageous for optimization purposes.

    • A Conceptual Embedding of Folding into Partial Deduction: Towards a Maximal Integration

      Page(s): 319 - 332
      Copyright Year: 1996

      MIT Press eBook Chapters

      The relation between partial deduction and the unfold/fold approach has been a matter of intense discussion. In this paper we consolidate the advantages of the two approaches and provide an extended partial deduction framework in which most of the tupling and deforestation transformations of the fold/unfold approach, as well as the current partial deduction transformations, can be achieved. Moreover, most of the advantages of partial deduction, e.g. lower complexity and a more detailed understanding of control issues, are preserved. We build on well-defined concepts in partial deduction and present a conceptual embedding of folding into partial deduction, called conjunctive partial deduction. Two minimal extensions to partial deduction are proposed: using conjunctions of atoms instead of atoms as the principal specialisation entity and also renaming conjunctions of atoms instead of individual atoms. Correctness results for the extended framework (with respect to computed answer semantics and finite failure semantics) are given. Experiments with a prototype implementation are presented, showing that, somewhat to our surprise, conjunctive partial deduction not only handles the removal of unnecessary variables, but also leads to substantial improvements in specialisation for standard partial deduction examples.
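
      A hedged illustration of the deforestation effect mentioned above, on the standard double-append example; the specialised clauses below show the kind of result conjunctive partial deduction typically derives, written by hand here:

        % Original program: the conjunction builds an intermediate list T.
        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

        da(Xs, Ys, Zs, R) :-
            append(Xs, Ys, T),
            append(T, Zs, R).

        % Specialising the conjunction of the two append calls yields a
        % definition with no intermediate list:
        da2([], Ys, Zs, R) :- append(Ys, Zs, R).
        da2([X|Xs], Ys, Zs, [X|R]) :- da2(Xs, Ys, Zs, R).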

    • Demand transformation analysis for concurrent constraint programs

      Page(s): 333 - 347
      Copyright Year: 1996

      MIT Press eBook Chapters

      This paper presents a demand transformation analysis that maps a predicate's output demands to its input demands. This backward dataflow analysis for concurrent constraint programs is constructed in the framework of abstract interpretation. In the context of stream parallelism, this analysis identifies an amount of input data for which predicate execution can safely wait without danger of introducing deadlock. We have constructed an implementation of this analysis and tested it on some small, illustrative programs and have determined that it gives useful results in practice. We identify several applications of the analysis results to distributed implementations of concurrent constraint languages, including thread construction and communication granularity control. This analysis will enable existing computational cost estimation analyses to be applied to stream-parallel logic languages.

    • Complementation of Abstract Domains made Easy

      Page(s): 348 - 362
      Copyright Year: 1996

      MIT Press eBook Chapters

      In standard abstract interpretation theory, the inverse of the reduced product of abstract domains was recently defined and called complementation. Given two domains C and D such that D abstracts C, the complement C ∼ D is the most abstract domain whose reduced product with D gives C back. We show that, when C is a continuous complete lattice, there is a particularly simple method for computing C ∼ D. Since most domains for abstract interpretation are (complete and) continuous, this method is widely applicable. In order to demonstrate its relevance, we apply this result and some of its consequences to Cousot and Cousot's domain for integer interval analysis of imperative programs, and to several well-known domains for the static analysis of logic languages, viz., Pos, Def and Sharing. In particular, we decompose Sharing in three more abstract domains whose reduced product gives back Sharing, and such that each component corresponds to one of the three properties that coexist in the elements of Sharing: ground-dependency, pair-sharing (or equivalently variable independence) and set-sharing. Using our theory, we minimize each component of this decomposition, obtaining in some cases domains that are surprisingly simpler than the corresponding original components.

    • Cumulative Scheduling with Task Intervals

      Page(s): 363 - 377
      Copyright Year: 1996

      MIT Press eBook Chapters

      This paper presents a set of propagation rules to solve cumulative constraints. As in our previous paper on jobshop scheduling [8], our goal is to propose to the CLP community techniques that allow a constraint satisfaction program to obtain performance which is competitive with ad-hoc approaches. The rules that we propose are a mix of an extension of the concept of task intervals to the cumulative case and the use of a traditional resource histogram. We also explain how to use a branching scheme inherited from operations research to address complex multi-resource problems (similar to the use of edge-finding for jobshop scheduling). We show that the complex propagation patterns associated with our rules make a strong argument for using logic programming. We also identify a phenomenon of phase transition in our examples that illustrates why cumulative scheduling is hard.
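
      For concreteness, this is the kind of cumulative constraint being propagated, here stated with SWI-Prolog's library(clpfd) cumulative/2 (an assumed off-the-shelf propagator) rather than the authors' own rules:

        :- use_module(library(clpfd)).

        % Three tasks task(Start, Duration, End, ResourceUse, Id) sharing a
        % resource of capacity 2.
        schedule([S1, S2, S3]) :-
            [S1, S2, S3] ins 0..10,
            Tasks = [task(S1, 3, _E1, 1, 1),
                     task(S2, 2, _E2, 1, 2),
                     task(S3, 4, _E3, 2, 3)],
            cumulative(Tasks, [limit(2)]),
            label([S1, S2, S3]).

        % ?- schedule(Starts).   % e.g. Starts = [0, 0, 3]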

    • Boosting the Interval Narrowing Algorithm

      Page(s): 378 - 392
      Copyright Year: 1996

      MIT Press eBook Chapters

      Interval narrowing techniques are a key issue for handling constraints over real numbers in the logic programming framework. However, the standard fixed-point algorithm used for interval narrowing may give rise to cyclic phenomena and hence to problems of slow convergence. Analysis of these cyclic phenomena shows: 1) that a large number of operations carried out during a cycle are unnecessary; 2) that many others could be removed from cycles and performed only once when these cycles have been processed. What is proposed here is a revised interval narrowing algorithm for identifying and simplifying such cyclic phenomena dynamically. First experimental results show that this approach improves performance significantly.
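
      A hedged sketch of a single narrowing step for the ternary constraint X = Y + Z over closed intervals (our code; the outward rounding and fixed-point loop of a real solver are omitted):

        % Intervals are Lo-Hi pairs. narrow_add/6 shrinks each interval
        % using the other two, failing if some interval becomes empty.
        narrow_add(XL-XH, YL-YH, ZL-ZH, NXL-NXH, NYL-NYH, NZL-NZH) :-
            NXL is max(XL, YL + ZL),  NXH is min(XH, YH + ZH),
            NYL is max(YL, XL - ZH),  NYH is min(YH, XH - ZL),
            NZL is max(ZL, XL - YH),  NZH is min(ZH, XH - YL),
            NXL =< NXH,  NYL =< NYH,  NZL =< NZH.

        % ?- narrow_add(0-10, 3-5, 4-6, X, Y, Z).
        %    X = 7-10, Y = 3-5, Z = 4-6.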

    • Completeness Results for Basic Narrowing in Non-copying Implementations

      Page(s): 393 - 407
      Copyright Year: 1996

      MIT Press eBook Chapters

      Narrowing and rewriting play an important role in giving the operational semantics of languages that integrate functional and logic programming. These two operations are usually implemented using tree representation of terms and atoms. Such implementations do not allow sharing of similar structures. In contrast to this, implementations which use (directed acyclic) graph representations of terms and atoms allow sharing of similar structures. Such sharing saves space and avoids repetition of computations. Term graph rewriting is one of the nice models proposed in the literature to facilitate sharing of similar structures. In this paper, we study completeness of basic narrowing in term graph rewriting. Our results show that term graph rewriting not only improves efficiency but even facilitates more general results on the completeness of basic narrowing than the results known in term rewriting (which use tree representations).

    • Extremal problems in logic programming and stable model computation

      Page(s): 408 - 422
      Copyright Year: 1996

      MIT Press eBook Chapters

      We study the following problem: given a class of (disjunctive) logic programs C, determine the maximum number of stable models (answer sets) of a program from C. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtain similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implications for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper.

    • Logic Programs with Contested Information

      Page(s): 423 - 437
      Copyright Year: 1996

      MIT Press eBook Chapters

      We generalize reasoning with logically inconsistent information to reasoning with contested information. We provide a semantics for normal logic programs with contested information in terms of C4, a four-valued logic. In terms of this semantics we define strong and weak entailments of normal logic programs with contested information. C4 also provides a new semantics for normal logic programs. We show that a normal logic program strongly entails a sentence under C4 if, and only if, that sentence is also entailed by the well-founded semantics and in case the program has a two-valued stable model, the program weakly entails a sentence under C4 if, and only if, it is also entailed by the stable model semantics. We use this result to argue that the difference between the well-founded semantics and the stable model semantics can be characterized in terms of what we call completely contested information. We also use C4 to provide an intuitively satisfactory semantics for databases that violate denial integrity constraints.

    • Asserting Lemmas in the Stable Model Semantics

      Page(s): 438 - 452
      Copyright Year: 1996

      MIT Press eBook Chapters

      The stable model semantics for normal programs has the problem that logical consequences of programs cannot, in general, be stored as lemmas, because the set of stable models of the resulting program may change. We argue that it is possible to assert a conclusion A as a lemma in the stable model semantics, provided that a set of facts supporting the conclusion (which we call a base set for A) is asserted at the same time. The effect on the meaning of the program is that of selecting some of the stable models containing A. The collection of all base sets for A generates all the stable models containing A. We propose a characterization of base sets that identifies minimal ones, i.e. the fewest and smallest base sets for A.

    • A Compositional Semantics for Logic Programs and Deductive Databases

      Page(s): 453 - 467
      Copyright Year: 1996

      MIT Press eBook Chapters

      Considering integrity constraints and program composition, it is argued that a semantics for logic programs and deductive databases should not accommodate inconsistencies globally, as in classical logic, but locally. It is shown that minimal logic, a weakening of classical logic which precludes refutation proofs, is sufficient to provide a proof theory for generalized programs corresponding to deductive databases and disjunctive logic programs. A (nonclassical) model theory is proposed for these programs, which allows local inconsistencies. The proposed semantics naturally extends the minimal model and completion semantics of positive programs and is compositional. Arguably, it appropriately conveys a practitioner's intuition.

    • A Compositional Semantics for Normal Open Programs

      Page(s): 468 - 482
      Copyright Year: 1996

      MIT Press eBook Chapters

      Modular programs are built as a combination of separate modules, which may be developed and verified separately. Therefore, in order to reason over such programs, compositionality plays a crucial role: the semantics of the whole program must be obtainable as a simple function from the semantics of its individual modules. In this paper we propose a compositional semantics for first-order programs. This semantics is correct with respect to the set of logical consequences of the program. Moreover, in contrast with other approaches, it is always computable. Furthermore, we show how our results on first-order programs may be applied in a straightforward way to normal logic programs, in which case our semantics might be regarded as a compositional counterpart of Kunen's semantics. Finally we discuss and show how these results have to be modified in order to be applied to normal CLP.

    • A Nonmonotonic Disputation-Based Semantics and Proof Procedure for Logic Programs

      Page(s): 483 - 497
      Copyright Year: 1996

      MIT Press eBook Chapters

      Logic programs with nonmonotonic negation are embedded in a general, abstract disputation-based framework for nonmonotonic logics. This formalization induces a particular semantics, which is proved to extend the well-founded semantics and whose expanded expressiveness is illustrated by different examples involving reasoning by cases. Moreover, we develop a formal proof procedure for skeptical reasoning in the general disputation framework. Its adaptation to the logic programming context provides a goal-oriented and local proof procedure for the induced semantics.

    • Visualizing Parallel Logic Program Execution for Performance Tuning

      Page(s): 498 - 512
      Copyright Year: 1996

      MIT Press eBook Chapters

      A variety of visualization tools are available to help application programmers ascertain the behaviour of parallel logic programs. Often, however, these tools provide information which is implementation-oriented, making it difficult to understand or use at the source level. This paper describes a visualization tool, ParSee, specifically designed to help the application programmer. Abstracting from implementation-oriented information, ParSee provides high-level, static views of parallel logic program computations. These views portray several performance metrics, each metric diagnosing the influence of a (source-level) specification of parallelism on runtime performance. This information is easy to understand and directly useful for diagnosis and performance tuning. The views are scalable over the number of processors and over time, making them suitable for long and highly parallel traces. ParSee is implemented in and for the ECLiPSe parallel constraint logic programming language, but the principles behind it are also applicable to other high-level parallel languages. Currently, several alternative graphical interfaces are supported. This paper also describes a small case study of using ParSee, and work in progress on coupling ParSee with other trace tools (e.g. VisAndOr).

    • Initial Results from the Parallel Implementation of DASWAM

      Page(s): 513 - 527
      Copyright Year: 1996

      MIT Press eBook Chapters

      The Dynamic Dependent And-parallel Scheme (DDAS) is a parallel execution scheme for Prolog that is designed to exploit independent and dependent and-parallelism in full Prolog programs. DASWAM is a prototype implementation of this scheme. Initially implemented as a simulator, this prototype has now been parallelised for a range of shared-memory multi-processors. In this paper, the initial results from this parallel implementation are presented. These results are quite encouraging, and show that DASWAM is able to effectively exploit the and-parallelism extracted by DDAS. The original simulator is instrumental in helping to interpret the results obtained from the parallel system. The results also suggest that DDAS is able to extract (and DASWAM able to exploit) and-parallelism across a wider range of programs than many other proposed and-parallel schemes.

    • Poster Abstracts

      Page(s): 529
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Applications of Efficient Lazy Set Expression

      Page(s): 531
      Copyright Year: 1996

      MIT Press eBook Chapters

      We add more expressive power to PROLOG in the form of lazily evaluable set expressions. The need for lazy set expressions arises from the design of integrated logic+functional programming languages. It turns out that lazy set expressions are advantageous for a wide range of applications. This poster presents examples from the areas of logic+functional programming, extended logic programs and CLP(FD). Previous work on set transformations focuses on the substitution of or-parallelism with and-parallelism. We use one of these transformations to implement lazy set expressions, the uniform transformation by Manen and Demoen, which is especially well-suited for lazy evaluation. However, this transformation can show a polynomial slowdown w.r.t. plain findall/3. So we introduce some optimisations which enable us to achieve a performance close to a manually compiled set expression for several relevant programs. These optimisation steps are partially based on mode information and consist of a technique called lazy accumulator optimisation, similar to lambda lifting used for fully lazy evaluation in functional languages, reordering of test literals including the introduction of continuation passing, and the derivation of control information (delay clauses) for lazy evaluation. Programming techniques using set expressions include incomplete and infinite data structures, which are familiar in the functional programming community. A step beyond the functional context, these techniques can now be applied to the solutions of nondeterministic procedures. Another major application of lazy set expressions is the elegant and efficient treatment of negations introduced by the Lloyd/Topor transformations of extended logic programs, e.g. for universal quantifiers. Finally, we have shown that lazy set expressions are well-suited for the implementation of generalised propagation on top of CLP(FD).

    • Logic Programming Tools Applied to Fire Detection in Hard-coal Mines

      Page(s): 532
      Copyright Year: 1996

      MIT Press eBook Chapters

      Detection of underground fires is an important security task in hard-coal mining. In order to enable rapid detection, the underground carbon monoxide (CO) concentration is constantly monitored using several hundreds of sensors in an average coal mine. Current monitoring systems, however, produce a large number of false alarms. We developed an improved system for CO-monitoring in hard-coal mines using adaptive techniques to forecast the CO-concentrations in the mine, achieving a significant reduction of false alarms. Logic programming permits a descriptive and transparent formulation of the underlying algorithms and heuristics, leading to improved security and easy maintenance of the resulting system. Both the main monitoring process and the graphical user interface are implemented using logic programming techniques. For each, a Prolog-based tool has been developed. Simultaneous monitoring of a large number of CO-sensors is implemented in the rule-based specification language SEA (Streams in EA). SEA is an extension of Evolving Algebras (EAs) providing modules and stream parallelism. Special emphasis is on the compiler, which generates efficient Prolog code from SEA specifications. As a basis for the mouse-driven graphical user interface of the monitoring system we developed the library PAT (Prolog and Tcl/Tk), which smoothly couples Tcl/Tk to Prolog. In contrast to existing Tcl/Tk interfaces like ProTcXl, which require extensive knowledge of Tcl/Tk commands, PAT handles all user interface elements as Prolog terms, tightly integrating Tk's functionality while conforming to the logic programming paradigm.

    • Recognition of 3D Objects in Aerial Images Based on Generic Models

      Page(s): 533
      Copyright Year: 1996

      MIT Press eBook Chapters

      Availability of actual three-dimensional data for geo-information systems has become of great importance for an increasing number of tasks. Since the acquisition of such data has so far mainly been done with the help of semi-automatic tools, a large research program called “Semantic Modeling” was started 3 years ago with the aim of improving image interpretation by incorporating application domain knowledge represented by explicit models. In our sub-project we apply CLP for the recognition of 3D objects (i.e. buildings) in aerial images. Logic programming constitutes the platform for the representation of image and object models and the control strategy of the reasoning process. Generic 3D models (constructive solid geometry (CSG), augmented by constraints) are applied to represent the large number of different building types on the one hand. Image segmentation results in features of different classes, giving a symbolic 2D image description on the other hand. In order to match object models to image data, a third kind of model (aspect graph) is used, bridging the gap between the 3D volumetric and 2D image data. Such aspect graphs are transformed to CLP clauses, and matching is done by solving the respective CSP. Our current prototype is based on ECLiPSe and extends the built-in CLP(FD) solver to cope with complex objects.

    • Tracing Prolog without a tracer

      Page(s): 534
      Copyright Year: 1996

      MIT Press eBook Chapters

    • PROCALOG — Programming with Constraints and Abducibles in Logic

      Page(s): 535
      Copyright Year: 1996

      MIT Press eBook Chapters

      PROCALOG is being designed as a programming language (a prototype implementation exists) based on the CALOG framework [1]. The CALOG framework is a unifying framework for standard LP, Abductive Logic Programming (ALP), Constraint Logic Programming (CLP), and Semantic Query Optimization (SQO). The framework combines the use of definitions, as in ordinary logic programming, with the use of integrity constraints, as in ALP and SQO. The programmer can choose to represent knowledge in either form subject to the condition that the integrity constraints be “properties” of the definitions. PROCALOG executes definitions in conventional logic programming goal reduction manner and integrity constraints in forward reasoning style to check potential answers for consistency. The integrity constraints are used to process goals when the definitions cannot be used, either because they are not accessible (as in ALP and SQO) or because their use would be computationally explosive (as in CLP and more generally). After briefly outlining the main elements and the semantics of the CALOG framework, the presentation describes a variety of applications which make use of some of PROCALOG's features. For standard LP applications, integrity constraints can be used to provide reasoning shortcuts and prune the search tree. Configuration problems can easily be solved, whether represented in an abductive, constraint or database context. Planning and constraint satisfaction problems constitute other potential applications. Finally, a list of related frameworks is given. PROCALOG may provide a unifying programming platform and help to identify interdisciplinary solutions.

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Attempto Controlled English (ACE): A Seemingly Informal Bridgehead in Formal Territory

      Page(s): 536
      Copyright Year: 1996

      MIT Press eBook Chapters

      Attempto Controlled English (ACE), a subset of English with a restricted grammar and a domain-specific vocabulary, allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. ACE has a principled structure: declarative sentences are combined by constructors (e.g. negation, if-then, and-lists, or-lists) into powerful composite sentences, while certain forms of anaphora and ellipsis render the language concise and natural. We have developed the Attempto system, which unambiguously translates complete ACE specifications into discourse representation structures, a structured form of first-order predicate logic, and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be used to answer queries in ACE about the specification, and it can be executed for simulation, prototyping and validation of the specification. Tools like a paraphraser, a lexical editor, a spelling checker, and a metainterpreter for query answering and execution complement Attempto. Using Attempto we have successfully processed the non-trivial specification of an automated teller machine. View full abstract»
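
      Purely as an illustration of the kind of mapping involved, a controlled-English sentence such as “Every customer who owns a card inserts the card.” corresponds roughly to the Prolog clause below. The sentence and the clause are invented, and the actual Attempto output is a discourse representation structure rather than a raw Prolog clause.

        % Hypothetical Prolog rendering of one controlled-English sentence:
        inserts(Customer, Card) :-
                customer(Customer),
                card(Card),
                owns(Customer, Card).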

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Optimizing Constraint-Intensive Problems Using Early Projection

      Page(s): 537
      Copyright Year: 1996

      MIT Press eBook Chapters

      This chapter contains sections titled: References View full abstract»

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Logic Programming and Databases Integrated at Last?

      Page(s): 538
      Copyright Year: 1996

      MIT Press eBook Chapters

      This chapter contains sections titled: References View full abstract»

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      How to Extend Partial Deduction to Derive the KMP String-Matching Algorithm from a Naive Specification

      Page(s): 539
      Copyright Year: 1996

      MIT Press eBook Chapters

      This chapter contains sections titled: References View full abstract»
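
      For orientation, the “naive specification” mentioned in the title is commonly taken to be something like the following two-line Prolog definition of substring matching; this particular formulation is our own guess, not taken from the chapter. Partial deduction with respect to a fixed pattern can then specialise such a definition into a KMP-like matcher.

        % Pattern occurs in String if String = _Prefix ++ Pattern ++ _Suffix.
        substring(Pattern, String) :-
                append(_Prefix, Rest, String),
                append(Pattern, _Suffix, Rest).

        % ?- substring([a,a,b], [b,a,a,a,b,c]).   % succeeds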

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      A Hypertext Based Environment to Write Literate Logic Programs

      Page(s): 540
      Copyright Year: 1996

      MIT Press eBook Chapters

      Hyperpro is an experimental hypertext programming environment for Prolog-based dialects, applied to the logic program development methodology defined in [1, 2, 3]. It is built upon the Grif-Thot tool [4, 5], a powerful WYSIWYG structured editor that includes hypertext features. The genericity of the tool makes it easily adaptable to other logic programming languages and to other applications in the field of logic program development, in particular to handling logic programs with constraints. View full abstract»

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Probabilistic Disjunctive Deductive Databases

      Page(s): 541
      Copyright Year: 1996

      MIT Press eBook Chapters

      There are two separate approaches to incorporating uncertainty into deductive databases/logic programming: disjunctive deductive databases allow disjunctive forms [2], while various quantitative logic programming frameworks allow quantitative beliefs. In this paper we propose a probabilistic disjunctive deductive databases (PDDB) framework which allows for the mixing of both probabilistic uncertainty and indefiniteness in the same deductive database. Our framework is a combination of Barbara, Garcia-Molina and Porter (BGP)'s Probabilistic Databases [1] and disjunctive logic programming [2]. BGP's model is a simple extension of the relational database model. A probabilistic database is a finite set of relations. Each relation has a deterministic key; that is, each tuple represents a known entity, and nonkey attributes describe the properties of the entities. A nonkey attribute may be deterministic or probabilistic. Given a PDDB, we want to compute the probability bounds of an arbitrary ground formula. Because of the presence of disjunctions, we usually have probability bounds on disjunctions only. We use the principle of maximum entropy to assign a probability to each ground atom, and hence to an arbitrary ground formula. One way to compute probabilities is to solve a system of linear inequalities. To reduce the complexity of the problem, we define the concepts of full explanations and partial explanations. We have developed a fixpoint theory for PDDB. We have also generalized the concept of model trees [2], which is used to represent the set of minimal models of a disjunctive deductive database, and used it in a procedure to compute the probability bounds. View full abstract»
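
      A tiny worked example (ours, not from the chapter) shows both ingredients. Suppose the database asserts only the disjunction a or b, with probability 1. Writing w_S for the probability of the world in which exactly the atoms in S are true, the linear constraints are:

        w_{} + w_{a} + w_{b} + w_{a,b} = 1,   w_{} = 0,   all w_S >= 0

        so P(a) = w_{a} + w_{a,b} is only bounded: 0 <= P(a) <= 1.
        Maximum entropy spreads the mass uniformly over the three admissible
        worlds, giving

        w_{a} = w_{b} = w_{a,b} = 1/3,  hence  P(a) = P(b) = 2/3,  P(a and b) = 1/3.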

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Dependent And-Parallelism Revisited

      Page(s): 542
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      A General Framework for Integrating HCLP and PCSP

      Page(s): 543
      Copyright Year: 1996

      MIT Press eBook Chapters

      Our work has two starting points. On the one hand, the Hierarchical Constraint Logic Programming (HCLP) scheme of Borning, Wilson, and others greatly extends the expressibility of the general CLP scheme. On the other hand, the Partial Constraint Satisfaction (PCSP) scheme of Freuder and Wallace is an interesting extension of CSP, which allows the relaxation and optimisation of problems. Both these systems have advantages. HCLP allows a fine-grained and declarative expression of preferences, but commits the user to labelling all the constraints in the problem. PCSP only requires the specification of a single global distance function, which can be used for optimisation or to find instances with a particular problem structure, but it can be very difficult to find the correct function. We present a general framework, Gocs, of which HCLP and PCSP are instances, and in which it is also possible to use both approaches simultaneously: an easy-to-find global function can be used, fine-tuned with a small number of labels, or more labels can be used with the global function relegated to a secondary role. Furthermore our framework is compositional. The full description of this work, which contains further references, can be found at the address “http://www.info.fundp.ac.be/httpdocs/pub.html” as the article “A General Framework for Integrating HCLP and PCSP” by M. Jampel, J.-M. Jacquet, and D. Gilbert. View full abstract»
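
      As a rough illustration of the PCSP-style “single global distance function” (our own toy in SWI-Prolog's library(clpfd), not the Gocs framework itself), soft constraints can be reified and their weighted violations summed into one cost that is then minimised; HCLP-style labels would instead attach a strength to each constraint individually.

        :- use_module(library(clpfd)).

        soft_solve([X,Y], Cost) :-
                [X,Y] ins 0..10,
                X #>= 2,                          % hard ("required") constraint
                B1 #<==> (X + Y #= 10),           % soft constraint, weight 3
                B2 #<==> (Y #>= X),               % soft constraint, weight 1
                Cost #= 3*(1-B1) + 1*(1-B2),      % global distance function
                labeling([min(Cost)], [X,Y]).

        % ?- soft_solve([X,Y], Cost).
        % X = 2, Y = 8, Cost = 0 (both soft constraints satisfied).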

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Specification of Complex Systems with Definite Clause Grammar

      Page(s): 544
      Copyright Year: 1996

      MIT Press eBook Chapters

      A clean and concise logical specification of complex reactive systems with Definite Clause Grammar (DCG) is given. The mathematical formalism of the visual language is based on a hierarchical state machine model. Progressing from parsing to code generation involves tree manipulation, for which Prolog is particularly well suited. The use of free logic variables in partially instantiated terms allows composite states of complex systems to be represented concisely. The use of anonymous variables further allows our hierarchical model to have different levels of abstraction of detail. Through syntax-directed translation, each state transition is mapped to a DCG rule. The intermediate code generated is a set of DCG rules. The well-known correspondence between grammar rules and Prolog clauses allows the use of a homogeneous framework. As a result, a rigorous mathematical formalism for a visual specification language is mapped to a logical implementation. View full abstract»
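
      The mapping can be pictured with a deliberately tiny, flat example of our own (the chapter deals with hierarchical state machines and a visual language): each transition becomes one DCG rule, and a trace of events is accepted if it drives the machine from the start state to a final state.

        % One DCG rule per transition; the non-terminal is the source state.
        idle    --> [start],  running.
        running --> [pause],  paused.
        paused  --> [resume], running.
        running --> [stop],   idle.
        idle    --> [].                   % idle is also a final state

        % ?- phrase(idle, [start, pause, resume, stop]).   % succeeds
        % ?- phrase(idle, [start, resume]).                % fails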

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Resource Management Method for a Compiler System of a Linear Logic Programming Language

      Page(s): 545
      Copyright Year: 1996

      MIT Press eBook Chapters

      Linear logic, developed by J.-Y. Girard [2], is expected to find application in various fields of computer science. There have been several proposals for logic programming languages based on linear logic: LO [1], ACL [4], Lolli [3], Lygon [6], and Forum [5]. However, none of them has been implemented as a compiler system. This paper describes a resource management method for LLPAM, an extended WAM for the linear logic programming language LLP, a subset of Lolli. The extension of the WAM is mainly for efficient resource management, especially resource look-up and deletion. In our design, only one table is maintained to keep resources during execution. Looking up a resource is done through a hash table. Deleting a resource is done by just “marking” its entry in the table. Our prototype compiler produces code that runs 20 times faster than a Prolog program which manages resources with a list structure. View full abstract»
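
      The “deletion by marking” policy can be sketched at the Prolog level. This is only an illustration of the idea, not the LLPAM data structures; the entry/3 representation and consume/3 are invented. The resource table is never shrunk; consuming a resource merely flips its entry from avail to used.

        :- use_module(library(lists)).

        % Table entries: entry(Id, Resource, State) with State in {avail, used}.
        consume(Resource, Table0, Table) :-
                select(entry(Id, Resource, avail), Table0, Rest),
                Table = [entry(Id, Resource, used) | Rest].

        % ?- consume(key, [entry(1,key,avail), entry(2,door,avail)], T).
        % T = [entry(1,key,used), entry(2,door,avail)].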

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      On Merging Theorem Proving and Logic Programming Paradigms

      Page(s): 546
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Generating Rational Models

      Page(s): 547
      Copyright Year: 1996

      MIT Press eBook Chapters

      This chapter contains sections titled: Acknowledgements, References View full abstract»

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Colour Tagging for Prolog Visualization

      Page(s): 548
      Copyright Year: 1996

      MIT Press eBook Chapters

      The main focus of systems for graphically visualizing Prolog execution [1, 2] is typically on portraying the success/failure of clauses and backtracking. Most use some form of augmented AND/OR-tree and show the unification between a goal and a clause head, but provide very little visual information on how this unification affects data over the entire AND/OR-tree. Unification plays a critical role in a Prolog program's execution, and some means is needed to help visualize this role. This poster presents colour tagging, a method that uses colour to visualize the effects of unification. Colour tagging allows a user to tag a term with a colour. As the tagged item is unified with other terms, its colour is propagated. This allows a user to trace the influence of the tagged item during program execution by following the progression of its colour. To support colour tagging, colour attributes are associated with (Prolog) terms, and unification is extended to unify these attributes. Such an extended unification algorithm is presented. The interesting case is when colour (attribute) conflicts arise; several possible solutions to this problem are outlined. A prototype system supporting colour tagging and a graphical tracing interface is described. Currently, the system is being used to investigate cases where colour tagging is useful for debugging or understanding program execution; example cases include programs employing difference lists or complex data manipulation. Results from these investigations are presented. View full abstract»
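
      One way to prototype such an extended unification in a modern Prolog is with attributed variables. The sketch below is ours, not the poster's algorithm: it only tags variables (not arbitrary terms) and adopts the simplest conflict policy, namely that clashing colours make unification fail, using SWI-Prolog's attribute hooks.

        :- module(colour_tag, [tag/2]).

        % Attach a colour to an (unbound) variable.
        tag(Var, Colour) :-
                put_attr(Var, colour_tag, Colour).

        % Called whenever a colour-tagged variable is unified with Other.
        attr_unify_hook(Colour, Other) :-
                (   get_attr(Other, colour_tag, Colour2)
                ->  Colour == Colour2                    % conflict policy: colours must agree
                ;   var(Other)
                ->  put_attr(Other, colour_tag, Colour)  % propagate the colour
                ;   true                                 % bound to a non-variable: tag is dropped
                ).

        % ?- tag(X, red), tag(Y, red),  X = Y.    % succeeds (colours agree)
        % ?- tag(X, red), tag(Y, blue), X = Y.    % fails: colour conflict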

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      GUPU: A Prolog course environment and its programming methodology

      Page(s): 549
      Copyright Year: 1996

      MIT Press eBook Chapters

      GUPU is a programming environment specialized for Prolog programming courses which supports a novel approach to teaching Prolog. The major improvement in teaching Prolog concerns how programs are read and understood. While the traditional approach covers Prolog's execution mechanism and its relation to mathematical logic, we confine ourselves to reading programs informally as English sentences. The student's attention remains focused on a program's meaning instead of details like proof trees or execution traces. Informal reading is limited to short predicates; larger predicates translate into incomprehensible sentences cluttered with referents and connectives. To overcome this problem, a simple reading technique is presented that does not translate the whole predicate at once into English. Only parts of a predicate are considered; the remainder (e.g. some clauses, goals, arguments) is neglected for the moment. In this manner incomprehensible sentences are avoided. Our reading technique extends well to the more procedural aspects of Prolog such as termination and resource consumption. The reading technique makes it possible to reason about a program (e.g. understanding it, detecting errors) in an efficient, static manner, avoiding reference to superfluous details of the computation. GUPU supports this approach by providing a side-effect-free programming environment. Programs are subject to restrictions which ease informal reading and catch many, mostly syntactic and stylistic, errors. The cumbersome “type and forget”-style top-level shell is replaced by a side-effect-free mode of interaction which also improves coding style by allowing tests to be written before a predicate is coded. The partial evaluator Mixtus is seamlessly integrated into GUPU. View full abstract»
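
      Both ideas, reading clauses informally and writing the test queries before the predicate, can be illustrated with a standard toy predicate. The example is ours and uses ordinary comments, not GUPU's actual annotation format.

        % Intended queries, written down before the definition:
        %   ?- ancestor_of(anna, carl).    % should succeed
        %   ?- ancestor_of(carl, anna).    % should fail

        parent_of(anna, bert).
        parent_of(bert, carl).

        % Informal reading of the second clause: "X is an ancestor of Z if
        % X is a parent of some Y and Y is an ancestor of Z."
        ancestor_of(X, Z) :- parent_of(X, Z).
        ancestor_of(X, Z) :- parent_of(X, Y), ancestor_of(Y, Z).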

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Author Index

      Page(s): 551 - 552
      Copyright Year: 1996

      MIT Press eBook Chapters

    • Full text access may be available. Click article title to sign in or learn about subscription options.

      Back Matter

      Page(s): 553
      Copyright Year: 1996

      MIT Press eBook Chapters
