
IEEE Transactions on Knowledge and Data Engineering

Issue 4 • Jul/Aug 1998


  • Navigational accesses in a temporal object model

    Page(s): 656 - 665

    A considerable research effort has been devoted in past years to query languages for temporal data, in the context of both the relational and the object-oriented model. Object-oriented databases provide a navigational approach to data access based on object references. We investigate the navigational approach to querying object-oriented databases. We formally define the notion of a temporal path expression and address, on a formal basis, issues related to the correctness of such expressions. In particular, we focus on static analysis and give a set of conditions ensuring that an expression always results in a correct access at run time.

  • Efficient differential timeslice computation

    Page(s): 599 - 611

    Transaction-time databases support access not only to the current database state, but also to previous database states. Supporting access to previous database states requires large quantities of data and necessitates efficient temporal query processing techniques. Previously, we presented a log-based storage structure and algorithms for the differential computation of previous database states. Timeslices, i.e., previous database states, are computed by traversing a log of database changes, using previously computed and cached timeslices as outsets. When computing a new timeslice, the cache will contain two candidate outsets: an earlier outset and a later outset. The new timeslice can be computed either by incrementally updating the earlier outset or by decrementally “downdating” the later outset using the log. The cost of this computation is determined by the size of the log between the outset and the new timeslice. The paper proposes an efficient algorithm that identifies the cheaper outset for the differential computation. The basic idea is to compute the sizes of the two pieces of the log by maintaining and using a tree structure on the timestamps of the database changes in the log. Existing tree structures (e.g., B+-trees, Monotonic B+-trees, and Append-only trees) lack a homogeneous node structure, a controllable and high fill factor for nodes, and appropriate node allocation, which renders them unsuited for our use. Consequently, a specialized tree structure, the pointer-less insertion tree, is developed to support the algorithm. As a proof of concept, we have implemented a main-memory version of the algorithm and its tree structure.

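The outset-selection idea in the abstract above can be sketched briefly. In this sketch, a sorted array of log timestamps stands in for the paper's specialized pointer-less insertion tree, and the cost model (one unit per log entry) and function names are illustrative assumptions, not the article's actual implementation.

```python
import bisect

def choose_outset(log_times, t_earlier, t_target, t_later):
    # log_times: sorted timestamps of the change log (a stand-in for
    # the paper's pointer-less insertion tree).
    lo = bisect.bisect_right(log_times, t_earlier)
    mid = bisect.bisect_right(log_times, t_target)
    hi = bisect.bisect_right(log_times, t_later)
    cost_forward = mid - lo    # log entries to apply incrementally
    cost_backward = hi - mid   # log entries to "downdate" decrementally
    if cost_forward <= cost_backward:
        return ("earlier", cost_forward)
    return ("later", cost_backward)

# Log entries at times 1..10; cached timeslices at t=2 and t=9.
# Computing the timeslice at t=8 is cheaper from the later outset.
print(choose_outset(list(range(1, 11)), 2, 8, 9))
```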
  • New approach to requirements trade-off analysis for complex systems

    Page(s): 551 - 562

    We propose a faceted requirement classification scheme for analyzing heterogeneous requirements. The representation of vague requirements is based on L.A. Zadeh's (1986) canonical form in test-score semantics and an extension of the notion of soft conditions. The trade-off among vague requirements is analyzed by identifying the relationships between requirements, which can be conflicting, irrelevant, cooperative, counterbalancing, or independent. Parameterized aggregation operators, fuzzy and/or, are selected to combine individual requirements. An extended hierarchical aggregation structure is proposed to establish a four-level requirements hierarchy that facilitates the aggregation of requirements and criticalities through the fuzzy and/or. A compromise overall requirement can be obtained through the aggregation of individual requirements based on the requirements hierarchy. The proposed approach provides a framework for formally analyzing and modeling conflicts between requirements, and for users to better understand the relationships among their requirements.

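The parameterized fuzzy and/or operators mentioned above trade off strict conjunction or disjunction against averaging. The sketch below uses one common parameterization (a convex combination of min/max and the arithmetic mean, in the style of Werners); the exact operators and parameter names in the article may differ.

```python
def fuzzy_and(degrees, gamma):
    # gamma=1 gives the strict minimum; gamma=0 gives pure averaging;
    # intermediate values yield a compromise conjunction.
    mean = sum(degrees) / len(degrees)
    return gamma * min(degrees) + (1 - gamma) * mean

def fuzzy_or(degrees, gamma):
    mean = sum(degrees) / len(degrees)
    return gamma * max(degrees) + (1 - gamma) * mean

# Satisfaction degrees of three individual requirements:
sat = [0.9, 0.6, 0.3]
print(fuzzy_and(sat, 1.0))  # strict conjunction: the worst degree
print(fuzzy_and(sat, 0.5))  # compromise between min and mean
```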
  • Bottom-up construction of ontologies

    Page(s): 513 - 526

    Presents a particular way of building ontologies that proceeds in a bottom-up fashion. Concepts are defined in a way that mirrors the way their instances are composed out of smaller objects. The smaller objects themselves may also be modeled as being composed. Bottom-up ontologies are flexible through the use of implicit and, hence, parsimonious part-whole and subconcept-superconcept relations. The bottom-up method complements current practice, where, as a rule, ontologies are built top-down. The design method is illustrated by an example involving ontologies of pure substances at several levels of detail. It is not claimed that bottom-up construction is a generally valid recipe; indeed, such recipes are deemed uninformative or impossible. Rather, the approach is intended to enrich the ontology developer's toolkit.

  • Declustering and load-balancing methods for parallelizing geographic information systems

    Page(s): 632 - 655

    Declustering and load balancing are important issues in designing a high-performance geographic information system (HPGIS), which is a central component of many interactive applications such as real-time terrain visualization. The current literature provides efficient methods for declustering spatial point data. However, there has been little work toward developing efficient declustering methods for collections of extended objects, such as chains of line segments and polygons. We focus on the data-partitioning approach to parallelizing GIS operations. We provide a framework for declustering collections of extended spatial objects by identifying the following key issues: (1) the workload metric; (2) the spatial extent of the workload; (3) the distribution of the workload over the spatial extent; and (4) the declustering method. We identify and experimentally evaluate alternatives for each of these issues. In addition, we provide a framework for dynamically balancing the load between different processors. We experimentally evaluate the proposed declustering and load-balancing methods on a distributed-memory MIMD machine (Cray T3D). Experimental results show that the spatial extent and the workload metric are important issues in developing a declustering method. Experiments also show that replication of data is usually needed to facilitate dynamic load balancing, since the cost of local processing is often less than the cost of data transfer for extended spatial objects. In addition, we show that the effectiveness of dynamic load-balancing techniques can be improved by using declustering methods to determine the subsets of spatial objects to be transferred at runtime.

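As a minimal illustration of two of the framework's ingredients above (a workload metric plus a declustering method), the sketch below greedily assigns extended objects to the least-loaded partition under a caller-supplied metric. This is a generic baseline, not one of the specific declustering methods the article evaluates.

```python
import heapq

def decluster(objects, workload, n_partitions):
    # Greedy declustering: process objects heaviest-first and assign
    # each to the currently least-loaded partition. 'workload' is a
    # caller-supplied metric, e.g. vertex count or bounding-box area.
    heap = [(0.0, p) for p in range(n_partitions)]
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_partitions)}
    for obj in sorted(objects, key=workload, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(obj)
        heapq.heappush(heap, (load + workload(obj), p))
    return assignment
```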
  • Generating broadcast programs that support range queries

    Page(s): 668 - 672

    To disseminate information via broadcasting, a data server must construct a broadcast “program” that meets the needs of the client population. Existing work on generating broadcast programs has shown the effectiveness of nonuniform broadcast programs in reducing the average access times of objects under nonuniform access patterns. However, these broadcast programs perform poorly for range queries. The article presents a novel algorithm to generate broadcast programs that facilitate range queries without sacrificing much of the performance of single-object retrievals.

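For context, a conventional nonuniform broadcast program of the kind the abstract refers to can be generated by flattening “broadcast disks” of different frequencies (after Acharya et al.), so that hot items appear more often per cycle. This baseline, sketched below, is what performs poorly for range queries; it is not the article's range-query-aware algorithm. Frequencies are assumed to divide the maximum frequency evenly.

```python
def broadcast_program(disks):
    # disks: list of (items, frequency) pairs, with hotter items on
    # higher-frequency disks. Returns one major broadcast cycle in
    # which each disk's items appear 'frequency' times.
    max_f = max(f for _, f in disks)
    chunked = []
    for items, f in disks:
        n_chunks = max_f // f                 # chunks per disk
        size = -(-len(items) // n_chunks)     # ceiling division
        chunked.append([items[i * size:(i + 1) * size]
                        for i in range(n_chunks)])
    program = []
    for minor in range(max_f):                # each minor cycle takes
        for chunks in chunked:                # one chunk of every disk
            program.extend(chunks[minor % len(chunks)])
    return program

# Hot item A broadcast twice per cycle, cold items B and C once each:
print(broadcast_program([(["A"], 2), (["B", "C"], 1)]))
```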
  • Consistency checking in complex object database schemata with integrity constraints

    Page(s): 576 - 598

    Integrity constraints are rules that should guarantee the integrity of a database. Provided an adequate mechanism to express them is available, the following question arises: is there any way to populate a database so that it satisfies the constraints supplied by a database designer? That is, does the database schema, including constraints, admit at least one nonempty model? This work answers the above question in a complex-object database environment, providing a theoretical framework with the following ingredients: (1) two alternative formalisms able to express a relevant set of state integrity constraints in a declarative style; and (2) two specialized reasoners, based on the tableaux calculus, able to check the consistency of complex-object database schemata expressed in the two formalisms. The proposed formalisms share a common kernel, which supports complex objects and object identifiers, and which allows the expression of acyclic descriptions of classes, nested relations, and views, built up by means of the recursive use of record, quantified set, and object type constructors and of the intersection, union, and complement operators. Furthermore, the kernel formalism allows the declarative formulation of typing constraints and integrity rules. In order to improve expressiveness while maintaining the decidability of the reasoning activities, we extend the kernel formalism in two alternative directions. The first formalism, OLCP, introduces the capability of expressing path relations. Because cyclic schemata are extremely useful, we introduce a second formalism, OLCD, with the capability of expressing cyclic descriptions but disallowing the expression of path relations. In fact, we show that the reasoning activity in OLCDP (i.e., OLCP with cycles) is undecidable.

  • The knowledge acquisition and representation language, KARL

    Page(s): 527 - 550

    The Knowledge Acquisition and Representation Language (KARL) combines a description of a knowledge-based system at the conceptual level (a so-called model of expertise) with a description at a formal and executable level. Thus, KARL allows the precise and unambiguous specification of the functionality of a knowledge-based system independent of any implementation details. A KARL model of expertise contains descriptions of domain knowledge, inference knowledge, and procedural control knowledge. For capturing these different types of knowledge, KARL provides corresponding modeling primitives based on Frame Logic and Dynamic Logic. A declarative semantics for a complete KARL model of expertise is given by a combination of these two types of logic. In addition, an operational definition of this semantics, which relies on a fixpoint approach, is given. This operational semantics provides the basis for the implementation of the KARL interpreter, which includes appropriate algorithms for efficiently executing KARL specifications. This enables the evaluation of KARL specifications by means of testing.

  • A framework for learning in search-based systems

    Page(s): 563 - 575

    We provide an overall framework for learning in search-based systems that are used to find optimum solutions to problems. This framework assumes that prior knowledge is available in the form of one or more heuristic functions (or features) of the problem domain. An appropriate clustering strategy is used to partition the state space into a number of classes based on the available features. The number of classes formed depends on the resource constraints of the system. In the training phase, example problems are run using a standard admissible search algorithm, and heuristic information corresponding to each class is learned. This new information can be used in the problem-solving phase by appropriate search algorithms so that subsequent problem instances can be solved more efficiently. Within this framework, we also show that heuristic information of forms other than the conventional single-valued underestimate can be used, since we maintain the heuristic of each class explicitly. We present some novel search algorithms that can work with such forms. Experimental results are provided for several domains.

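The training phase described above can be sketched as follows: states are mapped to classes by their feature values, and per-class heuristic information is learned from solved training instances. Storing the minimum observed true cost per class is only one simple form of that information, and the function names and fallback behavior are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

def state_class(features, state):
    # A class is the tuple of (discretized) feature values of a state.
    return tuple(f(state) for f in features)

def train(features, examples):
    # examples: (state, true_cost_to_goal) pairs collected by running
    # an admissible search on training problems. Learn, per class,
    # the smallest true cost observed.
    table = defaultdict(lambda: float("inf"))
    for state, cost in examples:
        cls = state_class(features, state)
        table[cls] = min(table[cls], cost)
    return dict(table)

def learned_h(table, features, fallback, state):
    # Use the learned per-class value when available; otherwise fall
    # back to the original heuristic.
    return table.get(state_class(features, state), fallback(state))
```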
  • Knowledge representation using fuzzy Petri nets-revisited

    Page(s): 666 - 667

    In the paper by S. Chen et al. (see ibid., vol. 2, no. 3, p. 311-19, 1990), the authors proposed an algorithm that determines whether an antecedent-consequence relationship exists from a fuzzy proposition ds to a proposition dj and that, given the degree of truth of proposition ds, evaluates the degree of truth of proposition dj. The fuzzy reasoning algorithm proposed by S. Chen et al. (1990) was found not to work for all types of data. We propose: (1) a modified form of the algorithm; and (2) a concept of hierarchical fuzzy Petri nets for data abstraction.

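In the basic fuzzy Petri net scheme of Chen et al. that the comment above revisits, a rule (transition) with a certainty factor propagates the minimum of its antecedent degrees, scaled by that factor, to its consequent place. A single forward-firing pass of that original scheme is sketched below; it is not the modified algorithm or the hierarchical nets this paper proposes.

```python
def fire(rules, degrees):
    # rules: (antecedent_places, consequent_place, certainty) triples.
    # degrees: known degrees of truth per place. One forward pass:
    # each enabled rule sets its consequent's degree to
    # min(antecedent degrees) * certainty, keeping the maximum if the
    # place already has a degree.
    out = dict(degrees)
    for antecedents, consequent, certainty in rules:
        if all(a in out for a in antecedents):
            d = min(out[a] for a in antecedents) * certainty
            out[consequent] = max(out.get(consequent, 0.0), d)
    return out

# If ds has degree 0.8 and the rule ds -> dj has certainty 0.9,
# dj gets degree 0.8 * 0.9 = 0.72.
print(fire([(["ds"], "dj", 0.9)], {"ds": 0.8}))
```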
  • Temporal synchronization models for multimedia data

    Page(s): 612 - 631

    Multimedia information systems are considerably more complex than traditional ones in that they deal with very heterogeneous data, such as text, video, and audio, each with different characteristics and requirements. One of the central characteristics of multimedia data is that they are heavily time-dependent: they are usually related by temporal relationships that must be maintained during playout. We discuss problems related to modeling temporal synchronization specifications for multimedia data. We investigate the characteristics that a model must possess to properly express the timing relationships among multimedia data, and we provide a classification of the various models proposed in the literature. For each devised category, several examples are presented, and the most representative models of each category are illustrated in detail. Then, the presented models are compared with respect to the devised requirements, and future research issues are discussed.


Aims & Scope

IEEE Transactions on Knowledge and Data Engineering (TKDE) informs researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.


Meet Our Editors

Editor-in-Chief
Jian Pei
Simon Fraser University