
IEEE Transactions on Knowledge and Data Engineering

Issue 6 • Nov.-Dec. 2000

  • Author index

    Publication Year: 2000, Page(s): 998 - 1000
    PDF (42 KB) | Freely Available from IEEE
  • Subject index

    Publication Year: 2000, Page(s): 1000 - 1008
    PDF (79 KB) | Freely Available from IEEE
  • ASEP: a secure and flexible commit protocol for MLS distributed database systems

    Publication Year: 2000, Page(s): 880 - 899
    Cited by: Papers (3) | Patents (1)
    PDF (700 KB)

    The classical Early Prepare (EP) commit protocol, used in many commercial systems, is not suitable for multi-level secure (MLS) distributed database systems that employ a locking protocol for concurrency control. This is because EP requires that a participant not release its read locks during its window of uncertainty; however, a locking protocol cannot provide this guarantee in an MLS system, since the read lock of a higher-level transaction on a lower-level data object must be released whenever a lower-level transaction wants to write the same data. The only available work in the literature, the Secure Early Prepare (SEP) protocol, overcomes this difficulty by aborting those distributed transactions that release their low-level read locks prematurely. We see this approach as too restrictive: one of the major benefits of distributed processing is its robustness to failures, and SEP fails to take advantage of this. In this paper, we propose the Advanced Secure Early Prepare (ASEP) commit protocol to solve the above problem, together with a number of language primitives that can be used as system calls in distributed transactions. These primitives permit features like partial rollback and forward recovery to be incorporated within the transaction model, and allow a distributed transaction to proceed even when a participant has released its low-level read locks prematurely. This not only offers flexibility, but can also be used, if desired, by a sophisticated programmer to trade off consistency for atomicity of the distributed transaction.
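    The MLS locking rule the abstract describes can be illustrated with a toy lock manager. This is a hypothetical sketch of the general rule (a lower-level write forcibly releases higher-level read locks so that lock waits cannot signal information downward), not the ASEP protocol itself; all names and the class structure are invented for illustration.

    ```python
    LOW, HIGH = 0, 1

    class MLSLockManager:
        """Toy lock table holding read locks only, keyed by data item."""
        def __init__(self):
            self.read_locks = {}  # item -> set of (txn_id, security_level)

        def acquire_read(self, item, txn_id, level):
            self.read_locks.setdefault(item, set()).add((txn_id, level))

        def request_write(self, item, writer_level):
            """A write at writer_level forcibly releases read locks held by
            strictly higher-level transactions on the item, so the lock state
            cannot leak information downward. Returns the victim txn ids."""
            holders = self.read_locks.get(item, set())
            broken = {p for p in holders if p[1] > writer_level}
            self.read_locks[item] = holders - broken
            return sorted(t for t, _ in broken)

    mgr = MLSLockManager()
    mgr.acquire_read("x", "T_high", HIGH)
    # A LOW writer arrives while T_high is in its window of uncertainty:
    print(mgr.request_write("x", LOW))  # -> ['T_high']
    ```

    It is precisely this forced release during the window of uncertainty that EP cannot tolerate and that SEP handles by aborting the reader.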

  • Integrating security and real-time requirements using covert channel capacity

    Publication Year: 2000, Page(s): 865 - 879
    Cited by: Papers (23)
    PDF (432 KB)

    Database systems for real-time applications must satisfy timing constraints associated with transactions in addition to maintaining data consistency. Beyond real-time requirements, many such applications also demand security. Multi-level security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, because the two requirements have conflicting goals, tradeoffs must be made between security and timeliness. We first define mutual information, a measure of the degree to which security is satisfied by a system. A secure two-phase locking protocol is then described, and a scheme is proposed that allows partial violations of security for improved timeliness. Analytical expressions for the mutual information of the resultant covert channel are derived, and a feedback control scheme is proposed that does not allow the mutual information to exceed a specified upper bound. Results obtained through simulation experiments show the efficacy of the scheme.
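    The covert-channel measure used here, mutual information, can be computed directly for a simple binary channel where X is the high-level "signal" (e.g., a lock conflict occurred or not) and Y is what a low-level observer sees. This is a generic illustration of the quantity, with invented parameter names; the paper derives analytical expressions for its specific two-phase-locking channel.

    ```python
    import math

    def binary_mi(px1, p1g1, p1g0):
        """Mutual information I(X;Y) in bits for a binary channel.
        px1 = P(X=1); p1g1 = P(Y=1|X=1); p1g0 = P(Y=1|X=0)."""
        px = [1 - px1, px1]
        chan = [[1 - p1g0, p1g0], [1 - p1g1, p1g1]]  # chan[x][y] = P(Y=y|X=x)
        py = [sum(px[x] * chan[x][y] for x in range(2)) for y in range(2)]
        mi = 0.0
        for x in range(2):
            for y in range(2):
                pxy = px[x] * chan[x][y]
                if pxy > 0:
                    mi += pxy * math.log2(pxy / (px[x] * py[y]))
        return mi

    # A noiseless channel leaks a full bit; a useless one leaks nothing:
    print(binary_mi(0.5, 1.0, 0.0))  # -> 1.0
    print(binary_mi(0.5, 0.5, 0.5))  # -> 0.0
    ```

    A feedback controller in the spirit of the paper would add noise (raise the overlap between the two conditional distributions) until `binary_mi` falls below the specified bound.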

  • Performance analysis of parallel query processing algorithms for object-oriented databases

    Publication Year: 2000, Page(s): 979 - 996
    Cited by: Papers (2) | Patents (1)
    PDF (416 KB)

    Two types of parallel processing and optimization algorithms for processing object-oriented databases are the hybrid-hash pointer-based (HHP) algorithms and multi-wavefront (MWF) algorithms. We analyze these two algorithms and develop analytical formulas to capture their main performance features. We study their performance in three application environments, characterized by large databases having many object classes, each of which, respectively, (1) contains a large number of instances; (2) contains a relatively small number of instances; and (3) is of varying size. A horizontal data partitioning strategy is used in (1). A class-per-node assignment strategy is used in (2). In (3), object classes are partitioned horizontally and assigned to a varying number of processors depending on their different sizes. The MWF algorithm has three distinguishing features which contribute to its better performance: (a) a two-phase processing strategy, (b) vertical partitioning of horizontal segments, and (c) dynamic determination of the collision point in MWF propagations, which results in an optimized query execution plan. If these features are adopted by an HHP algorithm, its performance is comparable with that of the MWF algorithm, because the difference in CPU time between them is negligible. The computing environment is a network of workstations having a shared-nothing architecture. The schema and some queries selected from the OO7 benchmark are used in the performance analyses and comparisons. The queries are modified slightly in different data environments in order to reflect the features of diverse database applications.

  • Semantic query optimization for query plans of heterogeneous multidatabase systems

    Publication Year: 2000, Page(s): 959 - 978
    Cited by: Papers (8) | Patents (4)
    PDF (916 KB)

    New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization.
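    The core idea of semantic join elimination can be shown in a deliberately simplified form: a joined relation can be dropped when the output never mentions it and a metadata constraint guarantees the join neither filters nor duplicates rows (for example, a foreign key into the other relation's key). The encoding and function below are invented for illustration and are far weaker than the paper's necessary-and-sufficient conditions for arbitrary join topologies.

    ```python
    def removable_joins(output_attrs, joins, guarantees):
        """Toy join-elimination check. joins: list of (r, s) pairs for
        'r joined with s'. guarantees: set of (r, s) pairs for which the
        metadata constraints ensure every r row matches exactly one s row.
        Relation s is droppable if no output attribute comes from s and
        the lossless-join guarantee holds."""
        drop = []
        for r, s in joins:
            uses_s = any(a.startswith(s + ".") for a in output_attrs)
            if not uses_s and (r, s) in guarantees:
                drop.append(s)
        return drop

    # SELECT emp.name FROM emp JOIN dept ON emp.dept_id = dept.id,
    # with a foreign key emp.dept_id -> dept.id known from metadata:
    print(removable_joins(["emp.name"], [("emp", "dept")], {("emp", "dept")}))
    # -> ['dept']
    ```

    Skipping the `dept` scan entirely is exactly the kind of rewrite that cuts data transmission cost in a multidatabase setting.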

  • Exploiting spatial indexes for semijoin-based join processing in distributed spatial databases

    Publication Year: 2000, Page(s): 920 - 937
    Cited by: Papers (6)
    PDF (1240 KB)

    In a distributed spatial database system, a user may issue a query that relates two spatial relations not stored at the same site. Because of the sheer volume and complexity of spatial data, spatial joins between two spatial relations at different sites are expensive in terms of computational and transmission costs. In this paper, we address the problem of processing spatial joins in a distributed environment. We propose a semijoin-like operator, called the spatial semijoin, to prune away objects that do not contribute to the join result. This operator also reduces both the transmission and local processing costs for a later join operation. However, the cost of the elimination process must be taken into account, and we consider approaches to minimize these overheads. We also study and compare two families of distributed join algorithms that are based on the spatial semijoin operator. The first is based on multi-dimensional approximations obtained from an index such as the R-tree, and the second is based on single-dimensional approximations obtained from object mapping. We have conducted experiments on real data sets and report the results in this paper.
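    The pruning step of a spatial semijoin can be sketched with minimum bounding rectangles (MBRs), the multi-dimensional approximation an R-tree would supply: the remote site ships only its MBRs, and the local site keeps only objects whose MBR overlaps at least one of them. This single-machine toy, with invented data, shows the filtering idea only, not the paper's full cost model.

    ```python
    def mbrs_intersect(a, b):
        """a, b are MBRs as (xmin, ymin, xmax, ymax)."""
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def spatial_semijoin(local_objects, remote_mbrs):
        """Keep only local objects whose MBR overlaps some remote MBR.
        Anything pruned here can never appear in the exact join result,
        so it need not be shipped to the other site."""
        return [obj for obj, mbr in local_objects
                if any(mbrs_intersect(mbr, r) for r in remote_mbrs)]

    roads = [("r1", (0, 0, 2, 2)), ("r2", (10, 10, 12, 12))]
    remote = [(1, 1, 3, 3)]   # MBRs received from the other site
    print(spatial_semijoin(roads, remote))  # -> ['r1']
    ```

    The surviving candidates still need an exact geometric test after shipping, since overlapping MBRs do not imply the objects themselves intersect.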

  • Secure databases: constraints, inference channels, and monitoring disclosures

    Publication Year: 2000, Page(s): 900 - 919
    Cited by: Papers (31) | Patents (3)
    PDF (532 KB)

    We investigate the problem of inference channels that occur when database constraints are combined with non-sensitive data to obtain sensitive information. We present an integrated security mechanism, called the Disclosure Monitor, which guarantees data confidentiality by extending the standard mandatory access control mechanism with a Disclosure Inference Engine. The engine generates all the information that can be disclosed to a user based on the user's past and present queries and the database and metadata constraints. The Disclosure Inference Engine operates in two modes: a data-dependent mode, in which disclosure is established based on the actual data items, and a data-independent mode, in which only the queries are used to generate the disclosed information. The disclosure inference algorithms for both modes are characterized by the properties of soundness (everything generated by the algorithm is disclosed) and completeness (everything that can be disclosed is produced by the algorithm). The technical core of this paper concentrates on the development of sound and complete algorithms for both data-dependent and data-independent disclosures.
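    A data-independent disclosure check of the kind described can be modeled as an attribute-closure computation: starting from the attributes a user's queries have already revealed, repeatedly apply functional-dependency-style metadata constraints and see whether a sensitive attribute becomes derivable. This toy closure, with invented attribute names, only gestures at the idea; it is not the paper's sound-and-complete algorithm.

    ```python
    def disclosed(known, constraints):
        """Closure of 'known' attributes under metadata constraints of the
        form (lhs_set, rhs_attr): if the user can see all of lhs, the
        constraint lets them infer rhs. Returns everything inferable."""
        closure = set(known)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in constraints:
                if set(lhs) <= closure and rhs not in closure:
                    closure.add(rhs)
                    changed = True
        return closure

    # Queries revealed name and rank; metadata says rank determines salary:
    c = disclosed({"name", "rank"}, [({"rank"}, "salary")])
    print("salary" in c)  # -> True: answering the queries opened a channel
    ```

    A monitor in this spirit would refuse (or sanitize) the query that pushes a sensitive attribute into the closure.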

  • Object-based selective materialization for efficient implementation of spatial data cubes

    Publication Year: 2000, Page(s): 938 - 958
    Cited by: Papers (36) | Patents (6)
    PDF (440 KB)

    With a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP by integrating nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for the computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies being proposed, including approximation and selective materialization of the spatial objects resulting from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization (proposed in previous studies of nonspatial data cube construction). Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level: that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes, and a performance study has demonstrated the effectiveness of these techniques.


Aims & Scope

IEEE Transactions on Knowledge and Data Engineering (TKDE) informs researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.


Meet Our Editors

Editor-in-Chief
Jian Pei
Simon Fraser University

Associate Editor-in-Chief
Xuemin Lin
University of New South Wales

Associate Editor-in-Chief
Lei Chen
Hong Kong University of Science and Technology