
Proceedings of the 16th International Conference on Data Engineering, 2000

Date: Feb. 29 - March 3, 2000


Displaying Results 1 - 25 of 103
  • Proceedings of 16th International Conference on Data Engineering (Cat. No.00CB37073)

  • Data Mining: Niche Market or Killer App?

    Page(s): 361
  • XML + Databases = ? (panel session)

    Page(s): 657
  • Tutorial 1: Web Information Retrieval

    Page(s): 693

    Summary form only given, as follows. The Web explosion offers a bonanza of algorithmic problems. In particular, information retrieval in the Web context requires methods and ideas that have not been addressed in the classic IR literature. This tutorial will survey emerging techniques for IR in the Web context and discuss some of the pertinent open problems. The list of topics includes search engine technology, ranking and classification methods, Web measurements (usage, size, connectivity) and new graph and data structure problems arising in the Web IR context.

  • Tutorial 2: Mobile and Wireless Database Access for Pervasive Computing

    Page(s): 694 - 695

    Summary form only given, as follows. We are in the midst of a wireless and mobile revolution. In the near future, a typical computing environment - business, personal, scientific or educational - will provide wireless network connectivity between powerful data servers and mobile, sometimes disconnected, computers and devices. This has created exciting opportunities for developing a wide range of innovative database applications and systems. However, an open question remains: What kind of system will be capable of offering scalable data services and exhibit scalable performance? Besides advances in communications and hardware, does achieving pervasive mobile computing require innovative theories and paradigms in data management or new data engineering techniques? The objective of this tutorial is to provide an answer to the above questions by presenting the current state-of-the-research and contrasting it with the state-of-the-practice. Towards this, it will provide an overview of the commercial state-of-the-art for supporting mobile database access and present a summary of the significant research advances in theories and techniques for mobile and wireless data access. It will also discuss some future directions in the context of pervasive and invisible computing applications.

  • Tutorial 5: Indexing high-dimensional spaces: database support for next decade's applications

    Page(s): 698 - 699

    Summary form only given. The tutorial is structured as follows: In the first section, we describe two examples of new database applications, which demonstrate the need for efficient query processing techniques in high-dimensional spaces. In the second section, we discuss the effects occurring in high-dimensional spaces, first from a purely mathematical point of view and then from a database perspective. Next, we describe the different approaches for modeling the costs of processing queries on high-dimensional data. The description of the different approaches demonstrates nicely what happens if we ignore the special properties of high-dimensional spaces. In the fourth section, we then provide a structured overview of the proposed querying and indexing techniques, discussing their advantages and drawbacks. In this section, we also cover a number of additional techniques dealing with optimization and parallelization. In concluding the tutorial, we try to spur further research activities by presenting a number of interesting research problems.
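
    A minimal illustration of one well-known high-dimensional effect of the kind the tutorial covers (the specific example and numbers are ours, not taken from the tutorial): in a d-dimensional unit hypercube, nearly all of the volume lies close to the boundary, which is one reason low-dimensional indexing intuitions break down. A sketch in Python:

        # Fraction of the unit hypercube's volume within distance eps of its
        # boundary: 1 - (1 - 2*eps)**d. Illustrative only; not from the tutorial.
        def boundary_volume_fraction(d, eps=0.05):
            return 1.0 - (1.0 - 2.0 * eps) ** d

        for d in (2, 10, 50, 100):
            print(f"d={d:3d}: {boundary_volume_fraction(d):.3f}")
        # ~19% of the volume is "near the boundary" in 2 dimensions, but
        # essentially 100% of it in 100 dimensions, so boundary effects dominate.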

  • Author index

    Page(s): 701 - 703
  • Power conservative multi-attribute queries on data broadcast

    Page(s): 157 - 166

    Studies power conservation techniques for multi-attribute queries on wireless data broadcast channels. Indexing data on broadcast channels can improve the client filtering capability, while clustering and scheduling can reduce both the access time and the tune-in time. Thus, indexing techniques should be coupled with clustering and scheduling methods to reduce the battery power consumption of mobile computers. In this study, three indexing schemes for multi-attribute queries, namely the index tree, signature and hybrid index, are discussed. We develop cost models for these three indexing schemes and evaluate their performance based on multi-attribute queries on wireless data broadcast channels.
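
    The following back-of-the-envelope comparison (hypothetical numbers, not the paper's cost model) shows why broadcast indexing trades a slightly longer access time for a much shorter tune-in time, which is what saves battery power:

        # Hypothetical figures; tune-in time (buckets actively listened to)
        # drains the battery, while access time (buckets waited) does not.
        DATA_BUCKETS = 1000      # data buckets per broadcast cycle
        INDEX_BUCKETS = 30       # extra buckets occupied by the broadcast index
        INDEX_PROBES = 4         # index buckets read to locate the target item

        def no_index():
            access = DATA_BUCKETS / 2          # expected wait until the item
            tune_in = access                   # client stays awake the whole time
            return access, tune_in

        def with_index():
            cycle = DATA_BUCKETS + INDEX_BUCKETS
            access = cycle / 2 + INDEX_PROBES  # slightly longer cycle plus probes
            tune_in = INDEX_PROBES + 1         # doze between probes, wake for data
            return access, tune_in

        print("no index   (access, tune-in):", no_index())
        print("with index (access, tune-in):", with_index())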

  • The IDEAL approach to Internet-based negotiation for e-business

    Page(s): 666 - 667

    With the emergence of e-business as the next killer application for the Web, automating bargaining-type negotiations between clients (i.e., buyers and sellers) has become increasingly important. With IDEAL (Internet-based Dealmaker for e-business), we have developed an architecture and framework, including a negotiation protocol, for automated negotiations among multiple IDEAL servers. The main components of IDEAL are a constraint satisfaction processor (CSP) to evaluate a proposal, an Event-Trigger-Rule (ETR) server for managing and triggering the execution of rules which make up the negotiation strategy (rules can be updated at run-time to deal with the dynamic nature of negotiations), and a cost-benefit analysis to help in the selection of alternative strategies. We have implemented a fully functional prototype system of IDEAL to demonstrate automated negotiations among buyers and suppliers participating in a supply chain.
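
    A toy sketch of the event-trigger-rule idea mentioned above; the event and rule names are hypothetical, and this is not IDEAL's actual protocol, only an illustration of rules that can be registered or replaced at run time:

        from collections import defaultdict

        rules = defaultdict(list)   # event name -> list of rule functions

        def on(event):
            """Register a negotiation rule to be triggered by an event."""
            def register(rule):
                rules[event].append(rule)
                return rule
            return register

        def fire(event, payload):
            for rule in rules[event]:
                rule(payload)

        @on("proposal_received")
        def counter_if_over_budget(p):
            if p["price"] > p["budget"]:
                print(f"counter-proposal: offer {0.9 * p['price']:.2f}")
            else:
                print("accept proposal")

        fire("proposal_received", {"price": 120.0, "budget": 100.0})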

  • Taming the downtime: high availability in Sybase ASE 12

    Page(s): 111 - 120

    The new companion architecture in Sybase Adaptive Server Enterprise (ASE) 12 for high availability is supported on a 2-node cluster, with each node running a separate ASE 12 server in a companion configuration. This architecture is designed to withstand a single point of failure for unplanned outages, and it allows both nodes to be used for productive workload during normal operation. It enables fast failover and data recovery, supports automatic client migration during failure, and integrates seamlessly with adjoining layers in a multi-tier architecture. It supports a single-system presentation of data for applications, and it offers a rich set of features and infrastructure to reduce planned downtime. During failover and failback, only the persistent data component is moved between the companion ASEs, making these operations fast and efficient. By introducing proxy databases, this architecture enables user databases to be visible and accessible from either of the companions, shipping queries to the appropriate node and returning the results to the client.
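
    A generic sketch of the client-side failover behavior described above (the node names and connect() call are placeholders, not Sybase's actual client API):

        COMPANIONS = ["ase-node-1:5000", "ase-node-2:5000"]   # hypothetical names

        class Unreachable(Exception):
            pass

        def connect(node):
            # Placeholder: a real driver would open a connection here.
            if node == "ase-node-1:5000":
                raise Unreachable(node)          # simulate an unplanned outage
            return f"session@{node}"

        def connect_with_failover(nodes):
            for node in nodes:
                try:
                    return connect(node)
                except Unreachable:
                    continue                     # migrate to the companion node
            raise RuntimeError("no companion available")

        print(connect_with_failover(COMPANIONS))  # -> session@ase-node-2:5000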

  • Rules of thumb in data engineering

    Page(s): 3 - 10

    This paper reexamines the rules of thumb for the design of data storage systems. Briefly, it looks at storage, processing, and networking costs, ratios, and trends, with a particular focus on performance and price/performance. Amdahl's ratio laws for system design need only slight revision after 35 years; the major change is the increased use of RAM. An analysis also indicates storage should be used to cache both database and Web data to save disk bandwidth, network bandwidth, and people's time. Surprisingly, the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
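
    The five-minute rule referred to above comes from a simple break-even calculation; the sketch below uses illustrative late-1990s price/performance figures (stand-ins, not values quoted in this paper):

        # Break-even interval for keeping a page cached in RAM rather than
        # re-reading it from disk. All figures are illustrative stand-ins.
        pages_per_mb_of_ram = 128          # 8 KB pages
        accesses_per_sec_per_disk = 64
        price_per_disk_drive = 2000.0      # dollars
        price_per_mb_of_ram = 15.0         # dollars

        break_even_s = (pages_per_mb_of_ram / accesses_per_sec_per_disk) \
                     * (price_per_disk_drive / price_per_mb_of_ram)
        print(f"cache a page if re-referenced within ~{break_even_s:.0f} s "
              f"(~{break_even_s / 60:.1f} minutes)")
        # As RAM gets cheaper relative to disk arms the interval grows, which is
        # how the 5-minute rule turns into a cache-everything rule for the Web.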

  • READY: a high performance event notification service

    Page(s): 668 - 669

    READY is an event notification service that provides efficient, decoupled, and asynchronous event notifications. READY supports: consumer specifications that match over single and compound event patterns; communication sessions that manage quality of service for event delivery; grouping constructs for sessions and specifications; and event zones and boundary routers that bound the scope of event distribution and control the mapping of events across zones.
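
    A minimal sketch of matching a compound event pattern ("A followed by B"); the pattern representation and event names are invented for illustration, not READY's actual specification syntax:

        class FollowedBy:
            """Fire an action when `first` is later followed by `second`."""
            def __init__(self, first, second, action):
                self.first, self.second, self.action = first, second, action
                self.armed = False

            def consume(self, event, payload):
                if event == self.first:
                    self.armed = True
                elif self.armed and event == self.second:
                    self.armed = False
                    self.action(payload)

        subscription = FollowedBy("order_placed", "payment_failed",
                                  lambda p: print("notify consumer:", p))
        for ev, data in [("order_placed", {"id": 7}),
                         ("payment_failed", {"id": 7, "reason": "card declined"})]:
            subscription.consume(ev, data)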

  • Lineage tracing in a data warehousing system

    Page(s): 683 - 684

    Some commercial data warehousing systems support schema-level lineage tracing, or provide specialized drill-down and/or drill-through facilities for multi-dimensional warehouse views. Our lineage tracing system supports more fine-grained instance-level lineage tracing for arbitrarily complex relational views, including aggregation. At view definition time, our system automatically generates lineage tracing procedures and supporting auxiliary views. At lineage tracing time, the system applies the tracing procedures to the source tables and/or auxiliary views to obtain the lineage results and to illustrate the specific view data derivation process.
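
    A small sketch of what instance-level lineage means for an aggregate view: given a view tuple, recover the source rows that derived it. The table, columns and tracing step are hypothetical and written in Python rather than SQL:

        sales = [  # source table: (region, amount)
            {"region": "west", "amount": 10},
            {"region": "west", "amount": 25},
            {"region": "east", "amount": 40},
        ]

        def view_total_by_region(rows):
            totals = {}
            for r in rows:
                totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
            return [{"region": k, "total": v} for k, v in totals.items()]

        def lineage(view_tuple, rows):
            """Source tuples contributing to one view tuple (the tracing step)."""
            return [r for r in rows if r["region"] == view_tuple["region"]]

        view = view_total_by_region(sales)
        print(view[0], "<-", lineage(view[0], sales))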

  • Accurate estimation of the cost of spatial selections

    Page(s): 123 - 134

    Optimizing queries that involve operations on spatial data requires estimating the selectivity and cost of these operations. In this paper, we focus on estimating the cost of spatial selections, or window queries, where the query windows and data objects are general polygons. Cost estimation techniques previously proposed in the literature only handle rectangular query windows over rectangular data objects, thus ignoring the very significant cost of exact geometry comparison (the refinement step in a “filter and refine” query processing strategy). The cost of the exact geometry comparison depends on the selectivity of the filtering step and the average number of vertices in the candidate objects identified by this step. In this paper, we introduce a new type of histogram for spatial data that captures the complexity and size of the spatial objects as well as their location. Capturing these attributes makes this type of histogram useful for accurate estimation, as we experimentally demonstrate. We also investigate sampling-based estimation approaches. Sampling can yield better selectivity estimates than histograms for polygon data, but at the high cost of performing exact geometry comparisons for all the sampled objects.
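
    The sketch below shows the kind of estimate such a histogram enables once each bucket records an object count and an average vertex count; the bucket values, constants and cost formula are illustrative, not the paper's actual model:

        buckets = [
            # (fraction of bucket covered by the query window, objects, avg vertices)
            (1.00, 500, 12.0),
            (0.30, 200, 45.0),
            (0.00, 800,  8.0),
        ]
        IO_COST_PER_CANDIDATE = 1.0    # filter step: fetch one candidate object
        CPU_COST_PER_VERTEX = 0.02     # refinement step: exact geometry test

        filter_cost = refine_cost = 0.0
        for coverage, n_objects, avg_vertices in buckets:
            candidates = coverage * n_objects
            filter_cost += candidates * IO_COST_PER_CANDIDATE
            refine_cost += candidates * avg_vertices * CPU_COST_PER_VERTEX

        print(f"estimated filter-step cost:    {filter_cost:.0f}")
        print(f"estimated exact-geometry cost: {refine_cost:.0f}")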

  • Mining recurrent items in multimedia with progressive resolution refinement

    Page(s): 461 - 470

    Despite the overwhelming amounts of multimedia data recently generated and the significance of such data, very few people have systematically investigated multimedia data mining. Building on our previous studies on content-based retrieval of visual artifacts, we study in this paper methods for mining content-based associations with recurrent items and with spatial relationships from large visual data repositories. A progressive resolution refinement approach is proposed, in which frequent item-sets are mined at coarse resolution levels first, and finer resolutions are then mined only on the candidate frequent item-sets derived from the coarser levels. Such a multi-resolution mining strategy substantially reduces the overall data mining cost without loss of the quality and completeness of the results.
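
    A sketch of the refinement idea in miniature: only items that are frequent at the coarse resolution are counted at the finer one. The feature names and support threshold are hypothetical:

        from collections import Counter

        # Each image described at two resolutions; fine features refine coarse ones.
        images = [
            {"coarse": {"sky", "water"}, "fine": {"sky/clouds", "water/waves"}},
            {"coarse": {"sky", "sand"},  "fine": {"sky/clear",  "sand/dunes"}},
            {"coarse": {"sky", "water"}, "fine": {"sky/clouds", "water/calm"}},
        ]
        MIN_SUPPORT = 2

        coarse_counts = Counter(f for img in images for f in img["coarse"])
        frequent_coarse = {f for f, c in coarse_counts.items() if c >= MIN_SUPPORT}

        # Fine features are examined only if their coarse parent survived.
        fine_counts = Counter(f for img in images for f in img["fine"]
                              if f.split("/")[0] in frequent_coarse)
        frequent_fine = {f for f, c in fine_counts.items() if c >= MIN_SUPPORT}
        print(frequent_coarse, frequent_fine)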

  • Oracle8i-the XML enabled data management system

    Page(s): 561 - 568

    XML is here as the Internet standard for information exchange among e-businesses and applications. With its dramatic adoption and its ability to model structured, unstructured and semi-structured data, XML has the potential of becoming the data model for Internet data. In recent years, Oracle has evolved its DBMS to support complex, structured, and unstructured data. Oracle has now extended that technology to enable the storage and querying of XML data by evolving its DBMS into an XML-enabled DBMS, Oracle8i. We present Oracle's XML-enabling database technology. In particular, we discuss how XML data can be stored, managed and queried in the Oracle8i database.

  • A multimedia information server with mixed workload scheduling

    Page(s): 670 - 671

    In contrast to specialized video servers, advanced multimedia applications for tele-shopping, tele-teaching and news-on-demand exhibit a mixed workload, with massive access to conventional, “discrete” data such as text documents, images and indexes as well as requests for “continuous” data such as video. The paper briefly describes the prototype of a multimedia information server that stores discrete and continuous data on a shared disk pool and is able to handle a mixed workload very efficiently.

  • Object/Database Standards Soup

    Page(s): 315
  • Analyzing range queries on spatial data

    Page(s): 525 - 534

    Analysis of range queries on spatial (multidimensional) data is both important and challenging. Most previous analysis attempts have made certain simplifying assumptions about the data sets and/or queries to keep the analysis tractable. As a result, they may not be universally applicable. This paper proposes a set of five analysis techniques to estimate the selectivity and number of index nodes accessed in serving a range query. The underlying philosophy behind these techniques is to maintain an auxiliary data structure, called a density file, whose creation is a one-time cost and which can be quickly consulted when the query is given. The schemes differ in what information is kept in the density file, how it is maintained and how this information is looked up. It is shown that one of the proposed schemes, called “cumulative density” (CD), gives very accurate results (usually less than 5% error) over a diverse suite of point and rectangular data sets, uniform or skewed, and a wide range of query window parameters. The estimation takes a constant amount of time, which is typically lower than 1% of the time that it would take to execute the query, regardless of data set or query window parameters.
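
    A minimal sketch of the cumulative-density idea for point data: a grid of counts is turned into a 2-D prefix sum, so the number of points in any cell-aligned window needs only four lookups. The grid size and data are illustrative, and a real density file would also handle rectangles and partially covered cells:

        GRID = 4
        counts = [[0] * GRID for _ in range(GRID)]
        points = [(0.1, 0.2), (0.3, 0.9), (0.7, 0.7), (0.8, 0.1), (0.9, 0.9)]
        for x, y in points:                 # one-time cost: build the density file
            counts[int(x * GRID)][int(y * GRID)] += 1

        cum = [[0] * (GRID + 1) for _ in range(GRID + 1)]   # 2-D prefix sums
        for i in range(GRID):
            for j in range(GRID):
                cum[i + 1][j + 1] = (counts[i][j] + cum[i][j + 1]
                                     + cum[i + 1][j] - cum[i][j])

        def estimate(x0, y0, x1, y1):
            """Points in the cell-aligned window [x0, x1) x [y0, y1); O(1) time."""
            return cum[x1][y1] - cum[x0][y1] - cum[x1][y0] + cum[x0][y0]

        print(estimate(0, 0, 2, 2))   # lower-left quadrant of the grid -> 1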

  • Query plans for conventional and temporal queries involving duplicates and ordering

    Page(s): 547 - 558

    Most real-world database applications contain a substantial portion of time references, or temporal data. Recent advances in temporal query languages show that such database applications could benefit substantially from built-in temporal support in the DBMS. To achieve this, temporal query representation, optimization and processing mechanisms must be provided. This paper presents a general algebraic foundation for query optimization that integrates conventional and temporal query optimization and is suitable for providing temporal support both via a stand-alone temporal DBMS and via a layer on top of a conventional DBMS. By capturing duplicate removal and retention and order preservation for all queries, as well as coalescence for temporal queries, this foundation formalizes and generalizes existing approaches.
