Proceedings of the Sixth International Workshop on Research Issues in Data Engineering: Interoperability of Nontraditional Database Systems (1996)

Date: 26-27 Feb. 1996

  • Author index

  • The Dependency Manager: a base service for transactional workflow management

    Page(s): 86 - 95

    The management of long-lived applications is a great challenge to transaction processing systems. Workflow management systems (WFMS), on the other hand, provide mechanisms for basic navigation and synchronization of activities in workflows. However, WFMS do not allow reliable execution of transactional workflows, since they have no provisions for activity coordination. We introduce the dependency manager (DM), which implements a mechanism for reliable synchronization and coordination of the state transitions of generic activities. The DM is designed as a new component of an extended X/Open architecture and offers its services to both workflow management and transaction processing systems.
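
    As a rough illustration of the coordination idea, the sketch below (in Python, with invented names; the paper gives no code) grants a state transition only once every transition it depends on has occurred:

```python
# Hypothetical sketch of a dependency manager; names and structure are
# illustrative assumptions, not the paper's actual design.
from collections import defaultdict

class DependencyManager:
    """Coordinates state transitions of generic activities via dependencies."""

    def __init__(self):
        # (activity, transition) -> set of prerequisite (activity, transition) events
        self._deps = defaultdict(set)
        self._occurred = set()

    def add_dependency(self, event, prerequisite):
        """Declare that `event` may fire only after `prerequisite` has fired."""
        self._deps[event].add(prerequisite)

    def request_transition(self, event):
        """Grant the transition iff all prerequisites have already occurred."""
        if self._deps[event] <= self._occurred:
            self._occurred.add(event)
            return True
        return False

dm = DependencyManager()
# "B commits only after A commits", a commit-dependency of the kind used
# in transactional workflows
dm.add_dependency(("B", "commit"), ("A", "commit"))
assert not dm.request_transition(("B", "commit"))  # blocked: A has not committed
assert dm.request_transition(("A", "commit"))
assert dm.request_transition(("B", "commit"))      # now permitted
```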

  • Consistent view removal in transparent schema evolution systems

    Page(s): 138 - 147

    We have developed the transparent schema evolution (TSE) system, which simulates schema evolution using object-oriented views and thus allows applications with diverse and even changing requirements to interoperate. TSE relieves users of the risk of making existing application programs obsolete when they are run against a modified schema: the old view schema is maintained while a new view schema is generated to capture the changes the user desires. However, TSE may generate a large number of schema versions (object-oriented view schemata) over time, resulting in an excessive build-up of classes and underlying object instances, some of which may no longer be in use. We propose to solve this problem by developing techniques for effective and consistent schema removal. First, we characterize four potential schema-consistency problems that the removal of a single virtual class could cause, and outline a solution approach for each. Second, we demonstrate that view schema removal is sensitive to the order in which individual classes are processed; our solution is a dependency graph model that captures the class relationships and serves as a foundation for selecting among removal sequences. Designed to optimize the performance of the TSE system through effective schema version removal, the proposed techniques will enable more effective interoperability among evolving software applications.
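
    The removal-ordering idea can be pictured with a small dependency graph: view classes that depend on others must be removed before the classes they are defined on. A minimal sketch, with an invented schema (this illustrates ordering only, not the paper's four consistency problems):

```python
# Illustrative sketch: a dependency graph over view classes, used to pick a
# removal order that never deletes a class still referenced by another view.
# The class names and the graph are invented for the example.
from graphlib import TopologicalSorter

# node -> the classes it is defined on (its dependencies)
depends_on = {
    "V_old": {"Person"},
    "V_new": {"V_old", "Address"},
}

# static_order() lists dependencies before dependents, so a safe removal
# sequence is the reverse: delete dependents first.
order = list(TopologicalSorter(depends_on).static_order())
removal_sequence = [c for c in reversed(order) if c.startswith("V_")]
print(removal_sequence)  # ['V_new', 'V_old']
```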

  • On heterogeneous distributed geoscientific query processing

    Page(s): 98 - 106

    Geoscience studies produce data from various observations, experiments, and simulations at an enormous rate. With the proliferation of geographic applications, scientific data formats, and storage systems, interoperability remains an important geoscientific data management issue that current geoscientific query processing research often overlooks. We describe how several interoperability issues in geoscientific query processing are addressed in Conquest, a parallel geoscientific query processing system under development at UCLA. The design of Conquest is based on the Volcano extensible query processing system, which encapsulates parallel computation through the exchange operator. Conquest extends the exchange operator to support multicasting of data streams and hides the heterogeneity of hardware and operating system platforms from users developing parallel geoscientific applications. The Conquest data model captures important structural and semantic properties of common geoscientific datasets and is used as a canonical model for a wide variety of scientific and non-scientific datasets. In addition, Conquest supports a uniform interface to a variety of scientific data sources. Access to data managed by a remote data repository is optimized by “pushing” Conquest operators into the repository, maximizing the use of local database capability and reducing the volume of data transferred between the repositories and Conquest.
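
    The multicast extension of the exchange operator can be pictured as a producer stream fanned out to several consumer queues. A minimal single-machine sketch follows; Conquest itself runs across heterogeneous platforms, which this does not attempt to show:

```python
# Minimal sketch of an exchange-style operator extended with multicast:
# one producer stream is fanned out to several consumer queues so that
# downstream operators can run in parallel. Illustration only.
import queue
import threading

def exchange_multicast(source, n_consumers):
    """Fan a source iterator out to n consumer queues; None marks end-of-stream."""
    outs = [queue.Queue() for _ in range(n_consumers)]
    def pump():
        for item in source:
            for q in outs:          # multicast: every consumer sees every item
                q.put(item)
        for q in outs:
            q.put(None)
    threading.Thread(target=pump, daemon=True).start()
    return outs

q1, q2 = exchange_multicast(iter(range(3)), 2)
print([q1.get() for _ in range(4)])  # [0, 1, 2, None]
print([q2.get() for _ in range(4)])  # [0, 1, 2, None]
```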

  • Queries from outer space

    Page(s): 118 - 127

    Decision makers need high-level information on a wide variety of topics. In particular, they are not constrained by the current contents of available information sources: they often ask for data that is not present. On the other hand, there is often a large body of relevant, detailed data that could usefully be summarized or abstracted for the decision maker. We articulate the major issues that arise with these “queries from outer space” and present a framework for vertical information management developed in response to them. The term “vertical” refers to the delivery of information upwards to decision makers at ever higher levels of the management hierarchy. The framework supports the specification of the request for high-level information, the extraction of relevant data, and the derivation of the high-level information.
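
    A toy illustration of the extract-then-derive pattern, using invented data (the framework itself addresses far richer requests):

```python
# Invented example of the framework's last two steps: extract the relevant
# detail records, then derive the high-level figure that was asked for but
# is not stored anywhere as such.
from statistics import mean

sales = [  # detailed data available in the source
    {"region": "west", "quarter": "Q1", "revenue": 120},
    {"region": "west", "quarter": "Q2", "revenue": 180},
    {"region": "east", "quarter": "Q1", "revenue": 90},
]

# request: "average quarterly revenue in the west"
relevant = [r["revenue"] for r in sales if r["region"] == "west"]  # extraction
print(mean(relevant))  # derivation: 150
```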

  • An architecture for consumer-oriented online database services

    Page(s): 50 - 60

    We introduce an architecture for online database services oriented towards consumers. We identify two types of costs: access cost and communication cost. We demonstrate that dynamic allocation of data can minimize these costs by presenting efficient dynamic-allocation algorithms that optimize access and communication costs for various cost models, access patterns, and retrieval protocols.
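
    One classic dynamic-allocation heuristic, sketched below with invented names, keeps a local replica of an item while recent reads outnumber recent writes; the paper's algorithms cover a range of cost models, and this conveys only the flavor:

```python
# Hedged sketch of one dynamic-allocation heuristic (not necessarily the
# paper's): replicate an item at the consumer's site while recent reads
# outnumber recent writes, trading access cost against communication cost
# adaptively with the observed access pattern.
from collections import deque

class AllocationPolicy:
    def __init__(self, window=10):
        self.recent = deque(maxlen=window)  # sliding window of 'r'/'w' events

    def observe(self, op):                  # 'r' = local read, 'w' = remote write
        self.recent.append(op)

    def keep_local_replica(self):
        reads = self.recent.count("r")
        return reads > len(self.recent) - reads  # reads outnumber writes

p = AllocationPolicy()
for op in "rrrwr":
    p.observe(op)
print(p.keep_local_replica())  # True: mostly reads, keep the replica
```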

  • History merging as a mechanism for concurrency control in cooperative environments

    Page(s): 76 - 85

    Cooperative applications need proper transactional support for coordinating joint activities, sharing data, and exchanging information among collaborating users in a semantically correct way. Conventional transaction models based on the ACID properties do not meet the typical requirements of cooperative applications. The CoACT model (M. Rusinkiewicz et al., 1995) is designed to support cooperative work in multi-user environments. CoACT includes a novel algorithm that implements the semantically correct exchange of information among the concurrent activities of cooperating users by merging the histories of those activities. This technique of merging activity histories is applicable in various fields beyond CSCW systems, e.g., in mobile computing to manage disconnected operation.
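
    A minimal sketch of history merging under a standard commutativity test; this is an illustration, not the CoACT algorithm itself:

```python
# Illustration only: merge another user's activity history into ours,
# importing only operations that commute with (do not conflict with) what
# we have already done on the same object.
def conflicts(op_a, op_b):
    # two operations conflict if they touch the same object and at least
    # one of them is a write (a standard commutativity test)
    return op_a[1] == op_b[1] and "write" in (op_a[0], op_b[0])

def merge_history(ours, theirs):
    merged = list(ours)
    for op in theirs:
        if any(conflicts(op, own) for own in ours):
            continue  # reject: would violate semantically correct exchange
        merged.append(op)
    return merged

ours   = [("write", "doc1"), ("read", "doc2")]
theirs = [("write", "doc1"), ("write", "doc3")]
print(merge_history(ours, theirs))
# [('write', 'doc1'), ('read', 'doc2'), ('write', 'doc3')]
```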

  • Scheduling algorithms for real-time agent systems

    Page(s): 32 - 41

    The paper addresses scheduling issues in real-time software agent systems. In contrast to the traditional client/server approach, an agent contains executable code and is activated on a server computer after being shipped to it over the network. The advantage of the agent approach is that it efficiently utilizes the resources and power of the server computers while reducing the load on the network. A real-time agent model is proposed to handle distributed real-time applications that dispatch intelligent agents to remote server locations. We investigate different scheduling algorithms on the server computer and compare their performance by simulation.
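
    One candidate server-side policy, earliest-deadline-first dispatch, sketched below for illustration; the paper compares several algorithms by simulation, and this shows only one plausible contender:

```python
# Sketch of earliest-deadline-first (EDF) dispatch for arriving agents;
# names are invented and this is not claimed to be the paper's algorithm.
import heapq

class EDFScheduler:
    def __init__(self):
        self._ready = []  # min-heap ordered by absolute deadline

    def arrive(self, agent_id, deadline):
        heapq.heappush(self._ready, (deadline, agent_id))

    def next_agent(self):
        """Dispatch the agent whose deadline is closest."""
        return heapq.heappop(self._ready)[1] if self._ready else None

s = EDFScheduler()
s.arrive("a1", deadline=30)
s.arrive("a2", deadline=10)
print(s.next_agent())  # 'a2' runs first: earlier deadline
```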

  • DERBY: a memory management system for distributed main memory databases

    Page(s): 150 - 159

    This paper describes a main memory data storage system for a distributed system of heterogeneous general-purpose workstations. We show that distributed main memory storage managers are qualitatively different from distributed disk-based storage managers. Specifically, we show that load balancing, which is crucial in disk-based systems, has little effect on the performance of a memory-based system. On the other hand, we show that saturation prevention, for cases where a server exceeds its memory capacity or becomes overloaded, is crucial to smooth performance. Finally, we show that distributed memory-based storage yields performance more than an order of magnitude better than that of its disk-based counterparts.
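
    The saturation-prevention idea can be sketched as follows, with names and policy invented for illustration: when a server's resident data approaches its memory capacity, new data is placed on a peer rather than forcing the local server into overload:

```python
# Invented sketch of saturation prevention in a distributed memory store:
# spill new allocations to a peer server once local capacity is exhausted.
class MemoryServer:
    def __init__(self, capacity, peers=()):
        self.capacity, self.used = capacity, 0
        self.peers = list(peers)

    def store(self, size):
        if self.used + size <= self.capacity:
            self.used += size
            return "local"
        for peer in self.peers:           # saturation: push to a peer
            if peer.used + size <= peer.capacity:
                peer.used += size
                return "remote"
        return "rejected"

a, b = MemoryServer(100), MemoryServer(100)
a.peers = [b]
print(a.store(80), a.store(40))  # local remote
```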

  • SKIPPER: a tool that lets browsers adapt to changes in document relevance to its user

    Page(s): 61 - 68

    We present SKIPPER, a tool that personalizes browsers of large information bases by autonomously detecting and recording documents of apparent personal interest to the current user, and by highlighting and presenting relevant browser options to the user in a hierarchically structured fashion. Browser options are thereby organized according to a relevance ranking of the respective documents, and the tool is designed to account for the fact that document relevance realistically changes over time.
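
    One plausible way to model relevance that changes over time is an exponentially decayed visit score, sketched below; SKIPPER's actual ranking scheme may differ:

```python
# Illustration only: rank documents by an exponentially decayed visit score
# so that recent interest outweighs old interest.
import math
import time

class RelevanceTracker:
    def __init__(self, half_life=7 * 24 * 3600):  # one week, in seconds
        self.decay = math.log(2) / half_life
        self.scores = {}   # doc -> (score, last_update_time)

    def visit(self, doc, now=None):
        now = now or time.time()
        score, t = self.scores.get(doc, (0.0, now))
        self.scores[doc] = (score * math.exp(-self.decay * (now - t)) + 1.0, now)

    def ranked(self, now=None):
        now = now or time.time()
        decayed = {d: s * math.exp(-self.decay * (now - t))
                   for d, (s, t) in self.scores.items()}
        return sorted(decayed, key=decayed.get, reverse=True)

rt = RelevanceTracker()
rt.visit("intro.html"); rt.visit("news.html"); rt.visit("news.html")
print(rt.ranked())  # ['news.html', 'intro.html']
```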

  • A multidatabase system implementation on CORBA

    Page(s): 2 - 11

    METU Interoperable DBMS (MIND) is a multidatabase system based on OMG's distributed object management architecture. It is implemented on top of a CORBA-compliant ORB, namely DEC's ObjectBroker. In MIND, every local database is encapsulated in a generic database object. The interface of the generic database object is defined in CORBA IDL, and multiple implementations of this interface are provided, one for each component DBMS, namely Sybase, Adabas D, and MOOD. MIND provides its users with a common data model and a single global query language based on SQL. The main components of MIND are a global query manager, a global transaction manager, a schema integrator, interfaces to the supported database systems, and a graphical user interface. The integration of export schemas is currently performed using an object definition language (ODL) based on OMG's interface definition language. The MIND global query optimizer aims to maximize the parallel execution of the intersite operations of global subqueries. The MIND global transaction manager provides serializable execution of global transactions, both nested and flat.
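
    The generic-database-object idea can be sketched as one abstract interface with an implementation per component DBMS. The real interface is defined in CORBA IDL; the Python stand-in below uses invented method names:

```python
# Python stand-in for a CORBA-IDL-style generic database interface; the
# method names are invented for illustration.
from abc import ABC, abstractmethod

class GenericDatabase(ABC):
    """One interface; one implementation per component DBMS."""

    @abstractmethod
    def execute_query(self, sql: str) -> list: ...

    @abstractmethod
    def begin(self) -> None: ...

    @abstractmethod
    def commit(self) -> None: ...

class SybaseDatabase(GenericDatabase):
    # each component DBMS (Sybase, Adabas D, MOOD) would provide its own
    # implementation behind the same interface
    def execute_query(self, sql): return []
    def begin(self): pass
    def commit(self): pass
```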

  • OASIS: an open architecture scientific information system

    Page(s): 107 - 116

    Motivated by the premise that heterogeneity of software applications and hardware systems is here to stay, we are developing OASIS, a flexible, extensible, and seamless environment for scientific data analysis, knowledge discovery, visualization, and collaboration. We discuss the OASIS design goals and present the system architecture and major components of our prototype environment.

  • Augmented inherited multi-index structure for maintenance of materialized path query views

    Page(s): 128 - 137

    Materialized complex object-oriented views are a promising technique for integrating heterogeneous databases and developing powerful data warehousing systems. Path query views are virtual classes formed from selection queries that place a predicate on the value of an aggregation hierarchy path. The primary difference between previous work on OODB indexing and the efficient implementation of materialized path query views addressed in this paper lies in how the structures are used. In OODB indexing, answering queries is the primary purpose of index structures; because the materialized view data itself can be used to answer queries, index structures for materialized path query views serve primarily for the incremental maintenance of views in the face of updates. We have developed an augmented inherited multi-index (AIM) strategy specifically tailored to the maintenance of materialized path query views. We find that update performance can be improved by augmenting traditional inherited multi-indices with structured representations of the path queries that use them. This enables us to use class hierarchy relationships to prune the number of aggregation paths that must be re-instantiated during update propagation, and also to support complex path queries that include cycles.
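
    The maintenance role of a path index can be pictured with a toy reverse index (structures invented for illustration): each object on an aggregation path maps back to the view instances derived through it, so an update touches only the affected instances:

```python
# Invented illustration of using a path index for incremental view
# maintenance rather than query answering.
from collections import defaultdict

reverse_index = defaultdict(set)   # object id -> view instances reachable via it

def register_path(view_instance, path_objects):
    for oid in path_objects:
        reverse_index[oid].add(view_instance)

def affected_views(updated_oid):
    """On update, re-evaluate only the view instances this object supports."""
    return reverse_index[updated_oid]

register_path("v1", ["emp7", "dept3", "div1"])
register_path("v2", ["emp9", "dept3", "div1"])
print(affected_views("dept3"))  # {'v1', 'v2'}: only these need maintenance
```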

  • Interoperating with DIF data

    Page(s): 24 - 31

    Most computerized data is not under the purview of traditional database management systems; rather, it is stored in files. Data Interchange Formats (DIFs) are the basis of an important class of such files, which contain massive volumes of data and serve a large user community. Initially designed for data exchange, DIFs are increasingly being used for data storage. Collections of DIF files have become de facto databases, and the software tools surrounding them have evolved into ad hoc data management systems. But these systems are primitive compared with traditional database management technology: they lack flexibility and are unlikely to scale as DIF data volumes grow. There is an acute need to provide more robust, DBMS-style support for data in such files, and to support interoperability in heterogeneous environments that include this data. We suggest that object-oriented database techniques can be used both to provide effective data management support for individual DIFs and to provide a platform for interoperating with heterogeneous DIF data, in a scalable fashion. We present architectures for implementing this support and outline the research issues they engender. We also present initial results of an experimental evaluation of one such approach.
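
    The wrapper idea can be sketched as an object-oriented adapter that gives a flat interchange file a database-style query interface; the file layout and field names below are invented:

```python
# Invented sketch: wrap a flat interchange file behind a simple select()
# interface, the way an OO wrapper might expose DIF data to a DBMS layer.
import csv
import io

class DIFDataset:
    """Wraps a flat interchange file behind a database-style interface."""

    def __init__(self, text):
        self._rows = list(csv.DictReader(io.StringIO(text)))

    def select(self, **predicates):
        return [r for r in self._rows
                if all(r.get(k) == v for k, v in predicates.items())]

data = DIFDataset("station,year,temp\nS1,1995,12.3\nS2,1995,9.8\n")
print(data.select(station="S1"))
# [{'station': 'S1', 'year': '1995', 'temp': '12.3'}]
```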

  • A declarative language for querying and restructuring the Web

    Page(s): 12 - 21

    The World Wide Web is a hypertext-based, distributed information system that provides access to vast amounts of information on the Internet. A fundamental problem with the Web is the difficulty of retrieving specific information of interest to the user from the enormous number of available resources. We develop a simple logic called WebLog that is capable of retrieving information from HTML (Hypertext Markup Language) documents on the Web. WebLog is inspired by SchemaLog, a logic for multidatabase interoperability. We demonstrate the suitability of WebLog for querying and restructuring Web information, exploiting partial knowledge users might have of the information being queried, and dealing with the dynamic nature of information on the Web. We illustrate the simplicity and power of WebLog through a variety of applications involving real-life information on the Web.
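
    The kind of extraction a single WebLog rule performs can be approximated in a few lines of Python; this is a procedural stand-in, not WebLog's declarative syntax:

```python
# Stand-in for what a WebLog-style rule extracts: (anchor text, URL) pairs
# from an HTML document.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self._href = [], None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links.append((data.strip(), self._href))
            self._href = None

p = LinkExtractor()
p.feed('<a href="papers.html">Papers</a> and <a href="talks.html">Talks</a>')
print(p.links)  # [('Papers', 'papers.html'), ('Talks', 'talks.html')]
```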

  • Contracting transaction hierarchies

    Page(s): 70 - 75

    Existing transaction models are capable of providing flow control and data control within a single transaction hierarchy. The notion of a contract transaction is introduced to support interaction between multiple transaction hierarchies.

  • ADEPTE project: an individual federated database on a smart card

    Page(s): 44 - 49

    The ADEPTE project considers a new domain of communication: that between individuals and organisations. An individual may expect to access his or her own data, which is created, managed, and located on autonomous and heterogeneous existing information systems. We propose a system using smart cards, in which each card houses a personal database that federates data located on logically separate systems in such a manner that data location is transparent to the user. The problems of database schema and transaction management are discussed.

  • Design management in CONCORD: combining transaction management, workflow management and cooperation control

    Page(s): 160 - 168

    Design management is an encompassing task that supports cooperative design processes in their entirety. This requirement can best be fulfilled by exploiting concepts from transaction management, workflow management, and cooperation control. On the one hand, each of these areas must be adapted to the field of design; on the other hand, the need for an adequate interplay of these components reveals multiple facets of interoperability. The CONCORD (CONtrolling COopeRation in Design environments) processing model addresses all of these problems and allows a straightforward mapping of the processing structures that predominate in design; it thus serves as a cooperative and interoperable processing model for CAD frameworks, e.g., the PRIMA framework, our testbed and prototype system. The major contribution of the CONCORD model is to enhance design flow management with cooperation control facilities. Design flow management controls design tool applications and the interplay of tools, and supports the user in fulfilling his or her local design goal; for that purpose, it covers both transaction processing and workflow management. Cooperation control, in contrast, handles the interplay of collaborating designers or design tasks. In this paper, we report on the capabilities of the CONCORD processing model, focusing on implementation aspects. The feasibility of proven transaction concepts and workflow management as an implementation basis is investigated and discussed by drawing relationships between design and transaction/workflow processing.
