
Proceedings of the Third International Symposium on Cooperative Database Systems for Advanced Applications (CODAS 2001)

Date: 24 April 2001


  • Proceedings of the Third International Symposium on Cooperative Database Systems for Advanced Applications. CODAS 2001

    Freely Available from IEEE
  • Author index

    Page(s): 200
    Freely Available from IEEE
  • Set-based access conflicts analysis of concurrent workflow definition

    Page(s): 172 - 176

    A workflow definition that contains errors can cause serious problems for an enterprise, especially when it involves mission-critical business processes. Concurrency among workflow processes is known as one of the major sources of such invalid definitions, so the conflicts caused by concurrent workflow processes should be considered carefully when they are defined. However, it is very difficult to ascertain whether a workflow process is free from conflicts without experimental executions at runtime, which would be tedious and time-consuming work for process designers. If the conflicts inherent in a concurrent workflow definition could be analyzed prior to runtime, it would be very helpful to business process designers and many other users of workflow management systems. The authors propose a set-based constraint system to analyze possible read-write and write-write conflicts between activities that read and write shared variables in a workflow process definition. The system operates in two phases: in the first, it generates set constraints from a structured workflow definition; in the second, it finds the minimal solution of those constraints.
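
    The read-write/write-write distinction above can be illustrated with plain set operations (a minimal sketch assuming a simple reads/writes structure per activity; this is not the authors' set-constraint system):

```python
# Hypothetical sketch: detect access conflicts between two concurrent
# workflow activities, each modeled as sets of shared variables it
# reads and writes. Names and structure are illustrative assumptions.

def access_conflicts(act_a, act_b):
    """Each activity is a dict with 'reads' and 'writes' sets of shared variables."""
    rw = (act_a["reads"] & act_b["writes"]) | (act_b["reads"] & act_a["writes"])
    ww = act_a["writes"] & act_b["writes"]
    return {"read-write": rw, "write-write": ww}

a = {"reads": {"order", "stock"}, "writes": {"stock"}}
b = {"reads": {"stock"}, "writes": {"stock", "invoice"}}
print(access_conflicts(a, b))  # both conflict kinds involve "stock"
```

    A real set-constraint system would derive such sets from the structured workflow definition rather than from hand-written dictionaries.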

  • Multi-weighted tree based query optimization method for parallel relational database systems

    Page(s): 186 - 193

    A multi-weighted tree based query optimization method for parallel relational databases is proposed. The method consists of a multi-weighted tree based parallel query plan model, a cost model for parallel query plans, and a query optimizer. The parallel query plan model captures three aspects of parallel query execution: processor and memory allocation to operations, memory allocation to buffers in pipelines, and data redistribution among processors. The cost model takes the waiting time of operations in pipelined execution into consideration and is computable in a bottom-up fashion. The query optimizer addresses the query optimization problem in the context of Select-Project-Join queries. Heuristics for determining the processor allocation to operations and the memory allocation to operations and pipeline buffers are derived and used in the optimizer. In addition, the optimizer considers multiple join algorithms and can make an optimal choice of join algorithm for each join operation in a query.

  • Cooperative content analysis agents for online multimedia indexing and filtering

    Page(s): 118 - 122

    New integrated services are emerging from rapid technological advances in networking, multi-agent systems, media, and broadcasting. These advances allow large amounts of multimedia information to be distributed and shared on the Internet. Efficient searching and effective redistribution of online multimedia require automatic segmentation, efficient content analysis, and effective semantic interpretation of visual data. In this research, we propose a system of cooperative content analysis agents for automatic video analysis to support fast online multimedia indexing and filtering. An independent metadata channel is also proposed as the communication mechanism among the content agents. The proposed system is evaluated and applied to CNN news distribution and sharing over MBone multicast.

  • An automated integration approach for semi-structured and structured data

    Page(s): 12 - 21

    As data access beyond the traditional intranet boundary becomes common on the Internet, the demand for an integrated and uniform method for accessing Web data sources that differ in structure and semantics is increasing. This demand is driven partly by users who want to access more diverse information, such as up-to-date information on the stock market, entertainment, news, and science, and partly by information providers who offer services to customers on the Web. The authors present an approach to integrating semi-structured and structured data sources using an automated structure resolution approach. This approach can easily be adopted to i) integrate existing relations of the relational database model into semi-structured data sources, and ii) merge sets of semi-structured data that have different structures, with no human intervention. Integrating multiple data sources with this approach yields a unified view (UV) of the data sources, presented as an XML DTD. The UV can then be used for query optimization over heterogeneous data sources.

  • Range sum queries in dynamic OLAP data cubes

    Page(s): 74 - 81

    Range sum queries play an important role in analyzing data in data cubes. Many application domains require that data cubes be updated frequently to provide real-time information. Previous techniques for range sum queries, however, can incur an update cost of O(n^d) in the worst case, where d is the number of dimensions of the data cube and n is the size of each dimension. To address this dynamic data cube problem, a technique called double relative prefix sum (DRPS) was proposed, which achieves a query cost of O(n^(1/3)) and an update cost of O(n^(d/3)) in the worst case. The total cost of DRPS is the smallest compared with other techniques under two cost models. However, this technique causes considerable space overhead, about n^d + d·n^(d-1/3). While low query and update costs are critical for analysis in dynamic OLAP data cubes, growing data collections increase the demand for space-efficient approaches. We propose a new technique that promises the same query and update costs as DRPS while the additional space requirement is only n^d.
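
    For context, the baseline prefix sum technique, which makes range sums cheap at the price of expensive updates, can be sketched as follows (a 2-D illustration only; DRPS itself and its cost trade-offs are not reproduced here):

```python
# Minimal prefix-sum sketch for O(1) range sum queries on a 2-D data cube.
# A single cell update forces O(n^2) prefix entries to change in the worst
# case, which is the update-cost problem techniques like DRPS address.

def build_prefix(cube):
    """p[i][j] = sum of cube[0..i-1][0..j-1] (one extra row/column of zeros)."""
    n = len(cube)
    p = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            p[i+1][j+1] = cube[i][j] + p[i][j+1] + p[i+1][j] - p[i][j]
    return p

def range_sum(p, r1, c1, r2, c2):
    """Sum over the inclusive rectangle (r1,c1)..(r2,c2) via inclusion-exclusion."""
    return p[r2+1][c2+1] - p[r1][c2+1] - p[r2+1][c1] + p[r1][c1]

cube = [[1, 2], [3, 4]]
p = build_prefix(cube)
print(range_sum(p, 0, 0, 1, 1))  # prints 10, the sum of the whole cube
```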

  • Enriched relationship processing in object-relational database management systems

    Page(s): 50 - 59

    The authors bring together two important topics of current database research: enhancing the data model with refined relationship semantics and exploiting ORDBMS extensibility to equip the system with new functionality. Regarding the first topic, we introduce a framework to capture the diverse semantic characteristics of application-specific relationships. Then, in order to integrate the conceptual extensions with the data model provided by SQL:1999, the second topic comes into play. Our efforts to realize semantically rich relationships with current ORDB technology clearly point out the benefits as well as the shortcomings of its extensibility facilities. Unfortunately, deficiencies still prevail in the OR infrastructure: the features specific to the extensions cannot sufficiently be taken into account by DBMS-internal processing such as query optimization, and there are only very limited mechanisms for adequately supporting the required properties, e.g., by adjusted index and storage structures or suitable operational units of processing.

  • Concealing a secret image using the breadth first traversal linear quadtree structure

    Page(s): 194 - 199

    This paper presents an image hiding method that embeds a secret image in the least significant bits (LSBs) of pixels selected from a cover image. Generally, LSB-related methods are rather sensitive to modifications of the cover image. To limit the damage to the reconstructed cover image, the proposed method embeds the vital data of the cover image in more significant bits. The paper also provides a breadth-first traversal (BFT) linear quadtree representation to characterize a compressed binary image. The data at the front of this structure are vital, while those at the rear are trivial. This representation strategy facilitates image embedding.
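
    The basic LSB embedding step can be sketched as follows (a minimal illustration on a flat list of pixel values; the BFT quadtree ordering of vital versus trivial data is not shown):

```python
# Sketch of least-significant-bit (LSB) embedding: each secret bit replaces
# the LSB of one cover pixel, changing that pixel's value by at most 1.

def embed_bits(pixels, bits):
    """Return pixels with the first len(bits) LSBs replaced by the secret bits."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_bits(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 113, 54, 77]
secret = [1, 0, 1]
stego = embed_bits(cover, secret)
print(extract_bits(stego, 3))  # prints [1, 0, 1]
```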

  • Customized atomicity specification for transactional workflows

    Page(s): 140 - 147

    We introduce a new approach for specifying transaction management requirements for workflow applications. We propose independent models for the specification of workflow and transaction properties. Although we distinguish multiple transaction properties in our approach, we focus on atomicity. We propose an intuitive notation to specify atomicity and provide generic rules to integrate the workflow specification and the atomicity specification into a single model based on Petri nets. The integrated model can be checked for correctness. We call this correctness criterion relaxed soundness, a weaker notion than the existing soundness criterion. We can relax the correctness criterion because we rely on run-time transaction management. A real-life example shows the applicability of the concepts.

  • Object-oriented representation for XML data

    Page(s): 40 - 49

    XML is a new standard for representing and exchanging data on the Internet. How to model XML data for Web applications and data management is a hot topic in XML research. The paper presents an object representation model for XML data. A set of transformation rules and steps is established to transform DTDs, as well as XML documents with DTDs, into this model. The model encapsulates elements of XML data together with their manipulation methods. This purely object-oriented model considers the features and usage of XML data and is suitable for Web applications as well as XML data management. A DTD-Tree is defined to represent a DTD and to describe the procedure for applying the transformation rules; it can also be used as a logical interface for DTD processing.

  • Flexible data management and execution to support cooperative workflow: the COO approach

    Page(s): 124 - 131

    This paper introduces an evolution of classical workflow that allows more flexible execution of processes while retaining their simplicity. On the one hand, it allows processes to be described in the same way as in design and engineering manuals; on the other hand, it allows these processes to be controlled in a way that is close to how they are actually enacted. Flexible execution is combined with the COO transaction protocol to provide an environment supporting an advanced form of coordination in a cooperative environment.

  • Constraints for information cooperation in virtual enterprise systems

    Page(s): 159 - 166

    In recent years, research on distributing information sources over a network has become increasingly important. We concentrate on issues of information cooperation among heterogeneous virtual enterprise information sources. When a virtual enterprise is built, enterprise information is bound together with constraints that enforce virtual enterprise logic across several enterprises. We explore the requirements on such constraints in cooperative virtual enterprise information systems, and present a cooperative model suitable for constructing a virtual enterprise. This is done through the constraint definition and constraint model developed in a virtual enterprise information integration system named ViaScope. In order to manage these constraints effectively, the properties of distributed constraints are studied in depth. Furthermore, implementation issues in the ViaScope system are discussed, including the cooperative architecture, constraint evaluation, and an active-rule-based maintenance strategy.

  • Applying event-condition-action mechanism in healthcare: a computerised clinical test-ordering protocol system (TOPS)

    Page(s): 2 - 9

    The paper addresses issues of active database application in the challenging healthcare area: the management and execution of computerised clinical practice guidelines/protocols. The problem of how to efficiently and effectively query and manipulate computerised clinical protocols/guidelines has posed a major challenge, but received little research attention until very recently. By proposing a declarative modeling language (PLAN) with an event-condition-action (ECA) mechanism for clinical test-ordering protocols, and an automatic mapping and management system (TOPS), the paper addresses this issue, in an important medical domain, through a unified approach based on an active rule mechanism. The work presented is part of an ongoing research effort that investigates a new application domain for active databases and proposes new requirements for enhancing active DBMS functionality.

  • Realizing temporal XML repositories using temporal relational databases

    Page(s): 60 - 64

    This paper addresses the development of general storage and retrieval methods for changes to XML documents. In this approach, the history of an XML document is modeled by the TXPath data model, a temporal extension of the XPath data model, and is then mapped into valid-time relational tables of a temporal relational database. We show the fundamental operations for XML documents, which are based on the DOM (Document Object Model), and also show how these operations are mapped onto valid-time relations. Finally, changes to XML documents can be retrieved using SQL.
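
    The general idea of storing element versions in a valid-time relation and querying past states with SQL can be sketched as follows (an illustrative mapping only, not the paper's TXPath scheme; the table and column names are assumptions):

```python
# Sketch: each row records one version of an XML element's value together
# with its valid-time interval [vt_start, vt_end). Querying a past state
# is then an ordinary SQL selection on the interval.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE elem_vt (
    path TEXT, value TEXT, vt_start INTEGER, vt_end INTEGER)""")
rows = [("/doc/price", "10", 1, 5),     # valid during [1, 5)
        ("/doc/price", "12", 5, 9999)]  # current version ("until changed")
con.executemany("INSERT INTO elem_vt VALUES (?, ?, ?, ?)", rows)

# Value of /doc/price as of time 3:
cur = con.execute(
    "SELECT value FROM elem_vt WHERE path = ? AND vt_start <= ? AND ? < vt_end",
    ("/doc/price", 3, 3))
print(cur.fetchone()[0])  # prints 10
```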

  • The personalized index service system in digital library

    Page(s): 92 - 99

    The China Digital Library is a federated digital library composed of components such as local digital libraries, publishing companies, and so on. The objective of the federated digital library is to share the resources and services of its components so that users can access distributed information transparently. The immense amount of distributed electronic information available in the federated digital library raises two problems: how to accurately find all the information a user needs, and how to provide information the user is interested in. The personalized index service system presented in this paper holds the promise of resolving these problems. It is composed of an index service system, which facilitates federated resource discovery, and a personalized service system, which learns a model of the user's interests and uses it to push interesting information and rank search results, so that users can easily and quickly find interesting objects. The personalized service system learns a model of the user's initial interests from interests the user provides, and updates the model according to the user's access log and feedback on the search results.

  • Incorporating adaptive interaction property in cooperative agent for emergent process management

    Page(s): 110 - 117

    An adaptive agent interaction strategy is presented. It has been trialled in a system for emergent process management, constructed from generic process agents whose architecture is a three-layered BDI hybrid architecture. The agent interaction strategy substantially determines the performance of a multi-agent system. The strategy described is derived by observing the way humans interact in process management applications. Individuals are driven principally by their goals, but this is tempered by social attributes such as whether the individuals they are dealing with are cooperative, friendly, trustworthy, and so on. The strategy takes sentiments such as friendship and trust into account by maintaining estimates of them for every other agent in the system. Using this strategy, each agent continually adapts its behavior on the basis of its observations of other agents and a continual revision of these estimates. Experiments with this approach led to high performance of the process management system in the context of the overall operation of the organization.

  • Supporting voluntary disconnection in WFMSs

    Page(s): 132 - 139

    With improvements in both wireless and wired networking and computing environments, mobile and portable devices such as palmtops and notebooks have become prevalent; they are even considered essential ingredients of the business workplace. This mobile business environment should therefore be accommodated by business automation systems such as workflow management systems (WFMSs). Researchers in various fields have focused on disconnected operation to provide continuous and safe services for mobile devices, but disconnected operation has not been fully addressed in the area of WFMSs. We analyze the type and scope of disconnected service in WFMSs through an in-depth analysis of workflow task models. Based on this analysis, we present and discuss four general issues that must be addressed to support disconnected operation in WFMSs. The discussion focuses mainly on voluntary disconnection in the wired environment and covers task classification, task-relevant data handling, application handling, and task state emulation.

  • Improving performance of the k-nearest neighbor classifier by tolerant rough sets

    Page(s): 167 - 171

    The authors report on efforts to improve the performance of k-nearest neighbor classification by introducing the tolerant rough set. We relate the tolerant rough relation to object similarity: two objects are called similar if and only if they satisfy the requirements of the tolerant rough relation. The tolerant rough set is then used to select objects from the training data and to construct the similarity function. A genetic algorithm (GA) is used to seek optimal similarity metrics. Experiments have been conducted on artificial and real-world data, and the results show that our algorithm improves the performance of k-nearest neighbor classification and achieves higher accuracy than the C4.5 system.
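
    The underlying k-nearest neighbor vote can be sketched as follows (plain Euclidean k-NN; the tolerant-rough-set object selection and GA-tuned similarity metric from the paper are omitted):

```python
# Baseline k-nearest-neighbor classification by majority vote among the
# k training objects closest to the query point.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; squared Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (1, 0)))  # prints "a"
```

    The paper's contribution corresponds to replacing `dist` with a learned similarity metric and pruning `train` via the tolerant rough relation before voting.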

  • Enterprise model as a basis of administration on role-based access control

    Page(s): 150 - 158

    Access control is one of the important security issues for large enterprise organizations. The role-based access control (RBAC) model is well known and recognized as a good security model for the enterprise environment. Although RBAC is a good model, its administration, including building and maintaining access control information, remains a difficult problem in large companies, and the RBAC model itself does not provide a solution. Little research has been done on practical ways of finding the information that fills RBAC components such as roles, role hierarchies, permission-role assignments, user-role assignments, and so on from the real world. We suggest model-based administration of RBAC in an enterprise environment. Model-based administration allows the security administrator to manage access control through a GUI that supports a graphical enterprise model: when the administrator creates or changes components of the graphical enterprise model, the changes are translated into RBAC schema information by the administration tool. We focus on a practical way of deriving access control information from the real world, which is the core of model-based administration, and present the derivation method and implementation experiences.

  • Effective and efficient boundary-based clustering for three-dimensional geoinformation studies

    Page(s): 82 - 91

    Due to their inherently volumetric nature, underground and marine geoinformation studies, and even astronomy, demand clustering techniques capable of dealing with three-dimensional data. However, most robust and exploratory spatial clustering approaches for GIS consider only two dimensions. We extend robust, argument-free two-dimensional boundary-based clustering (Estivill-Castro and Lee, 2000) to three dimensions. The progression to 3D requires one argument from users and the encoding of proximity and density information in different proximity graphs. Fortunately, the input argument allows exploration of weaknesses in clusters and detection of regions for potential merging or splitting. We also provide an effective heuristic to obtain good initial values for the input argument, which maximizes user friendliness and minimizes exploration time. Experimental results demonstrate that, for two popular proximity graphs (Delaunay tetrahedrization and the undirected k-nearest neighbor graph), our approach is robust to the presence of noise and is able to detect high-quality volumetric clusters in complex situations such as non-convex clusters, clusters of different densities, and clusters of different sizes.

  • To a man with an ORDBMS everything looks like a row in a table

    Page(s): 65 - 71

    Large software projects require managing all project-related artefacts in a shared database in order to support cooperation among developers and reuse of designs. Unfortunately, such projects have to be supported by various development tools that use proprietary strategies for storing their persistent data. Since we need a strong query language to analyse the project-related data, we choose an object-relational database system (ORDBMS) as the integration platform. We discuss the possibilities for integrating external data in an ORDBMS, introduce a reference architecture for discussing the architectural options of an ORDBMS-based integration environment, and finally present our own system.

  • An intelligent agent for Web advertisements

    Page(s): 102 - 109

    The rapid growth of Internet users attracts advertisers to post their advertisements on the Internet. The probabilistic selection algorithm has not been satisfactory, while other advertising agents are unable to guarantee quality due to insufficient and unstable user information. We describe a new advertising agent based on user information. Users' interests are first discovered by an order pattern mining algorithm and then transformed into Gaussian curves that represent their profiles. For the advertisements, keywords from different categories are likewise used to construct advertisement profiles as Gaussian curves. This allows us to select advertisements based on the intersections of the different profiles according to the users' preferences, through an effective and efficient mechanism. A prototype of the Intelligent Advertising Agent has been developed with Java and Oracle. In our evaluations, about 80% of the test cases succeeded in predicting the category the users were most interested in.

  • Parallel join algorithms based on parallel B+-trees

    Page(s): 178 - 185

    Within the last several years, a number of parallel algorithms for the join operation have been proposed. However, almost all of these algorithms fail to take advantage of the underlying parallel storage structures or data declustering methods of the operand relations. The paper introduces the concept of a declustering-aware parallel join algorithm. Two classes of parallel join algorithms that exploit an underlying parallel B+-tree index are proposed and analyzed: one based on the range-partition strategy, the other on the hash-partition strategy. The parallel execution times of the algorithms are linearly proportional to max{N/P, M/P}, where N and M are the numbers of tuples of the operand relations and P is the number of processing nodes. The proposed algorithms are compared with well-known parallel join algorithms used in practice. Theoretical and experimental results show that the proposed algorithms are more efficient than the others when at least one operand relation has a parallel B+-tree index on the join attributes.
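
    The hash-partition strategy can be sketched as follows (a sequential illustration of the partitioning idea only; the parallel B+-tree index and the distribution of partitions across processing nodes are not modeled):

```python
# Sketch of hash-partition join: both relations are partitioned on the hash
# of the join key, so matching tuples always land in the same partition and
# each partition pair can (in a real system) be joined on a separate node.

def hash_partition(rel, key, p):
    """Split a relation (list of tuples) into p buckets by hash of the key column."""
    parts = [[] for _ in range(p)]
    for t in rel:
        parts[hash(t[key]) % p].append(t)
    return parts

def partitioned_join(r, s, p=4):
    """Join r and s on their first attribute, one partition pair at a time."""
    out = []
    for pr, ps in zip(hash_partition(r, 0, p), hash_partition(s, 0, p)):
        lookup = {}
        for t in pr:                      # build a hash table per partition
            lookup.setdefault(t[0], []).append(t)
        for t in ps:                      # probe with the other side
            for m in lookup.get(t[0], []):
                out.append(m + t[1:])
    return out

R = [(1, "x"), (2, "y")]
S = [(2, "u"), (3, "v")]
print(partitioned_join(R, S))  # prints [(2, 'y', 'u')]
```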

  • Computing repairs for inconsistent databases

    Page(s): 30 - 37

    The paper addresses the problem of managing inconsistencies arising from the integration of multiple autonomous information sources. We propose a general framework for computing repairs and consistent answers over inconsistent databases, i.e. databases that violate integrity constraints. A repair for a database is a minimal set of insert and delete operations that makes the database consistent. Our framework considers different types of rules: general integrity constraints, repair constraints (rules defining conditions on the insertion or deletion of atoms), and prioritized constraints (rules defining priorities among updates and repairs). We propose a technique based on rewriting the constraints into (prioritized) extended disjunctive rules with two different forms of negation (negation as failure and classical negation). The disjunctive program can be used for two different purposes: computing repairs for the database, and producing consistent answers, i.e. maximal sets of atoms that do not violate the constraints. The proposed technique is sound and complete, and more general than previously proposed techniques.
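
    The notion of a minimal repair can be illustrated with a tiny brute-force search (deletion-only repairs over an in-memory set of tuples; the paper's disjunctive-program rewriting is far more general and also handles insertions, repair constraints, and priorities):

```python
# Toy illustration of database repairs: find minimal sets of tuple
# deletions that restore an integrity constraint.
from itertools import combinations

def repairs(db, constraint):
    """Enumerate all smallest deletion sets making constraint(db) hold."""
    if constraint(db):
        return [set()]
    for size in range(1, len(db) + 1):
        found = [set(deleted)
                 for deleted in combinations(sorted(db), size)
                 if constraint(db - set(deleted))]
        if found:
            return found          # all repairs of minimal size
    return []

# Constraint: each employee has at most one salary tuple.
db = {("ann", 100), ("ann", 120), ("bob", 90)}
at_most_one = lambda d: len({e for e, _ in d}) == len(d)
print(repairs(db, at_most_one))  # prints [{('ann', 100)}, {('ann', 120)}]
```

    A consistent answer is then whatever holds in every repair; here, bob's salary survives both repairs, while neither of ann's conflicting salaries does.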
