Second International Conference on Advances in Databases, Knowledge and Data Applications (DBKDA 2010)

Date: 11-16 April 2010

  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - viii
  • Preface

    Page(s): ix - x
  • Organizing Committee

    Page(s): xi - xii
  • List of Reviewers

    Page(s): xiii - xiv
  • Analysis of the Quality of Life after an Endoscopic Thoracic Sympathectomy: A Business Intelligence Approach

    Page(s): 1 - 6

    Primary hyperhidrosis, a disorder characterized by excessive sweating, has been treated by endoscopic thoracic sympathectomy. As a consequence of the surgery, patients improved their overall quality of life: their day-to-day activities are no longer affected, or are less affected, by the disorder, and their emotional state shows a significant improvement, from shame and self-punishment to what could be called a normal life. This paper presents an analysis of the quality of life of 227 patients who were treated by endoscopic thoracic sympathectomy. The study was based on business intelligence technologies, which allowed the storage, analysis, and reporting of all the relevant findings. In technological terms, the paper illustrates the database and data-analysis developments needed in a specific healthcare application domain. For data storage, a data mart was designed around the relevant attributes. For data analysis, on-line analytical processing and data mining technologies were used to show the evolution of the patients' health condition and the incidence of complications or side effects as a consequence of the surgery.
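The data-mart-plus-OLAP pipeline described above can be sketched with a toy fact table; the schema, column names, and figures below are hypothetical illustrations, not the paper's actual design.

```python
import sqlite3

# Hypothetical, simplified fact table for a patient-outcome data mart.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE fact_outcome (
    patient_id INTEGER, age_group TEXT,
    qol_before INTEGER, qol_after INTEGER, side_effect INTEGER)""")
rows = [
    (1, "18-30", 3, 8, 0),
    (2, "18-30", 4, 9, 1),
    (3, "31-45", 2, 7, 0),
]
con.executemany("INSERT INTO fact_outcome VALUES (?,?,?,?,?)", rows)

# OLAP-style roll-up: average quality-of-life change and side-effect
# incidence per age group.
report = con.execute("""
    SELECT age_group,
           AVG(qol_after - qol_before) AS avg_improvement,
           AVG(side_effect) AS side_effect_rate
    FROM fact_outcome
    GROUP BY age_group
    ORDER BY age_group
""").fetchall()
print(report)
```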
  • Modeling Topological Relations between Uncertain Spatial Regions in Geo-spatial Databases: Uncertain Intersection and Difference Topological Model

    Page(s): 7 - 15

    Topological relations play important roles in spatial query, analysis, and reasoning in Geographic Information Systems (GIS) and geospatial databases. The topological relations between crisp, uncertain, and fuzzy spatial regions based upon the 9-intersection model have been identified, and topological relations between spatial regions with uncertainties in particular have gained much attention during the past two decades. However, the formal representation and calculation of topological relations between uncertain regions is still an open issue and needs further development. This paper provides a theoretical framework for modeling topological relations between uncertain spatial regions based upon a new uncertain topological model, the Uncertain Intersection and Difference (UID) model. In order to derive all topological relations between two spatial regions with uncertainties, a spatial object of type Region (A) is decomposed into four components: the Interior, the Interior's Boundary, the Object's Boundary, and the Exterior's Boundary of A. Using this definition, new 4x4-intersection and UID models are proposed as qualitative models for identifying all topological relations between two spatial regions with uncertainties. These two new models are compared with other models from the literature; 152 binary topological relations can be identified by them. The topological complexity and distance of the 152 relations are then studied in detail using the UID model, and from this study a conceptual neighborhood graph for the 152 relations is obtained. Examples illustrate the utility of the two models, with results applicable to GIS modeling, geospatial databases, and satellite image processing.
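The four-component decomposition can be illustrated with plain set intersections; the regions below are hand-picked grid-cell sets rather than geometry derived from the UID model's formal definitions.

```python
# Each uncertain region is given as four point sets (interior, interior's
# boundary, object's boundary, exterior's boundary) of grid cells.
def intersection_matrix(a, b):
    """4x4 matrix of flags: 1 if the two components intersect, else 0."""
    return [[int(bool(ca & cb)) for cb in b] for ca in a]

A = [{(1, 1), (1, 2)},            # interior
     {(0, 1), (0, 2)},            # interior's boundary
     {(0, 0), (0, 3)},            # object's boundary
     {(2, 0), (2, 3)}]            # exterior's boundary

B = [{(1, 2), (1, 3)},            # overlaps A's interior...
     {(0, 2)},                    # ...and each of A's boundaries
     {(0, 3), (0, 4)},
     {(2, 3), (2, 4)}]

M = intersection_matrix(A, B)
for row in M:
    print(row)
```

Classifying the resulting matrices is what lets an intersection-style model enumerate the distinct topological relations between two regions.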
  • Interoperable and Easy-to-Use Web Services for the Bioinformatics Community - A Case Study

    Page(s): 16 - 21

    In the field of bioinformatics there is a large number of web service providers and many competing standards for how data should be represented and interfaced. However, these web services are often hard to use for a non-expert programmer, and it can be especially hard to see how different services can be combined into scientific workflows. In this paper we perform a literature study to identify problems involved in developing interoperable web services for the bioinformatics community, along with steps taken by other projects to, at least in part, address them. We also conduct a case study, developing our own bioinformatics web service to investigate these problems further.
  • Exploring the Use of Mixed Abstractions in SQL:1999 - A Framework-Based Approach

    Page(s): 22 - 27

    SQL:1999 introduced the capability to support object concepts. It is now possible to design an SQL database schema using both relational and object models, each of which represents a different abstraction. We use a framework to understand the implications of this change and to explore the use of mixed abstractions in an SQL:1999 schema. We describe two new kinds of schema, and we find that current object-relational mapping strategies differ in the kind of database schema they produce from the same class model.
  • Towards Social Network Extraction Using a Graph Database

    Page(s): 28 - 34

    In the enterprise context, a significant amount of information is stored in relational databases, which can therefore be a rich source for extracting social networks. However, the relational model is not well suited to representing and storing a social network, whereas a graph database can model the data in a natural way and facilitates querying through graph operations. We therefore propose an approach for extracting social networks from relational databases, and present mechanisms for transforming a relational database into a graph database.
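The core of such a transformation can be sketched in a few lines: rows of an entity table become nodes, and rows of a join table become edges. The "person"/"knows" tables and their columns below are hypothetical, not the paper's mechanisms.

```python
# Relational input: a person table and a knows join table.
people = [(1, "Ana"), (2, "Bob"), (3, "Eve")]    # rows of person(id, name)
knows = [(1, 2), (2, 3), (1, 3)]                 # rows of knows(src, dst)

# Graph output: adjacency lists keyed by node id.
graph = {pid: {"name": name, "knows": []} for pid, name in people}
for src, dst in knows:                           # each foreign-key pair -> edge
    graph[src]["knows"].append(dst)

print(graph[1])
```

Once in this form, social-network queries (neighbors, paths, communities) become graph traversals instead of multi-way joins.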
  • Optimistic Synchronization of Cooperative XML Authoring Using Tunable Transaction Boundaries

    Page(s): 35 - 40

    Design applications, e.g., CAD or media production, often require multiple users to work cooperatively on shared data such as XML documents. Using explicit transactions in such environments is difficult, because designers usually do not want to think about transactions or ACID properties. Applying transactions to control the visibility of changes or to specify recovery units is nevertheless reasonable, but determining transaction boundaries must be transparent to the designer. For this reason we propose a novel approach for the automatic determination of transaction boundaries that considers the degree of cooperation the designers want to achieve. Furthermore, we present an optimistic synchronization model based on the traditional backward-oriented concurrency control (BOCC) algorithm to synchronize the determined transactions in multi-user environments. It exploits the semantics of tree operations on XML data and enforces a correctness criterion weaker than serializability. As our evaluation shows, when multiple users work cooperatively on shared data, this model significantly reduces the number of transaction aborts compared with the traditional BOCC approach.
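A minimal sketch of the backward-oriented validation step the model builds on; the timestamps, XML node paths, and set-based conflict rule here are simplifying assumptions, not the paper's weakened correctness criterion.

```python
committed = []   # history of (commit_ts, write_set) pairs

def validate(start_ts, read_set):
    """A transaction passes if no later-committing writer touched its reads."""
    return all(not (write_set & read_set)
               for commit_ts, write_set in committed
               if commit_ts > start_ts)

committed.append((5, {"/doc/scene1"}))       # T1 wrote scene1, committed at ts 5

print(validate(3, {"/doc/scene1"}))          # T2 read what T1 wrote: fails
print(validate(3, {"/doc/scene2"}))          # T3 read a disjoint subtree: passes
```

Exploiting XML tree semantics amounts to making these read/write sets finer-grained (subtrees instead of whole documents), which shrinks the intersections and hence the abort rate.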
  • Sudoku Bit Arrangement for Combined Demosaicking and Watermarking in Digital Camera

    Page(s): 41 - 44

    In this paper, an enhanced combined demosaicking and watermarking (CDW) method is proposed. Such a combination leads to lower power and time consumption compared with performing the two steps separately. In the proposed method, bits are arranged in a special order following a Sudoku pattern. Results indicate that this arrangement increases robustness against JPEG compression attacks.
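The abstract does not give the exact arrangement scheme, so the following is only a hypothetical sketch: LSB embedding where the position of each watermark bit inside a 9-sample block is chosen by one row of a Sudoku grid.

```python
sudoku_row = [5, 3, 4, 6, 7, 8, 9, 1, 2]        # one valid Sudoku row (1-9)

def embed(pixels, bits, row):
    out = list(pixels)
    for i, bit in enumerate(bits):
        p = row[i] - 1                          # position picked by the pattern
        out[p] = (out[p] & ~1) | bit            # write the bit into the LSB
    return out

def extract(pixels, row):
    return [pixels[row[i] - 1] & 1 for i in range(len(row))]

block = list(range(100, 109))                   # one 9-sample image block
wm = [1, 0, 1, 1, 0, 0, 1, 0, 1]
marked = embed(block, wm, sudoku_row)
print(extract(marked, sudoku_row))
```

Because each Sudoku row is a permutation of 1-9, the mapping is a bijection and the bits can always be recovered; scattering them is what buys robustness against local distortions.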
  • Implementing Optimistic Concurrency Control for Persistence Middleware Using Row Version Verification

    Page(s): 45 - 50

    Modern web-based applications are often built as multi-tier architectures using persistence middleware. Middleware technology providers recommend Optimistic Concurrency Control (OCC) to avoid the risk of blocked resources, but most vendors of relational database management systems implement only locking schemes for concurrency control. As a consequence, a form of OCC has to be implemented at the client or middleware tier. A simple Row Version Verification (RVV) mechanism has been proposed to implement OCC at the client side; its implementation depends on the underlying database management system and the specifics of the middleware. For performance reasons the middleware uses caches of its own to avoid network traffic and possible disk I/O. This caching, however, complicates the use of RVV because the data in the middleware cache may be stale (outdated). We investigate various data access technologies, including the new Java Persistence API (JPA) and Microsoft's LINQ, for their ability to support the RVV programming discipline. The use of persistence middleware that tries to relieve the programmer of low-level transaction programming turns out to complicate the situation even further in some cases. Programmed examples show how SQL data access patterns can solve the problem.
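The RVV discipline itself is small enough to sketch against sqlite3 (table and column names are made up for illustration): the UPDATE succeeds only if the row version read earlier is still the current one.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, rv INTEGER)")
con.execute("INSERT INTO account VALUES (1, 100, 7)")

def rvv_update(con, acc_id, new_balance, rv_read):
    cur = con.execute(
        "UPDATE account SET balance = ?, rv = rv + 1 WHERE id = ? AND rv = ?",
        (new_balance, acc_id, rv_read))
    return cur.rowcount == 1          # 0 rows => the read was stale; retry

ok = rvv_update(con, 1, 120, rv_read=7)     # version still matches
stale = rvv_update(con, 1, 130, rv_read=7)  # version bumped to 8 meanwhile
print(ok, stale)
```

The caching problem described above arises when `rv_read` itself comes from a stale middleware cache: the verification then fails (or, worse, a cached read bypasses the database entirely), which is why the pattern must be driven through to the actual SQL layer.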
  • Comparison of Different Solutions for Solving the Optimization Problem of Large Join Queries

    Page(s): 51 - 55

    This article explores query optimization using genetic algorithms and compares it with a conventional query optimization component. Genetic algorithms (GAs), as a data mining technique, have been shown to be promising for ordering the join operations of large join queries. In practice, a genetic algorithm has been implemented in the PostgreSQL database system. Using this implementation, we compare the conventional component for exhaustive search with the corresponding module based on a genetic algorithm. Our results show that a genetic algorithm is a viable solution for optimizing large join queries: such a module outperforms the conventional query optimization component for queries with more than 12 join operations.
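The idea can be sketched with a toy GA: a chromosome is a permutation of relations, and fitness is a crude left-deep join cost over hypothetical cardinalities. This is an illustration of the technique, not PostgreSQL's GEQO implementation.

```python
import random

random.seed(0)

card = {"A": 1000, "B": 10, "C": 500, "D": 5, "E": 200}
rels = list(card)

def cost(order):
    total, inter = 0, card[order[0]]
    for r in order[1:]:
        inter = inter * card[r] // 100      # rough intermediate-size estimate
        total += inter                      # sum of intermediate result sizes
    return total

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    child = list(order)
    child[i], child[j] = child[j], child[i]  # swap two join positions
    return child

pop = [random.sample(rels, len(rels)) for _ in range(20)]
init_best = min(cost(p) for p in pop)
for _ in range(50):                          # elitist selection keeps the best
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=cost)[:20]

best = pop[0]
print(best, cost(best))
```

The appeal for large queries is that the GA samples a fixed number of candidates, while exhaustive search over join orders grows factorially with the number of relations.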
  • Understanding Linked Open Data as a Web-Scale Database

    Page(s): 56 - 61

    While Linked Open Data (LOD) has gained much attention in recent years, a treatment of the requirements and challenges concerning its usage from a database perspective is lacking. We argue that such a perspective is crucial for increasing the acceptance of LOD. In this paper, we compare the characteristics and constraints of relational databases with those of LOD, trying to understand the latter as a Web-scale database. We propose LOD-specific requirements beyond the established database rules and highlight research challenges, aiming to combine future efforts of the database and Linked Data research communities in this area.
  • Exploring Social Patterns in Mobile Data

    Page(s): 62 - 68

    To compete with other telecom providers, it is important to understand the behavior of customers and predict their needs. This requires grouping customers into social patterns (segments) according to their mobile usage behavior and targeting the suitable segments for advertising. In our approach, customers' usage data in association with their browsing behavior is used to form the segments, which we consider an important addition. From the analysis of usage rates with respect to a certain domain, the operator can drill down to sub-domain-level interests and target them with specific customized services. This is done by performing latent semantic analysis, using a Gibbs sampling algorithm, and K-means clustering on the descriptions of the customers' accessed web pages together with their usage and spend data. The traditional method forms web communities using a link-based approach; our method, based on identifying social communities, offers an alternative for mobile operators. The usage rates within a cluster and the customers' interest in a specific domain can help determine their willingness to spend in specific areas. Our approach produces better results than the traditional methods by enabling telecom providers to target a specific group of consumers.
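The clustering step can be sketched with a minimal pure-Python k-means over hypothetical per-customer feature vectors (the feature choice and data are illustrative assumptions, not the paper's pipeline).

```python
# Each point is a hypothetical (browsing-interest score, spend score) pair.
def kmeans(points, centers, rounds=10):
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for p in points:                      # assign to the nearest center
            j = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]   # recompute centroids
    return centers, clusters

pts = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
centers, clusters = kmeans(pts, centers=[(0.0, 0.0), (1.0, 1.0)])
print(centers)
```

Each resulting cluster is a candidate segment; the within-cluster usage and spend averages are what the operator would then use for targeting.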
  • Intelligent Network Communications for Distributed Database Systems

    Page(s): 69 - 74

    Customizing network sites has become an increasingly important issue in distributed database systems, as it can improve system performance by reducing the number of communications required for query processing in retrieval and update transactions. This paper presents an intelligent clustering method for distributed database systems that organizes a large number of network sites into a set of useful clusters in order to minimize transaction-processing communications. It is designed to divide the database network sites into a set of disjoint clusters based on a high-performance clustering technique. This can reduce the amount of redundant data to be accessed and transferred among different sites, increase transaction performance, significantly improve database system response time, and result in better distributed network decision support. Experimental validations on real database applications at different levels of network connectivity demonstrate that the proposed method leads to precise solutions for the problems of data communication, allocation, and redundancy.
  • Performance Evaluation of an Optimistic Concurrency Control Algorithm for Temporal Databases

    Page(s): 75 - 81

    We propose in this paper a performance study of a concurrency control algorithm for temporal databases. The algorithm is based on the optimistic approach, which is, in our opinion, more suitable for temporal databases than pessimistic methods. Indeed, our optimistic algorithm, in contrast to pessimistic ones, can exploit temporal specifications to reduce the granule size and thereby minimize the degree of conflict, and it can detect all conflict cases as soon as possible. By using the end-of-transaction marker technique, it minimizes the period during which resources are locked in the validation phase. Through a formal verification, based first on serialization theory and then on the SPIN model checker, we have ensured that our algorithm operates correctly. We now proceed to its experimental evaluation against other well-known concurrency control mechanisms based on optimistic and pessimistic approaches.
  • A New Algorithm for the Containment Problem of Conjunctive Queries with Safe Negation

    Page(s): 82 - 90

    Many queries over real databases have a particular form, e.g., the negated part consists of a single literal, or the query contains just a single binary relation. For such particular classes of queries, it is useful to construct algorithms for the containment problem that are better than those for the whole class of queries. This paper addresses the problem of query containment for conjunctive queries with safe negation. A new algorithm to test containment between two queries is given, and several aspects of its time complexity are specified. From this point of view, the new algorithm proves to be better than previous ones for some classes of queries.
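For background, the classical containment test for *positive* conjunctive queries (which the paper extends to safe negation) can be sketched as a brute-force homomorphism search; the query encoding below is an illustrative assumption.

```python
from itertools import product

def contained_in(q1, q2):
    """True if q1 ⊆ q2, tested via a homomorphism from q2's atoms into q1's
    that maps q2's head variables onto q1's head variables."""
    head1, body1 = q1
    head2, body2 = q2
    vars2 = sorted({v for _, args in body2 for v in args})
    terms1 = sorted({v for _, args in body1 for v in args})
    for image in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        if [h[v] for v in head2] != list(head1):
            continue                     # head variables must be preserved
        if all((rel, tuple(h[v] for v in args)) in body1
               for rel, args in body2):
            return True                  # every mapped atom occurs in q1
    return False

# Q1(x) :- r(x, y), r(y, z)   and   Q2(x) :- r(x, y)
q1 = (("x",), {("r", ("x", "y")), ("r", ("y", "z"))})
q2 = (("x",), {("r", ("x", "y"))})
print(contained_in(q1, q2), contained_in(q2, q1))
```

With safe negation this simple test no longer suffices, which is exactly the gap the paper's algorithm targets.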
  • Can Queries Help to Validate Database Design?

    Page(s): 91 - 96

    The design of a conceptual database schema is a critical task. The more methods a conceptual database designer has for communicating with the end user, the better for the quality of the conceptual schema. This paper focuses on the question: can queries be used to check for missing concepts in a conceptual database schema? The usefulness of queries for schema checking is presented in this paper.
  • Trend-Based Similarity Search in Time-Series Data

    Page(s): 97 - 106

    In this paper, we present a novel approach to time-series similarity search. Our technique relies on trends in a curve's movement over time. A trend is characterized by a series' values moving in a certain direction (up, down, or sideways) over a given time period before changing direction. We extract trend-turning points and use them to compute the similarity of two series based on the slopes between their turning points. For the turning-point extraction, well-known techniques from financial market analysis are applied. The method supports queries of variable length and is resistant to different scalings of query and candidate sequences. It supports both subsequence searching and full-sequence matching. One particular focus of this work is to enable simple modeling of query patterns as well as efficient similarity-score updates when new data points are appended.
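The trend representation can be sketched as follows: turning points are where the series' direction flips, and each leg between them is summarized by its slope. The paper's actual extraction uses financial-market techniques, not this simple sign test.

```python
def turning_points(series):
    pts = [(0, series[0])]
    for i in range(1, len(series) - 1):
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            pts.append((i, series[i]))          # direction change
    pts.append((len(series) - 1, series[-1]))
    return pts

def slopes(pts):
    return [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

series = [1, 2, 3, 2, 1, 2, 4]                  # up, down, up
pts = turning_points(series)
print(pts)                                      # peak at i=2, trough at i=4
print(slopes(pts))
```

A similarity score between two series can then be defined over these slope sequences, and appending a new data point only affects the final leg, which is what makes incremental score updates cheap.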
  • Efficient Maintenance of k-Dominant Skyline for Frequently Updated Database

    Page(s): 107 - 110

    Skyline queries retrieve a set of skyline objects so that the user can choose promising objects among them and make further inquiries. However, a skyline query often retrieves too many objects to analyze intensively. To address this, k-dominant skyline queries have been introduced, which reduce the number of retrieved objects by relaxing the definition of dominance. Although this reduces the number of retrieved objects, k-dominant skyline objects are difficult to maintain when the database is updated. This paper addresses the maintenance of k-dominant skyline objects in a frequently updated database, and we propose an algorithm for maintaining them. Intensive experiments using real and synthetic datasets demonstrate that our method is efficient and scalable.
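The relaxed dominance test can be sketched directly from its definition (minimization and the toy data points are assumptions for illustration): p k-dominates q if p is no worse than q in some k dimensions and strictly better in at least one of them.

```python
from itertools import combinations

def k_dominates(p, q, k):
    return any(all(p[i] <= q[i] for i in dims) and
               any(p[i] < q[i] for i in dims)
               for dims in combinations(range(len(p)), k))

def k_skyline(points, k):
    return [p for p in points
            if not any(k_dominates(q, p, k) for q in points if q != p)]

pts = [(1, 9, 3), (2, 2, 4), (3, 3, 1), (5, 5, 5)]
print(k_skyline(pts, k=3))   # k = d gives the ordinary skyline
print(k_skyline(pts, k=2))   # relaxed dominance prunes more points
```

Note that with k=2 every point in this toy set is pruned by some other point: relaxed dominance is not transitive and can be cyclic, which is part of what makes maintaining the k-dominant skyline under frequent updates nontrivial.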