
20th International Workshop on Database and Expert Systems Application (DEXA '09)

Date: Aug. 31 - Sept. 4, 2009

  • [Front cover]

    Publication Year: 2009, Page(s): C1
  • [Title page i]

    Publication Year: 2009, Page(s): i
  • [Title page iii]

    Publication Year: 2009, Page(s): iii
  • [Copyright notice]

    Publication Year: 2009, Page(s): iv
  • Table of contents

    Publication Year: 2009, Page(s): v - xiii
  • Conference Information

    Publication Year: 2009, Page(s): xiv - xxxvi
  • Formal Verification of Systems-on-Chip - Industrial Experiences and Scientific Perspectives

    Publication Year: 2009, Page(s): 3

    Even after years of progress in the field of formal property checking, many system designers in industry still consider simulation their most powerful and versatile instrument for verifying complex systems-on-chip (SoCs). Often, formal techniques are conceded only a minor role. At best, they are viewed as nice-to-have and may be employed in addition to simulation, e.g., for "bug hunting" in corner cases. Fortunately, in some parts of industry a paradigm shift can be observed. Verification methodologies have emerged that involve property checking comprehensively and systematically. This has led to major innovations in industrial design flows. There are more and more applications where formal property checking does not merely complement but replaces simulation. In this talk, experiences from large-scale industrial projects are reported that document this emancipation of property checking. A systematic methodology is presented as it has become established in parts of industry. Furthermore, an attempt is made to identify the bottlenecks of today's technology and to outline specific scientific challenges. While formal property checking for individual SoC modules can be considered quite mature, it is well known that there are tremendous obstacles when moving from modules to the entire system. These problems result not only from the sheer size of the system but also from the different nature of the verification problems. The presented analysis also relates to well-known abstraction approaches and to techniques for state-space approximation. More specifically, as a first step towards formal chip-level verification, the talk discusses techniques for verifying the communication structures (interfaces) between individual SoC modules. New ideas are outlined for tailoring certain abstraction techniques to a specific verification methodology so that correctness proofs become tractable even for complex SoC interfaces.

  • Clustering and Non-clustering Effects in Flash Memory Databases

    Publication Year: 2009, Page(s): 4 - 8

    Flash memory has unique characteristics: the write operation is much more costly than the read operation, and in-place updating is not allowed. In this paper, we analyze how these characteristics affect the performance of clustering and non-clustering in record management, and show that non-clustering is more suitable in a flash memory environment, which does not hold in a disk environment. We also discuss the problems of the existing non-clustering method and identify design factors to be considered for record management methods in flash memory environments.

  • Specialized Embedded DBMS: Cell Based Approach

    Publication Year: 2009, Page(s): 9 - 13

    Data management is fundamental to data-centric embedded systems, which exhibit high resource scarcity and heterogeneity. Existing data management systems are very complex and provide a multitude of functionality. Due to this complexity and their monolithic architecture, tailoring these data management systems for data-centric embedded systems is tedious, cost-intensive, and error-prone. To cope with the complexity of data management in such systems, we propose a different approach to DBMS architecture, called Cellular DBMS, that is inspired by biological systems. A Cellular DBMS is a compound of multiple simpler DBMSs, called DBMS cells, that typically provide differing functionality (e.g., persistent storage, indexes, transactions). We illustrate how the software product line approach is useful for generating different individual DBMS cells from a common set of features, and how generated atomic DBMS cells can be used together for data management on data-centric embedded systems.
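The cell idea in the abstract above can be sketched in a few lines: a DBMS cell is generated by selecting only the features a deployment needs from a common feature set. All feature names below are illustrative assumptions, not taken from the paper's actual product line.

```python
# Hedged sketch of feature-based DBMS cell generation. A "cell" carries
# only the features it was composed from; anything else is rejected.

FEATURES = {
    "storage": "persistent storage engine",
    "index": "B-tree index",
    "txn": "transaction support",
}

class DBMSCell:
    def __init__(self, *features):
        # Validate against the common feature set the product line offers.
        unknown = set(features) - FEATURES.keys()
        if unknown:
            raise ValueError(f"unknown features: {unknown}")
        self.features = set(features)

    def supports(self, feature):
        return feature in self.features

# Two specialized cells generated from the same common feature set:
storage_cell = DBMSCell("storage")
indexed_cell = DBMSCell("storage", "index")
```

The point of the sketch is only the composition discipline: each cell stays small because it includes nothing outside its declared feature set.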

  • Reliable Group Communication for Dynamic and Resource-Constrained Environments

    Publication Year: 2009, Page(s): 14 - 18

    In the field of embedded systems there is a tendency towards distributed mobile environments relying on wireless networking. These inherently dynamic environments present new challenges for system reliability. One basic building block for reliability support at the software level is reliable group communication. However, traditional implementations of such a service are unsuitable for wireless and embedded environments, in which bandwidth and power-consumption limitations exist. In this paper we present a group communication service optimized for mobile ad-hoc networks which provides reliable communication for a subset of stable nodes. These stable nodes can then use reliable communication to implement higher-level reliability services such as replication.

  • Pattern Recognition with Embedded Systems Technology: A Survey

    Publication Year: 2009, Page(s): 19

    Pattern Recognition (PR) tasks are natural candidates for embedded systems, since they usually interact with humans and other complex processes in the real world. Often regarded as the part of Artificial Intelligence (AI) closest to perception, a typical PR application reacts to external events that the system perceives through physical sensors or input devices and produces a response using actuators or information display subsystems. Being usually far from trivial, very demanding from the computational point of view, and requiring a fast reaction time, PR algorithms constitute a real challenge to the embedded system designer. In this talk, some of the main application domains and optimization approaches proposed to deal with these issues are presented, along with many open problems and paths to improvement.

  • Acceleration of RSA Cryptographic Operations Using FPGA Technology

    Publication Year: 2009, Page(s): 20 - 25

    In embedded systems, RSA cryptographic operations are sometimes hard to compute within given time constraints. In the following, we present an approach to increasing processing power for cryptographic operations using FPGA (Field Programmable Gate Array) technology. The FPGA, which is present in many designs anyway, computes parts of the operations, allowing the embedded processor to perform concurrent calculations. We take a closer look at RSA, an example of a time-consuming asymmetric cryptographic algorithm. We will see that multiplication and squaring are the basic operations of a modern RSA implementation and thus have to be computed efficiently. We implement those basic operations on an FPGA, which computes them faster than the processor.
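As a minimal software illustration of why multiplication and squaring dominate RSA cost, consider square-and-multiply modular exponentiation: its inner loop consists of nothing but modular squarings and multiplications, the two primitives the paper offloads to the FPGA. This sketch uses toy parameters and is not the authors' hardware design.

```python
# Square-and-multiply modular exponentiation. mod_mul is the candidate
# for hardware acceleration (e.g., via Montgomery multiplication).

def mod_mul(a, b, n):
    return (a * b) % n

def mod_exp(base, exp, n):
    """Compute base**exp mod n, scanning exponent bits MSB-first."""
    result = 1
    for bit in bin(exp)[2:]:
        result = mod_mul(result, result, n)    # squaring step
        if bit == '1':
            result = mod_mul(result, base, n)  # multiplication step
    return result

# Toy RSA round-trip with tiny primes p=61, q=53 (n=3233, e=17, d=2753).
n, e, d = 3233, 17, 2753
cipher = mod_exp(42, e, n)
plain = mod_exp(cipher, d, n)   # recovers 42
```

With real 1024-bit or larger moduli, virtually all of the runtime is spent inside mod_mul, which is why accelerating just that primitive pays off.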

  • Towards Data-Centric Security in Ubiquitous Computing Environments

    Publication Year: 2009, Page(s): 26 - 30

    The vision of data-centric security promises to enable efficient security in future ubiquitous computing environments, which are heavily pervaded with embedded devices and generally too complex to manage manually. We survey existing work from the various areas needed for data-centric security, point out their relationships, and comment on their applicability in these future environments. Furthermore, we present two concepts that explicitly employ the distinct UbiComp characteristics to foster data-centric security.

  • Collaborative Reputation-based Voice Spam Filtering

    Publication Year: 2009, Page(s): 33 - 37
    Cited by: Papers (2)

    We propose a collaborative reputation-based voice spam filtering framework. Our approach uses the cumulative online duration of a VoIP user to derive that user's reputation value, and we leverage user feedback to mark unsolicited calls. For each unwanted call, our voice spam filter charges the caller one reputation point and transfers that point to the callee. To spare VoIP users from manually labeling nuisance calls, our voice spam filter automatically marks all VoIP calls with short call durations as unsolicited. Preliminary simulation results show that our approach is effective in countering voice spam.
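The reputation-transfer rule described above can be sketched as follows. The seeding of reputation from online duration, the 10-second auto-labeling threshold, and all names are illustrative assumptions, not details taken from the paper.

```python
# Sketch of reputation-based voice spam filtering: each call marked as
# unwanted (by the callee, or automatically because it was very short)
# moves one reputation point from the caller to the callee.

SHORT_CALL_SECONDS = 10  # assumed auto-labeling threshold

class ReputationFilter:
    def __init__(self):
        self.reputation = {}  # user -> reputation points

    def register(self, user, online_hours):
        # Cumulative online duration seeds the initial reputation value.
        self.reputation[user] = online_hours

    def record_call(self, caller, callee, duration_seconds,
                    user_marked_spam=False):
        spam = user_marked_spam or duration_seconds < SHORT_CALL_SECONDS
        if spam:
            # Charge the caller one point and transfer it to the callee.
            self.reputation[caller] -= 1
            self.reputation[callee] += 1
        return spam

    def is_suspect(self, caller, threshold=0):
        return self.reputation.get(caller, 0) <= threshold
```

A spammer who places many short, unwanted calls quickly drains their reputation, while victims accumulate points, making the two populations separable.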

  • A Multi-Agent Approach to Testing Anti-Spam Software

    Publication Year: 2009, Page(s): 38 - 42

    SpamTestSim is a multi-agent based simulator in which agents represent spammers and legitimate email users. The purpose of SpamTestSim is to provide a realistic environment for testing anti-spam software. The agents send and receive mail (both ham and spam) and create and maintain contact lists. We demonstrate the simulation by testing four well-known spam filters.

  • Mail-Shake

    Publication Year: 2009, Page(s): 43 - 47

    Many different methods to mitigate spam on the Internet have been proposed. However, the most promising ones require fundamental changes to the mail protocol itself. Other methods are based on filtering, but still require the end user to verify the results. We propose a different approach that requires email senders to complete a kind of handshake before sending an initial email to a new contact. Our method, called Mail-Shake, is based on two facts. First, spammers need valid email addresses to deliver their spam to. Second, spammers do not require real inboxes for their sender addresses, as replies are not expected. This allows complete automation of the spamming process, sending email at almost no cost. If we can decrease the number of valid email addresses a spammer can collect and increase the cost of sending email, spamming becomes unattractive, as the effort is too high relative to the gain.
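A minimal sketch of a Mail-Shake-style handshake, under the assumption that the receiving server holds mail from unknown senders, sends a challenge back to the claimed sender address, and releases the held mail only on a valid reply; class and method names are hypothetical, not the paper's protocol specification.

```python
# Spammers who forge sender addresses never receive the challenge, so
# their mail is never released -- which is the cost asymmetry Mail-Shake
# exploits.

import secrets

class MailShakeServer:
    def __init__(self):
        self.known_senders = set()
        self.pending = {}   # challenge token -> held message
        self.inbox = []

    def receive(self, sender, body):
        if sender in self.known_senders:
            self.inbox.append((sender, body))
            return None                      # delivered directly
        token = secrets.token_hex(8)
        self.pending[token] = (sender, body)
        return token                         # challenge sent to 'sender'

    def confirm(self, token):
        # A reply to the challenge proves the sender address has a real inbox.
        if token not in self.pending:
            return False
        sender, body = self.pending.pop(token)
        self.known_senders.add(sender)
        self.inbox.append((sender, body))
        return True
```

After one successful handshake the sender is whitelisted, so legitimate correspondents pay the cost only once per new contact.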

  • Trustnet Architecture for E-mail Communication

    Publication Year: 2009, Page(s): 48 - 52
    Cited by: Papers (3)

    In this paper we discuss a new architecture to reduce unsolicited e-mail messages. We propose a system architecture that introduces two classes of messages: trusted e-mail and e-mail from untrusted sources. Trusted e-mail messages are signed with an S/MIME signature. To address usability problems that occurred previously with S/MIME signatures, outgoing e-mail messages are automatically signed on the e-mail server without any user interaction. Validation of the signature by the receiving server classifies the message as either trusted or untrusted, which enables the receiver to employ additional security checks for untrusted messages or to omit these checks for trusted messages. A comparison of the proposed system with a common setup using spam and anti-virus filtering shows that the trustnet architecture not only reduces processing time but also significantly reduces the amount of data transferred.

  • A Biologically Inspired Method of SPAM Detection

    Publication Year: 2009, Page(s): 53 - 56
    Cited by: Papers (1)

    Many traditional SPAM filters work by analyzing the content of each email message in turn against a set of rules used to measure the spamminess of the message. Unfortunately, because spammers have access to these rules, the content of SPAM messages continually changes to evade detection. This is similar to the difficulties the immune system faces in identifying and clearing the Human Immunodeficiency Virus (HIV). Intriguingly, some individuals are resistant to HIV. We explore the parallels between HIV and SPAM in order to derive a method of identifying SPAM that transcends the polymorphic nature of the SPAM message body. The proposed method is based on the group behavior of SPAM messages, rather than on the content of an individual SPAM message. We are in the process of implementing a SPAM filter that uses the proposed method.

  • Utilizing Semantic Web Technologies for Efficient Data Lineage and Impact Analyses in Data Warehouse Environments

    Publication Year: 2009, Page(s): 59 - 63

    Data warehouse (DWH) systems play an important role in the IT landscapes of today's enterprises. In order to cope with complex transformation processes and large data volumes, they have to be supplemented by metadata. Two main uses of such metadata are data lineage analyses, which allow data elements within the DWH to be traced back to their origin, and impact analyses, which reveal which other elements may be affected by planned modifications to certain data elements. As these types of queries operate on graphs (data elements being transformed through the different DWH layers), Semantic Web technologies should provide a perfect toolset. However, today's metadata management tools use proprietary or regular SQL storage and querying mechanisms. Hence, this paper reviews the applicability of Semantic Web technologies for performing efficient data lineage and impact analyses in DWH environments.

  • Personalized Handling of Semantic Data with MIG

    Publication Year: 2009, Page(s): 64 - 68

    This paper presents a semantically enabled Web application named MIG, used to create user profiles, which enhances accessibility by allowing the creation of a user interface adapted to the user's needs, preferences, and device. This approach exploits Semantic Web technologies and the infrastructure and applications created in previous work.

  • Temporal Semantics Extraction for Improving Web Search

    Publication Year: 2009, Page(s): 69 - 73

    Current Web search engines can be improved through techniques that consider a temporal dimension in both query formulation and information extraction. Accurate recognition of temporal expressions in data sources is required, as well as extraction of the embedded semantic meta-information concerning time into a standard format that allows reasoning without ambiguity. Temporal Web searching thus represents an interesting advance on the way to the Semantic Web. We propose a temporal expression recognition and normalization (TERN) system for contents in Spanish, which has been integrated into a Web search engine prototype. The contribution of this system lies in its capability of taking into account the semantics of time in user queries as well as in the searched collections, independently of its representation. Evaluation figures show that adding temporal information management capabilities to Web searching yields considerable performance improvements.
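In the spirit of the TERN system described above (though vastly simpler), temporal-expression recognition and normalization can be illustrated with a single regular expression that rewrites explicit Spanish dates into the unambiguous ISO-8601 form such a "standard format" could use; the pattern below is an assumption for illustration only, not the paper's grammar.

```python
# Toy recognizer/normalizer for explicit Spanish dates of the form
# "4 de septiembre de 2009" -> "2009-09-04".

import re

MONTHS = {"enero": 1, "febrero": 2, "marzo": 3, "abril": 4, "mayo": 5,
          "junio": 6, "julio": 7, "agosto": 8, "septiembre": 9,
          "octubre": 10, "noviembre": 11, "diciembre": 12}

DATE_RE = re.compile(r"\b(\d{1,2}) de (" + "|".join(MONTHS) + r") de (\d{4})\b")

def normalize_dates(text):
    """Replace explicit Spanish dates with their ISO-8601 equivalents."""
    def to_iso(m):
        day, month, year = int(m.group(1)), MONTHS[m.group(2)], int(m.group(3))
        return f"{year:04d}-{month:02d}-{day:02d}"
    return DATE_RE.sub(to_iso, text)
```

A real system additionally resolves relative and underspecified expressions ("ayer", "el próximo mayo"), which requires an anchor date and context; this sketch covers only fully explicit dates.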

  • Robust Service-Based Semantic Querying to Distributed Heterogeneous Databases

    Publication Year: 2009, Page(s): 74 - 78

    The amount of semantic data on the Web has increased exponentially in recent years. One of the main reasons for this is the use of RDB2RDF systems, which generate RDF data from relational databases. Most of the work on these systems has focused on increasing the expressivity of query languages, analyzing their coverage over databases, and generating sound, complete, and efficient query plans. However, there are still important problems associated with them, especially in terms of robustness and the handling of data distribution. In this paper we describe how we have integrated one RDB2RDF system (ODEMapster) with OGSA-DAI in order to overcome these problems.

  • RDFStats - An Extensible RDF Statistics Generator and Library

    Publication Year: 2009, Page(s): 79 - 83
    Cited by: Papers (2)

    In this paper we introduce RDFStats, a generator for statistics about RDF sources such as SPARQL endpoints and RDF documents. RDFStats not only provides a statistics generator but also a powerful API for persisting and accessing statistics, including several estimation functions that support SPARQL filter-like expressions. For many Semantic Web applications, such as the Semantic Web Integrator and Query Engine (SemWIQ) currently being developed at the University of Linz, detailed statistics about the contents of RDF data sources are very important. RDFStats has been primarily designed and implemented for the SemWIQ federator and optimizer, but it can also be used in other applications such as linked data browsers, aggregators, or visualization tools. It is based on the popular Semantic Web framework Jena, developed by HP Labs Bristol, and can easily be extended and integrated into other applications.

  • Matching Semantic Web Resources

    Publication Year: 2009, Page(s): 84 - 88

    In this paper, we propose knowledge-chunk-based techniques for instance matching and mapping discovery of Semantic Web resources. Knowledge chunks provide a synthetic representation of the semantic descriptions of individuals, with the aim of improving matching efficiency even in the presence of rich and articulated RDF/OWL resource descriptions for the domain. The main implementation choices in the HMatch 2.0 system and the evaluation results obtained on a corpus of Web resource descriptions in the athletics domain are also presented.

  • DSNotify - Detecting and Fixing Broken Links in Linked Data Sets

    Publication Year: 2009, Page(s): 89 - 93
    Cited by: Papers (2)

    The Linking Open Data (LOD) initiative has motivated numerous institutions to publish their data on the Web and to interlink them with those of other data sources. But since LOD sources are subject to change, links between resources can break and lead to processing errors in applications that consume linked data. The current practice is to ignore this problem and leave it to applications to decide what to do when broken links are detected. We believe, however, that LOD data sources should provide the highest possible degree of link integrity in order to relieve applications of this issue, similar to databases providing mechanisms to preserve referential integrity in their data. As a possible solution, we propose DSNotify, an add-on for LOD sources that detects broken links and assists the data source in fixing them, e.g., when resources have been moved to other Web locations.
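The detection half of a DSNotify-style add-on can be sketched as a periodic probe of link targets. The status-fetching function is injected here so the sketch stays testable without network access; all names are illustrative, and a real tool would issue HTTP requests and feed the flagged links into a fixing step.

```python
# Flag outgoing links whose target URI no longer resolves, so the data
# source can attempt a fix (e.g., locating the moved resource).

def find_broken_links(links, fetch_status):
    """links: iterable of (source_uri, target_uri) pairs.
    fetch_status: callable returning an HTTP status code for a URI.
    Returns the sublist of links whose target is gone or errors out."""
    broken = []
    for source, target in links:
        status = fetch_status(target)
        if status >= 400:            # 404 Not Found, 410 Gone, 5xx errors
            broken.append((source, target))
    return broken
```

Injecting fetch_status also lets the same logic run against cached crawl results, which matters when probing thousands of LOD targets on a schedule.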
