
2008 IEEE International Workshop on Semantic Computing and Applications (IWSCA '08)

Date: 10-11 July 2008


Displaying Results 1 - 25 of 42
  • [Front cover]

    Page(s): C1
    PDF (28 KB) | Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (55 KB) | Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (59 KB) | Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (108 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): v - vii
    PDF (78 KB) | Freely Available from IEEE
  • Message from General Chairs

    Page(s): viii
    PDF (45 KB) | Freely Available from IEEE
  • Program Committee

    Page(s): ix
    PDF (64 KB) | Freely Available from IEEE
  • Acknowledgements

    Page(s): x
    PDF (43 KB) | Freely Available from IEEE
  • Ontological Approach to Integration of Event-Centric Logistics Information into EPC Network

    Page(s): 1 - 8
    PDF (384 KB) | HTML

    As business organizations have grown larger and more complex, collaboration among business partners and enterprise information integration have become increasingly common. Numerous integration approaches to semantic collaboration have been proposed, and over the past few decades ontology-based integration has been the main focus, since ontologies are among the most widely used methods of semantic representation. Because logistics underpins business networks and logistics data is stored in distributed information systems, semantic information integration is required among logistics partners. In the present study, we address an ontological approach to the integration of event-centric logistics information into EPC-based logistics: we formalize logistics and logistics events, generalize and extend a previously developed logistics ontology, and introduce an ontological method for integrating distributed logistics information into the EPC network.
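
    A minimal sketch of the kind of event-centric logistics statement such an integration targets, assuming hypothetical class and property names (LogisticsEvent, hasEPC, occurredAt, atLocation) rather than the paper's ontology; the EPC value is the standard GS1 example identifier.

```python
# Illustrative only: class/property names and the sample EPC are assumptions,
# not the ontology developed in the paper.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import XSD

LOG = Namespace("http://example.org/logistics#")
g = Graph()
g.bind("log", LOG)

# Ontology-level statements: a small logistics event class hierarchy.
g.add((LOG.LogisticsEvent, RDF.type, RDFS.Class))
g.add((LOG.ShippingEvent, RDFS.subClassOf, LOG.LogisticsEvent))

# One event instance tying an EPC to a time and a location.
evt = LOG["event-001"]
g.add((evt, RDF.type, LOG.ShippingEvent))
g.add((evt, LOG.hasEPC, Literal("urn:epc:id:sgtin:0614141.107346.2017")))
g.add((evt, LOG.occurredAt, Literal("2008-07-10T09:30:00", datatype=XSD.dateTime)))
g.add((evt, LOG.atLocation, LOG["warehouse-busan"]))

# Partners can merge such graphs and query them uniformly with SPARQL.
q = "SELECT ?epc ?loc WHERE { ?e a log:ShippingEvent ; log:hasEPC ?epc ; log:atLocation ?loc }"
for row in g.query(q, initNs={"log": LOG}):
    print(row.epc, row.loc)
```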

  • Change Management in Semantic Engineering Design Environment

    Page(s): 9 - 11
    PDF (226 KB) | HTML

    A concurrent engineering design environment has been devised based on Semantic Web technologies. The domain knowledge of design objects is expressed logically in the OWL ontology language, and design rules for change management are encoded as ECA (event-condition-action) rules. The logical expressions in the condition clauses of the ECA rules are evaluated by reasoning over the OWL domain knowledge. Engineering design data are characterized by their large volume and complicated relationships; in this paper, we use OWL properties to model those relationships among design data.
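
    A minimal sketch of how an ECA rule might fire, assuming a toy fact base in place of OWL reasoning; the event name, facts, and action are illustrative, not the paper's environment.

```python
# Illustrative ECA (event-condition-action) rule firing; the fact base stands
# in for the OWL domain knowledge that the paper reasons over.
from dataclasses import dataclass
from typing import Callable, Dict, List, Set, Tuple

Facts = Dict[str, Set[Tuple[str, str]]]   # e.g. {"dependsOn": {("gearbox", "shaft")}}

@dataclass
class ECARule:
    event: str                          # event type that triggers the rule
    condition: Callable[[Facts], bool]  # evaluated against the domain knowledge
    action: Callable[[str], None]       # executed when the condition holds

def notify(part: str) -> None:
    print(f"notify designers affected by the change to '{part}'")

rules: List[ECARule] = [
    ECARule(
        event="PartRevised",
        condition=lambda kb: ("gearbox", "shaft") in kb["dependsOn"],
        action=notify,
    )
]

def on_event(event: str, part: str, kb: Facts) -> None:
    # Fire every rule whose event matches and whose condition holds.
    for rule in rules:
        if rule.event == event and rule.condition(kb):
            rule.action(part)

knowledge: Facts = {"dependsOn": {("gearbox", "shaft")}}
on_event("PartRevised", "shaft", knowledge)
```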

  • Enhanced Search Method for Ontology Classification

    Page(s): 12 - 18
    PDF (450 KB) | HTML

    The Web Ontology Language (OWL) has become a W3C recommendation for publishing and sharing ontologies on the Semantic Web. To derive information implicit in an OWL ontology (classification, satisfiability, and realization), a number of OWL reasoners have been introduced, and most of them use both top-down and bottom-up search for ontology classification. In this paper, we propose an enhanced method for optimizing the classification process of ontology reasoning. One goal of this paper is to provide a usable algorithm for future implementers of reasoning systems, since building the best-performing optimization into an ontology reasoning system greatly enhances its efficiency. Our work focuses on two key aspects. First, we describe the classical methods for ontology classification: because the subsumption tests used to classify an ontology are costly, it is important that the classification process use as few tests as possible. We then present the enhanced method and evaluate its effect on four different types of test ontology. In our experiments, the enhanced search method improved performance by roughly 30% compared with the classical method.
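
    A minimal sketch of top-down classification, assuming a toy subsumption oracle; the paper's enhanced search strategy is not reproduced here. The point is that whole subtrees are pruned whenever a subsumption test fails, which keeps the number of expensive tests small.

```python
# Illustrative top-down placement of a new concept under its most specific
# subsumers; `subsumes(a, b)` is an assumed stand-in for the costly test.
from typing import Dict, List, Set

taxonomy: Dict[str, List[str]] = {          # parent -> direct children
    "Thing": ["Vehicle", "Person"],
    "Vehicle": ["Car", "Truck"],
    "Car": [], "Truck": [], "Person": [],
}

# Toy subsumption oracle: in a real reasoner this is the expensive operation.
_extension = {
    "Thing": {"Vehicle", "Car", "Truck", "Person", "SportsCar"},
    "Vehicle": {"Car", "Truck", "SportsCar"},
    "Car": {"SportsCar"}, "Truck": set(), "Person": set(),
}
def subsumes(parent: str, new: str) -> bool:
    return new in _extension.get(parent, set())

def most_specific_subsumers(new: str, node: str = "Thing") -> Set[str]:
    # Descend only into children that subsume `new`; if none do,
    # `node` itself is a most specific subsumer.
    narrower = [c for c in taxonomy[node] if subsumes(c, new)]
    if not narrower:
        return {node}
    result: Set[str] = set()
    for child in narrower:
        result |= most_specific_subsumers(new, child)
    return result

print(most_specific_subsumers("SportsCar"))   # -> {'Car'}
```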

  • An Approach to Ontology-Based Semantic Integration for PLM Object

    Page(s): 19 - 26
    PDF (713 KB) | HTML

    In this paper, in order to integrate data on car parts, we model the part information managed by a PDM system. The car-part ontologies are integrated into a single car ontology by applying existing ontology mapping research. We then propose a method for the semantic integration of MEMPHIS PLM objects based on the integrated ontology, and introduce a C# ontology model so that existing C# applications can work with the ontology. We also classify ontology integration into three types and explain each with examples. Finally, in the course of semantically integrating PLM objects over the integrated ontology, we explain why PLM object types need to change and describe the type-change process with examples.

  • OntoSonomy: Ontology-Based Extension of Folksonomy

    Page(s): 27 - 32
    PDF (1084 KB) | HTML

    While collaborative tagging is attractive because it offers an easy, simple way to tag and search, tag-based search suffers from the lack of semantic descriptions of content. Semantic Web technologies provide a new way of annotating and retrieving images. We address the problems of both folksonomy and Semantic Web annotation systems, for image retrieval in particular, and argue that retrieval can be improved by giving meaning to users' tags. This paper considers using both a minimized domain-specific ontology and a generic ontology, and shows how such less detailed ontologies can help end users annotate and search through a simple user interface. Our prototype system can describe the semantics of an image's content, and no extra up-front work is needed to create instances of the ontology for later annotations.

  • CE-RSS for Reactive Validation of Design Rules

    Page(s): 33 - 35
    PDF (241 KB) | HTML

    In cooperative engineering design, various events arise from individual design units. To manage design updates among collaborators effectively, we use a publish/subscribe mechanism; existing publish/subscribe systems, however, offer only simple subscription mechanisms. In this paper, we introduce a concurrent engineering environment in which publish/subscribe relationships can be established based on the product structure, and information about design changes is encoded in a feed format using the proposed CE-RSS.
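
    A minimal sketch of product-structure-based publish/subscribe, assuming hypothetical part names and a simple subscription rule; the feed format and routing here are illustrative, not CE-RSS itself.

```python
# Illustrative routing: subscribing to an assembly implicitly covers every
# part in its subtree, so a change to a part reaches subscribers of any
# enclosing assembly as a feed entry.
from typing import Dict, List

product_structure: Dict[str, List[str]] = {   # assembly -> direct sub-parts
    "car": ["powertrain", "body"],
    "powertrain": ["engine", "gearbox"],
    "body": ["door"],
    "engine": [], "gearbox": [], "door": [],
}
subscriptions: Dict[str, List[str]] = {       # node -> subscribed designers
    "powertrain": ["alice"], "door": ["bob"],
}

def ancestors(node: str) -> List[str]:
    result: List[str] = []
    for parent, children in product_structure.items():
        if node in children:
            result.append(parent)
            result.extend(ancestors(parent))
    return result

def publish(changed_part: str, summary: str) -> None:
    # Deliver the change to subscribers of the part and of any enclosing assembly.
    for node in [changed_part] + ancestors(changed_part):
        for designer in subscriptions.get(node, []):
            print(f"feed entry for {designer}: {changed_part} changed ({summary})")

publish("gearbox", "gear ratio revised")   # -> delivered to alice via 'powertrain'
```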

  • A Study on the Methods to Establish Construction Project Information Metadata Using UN/CEFACT Standard

    Page(s): 36 - 42
    PDF (152 KB) | HTML

    Because existing metadata define the structure of information with non-unified semantics, syntax, and representation methods, they limit the ability to search for exact information. To address this problem, this study developed metadata for the construction project information most frequently used in the construction industry, for the first time in Korea, by following the information standard development guidelines of UN/CEFACT. In addition, metadata development guidelines were formulated to ensure compatibility with metadata developed in the future. The results of this study can be used to express accurately the semantic relationship between essential meanings and their representations, such as in RDF (Resource Description Framework), when building an information system for the asset management of public facilities.

  • Optimization and Load Balancing of the Semantic Tagging and Searching System

    Page(s): 43 - 50
    PDF (196 KB) | HTML

    In the Web 2.0 age, bloggers have become major Web content providers, and the volume of blogs continues to grow rapidly. Keyword-based tags are widely used to classify submitted blog posts and to support search, but because this approach is not semantic, it yields suboptimal classification and less satisfactory search results. To address this problem, a system called STSS was previously developed to help users build semantic tags for their blogs and retrieve more relevant results. Tests of STSS demonstrated that the semantic approach produced more accurate content classification and more relevant search results, although there were performance issues. In this paper, we describe our efforts to improve STSS through optimization and caching, and our test results show that the new approach outperforms the previous one.

  • Semantic Programming of Web-Enabled Database Applications

    Page(s): 51 - 60
    PDF (319 KB) | HTML

    This paper presents SOBL, a declarative programming language for data-intensive Web applications. A SOBL program separates the application data from the composition and navigation of the UI data. Static Web pages are generated automatically from the UI requirements, and an executable behavior specification is derived automatically from the behavior requirements expressed in SOBL. SOBL represents behavior requirements that involve a series of actions in a specific order, expressed as compositions of control structures and triggers, and it helps non-technical people describe system scenarios without knowledge of the technical details.

  • List Based Matching Algorithm for Classifying News Articles in NewsPage.com

    Page(s): 61 - 65
    PDF (150 KB) | HTML

    This research proposes an alternative to machine-learning-based approaches for categorizing news articles given as plain text. To use a machine-learning approach for this task, documents must be encoded into numerical vectors, which causes two problems: huge dimensionality and sparse distribution. The proposed approach addresses both problems by encoding a document, or a set of documents, into a table instead of a numerical vector. The goal of this research is therefore to improve the performance of text categorization by avoiding these two problems.
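
    A minimal sketch of table-based matching, assuming a word-frequency table and an overlap score; the paper's exact list/table encoding and scoring are not specified here.

```python
# Illustrative table-based categorization: each category is represented by a
# table of word counts built from its training articles, and a new article is
# assigned to the category whose table it overlaps most.
from collections import Counter
from typing import Dict, List

def to_table(text: str) -> Counter:
    # Encode a document as a word-frequency table, not a fixed-length vector.
    return Counter(text.lower().split())

def build_category_tables(train: Dict[str, List[str]]) -> Dict[str, Counter]:
    tables: Dict[str, Counter] = {}
    for category, docs in train.items():
        table: Counter = Counter()
        for doc in docs:
            table.update(to_table(doc))
        tables[category] = table
    return tables

def classify(text: str, tables: Dict[str, Counter]) -> str:
    doc = to_table(text)
    # Score = sum of shared word counts between the document and category tables.
    def score(cat: str) -> int:
        return sum(min(doc[w], tables[cat][w]) for w in doc)
    return max(tables, key=score)

train = {
    "business": ["stocks rise as markets rally", "bank reports record profit"],
    "sports": ["team wins the final match", "player scores twice in win"],
}
tables = build_category_tables(train)
print(classify("markets fall as bank stocks slide", tables))  # -> business
```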

  • Table Based Single Pass Algorithm for Clustering Electronic Documents in 20NewsGroups

    Page(s): 66 - 71
    PDF (153 KB) | HTML

    This research proposes a modified version of the single pass algorithm specialized for text clustering. Encoding documents into numerical vectors, as the traditional single pass algorithm requires, causes two main problems: huge dimensionality and sparse distribution. To address them, this research modifies the algorithm so that documents are encoded into an alternative form rather than numerical vectors: documents are mapped into tables, and the similarity of two documents is computed by comparing their tables. The goal of this research is to improve the performance of the single pass algorithm for text clustering through this specialized version.
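
    A minimal sketch of single-pass clustering over table-encoded documents; the similarity measure and threshold are assumptions, not the paper's.

```python
# Illustrative single pass: each document is compared with existing cluster
# tables and joins the closest cluster if the similarity exceeds a threshold,
# otherwise it starts a new cluster.
from collections import Counter
from typing import List

def to_table(text: str) -> Counter:
    return Counter(text.lower().split())

def table_similarity(a: Counter, b: Counter) -> float:
    # Shared word counts, normalized by the smaller table size.
    shared = sum(min(a[w], b[w]) for w in a)
    return shared / max(1, min(sum(a.values()), sum(b.values())))

def single_pass(docs: List[str], threshold: float = 0.3) -> List[List[str]]:
    clusters: List[List[str]] = []       # member documents per cluster
    tables: List[Counter] = []           # running table per cluster
    for doc in docs:                     # one pass over the document stream
        table = to_table(doc)
        scores = [table_similarity(table, t) for t in tables]
        if scores and max(scores) >= threshold:
            best = scores.index(max(scores))
            clusters[best].append(doc)
            tables[best].update(table)   # merge the document into the cluster table
        else:
            clusters.append([doc])
            tables.append(table)
    return clusters

print(single_pass([
    "semantic web ontology reasoning",
    "ontology reasoning and the semantic web",
    "stock markets rally on bank profits",
]))
```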

  • Automatic Personalized Summarization Using Non-negative Matrix Factorization and Relevance Measure

    Page(s): 72 - 77
    PDF (290 KB) | HTML

    In this paper, a new automatic personalized summarization method based on non-negative matrix factorization (NMF) and a relevance measure (RM) is introduced to extract meaningful sentences from documents retrieved on the Internet. The proposed method can improve the quality of personalized summaries because the inherent semantics of the documents are well reflected in the semantic features computed by NMF, and the sentences most relevant to a given query are extracted efficiently using the semantic variables derived by NMF. In addition, the method uses the relevance measure for generic summarization, so that it can select sentences covering the major topics of a document. Experimental results on Yahoo-Korea News data show that the proposed method outperforms the other methods.
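
    A minimal NMF-based extractive summarization sketch; the relevance measure and weighting scheme of the paper are not reproduced, and the scoring rule here (largest semantic variable per sentence) is an assumption.

```python
# Illustrative: factorize a term-by-sentence matrix with NMF and keep the
# sentences with the strongest semantic variables.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

sentences = [
    "The company reported record quarterly profits.",
    "Profits were driven by strong overseas sales.",
    "The CEO also announced a new research center.",
    "The research center will focus on semantic computing.",
]

# Sentence-by-term matrix A; NMF factorizes A ~ W H.
vectorizer = TfidfVectorizer(stop_words="english")
A = vectorizer.fit_transform(sentences)

nmf = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(A)          # sentence-by-topic semantic variables

# Score each sentence by its largest semantic variable and keep the top two.
scores = W.max(axis=1)
top = np.argsort(scores)[::-1][:2]
for i in sorted(top):
    print(sentences[i])
```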

  • Anomaly Detection over Clustering Multi-dimensional Transactional Audit Streams

    Page(s): 78 - 80
    PDF (131 KB) | HTML

    In anomaly detection, one important issue is how to model the normal behavior of the activities performed by a user. To extract normal behavior from a user's activities, conventional data mining techniques are widely applied to a finite audit data set, but such approaches can only model the user's static behavior within that data set. This drawback can be overcome by viewing the user's continuous activities as an audit data stream. This paper proposes an anomaly detection method that continuously models a user's normal behavior over a multi-dimensional audit data stream, where each cluster represents the frequent range of activities with respect to a set of features. As a result, new activities can be continuously reflected in the ongoing result without physically maintaining any of the user's historical activity. At the same time, various statistics of the activities associated with the identified clusters are modeled to improve the performance of anomaly detection. The proposed algorithm is analyzed through a series of experiments to identify its characteristics.
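
    A minimal streaming sketch of the idea; this is not the paper's clustering algorithm. Each cluster keeps only per-feature ranges and a count, so no raw history is stored, and an activity far from every cluster's range is flagged.

```python
# Illustrative range clusters over a stream of (hour_of_day, bytes) activities;
# feature choice, distance, and radius are assumptions.
from typing import List, Tuple

Activity = Tuple[float, ...]

class RangeCluster:
    def __init__(self, first: Activity):
        self.lo = list(first)
        self.hi = list(first)
        self.count = 1

    def distance(self, x: Activity) -> float:
        # How far x lies outside the cluster's per-feature ranges.
        return sum(max(lo - v, 0.0, v - hi) for v, lo, hi in zip(x, self.lo, self.hi))

    def absorb(self, x: Activity) -> None:
        self.lo = [min(l, v) for l, v in zip(self.lo, x)]
        self.hi = [max(h, v) for h, v in zip(self.hi, x)]
        self.count += 1

def process(stream: List[Activity], radius: float = 2.0) -> None:
    clusters: List[RangeCluster] = []
    for x in stream:
        nearest = min(clusters, key=lambda c: c.distance(x), default=None)
        if nearest is not None and nearest.distance(x) <= radius:
            nearest.absorb(x)
        else:
            if clusters:
                print("anomalous activity:", x)   # outside every frequent range
            clusters.append(RangeCluster(x))

process([(9.0, 1.2), (9.5, 1.0), (10.0, 1.5), (3.0, 40.0), (9.2, 1.1)])
```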

  • Assessment of Temperature Sensitivity Analysis and Temperature Regression Model for Predicting Seasonal Bank Load Patterns

    Page(s): 81 - 84
    PDF (213 KB) | HTML

    The aim of this paper is to investigate the potential of air-conditioning load management by solving a temperature regression model of bank load patterns and analyzing how temperature sensitivity depends on temperature change. A load survey system was used to record the load of sample banks in the Korean power system. To analyze the impact of rising temperature on the bank load data, we performed statistical polynomial regression and temperature sensitivity analysis, after first preprocessing the data to clean it. We found that weekdays are more temperature-sensitive than weekends, and that when the temperature deviates less from the main tendency, the regression model can predict the load patterns with higher accuracy.
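
    A minimal sketch of this kind of analysis; the data values and polynomial degree are assumptions, not the paper's survey data. The fitted polynomial relates load to temperature, and the temperature sensitivity is read off as its derivative.

```python
# Illustrative polynomial regression of load on temperature with numpy.
import numpy as np

# Hypothetical observations: outdoor temperature (deg C) and bank load (kW).
temperature = np.array([18.0, 20.0, 22.0, 24.0, 26.0, 28.0, 30.0, 32.0])
load        = np.array([210.0, 212.0, 220.0, 233.0, 251.0, 274.0, 301.0, 334.0])

# Second-degree polynomial regression: load ~ a*T^2 + b*T + c.
coeffs = np.polyfit(temperature, load, deg=2)
model = np.poly1d(coeffs)

# Temperature sensitivity = derivative of the fitted curve (kW per deg C).
sensitivity = model.deriv()

for t in (20.0, 30.0):
    print(f"T={t:.0f}C  predicted load={model(t):.1f} kW  sensitivity={sensitivity(t):.2f} kW/degC")
```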

  • Mining the Weighted Frequent XML Query Pattern

    Page(s): 85 - 90
    PDF (177 KB) | HTML

    XML has become a standard in many areas, such as the Internet and public documentation, so a great deal of documentation and many web sites use XML. To extract useful data from multiple XML sources, data mining algorithms for XML data need to be studied, and many techniques have been investigated to speed up query processing over XML. In this paper, we analyze XML query patterns and propose a data mining technique that extracts similar XML query patterns. The proposed method, based on a weighted FP-growth algorithm, is applied to XML query subtrees, and we compare our technique experimentally with the FP-growth and Apriori algorithms. The proposed method outperforms the other methods on repeatedly occurring queries.
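
    A minimal sketch of weighted frequent-pattern counting over query subtrees; this is a brute-force enumeration for illustration, not the FP-growth tree the paper builds, and the path steps, weights, and threshold are assumptions.

```python
# Illustrative weighted support: each recorded query is a set of path steps,
# each step has a weight, and a pattern is "frequent" if its accumulated
# weighted support crosses a threshold.
from itertools import combinations
from collections import defaultdict
from typing import Dict, FrozenSet, List, Set

queries: List[Set[str]] = [          # hypothetical XML query subtrees as path-step sets
    {"/book", "/book/title", "/book/price"},
    {"/book", "/book/title"},
    {"/book", "/book/author", "/book/price"},
]
weights: Dict[str, float] = {        # hypothetical importance weights per path step
    "/book": 0.5, "/book/title": 1.0, "/book/price": 1.5, "/book/author": 1.0,
}

def weighted_frequent(min_support: float) -> Dict[FrozenSet[str], float]:
    support: Dict[FrozenSet[str], float] = defaultdict(float)
    for q in queries:
        for size in range(1, len(q) + 1):
            for pattern in combinations(sorted(q), size):
                # Each occurrence contributes the pattern's average step weight.
                avg_w = sum(weights[p] for p in pattern) / len(pattern)
                support[frozenset(pattern)] += avg_w
    return {p: s for p, s in support.items() if s >= min_support}

for pattern, s in weighted_frequent(min_support=2.0).items():
    print(sorted(pattern), round(s, 2))
```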

  • Semantic Analysis of User Behaviors for Detecting Spam Mail

    Page(s): 91 - 95
    PDF (240 KB) | HTML

    With the continuous increase in spam, 92.6% of all recent email is known to be spam. In this research, we present an adaptive learning system that filters spam based on a user's action patterns over time. We consider the relationships between a user's actions, such as which action follows another and how long it takes, analyze how much meaning each action carries and how it affects spam filtering, and use this analysis to determine a weight for each email. In our experiments, we compare the results of the proposed system with a weighted Bayesian classifier on a real email data set. We also show how to handle personalization in the presence of concept drift and adaptive learning.
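
    A minimal sketch of action-based weighting; the action set, weights, and timing rule are assumptions, not the paper's model.

```python
# Illustrative: each recorded user action on a message contributes evidence
# toward "spam" or "ham", with very quick deletions counted more heavily, and
# the accumulated weight serves as the message score.
from typing import List, Tuple

Action = Tuple[str, float]   # (action name, seconds elapsed before the action)

# Hypothetical evidence weights: positive pushes toward spam, negative toward ham.
ACTION_WEIGHTS = {
    "delete_without_reading": 2.0,
    "mark_as_spam": 3.0,
    "reply": -3.0,
    "move_to_folder": -1.5,
    "read": -0.5,
}

def spam_score(actions: List[Action]) -> float:
    score = 0.0
    for name, elapsed in actions:
        w = ACTION_WEIGHTS.get(name, 0.0)
        if name == "delete_without_reading" and elapsed < 5.0:
            w *= 1.5                     # a very quick deletion is stronger evidence
        score += w
    return score

history = [("read", 12.0), ("delete_without_reading", 2.0)]
print("spam" if spam_score(history) > 0 else "ham", spam_score(history))
```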

  • A System for Contextual Search

    Page(s): 96 - 98
    PDF (725 KB) | HTML

    In this paper, we present a system that takes an input document and searches for semantically related documents on the Web. Relatedness is measured by two criteria: whether representative words from the input document appear in a candidate document, and whether words that describe the contextual information about the input document appear in the candidate document.
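
    A minimal sketch of combining the two criteria into one relatedness score; how the representative and context words are chosen, and how the scores are combined, are assumptions rather than the paper's method.

```python
# Illustrative two-criterion relatedness scoring.
from typing import Set

def overlap(words: Set[str], candidate: str) -> float:
    cand = set(candidate.lower().split())
    return len(words & cand) / max(1, len(words))

def relatedness(representative: Set[str], context: Set[str], candidate: str,
                alpha: float = 0.5) -> float:
    # Criterion 1: representative words of the input document found in the candidate.
    # Criterion 2: words describing the input document's context found in the candidate.
    return alpha * overlap(representative, candidate) + (1 - alpha) * overlap(context, candidate)

representative = {"ontology", "classification", "reasoner"}
context = {"semantic", "web", "owl"}
print(relatedness(representative, context,
                  "an OWL reasoner for semantic web ontology classification"))
```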
