
International Conference on Semantic Computing (ICSC 2007)

Date: 17-19 Sept. 2007


Displaying Results 1 - 25 of 109
  • International Conference on Semantic Computing - Cover

    Publication Year: 2007, Page(s): c1
    PDF (271 KB) | Freely Available from IEEE
  • International Conference on Semantic Computing - Title

    Publication Year: 2007, Page(s): i - iii
    PDF (50 KB) | Freely Available from IEEE
  • International Conference on Semantic Computing - Copyright

    Publication Year: 2007, Page(s): iv
    PDF (60 KB) | Freely Available from IEEE
  • International Conference on Semantic Computing - TOC

    Publication Year: 2007, Page(s): v - xiii
    PDF (66 KB) | Freely Available from IEEE
  • Message from General Chairs

    Publication Year: 2007
    PDF (27 KB) | HTML | Freely Available from IEEE
  • Greetings from the Technical Program Committee Chairs

    Publication Year: 2007, Page(s): xv - xvi
    PDF (35 KB) | HTML | Freely Available from IEEE
  • Organization

    Publication Year: 2007
    PDF (34 KB) | Freely Available from IEEE
  • Steering Committee

    Publication Year: 2007, Page(s): xx
    PDF (25 KB) | Freely Available from IEEE
  • Technical Program Committee

    Publication Year: 2007, Page(s): xxi
    PDF (42 KB) | Freely Available from IEEE
  • Robust Classification of Dialog Acts from the Transcription of Utterances

    Publication Year: 2007, Page(s): 3 - 10
    PDF (496 KB)

    This paper presents a robust classification of dialog acts from text utterances. Two feature types, namely bag-of-words and syntactic relationships among words, were used to extract discourse-level features from the transcripts of utterances. Subsequently, a number of feature-mining methods were used to identify the most relevant features and their roles in classifying dialog acts. The selected features are used to learn the underlying models of dialog acts with a number of existing machine learning algorithms from the WEKA toolbox. Empirical analyses using the HCRC Map Task Corpus dialog data were conducted to evaluate the performance of the proposed approach.
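A minimal sketch of the pipeline this abstract describes, with bag-of-words features feeding an off-the-shelf learner. scikit-learn stands in for the WEKA toolbox here, and the utterances and dialog-act labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented utterances with toy dialog-act labels (the paper evaluates
# on the HCRC Map Task Corpus; these stand-ins are illustrative only).
utterances = [
    "go left at the lake", "turn right past the mill",
    "do I pass the swamp", "is the bridge on my left",
    "okay", "right, got it",
]
labels = ["instruct", "instruct", "query", "query", "ack", "ack"]

# Bag-of-words features, one of the two feature types the paper uses.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(utterances)

# Any off-the-shelf learner can consume the features; naive Bayes is a
# common baseline for text classification.
classifier = MultinomialNB()
classifier.fit(X, labels)

predicted = classifier.predict(vectorizer.transform(["turn left at the mill"]))[0]
```

The paper additionally mines syntactic relationships among words and selects the most relevant features before training; those steps are omitted from this sketch.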
  • Timelines from Text: Identification of Syntactic Temporal Relations

    Publication Year: 2007, Page(s): 11 - 18
    Cited by: Papers (1)
    PDF (388 KB) | HTML

    We propose and evaluate a linguistically motivated approach to extracting the temporal structure necessary to build a timeline. We considered pairs of events in a verb-clause construction, where the first event is a verb and the second event is the head of a clausal argument to that verb. We selected all pairs of events in the TimeBank that participated in verb-clause constructions and annotated them with the labels before, overlap, and after. The resulting corpus of 895 event-event temporal relations was then used to train a machine learning model. Using a combination of event-level features like tense and aspect with syntax-level features like paths through the syntactic tree, we were able to train a support vector machine (SVM) model that could identify new temporal relations with 89.2% accuracy. High-accuracy models like these are a first step toward automatic extraction of timeline structures from text.
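The classification step can be sketched as follows; the feature dictionaries below are invented stand-ins for the paper's event-level (tense, aspect) and syntactic-path features from TimeBank parses:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Invented features for verb-clause event pairs; real values would be
# extracted from TimeBank annotations and parse trees.
pairs = [
    {"tense1": "past", "tense2": "past", "path": "VP>SBAR"},
    {"tense1": "present", "tense2": "past", "path": "VP>S"},
    {"tense1": "past", "tense2": "future", "path": "VP>SBAR"},
    {"tense1": "present", "tense2": "future", "path": "VP>S"},
]
relations = ["overlap", "after", "before", "before"]

# One-hot encode the symbolic features, then train the SVM on the
# labeled event-event relations.
vectorizer = DictVectorizer()
X = vectorizer.fit_transform(pairs)
model = LinearSVC()
model.fit(X, relations)

new_pair = {"tense1": "present", "tense2": "future", "path": "VP>SBAR"}
label = model.predict(vectorizer.transform(new_pair))[0]
```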
  • Lexical and Discourse Analysis of Online Chat Dialog

    Publication Year: 2007, Page(s): 19 - 26
    Cited by: Papers (6)
    PDF (307 KB)

    One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part because no annotated chat corpora are available to the broader research community. The purpose of this research is to build a chat corpus tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. Such a corpus can then be used to develop more complex, statistics-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.
  • Automatic Generation of Multi-Modal Dialogue from Text Based on Discourse Structure Analysis

    Publication Year: 2007, Page(s): 27 - 36
    PDF (326 KB) | HTML

    In this paper, we propose a novel method for automatically generating engaging multi-modal content from text. Rhetorical structure theory (RST) is used to decompose text into discourse units and to identify the rhetorical discourse relations between them. Rhetorical relations are then mapped to question-answer pairs in an information-preserving way, i.e., the original text and the resulting dialogue convey essentially the same meaning. Finally, the dialogue is "acted out" by two virtual agents. The network of dialogue structures automatically built up during this process, called DialogueNet, can be reused for other purposes, such as personalization or question answering.
  • A Software Birthmark Based on Dynamic Opcode n-gram

    Publication Year: 2007, Page(s): 37 - 44
    Cited by: Papers (7)
    PDF (409 KB) | HTML

    This paper proposes a dynamic opcode n-gram software birthmark, based on Myles' software birthmark, in which a static opcode n-gram set is regarded as the birthmark. Here, the dynamic opcode n-gram set extracted from the program's dynamically executed instruction sequence is regarded as the birthmark. The new birthmark not only keeps the advantages of the static opcode n-gram feature set, but is also highly robust to code compression, encryption, and packing. The algorithm that evaluates the similarity of two programs' birthmarks is improved using probability and statistics; as a result, its time complexity decreases from O(n^2) to O(n), while the space complexity remains unchanged. Finally, the validity of the scheme is demonstrated by experiments.
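The core of the scheme can be illustrated with a toy sketch: extract the set of opcode n-grams from a dynamically executed instruction trace, then compare two birthmarks with a set-based containment measure. The paper's improved statistical similarity algorithm is not reproduced here, and the opcode traces are invented:

```python
def opcode_ngrams(trace, n=3):
    """Birthmark: the set of opcode n-grams occurring in a dynamic
    instruction trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def similarity(trace_a, trace_b, n=3):
    """Fraction of trace_a's n-grams also present in trace_b's
    birthmark; hash-set operations keep the comparison linear in the
    number of distinct n-grams."""
    a, b = opcode_ngrams(trace_a, n), opcode_ngrams(trace_b, n)
    return len(a & b) / len(a) if a else 0.0

# Invented opcode traces: the second is the first with one instruction
# inserted, as a small edit or repacking might produce.
original = ["push", "mov", "add", "cmp", "jne", "mov", "ret"]
suspect = ["push", "mov", "add", "cmp", "jne", "mov", "pop", "ret"]
score = similarity(original, suspect)  # 4 of 5 trigrams survive -> 0.8
```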
  • A Framework to build an object oriented mathematical tool with computer algebra system (CAS) capability

    Publication Year: 2007, Page(s): 45 - 52
    PDF (693 KB) | HTML

    Computer algebra system (CAS) applications are mathematical applications developed to solve mathematical problems that are too difficult or even impossible to solve by hand. Modern CAS applications are known for their rather large feature sets, with support for graphical representation of results, symbolic manipulation, big-integer calculations, and complex-number arithmetic. The proposed framework for a mathematical tool with CAS capability is motivated by object-oriented design, which encapsulates each application process as an independent object module. A further motivation is scalability of the interface: more functionality can be integrated without changing any existing core module, simply by creating a new object and plugging it in.
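The plug-in idea the abstract describes, where each capability is an independent object module registered with the core, can be sketched as follows. All class and method names here are hypothetical, invented for illustration:

```python
class MathModule:
    """Base class for an independent CAS capability (hypothetical)."""
    name = "base"

    def evaluate(self, *args):
        raise NotImplementedError

class BigIntegerFactorial(MathModule):
    """Big-integer calculation module: exact n! for any n."""
    name = "factorial"

    def evaluate(self, n):
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

class ComplexModulus(MathModule):
    """Complex-number arithmetic module: modulus of z."""
    name = "modulus"

    def evaluate(self, z):
        return abs(z)

class CasCore:
    """Core that dispatches to whichever modules are plugged in; new
    functionality is added by registering a new object, not by editing
    existing modules."""
    def __init__(self):
        self.modules = {}

    def plug_in(self, module):
        self.modules[module.name] = module

    def run(self, name, *args):
        return self.modules[name].evaluate(*args)

cas = CasCore()
cas.plug_in(BigIntegerFactorial())
cas.plug_in(ComplexModulus())
fact20 = cas.run("factorial", 20)  # exact big-integer result
```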
  • Specifying and Verifying Cases Retrieval System Combining Event B and Spin

    Publication Year: 2007, Page(s): 53 - 60
    PDF (435 KB) | HTML

    This paper presents a complete study of the specification and mechanical verification of cases retrieval systems (CRS) within a generic framework that supports many-to-many connections between formal development environments and model checkers. We aim to combine, on an example, refinement techniques, verification by theorem proving, and model checking in an entire development, to guarantee software correctness properties. We first build an underlying abstract system using a role-based collaboration model, then describe a practical approach for incrementally developing flexible and reliable formal specifications of CRS using Event B, exemplified on the contract net protocol (CNP) as the interaction contract. A translator is introduced as the bridge between formal specifications and model checkers. The entire development is mechanically proved with respect to safety properties using the B tool and, complementarily, with respect to liveness properties using the SPIN tool.
  • Procuring Requirements for ERP Software Based on Semantic Similarity

    Publication Year: 2007, Page(s): 61 - 70
    Cited by: Papers (1)
    PDF (418 KB) | HTML

    Enterprise resource planning (ERP) systems are among the most widely accepted choices for gaining a competitive edge in manufacturing-related enterprises. However, the rate of successful implementation is lower than expected, in part because of the gap between the application domain and the software discipline. In this paper, a methodology is proposed for procuring the requirements for ERP software, by which both sides do their jobs in their familiar domain. The methodology consists of three phases: business modeling, gap analysis, and requirement analysis for software. This paper focuses on the second phase, which aims at identifying the business processes beyond the capability of the ERP software. An ontology-based algorithm to measure the similarity of business process models is designed, and the Hungarian algorithm is used to reduce its time complexity. Finally, an experiment is given to evaluate the method.
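The matching step can be sketched with SciPy's implementation of the Hungarian algorithm; the similarity scores below are invented, and the paper's ontology-based similarity measure is not reproduced:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented pairwise similarity scores between three enterprise business
# processes (rows) and three ERP reference processes (columns).
similarity = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.4],
    [0.1, 0.5, 0.7],
])

# linear_sum_assignment minimises total cost, so negate the similarities
# to obtain the maximum-similarity one-to-one matching in O(n^3) time.
rows, cols = linear_sum_assignment(-similarity)
total = similarity[rows, cols].sum()  # process i is matched to ERP process cols[i]
```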
  • Segmenting Photo Streams in Events Based on Optical Metadata

    Publication Year: 2007, Page(s): 71 - 78
    Cited by: Papers (1)
    PDF (1404 KB) | HTML

    Traditional methods for event segmentation of photo streams use time and/or content-based information. In this paper, we present event segmentation from a novel perspective: we propose to segment photo streams into events based on the scene brightness of photos, assuming that a large change in scene brightness implies an event transition of interest. The scene brightness is derived from camera parameters that are set automatically when photos are taken and recorded with each photo as metadata in standard forms like EXIF data. This information is available from metadata and is computationally very inexpensive, resulting in fast segmentation. A hierarchical agglomerative clustering method is applied to build the event hierarchy of the photo stream based on scene brightness differences. The proposed approach has been tested on several photo streams, and very promising results have been obtained.
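A sketch of the idea, assuming one common APEX formulation for deriving brightness from EXIF fields (Bv = Av + Tv - Sv) and a simple threshold split in place of the paper's hierarchical agglomerative clustering; the photo parameters are invented:

```python
import math

def scene_brightness(f_number, exposure_time, iso):
    """APEX brightness value Bv = Av + Tv - Sv, derived from the camera
    parameters recorded in EXIF metadata (a common APEX formulation,
    assumed here; the paper derives brightness from the same fields)."""
    av = math.log2(f_number ** 2)        # aperture value
    tv = math.log2(1.0 / exposure_time)  # shutter-speed value
    sv = math.log2(iso / 3.125)          # film-speed value
    return av + tv - sv

def segment_stream(photos, threshold=2.0):
    """Split a time-ordered photo stream wherever the brightness change
    between consecutive photos exceeds the threshold."""
    events, current = [], [photos[0]]
    for prev, cur in zip(photos, photos[1:]):
        if abs(scene_brightness(*cur) - scene_brightness(*prev)) > threshold:
            events.append(current)
            current = []
        current.append(cur)
    events.append(current)
    return events

# (f-number, exposure time in s, ISO): two bright outdoor shots, then
# two dim indoor shots; the values are invented.
stream = [(8.0, 1/500, 100), (8.0, 1/250, 100),
          (2.8, 1/30, 400), (2.8, 1/15, 400)]
events = segment_stream(stream)  # splits at the outdoor/indoor boundary
```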
  • A Hybrid Approach to Improving Semantic Extraction of News Video

    Publication Year: 2007, Page(s): 79 - 86
    PDF (427 KB) | HTML

    In this paper we describe a hybrid approach to improving semantic extraction from news video. Experiments show the value of careful parameter tuning, exploiting multiple feature sets and multilingual linguistic resources, applying text retrieval approaches for image features, and establishing synergy between multiple concepts through undirected graphical models. No single approach provides a consistently better result for every concept detection, which suggests that extracting video semantics should exploit multiple resources and techniques rather than a single approach.
  • CDIP: Collection-Driven, yet Individuality-Preserving Automated Blog Tagging

    Publication Year: 2007, Page(s): 87 - 94
    PDF (625 KB) | HTML

    With the success of blogs as popular information-sharing media, searches on blogs have become popular. In the blogosphere, tagging is used to annotate blog entries with contextually meaningful keywords, which enable users to locate blog content more easily. Yet, although tags provided by bloggers are effective for organizing blog entries, in many cases they are not sufficient to properly capture the semantics of the blog content. In our previous work, we observed that there exists a large degree of content overlap among blog entries (not only in the form of quotation/commentary pairs, but also as content borrowing across media outlets), which hampers effective, discriminating keyword searches. In this paper, we further note that these implicit or explicit quotations can be leveraged to identify the contexts in which entries occur, resulting in more effective tagging. Thus, we propose CDIP (a collection-driven, yet individuality-preserving tagging system), which relies on relationships provided by quotation/reuse detection and semantic-focus analysis to automatically tag blogs in such a way that not only do related blogs share tags, but the individuality of the entries is also preserved for discriminating tag-based access.
  • Eventory -- An Event Based Media Repository

    Publication Year: 2007, Page(s): 95 - 104
    PDF (410 KB) | HTML

    This paper focuses on the development of an event-driven media sharing repository to facilitate community awareness. In this paper, an event refers to a real-world occurrence that unfolds over space and time. Our event model implementation supports the creation of events using the standard facets of who, where, when, and what. A key novelty of this research lies in the support of arbitrary event-event semantic relationships. We facilitate global as well as personalized event relationships. Each relationship can be unary or binary and can be at multiple granularities. Relationships can exist between events, between media, and between media and events. We have implemented a web-based media archive system that allows people to create, explore, and manage events, and an RSS-based notification system that promotes awareness of actions. The initial user feedback has been positive, and we are in the process of conducting a longitudinal study.
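The four-facet event model with arbitrary event-event relationships can be sketched as a small data structure; the field names follow the who/where/when/what facets from the abstract, while the class name, sample events, and relationship labels are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """An event: a real-world occurrence described by the standard
    facets of who, where, when, and what (hypothetical sketch)."""
    who: list
    where: str
    when: str
    what: str
    # Arbitrary labeled event-event semantic relationships.
    relations: list = field(default_factory=list)

    def relate(self, label, other):
        """Attach a labeled semantic relationship to another event."""
        self.relations.append((label, other))

opening = Event(["community center"], "Main St", "2007-09-01",
                "season opening")
concert = Event(["local band"], "Main St", "2007-09-01",
                "opening concert")
concert.relate("part-of", opening)  # invented relationship label
```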
  • Knowledge Integration in OpenWorlds: Utilizing the Mathematics of Hierarchical Structure

    Publication Year: 2007, Page(s): 105 - 112
    PDF (338 KB) | HTML

    Semantic Web services present a new challenge in distributed knowledge integration. Under the Simple Semantic Web Architecture and Protocol (SSWAP, http://sswap.info), Web resources publish information about their data and services in terms of OWL ontologies. With semantic tagging, the output of one service can drive the input of another, presenting the alluring prospect of machines autonomously assessing the suitability of Web resources for distributed workflows. Ontological reasoning under OWL greatly expands the potential for semantic Web service interoperability by guaranteeing the compatibility of resources using such techniques as inferencing, typing, and class subsumption. Formal methods for managing such structures are urgently needed. Recent advances in the mathematics of hierarchies can formalize the construction of synthetic hierarchies from unstructured information, identifying mutual subsumption architectures. Here we examine the problem space and describe the promise of mathematical order theory (formal concept analysis and lattice metrics) for advancing knowledge integration in open worlds.
  • An Interactive Diagnosis and Repair of OWL Ontology

    Publication Year: 2007, Page(s): 113 - 120
    PDF (592 KB) | HTML

    Ontology diagnosis is an important phase in the process of knowledge base development and management. In the various applications of semantic computing, an ontology is not static, but keeps changing over time due to changes in the local environments. A change is typically effected by adding new axioms or modifying parts of existing axioms in the ontology. Such arbitrary changes might result in conflicts in the knowledge base, and how to diagnose and repair a conflicting knowledge base is an interesting but hard problem. In this paper, we propose an interactive strategy for maintaining knowledge base consistency by identifying the unsatisfiable concepts in the knowledge base and eliminating them using suggestions obtained from the knowledge engineer (user).
  • Sub-Symbolic Semantic Layer in Cyc for Intuitive Chat-Bots

    Publication Year: 2007, Page(s): 121 - 128
    Cited by: Papers (4)
    PDF (508 KB) | HTML

    The work presented in this paper aims to combine Latent Semantic Analysis methodology, common sense, and traditional knowledge representation in order to improve the dialogue capabilities of a conversational agent. In our approach, the agent's brain is characterized by two areas: a "rational area", composed of a structured, rule-based knowledge base, and an "associative area", obtained through a data-driven semantic space. Concepts are mapped into this space, and their mutual geometric distance is related to their conceptual similarity. The geometric distance between concepts implicitly defines a sub-symbolic relationship net, which can be seen as a new "sub-symbolic semantic layer" automatically added to the Cyc ontology. Users' queries can also be mapped into the same conceptual space, where they evoke similar ontology concepts. As a result, the agent can exploit this feature, attempting to retrieve ontological concepts that are not easily reachable by means of the traditional ontology reasoning engine.
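The associative area can be illustrated with a miniature latent-semantic space: a truncated SVD of a term-context matrix maps each concept to a vector, and geometric closeness (cosine similarity) models conceptual similarity. The matrix values below are invented; the paper builds its space from a much larger data-driven corpus:

```python
import numpy as np

# Toy term-context co-occurrence counts (rows: dog, cat, car).
terms = ["dog", "cat", "car"]
counts = np.array([
    [4.0, 3.0, 0.0, 0.0],   # dog
    [3.0, 4.0, 0.0, 0.0],   # cat
    [0.0, 0.0, 5.0, 4.0],   # car
])

# Latent Semantic Analysis: truncated SVD projects each term into a
# low-dimensional semantic space.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
vectors = U[:, :k] * s[:k]

def cosine(a, b):
    """Cosine similarity: geometric closeness in the semantic space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_dog_cat = cosine(vectors[0], vectors[1])  # near 1: shared contexts
sim_dog_car = cosine(vectors[0], vectors[2])  # near 0: disjoint contexts
```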
  • A Method for the Construction of a Probabilistic Hierarchical Structure Based on a Statistical Analysis of a Large-scale Corpus

    Publication Year: 2007, Page(s): 129 - 136
    PDF (314 KB) | HTML

    The purpose of this study is to develop a method of constructing a probabilistic hierarchical structure based on a statistical analysis of a Japanese corpus, using a combination of Kameya and Sato's statistical language analysis and Rose's model. First, the co-occurrence frequencies of adjectives and nouns are calculated from a Japanese corpus based on modification relations. Second, latent classes are extracted by a statistical language analysis of the co-occurrence data. Third, the centroid vectors of the latent classes are calculated from the analysis results, and a probabilistic hierarchical structure of the latent classes is constructed using Rose's model. Finally, the conditional probabilities of the categories given the latent classes and given the concepts are computed as the association probabilities of the concepts to the categories.