
2012 IEEE 13th International Conference on Information Reuse and Integration (IRI)

Date: 8-10 Aug. 2012

  • [USB Start]

    Page(s): 1
  • [Front cover]

    Page(s): 1
  • Table of contents

    Page(s): i - xii
The 2012 IEEE International Conference on Information Reuse and Integration: Foreword

    Page(s): xiii
  • Message from Program Co-chairs

    Page(s): xiv - xv
  • Conference organizers

    Page(s): xvi - xvii
  • International technical program committee

    Page(s): xviii - xx
  • Outline of a restriction-centered theory of reasoning and computation in an environment of uncertainty and imprecision

    Page(s): xxi - xxii
  • Information neighborhoods for visualization and monitoring strategies of real stochastic behavior trajectories: Computational aspects

    Page(s): xxiii
  • Knowledge synthesizing and reusing by cognitive computing: Moving beyond “prescriptive programming” and von Neumann

    Page(s): xxiv - xxv
  • Panel: Using information re-use and integration principles in big data

    Page(s): xxvi
  • Program Committee

    Page(s): xxvii
  • Program Committee

    Page(s): xxx
  • Author index

    Page(s): 743 - 746
  • [Copyright notice]

    Page(s): 1
  • A novel dataset-similarity-aware approach for evaluating stability of software metric selection techniques

    Page(s): 1 - 8

    Software metric (feature) selection is an important pre-processing step before building software defect prediction models. Although much research has analyzed the classification performance of feature selection methods, fewer works have focused on their stability (robustness). Stability is important because feature selection methods that reliably produce the same results despite changes to the data are more trustworthy. Of the papers studying stability, most either compare the features chosen from different random subsamples of the dataset or compare each random subsample with the original dataset. These approaches result either in an unknown degree of overlap between the subsamples or in comparisons between datasets of different sizes. In this work, we propose a fixed-overlap partition algorithm which generates a pair of subsamples with the same number of instances and a specified degree of overlap. We empirically evaluate the stability of 19 feature selection methods in terms of degree of overlap and feature subset size using 16 real software metrics datasets. The consistency index is used as the stability measure, and we show that RF is the most stable filter. Results also show that degree of overlap and feature subset size do affect the stability of feature selection methods.

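    The stability measure named in this abstract is the consistency index (Kuncheva's formulation), which corrects subset overlap for chance. A minimal sketch follows; the metric names and counts are hypothetical, not taken from the paper:

    ```python
    # Kuncheva's consistency index between two equal-sized feature subsets
    # drawn from n candidate features: (r*n - k^2) / (k*(n - k)), where
    # k = subset size and r = size of the intersection. Values near 1 mean
    # stable selection; values near 0 mean chance-level overlap.
    def consistency_index(subset_a, subset_b, n):
        a, b = set(subset_a), set(subset_b)
        k = len(a)
        assert len(b) == k and 0 < k < n, "subsets must be equal-sized and proper"
        r = len(a & b)  # features chosen in both runs
        return (r * n - k * k) / (k * (n - k))

    # Toy example: 2 of 3 selected metrics agree, out of 10 candidates.
    print(consistency_index({"loc", "cc", "fanout"}, {"loc", "cc", "churn"}, 10))  # ≈ 0.524
    ```

    Identical subsets score exactly 1.0, which makes the index easy to sanity-check before averaging it over many subsample pairs.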
  • Reusing and converting code clones to aspects - An algorithmic approach

    Page(s): 9 - 16

    In this research we develop an algorithmic approach to converting source code clones into aspects. Code cloning is the practice of duplicating code or replicating code fragments. First, we use an existing code-clone detection tool to identify clones in source code. Second, we design algorithms to convert the code clones into aspects and compose the aspects with the original source code. Third, we implement a prototype based on these algorithms. Fourth, we carry out a performance analysis on the aspect-composed source code; our analysis shows that the aspect-composed code performs as well as the original code, and even better in terms of execution times.

  • Generating test cases via model-based simulation

    Page(s): 17 - 24

    We present a new model-based test case generation approach, which takes as input an executable system model and preliminary test case coverage, performs an automated model simulation, and eventually generates refined test cases for software testing. We adopt Live Sequence Charts (LSCs) to specify an executable system model, and present a logic-based model simulator for consistency testing. Our model simulator produces a state transition diagram (STD) justifying the model's runtime behaviors, where each state is labeled with the set of runtime properties that are true in that state. The STD can then be automatically transformed into a refined set of test cases, in the form of a context-free grammar. Finally, we show that LSCs can also be used to specify and test certain temporal system properties during the model simulation. Their satisfaction, reflected in the STD, can serve either as a directive for selective test generation or as a basis for further temporal-property model checking.

  • A software product lines system test case tool and its initial evaluation

    Page(s): 25 - 32

    The Software Product Lines (SPL) approach requires specific testing tools that help manage reusable testing assets and automate test execution. Despite the increasing interest of the research community in software testing tools, SPL engineering still needs tools to support the testing process. This work briefly presents the results of a mapping study on software testing tools and defines the requirements, design, and implementation of a software product lines system test case tool aimed at the creation and management of test assets. A controlled experiment was also conducted to evaluate the tool's effectiveness.

  • Model-based diagnosis with default information implemented through MAX-SAT technology

    Page(s): 33 - 36

    Fault diagnosis is both a complex conceptual task and a fruitful application target for Artificial Intelligence techniques. In this paper, the focus is on model-based diagnosis (MBD), which formalizes reasoning from first principles. The contribution of the paper is twofold. On the one hand, the standard MBD representation framework is enriched to permit default information. On the other hand, we exploit the recent dramatic efficiency progress in Boolean reasoning and search, especially MAX-SAT-related technologies, to provide an alternative to the usual two-step computational approach for exhibiting minimal diagnoses.

  • A game theoretic approach for adversarial pipeline monitoring using Wireless Sensor Networks

    Page(s): 37 - 44

    The availability of low-cost sensor nodes has made Wireless Sensor Networks (WSNs) a viable choice for monitoring critical infrastructure such as power grids and civil structures. Quite a few approaches in the literature use WSNs to monitor pipelines (water, gas, oil, and various other types). The primary goal of all these protocols is to detect device malfunctions such as pipe leakage and oil spillage. However, none of these protocols is specifically designed to handle a malicious, active adversary such as a terrorist attack. In this paper, we present a game-theoretic approach to monitoring pipeline infrastructures using WSNs in an adversarial context. More specifically, we use Stackelberg competition to model the attacker-defender interaction and derive the equilibrium condition of such a game under appropriate utility functions. Finally, we show that a monitoring system can do no better by deviating from its equilibrium strategy if the adversary acts rationally.

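    The leader-commits/follower-best-responds structure this abstract describes can be sketched with a toy two-segment game solved by grid search; the segment values, penalty, and utility functions below are illustrative assumptions, not the paper's model:

    ```python
    # Toy Stackelberg monitoring game: the defender (leader) commits to a
    # coverage probability over two pipeline segments; the attacker (follower)
    # observes it and attacks the segment with the highest expected payoff.
    SEG_VALUE = [10.0, 6.0]   # damage if an attack on the segment goes undetected
    CAUGHT_PENALTY = 5.0      # attacker's cost when the attack is detected

    def attacker_payoff(cov, seg):
        return (1 - cov[seg]) * SEG_VALUE[seg] - cov[seg] * CAUGHT_PENALTY

    def defender_payoff(cov, seg):
        return -(1 - cov[seg]) * SEG_VALUE[seg]

    def solve(steps=1000):
        best = None
        for i in range(steps + 1):
            p = i / steps
            cov = [p, 1 - p]  # one sensor patrol split across the two segments
            target = max((0, 1), key=lambda s: attacker_payoff(cov, s))  # best response
            u = defender_payoff(cov, target)
            if best is None or u > best[0]:
                best = (u, p, target)
        return best

    u, p, target = solve()
    print(f"defender covers segment 0 with p={p:.3f}; attacker hits segment {target}")
    ```

    The grid search lands near the indifference point where the attacker's payoffs on the two segments are equal, which is exactly the equilibrium condition a closed-form derivation would target.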
  • A web-based user interface for a mobile robotic system

    Page(s): 45 - 50

    An essential component of human-robot interaction, even when the robot operates autonomously, is the interface between the human and the robot. This paper presents an AJAX-based graphical user interface (GUI) for a mobile robotic system that has multiple sensors (an ultrasonic array, a thermal sensor, and a video streaming system) to obtain information about the environment, a virtual field strategy for obstacle avoidance and path planning, and an ANFIS controller for path tracking. The particular focus here is on the GUI and how it integrates path planning for obstacle avoidance and displays path-tracking information for monitoring the status of the robotic system. Experimental results and a preliminary evaluation show that the proposed architecture is feasible for autonomous mobile robotic systems.

  • Evaluating and enhancing cross-domain rank predictability of textual entailment datasets

    Page(s): 51 - 58

    Textual Entailment (TE) is the task of recognizing entailment, paraphrase, and contradiction relations between a given text pair. The goal of textual entailment research is to develop a core inference component that can be applied to various domains, such as IR or NLP. Since the domain a TE system is applied to may differ from its source domain, it is crucial to develop proper datasets for measuring the cross-domain ability of a TE system. We propose using Kendall's tau to measure a dataset's cross-domain rank predictability. Our analysis shows that incorporating “artificial pairs” into a dataset helps enhance its rank predictability. We also find that the completeness of guidelines has no obvious effect on the rank predictability of a dataset. More investigation is needed to validate these findings; however, they suggest new directions for the creation of TE datasets in the future.

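    Kendall's tau, the rank-correlation measure this abstract proposes, counts concordant versus discordant pairs of systems across two score lists. A minimal sketch (no tie correction), with hypothetical accuracy figures for five TE systems:

    ```python
    from itertools import combinations

    # Kendall's tau over the same n systems scored in two domains:
    # tau = (concordant pairs - discordant pairs) / (n choose 2).
    def kendall_tau(x, y):
        concordant = discordant = 0
        for i, j in combinations(range(len(x)), 2):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1  # the pair is ordered the same way in both domains
            elif s < 0:
                discordant += 1  # the pair's order flips between domains
        n = len(x)
        return (concordant - discordant) / (n * (n - 1) / 2)

    # Hypothetical accuracies of five TE systems on a source and a target domain.
    source = [0.61, 0.58, 0.55, 0.52, 0.49]
    target = [0.63, 0.54, 0.57, 0.50, 0.45]
    print(kendall_tau(source, target))  # → 0.8
    ```

    A tau near 1 means the source-domain ranking of systems predicts the target-domain ranking well, which is the "rank predictability" the paper measures for a dataset.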