2010 Second International Conference on Advances in System Testing and Validation Lifecycle

22-27 Aug. 2010

Displaying Results 1 - 25 of 32
  • [Front cover]

    Publication Year: 2010, Page(s): C1
    PDF (834 KB)
    Freely Available from IEEE
  • [Title page i]

    Publication Year: 2010, Page(s): i
    PDF (11 KB)
    Freely Available from IEEE
  • [Title page iii]

    Publication Year: 2010, Page(s): iii
    PDF (58 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2010, Page(s): iv
    PDF (109 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2010, Page(s):v - vii
    PDF (181 KB)
    Freely Available from IEEE
  • Preface

    Publication Year: 2010, Page(s):viii - ix
    PDF (90 KB) | HTML
    Freely Available from IEEE
  • Committee

    Publication Year: 2010, Page(s):x - xii
    PDF (98 KB)
    Freely Available from IEEE
  • Reviewers

    Publication Year: 2010, Page(s):xiii - xiv
    PDF (83 KB)
    Freely Available from IEEE
  • Argument-Driven Validation of Computer Simulations - A Necessity, Rather than an Option

    Publication Year: 2010, Page(s):1 - 4
    Cited by:  Papers (2)
    Abstract | PDF (450 KB) | HTML

    Research based on computer simulations, especially that conducted through agent-based experimentation, is often criticised for not being a reliable source of information: the simulation software can hide errors or flawed designs that inherently bias results. Consequently, the academic community shows both enthusiasm for and a lack of trust in such approaches. In order to gain confidence in using engin...
  • Model-Based Testing of Infotainment Systems on the Basis of a Graphical Human-Machine Interface

    Publication Year: 2010, Page(s):5 - 9
    Cited by:  Papers (1)
    Abstract | PDF (476 KB) | HTML

    Automotive infotainment systems have gained more and more features in recent years, and the usability of their HMIs (human-machine interfaces) has improved considerably. However, the complexity of the HMI software is growing, and testing the HMI has become very demanding and time-consuming. Because of the multiplicity of HMI variants, better code coverage is a goal for the development process of most manuf...
  • A Hybrid Approach for Model-Based Random Testing

    Publication Year: 2010, Page(s):10 - 15
    Cited by:  Papers (2)
    Abstract | PDF (580 KB) | HTML

    Random testing is a valuable supplement to systematic test methods because it discovers defects that are very hard to detect with systematic test strategies. We propose a novel approach for random test generation that combines the benefits of model-based testing, constraint satisfaction, and pure random testing. The proposed method has been incorporated into the IDATG (Integrating Design and Autom...
  • Unsteady Ground: Certification to Unstable Criteria

    Publication Year: 2010, Page(s):16 - 19
    Cited by:  Papers (1)
    Abstract | PDF (410 KB) | HTML

    Cross Domain Systems for handling classified information complicate the certification test and evaluation problem, because along with multiple data owners comes duplicate responsibility for residual risk. Over-reliance on independent verification and validation by certifiers and accreditors representing different government agencies is interpreted as conflating the principle of defence-in-depth wi...
  • Dihomotopic Deadlock Detection via Progress Shell Decomposition

    Publication Year: 2010, Page(s):20 - 25
    Abstract | PDF (451 KB) | HTML

    The classical problem of deadlock detection for concurrent programs has traditionally been accomplished by symbolic methods or by search of a state transition system. This work examines an approach that uses geometric semantics involving the topological notion of dihomotopy to partition the state space into components, followed by search of a reduced state space. Prior work partitioned the state-s...
  • Analysis of Testability Metrics for Lustre/Scade Programs

    Publication Year: 2010, Page(s):26 - 31
    Cited by:  Papers (1)
    Abstract | PDF (522 KB) | HTML

    Testing is a validation process carried out to find errors in a system. Testability metrics aim at identifying parts of a design/code that are difficult to test. In this article, we focus on two testability metrics defined for systems written in Lustre/Scade. An intuitive interpretation was proposed for these metrics. The aim of the work described here is to check whether this intuitive interpre...
  • Hybrid Approach for Protocol Testing of LTE System: A Practical Case Study

    Publication Year: 2010, Page(s):32 - 36
    Cited by:  Papers (1)
    Abstract | PDF (572 KB) | HTML

    The increasing usage of mobile data-intensive applications and their need for high data rates are posing challenges for today's wireless mobile communication systems. The Third Generation Partnership Project (3GPP) has addressed these challenges by introducing an IP-based Long-Term Evolution (LTE) project based on Orthogonal Frequency Division Multiple Access (OFDMA) technology, aiming t...
  • Runtime Testability in Dynamic High-Availability Component-Based Systems

    Publication Year: 2010, Page(s):37 - 42
    Cited by:  Papers (1)
    Abstract | PDF (616 KB) | HTML

    Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high-availability software systems where traditional development-time integration testing cannot be performed. A prerequisite for runtime testing is knowledge of the extent to which the system can be tested safely while it is operational, i.e., the system's runtime testability. This article evaluat...
  • The SQALE Analysis Model: An Analysis Model Compliant with the Representation Condition for Assessing the Quality of Software Source Code

    Publication Year: 2010, Page(s):43 - 48
    Cited by:  Papers (6)
    Abstract | PDF (954 KB) | HTML

    This paper presents the analysis model of SQALE (Software Quality Assessment Based on Lifecycle Expectations), an assessment method for software source code. We explain what led us to develop consolidation rules based on remediation indices. We describe how the analysis model can be implemented in practice.
  • Discretizing Technical Documentation for End-to-End Traceability Tests

    Publication Year: 2010, Page(s):49 - 56
    Abstract | PDF (588 KB) | HTML

    This paper describes a technique by which English prose protocol standards were transcribed into individually testable assertions and traced from original protocol specifications, to lists of requirements, to models and test cases, and finally into test logs and network captures. Annotating statements to make them stand alone, handling optional behavior, and using the requirements to guide ...
  • Automated Verification of Shared Libraries for Backward Binary Compatibility

    Publication Year: 2010, Page(s):57 - 62
    Cited by:  Papers (2)
    Abstract | PDF (408 KB) | HTML

    This paper discusses the problem of ensuring backward binary compatibility when developing shared libraries. Linux (and the GCC environment) is used as the main example. Breakage of compatibility may result in crashing or incorrect behavior of applications built with an old version of a library when they run with a new one. The paper describes typical issues that cause binary compatibility p...
  • Investigation of OSCI TLM-2.0 Employment in Grid Computing Simulation

    Publication Year: 2010, Page(s):63 - 68
    Abstract | PDF (399 KB) | HTML

    The burgeoning complexity of computer systems increases the role of middleware technologies in current research. Indeed, middleware is the convergence point of progressing hardware and software engineering. In this paper, OSCI TLM-2.0, a modeling standard for ESL design, is proposed for modeling grid computing, one of the most important middleware technologies. Our study...
  • Variability Management in Embedded Product Line Analysis

    Publication Year: 2010, Page(s):69 - 74
    Cited by:  Papers (3)  |  Patents (1)
    Abstract | PDF (553 KB) | HTML

    Embedded software is growing in complexity while contending with quality, cost, and time-to-market constraints, among others. It must be validated intensively, as it can be critical and even human lives may depend on it. Development paradigms such as Model Driven Development and Software Product Lines can be an adequate alternative to traditional software development and validation methods, facilitating software valid...
  • Interacting Entities Modelling Methodology for Robust Systems Design

    Publication Year: 2010, Page(s):75 - 80
    Abstract | PDF (583 KB) | HTML

    This paper describes the theoretical principles and the practical implementation of OpenCookbook, an environment for systems engineering. The environment guides and supports developers from requirements and specification capture, through architectural modelling and work plan development, to validation and final release. It features a coherent and unified systems engineering methodology based on th...
  • Defective Behaviour of an 8T SRAM Cell with Open Defects

    Publication Year: 2010, Page(s):81 - 86
    Cited by:  Papers (1)
    Abstract | PDF (680 KB) | HTML

    The defective behaviour of an 8T SRAM cell with open defects is analyzed. Full and resistive open defects have been considered in the electrical characterization of the defective cell. Due to the similarity between the classical 6T SRAM cell and the 8T cell, only defects affecting the read port transistors have been considered. In the work, it is shown how an open in a defective cell may influence...
  • Using Hardware Performance Counters for Fault Localization

    Publication Year: 2010, Page(s):87 - 92
    Cited by:  Papers (3)
    Abstract | PDF (304 KB) | HTML

    In this work, we leverage data collected from hardware performance counters as an abstraction mechanism for program executions and use these abstractions to identify likely causes of failures. Our approach can be summarized as follows: hardware counter data is collected from both successful and failed executions; the data collected from the successful executions is used to create normal behavior m...
  • An Approach for Classifying Program Failures

    Publication Year: 2010, Page(s):93 - 98
    Cited by:  Papers (2)
    Abstract | PDF (403 KB) | HTML

    In this work, we leverage data collected from hardware performance counters to automatically group program failures that stem from closely related causes into clusters, which can in turn help developers prioritize failures as well as diagnose their causes. Hardware counters have been used for performance analysis of software systems in the past. By contrast, in this paper they are used as abstraction m...