
2010 Second International Conference on Advances in System Testing and Validation Lifecycle (VALID)

Date: 22-27 Aug. 2010


Displaying Results 1 - 25 of 32
  • [Front cover]

    Page(s): C1
    PDF (834 KB)
    Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (11 KB)
    Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (58 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (109 KB)
    Freely Available from IEEE
  • Table of contents

    Page(s): v - vii
    PDF (181 KB)
    Freely Available from IEEE
  • Preface

    Page(s): viii - ix
    PDF (90 KB)
    Freely Available from IEEE
  • Committee

    Page(s): x - xii
    PDF (98 KB)
    Freely Available from IEEE
  • Reviewers

    Page(s): xiii - xiv
    PDF (83 KB)
    Freely Available from IEEE
  • Argument-Driven Validation of Computer Simulations - A Necessity, Rather than an Option

    Page(s): 1 - 4
    PDF (450 KB) | HTML

    Research based on computer simulations, especially that conducted through agent-based experimentation, is often criticised for not being a reliable source of information - the simulation software can hide errors or flawed designs that inherently bias results. Consequently, the academic community shows both enthusiasm for and a lack of trust in such approaches. In order to gain confidence in using engineered systems, domains such as Safety Critical Systems employ structured argumentation techniques as a means of explicitly relating claims to evidence - in other words, requirements to deliverables. We argue here that structured argumentation should be used in the development and validation process of simulation-driven research. Making use of the Goal Structuring Notation, we provide insights into how more trustworthy outcomes can be obtained through argumentation-driven validation.

  • Model-Based Testing of Infotainment Systems on the Basis of a Graphical Human-Machine Interface

    Page(s): 5 - 9
    PDF (476 KB) | HTML

    Automotive infotainment systems have gained more and more features in recent years, and the usability of their HMIs (human-machine interfaces) has improved considerably. However, the complexity of the HMI software is growing, and testing the HMI has become very demanding and time consuming. Because of the multiplicity of HMI variants, better code coverage is a goal of the development process of most manufacturers. Model-based testing is one way to achieve better code coverage while keeping costs and complexity acceptable. However, the existing research approaches in the area of model-based HMI testing cannot satisfy the needs of our testing purposes. In this work, a model-based testing approach is proposed for testing both the logical behavior and the graphical interface of the automotive infotainment HMI. As an important part of the testing approach, a test-oriented HMI specification model is designed; it describes the required behavior of the HMI and contains the information necessary for the testing process. Test generation methods and the design of tests are also proposed. These results can be used generally for testing advanced GUI-driven applications. Specific coverage criteria for infotainment HMIs, methods for automatic test generation, and verification of the system behavior are also focuses of the work. The paper introduces the ideas and goals of our model-based testing approach for infotainment HMIs.

  • A Hybrid Approach for Model-Based Random Testing

    Page(s): 10 - 15
    PDF (580 KB) | HTML

    Random testing is a valuable supplement to systematic test methods because it discovers defects that are very hard to detect with systematic test strategies. We propose a novel approach for random test generation that combines the benefits of model-based testing, constraint satisfaction, and pure random testing. The proposed method has been incorporated into the IDATG (Integrating Design and Automated Test case Generation) tool-set and validated in a number of case studies. The results indicate that, using the new approach, it is indeed possible to generate effective test cases in acceptable time.

  • Unsteady Ground: Certification to Unstable Criteria

    Page(s): 16 - 19
    PDF (410 KB) | HTML

    Cross Domain Systems for handling classified information complicate the certification test and evaluation problem, because along with multiple data owners comes duplicate responsibility for residual risk. Over-reliance on independent verification and validation by certifiers and accreditors representing different government agencies is interpreted as conflating the principle of defence-in-depth with the practice of repeated verification and validation testing. Using real-world examples of successful and unsuccessful certification test and evaluation efforts to guide the development of a new communication tool for accreditors, this research aims to reduce the time and cost wasted on unnecessary retesting of the same or similar security requirements during security test and evaluation in multi-level environments.

  • Dihomotopic Deadlock Detection via Progress Shell Decomposition

    Page(s): 20 - 25
    PDF (451 KB) | HTML

    The classical problem of deadlock detection for concurrent programs has traditionally been accomplished by symbolic methods or by search of a state transition system. This work examines an approach that uses geometric semantics involving the topological notion of dihomotopy to partition the state space into components, followed by search of a reduced state space. Prior work partitioned the state space inductively. In this work, a decomposition technique motivated by recursion, coupled with a search guided by the decomposition, is shown to effectively reduce the size of state transition systems. The reduced state space yields asymptotic improvement in overall runtime for verification. A prototype implementation of this method is introduced here, including a description of its theoretical foundation and its performance benchmarked against the SPIN model checker.

  • Analysis of Testability Metrics for Lustre/Scade Programs

    Page(s): 26 - 31
    PDF (522 KB) | HTML

    Testing is a validation process carried out to find errors in a system. Testability metrics aim at identifying the parts of a design or code that are difficult to test. In this article, we focus on two testability metrics defined for systems written in Lustre/Scade. An intuitive interpretation has been proposed for these metrics. The aim of the work described here is to check whether this intuitive interpretation can be consolidated with factual evidence.

  • Hybrid Approach for Protocol Testing of LTE System: A Practical Case Study

    Page(s): 32 - 36
    PDF (572 KB) | HTML

    The increasing usage of mobile data-intensive applications and their need for high data rates are setting challenges for today's wireless mobile communication systems. The Third Generation Partnership Project (3GPP) has addressed these challenges by introducing the IP-based Long-Term Evolution (LTE) project based on Orthogonal Frequency Division Multiple Access (OFDMA) technology, aiming to provide a wireless communication standard with targeted data rates over 100 Mbit/s. Development of an LTE-based communication system sets major challenges for the testing process. Testing particular protocol implementations of an incomplete system is challenging, as testing needs to be carried out both in simulation-based environments (when hardware is not yet available) and in target environments. There is often also a need to simulate the over- and underlying protocol layers, as these may not be available during early-phase testing. This paper describes a practical case study in which a hybrid approach of simulation- and target-based testing is applied to testing a particular protocol implementation. The described approach enables a convenient transition from early-phase simulation-based testing of a single protocol layer to target testing of the overall system, including the complete communication protocol stack.

  • Runtime Testability in Dynamic High-Availability Component-Based Systems

    Page(s): 37 - 42
    PDF (616 KB) | HTML

    Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high-availability software systems where traditional development-time integration testing cannot be performed. A prerequisite for runtime testing is knowledge of the extent to which the system can be tested safely while it is operational, i.e., the system's runtime testability. This article evaluates RTM, a cost-based metric for estimating runtime testability. It is used to assist system engineers in directing the implementation of remedial measures by providing an action plan that considers the trade-off between testability and cost. Two testability case studies are performed on two different component-based systems, assessing RTM's ability to identify runtime testability problems.

  • The SQALE Analysis Model: An Analysis Model Compliant with the Representation Condition for Assessing the Quality of Software Source Code

    Page(s): 43 - 48
    PDF (954 KB) | HTML

    This paper presents the analysis model of SQALE (Software Quality Assessment Based on Lifecycle Expectations), an assessment method for software source code. We explain what led us to develop consolidation rules based on remediation indices. We also describe how the analysis model can be implemented in practice.

  • Discretizing Technical Documentation for End-to-End Traceability Tests

    Page(s): 49 - 56
    PDF (588 KB) | HTML

    This paper describes a technique by which English-prose protocol standards were transcribed into individually testable assertions and traced from the original protocol specifications, to lists of requirements, to models and test cases, and finally into test logs and network captures. Annotating statements to make them stand alone, handling optional behavior, and using the requirements to guide the discovery of information missing from the standard are also described. The approach gives us both a fast feedback loop for debugging the protocol specification and a way to estimate the coverage of the generated tests with respect to the requirements. We discuss tool support developed specifically for the approach and give examples drawn from the application of this technique in testing hundreds of Microsoft's open protocol specifications.

  • Automated Verification of Shared Libraries for Backward Binary Compatibility

    Page(s): 57 - 62
    PDF (408 KB) | HTML

    This paper discusses the problem of ensuring backward binary compatibility when developing shared libraries, using Linux (and the GCC environment) as the main example. Breaking compatibility may result in crashes or incorrect behavior of applications built with an old version of a library when they run with a new one. The paper describes typical issues that cause binary compatibility problems and presents a new method for verifying libraries against such issues. Existing tools can detect only a small fraction of all possible backward compatibility problems, while the suggested method can verify a broad spectrum of them. The method is based on comparing function signatures and type definitions obtained from library header files, in addition to analyzing symbols in library binaries. The paper also describes an automated verification tool that implements the suggested method and presents some results of its practical usage.

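    A minimal, hypothetical sketch (not the paper's tool) of the symbol-level part of such a check: diffing the dynamic symbols exported by two versions of a shared library with GNU nm to flag removed symbols. The library paths are placeholders, and the signature and type comparison from header files that the abstract also mentions is not shown.

        # check_symbols.py - hypothetical sketch, not the paper's verification tool.
        # Flags dynamic symbols that an old library version exports but a new one drops.
        import subprocess
        import sys

        def exported_symbols(lib_path):
            """Return the set of dynamic symbols defined by a shared library (via 'nm -D')."""
            out = subprocess.run(["nm", "-D", "--defined-only", lib_path],
                                 capture_output=True, text=True, check=True).stdout
            return {line.split()[-1] for line in out.splitlines() if line.strip()}

        def removed_symbols(old_lib, new_lib):
            """Symbols present in old_lib but missing from new_lib (a binary-compatibility risk)."""
            return sorted(exported_symbols(old_lib) - exported_symbols(new_lib))

        if __name__ == "__main__":
            # Usage (paths are placeholders): python check_symbols.py libfoo.so.1.0 libfoo.so.1.1
            missing = removed_symbols(sys.argv[1], sys.argv[2])
            for name in missing:
                print("removed symbol:", name)
            sys.exit(1 if missing else 0)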
  • Investigation of OSCI TLM-2.0 Employment in Grid Computing Simulation

    Page(s): 63 - 68
    PDF (399 KB) | HTML

    The burgeoning complexity of computer systems increases the role of middleware technologies in current research. A middleware layer is, in fact, the convergence point of progressing hardware and software engineering. In this paper, OSCI TLM-2.0, a modeling standard for ESL design, is proposed for modeling grid computing, one of the most important middleware technologies. Our study shows that taking advantage of the OSCI TLM-2.0 standard in grid computing simulation not only reduces simulation time but also provides an extensible modeling environment.

  • Variability Management in Embedded Product Line Analysis

    Page(s): 69 - 74
    PDF (553 KB) | HTML

    Embedded software is growing in complexity while having to deal with quality, cost, and time-to-market constraints, among others. It must be validated intensively, as it can be critical and even human lives may depend on it. Development paradigms such as Model Driven Development and Software Product Lines can be an adequate alternative to traditional software development and validation methods, facilitating software validation based on models, such as model analysis. But for proper validation, all variability issues that take part in the analysis must be properly managed. To address this, a study of variability management in analysis has been carried out.

  • Interacting Entities Modelling Methodology for Robust Systems Design

    Page(s): 75 - 80
    PDF (583 KB) | HTML

    This paper describes the theoretical principles and the practical implementation of OpenCookbook, an environment for systems engineering. The environment guides and supports developers from requirements and specification capture, through architectural modelling and work plan development, to validation and final release. It features a coherent and unified system engineering methodology based on the interacting entities paradigm, implemented as a generic web portal. Although it targets embedded systems, it has proven to be an effective tool for a wide range of other system domains. OpenCookbook can be tailored to the needs of a specific organisation and can accommodate engineering standards such as IEC 61508.

  • Defective Behaviour of an 8T SRAM Cell with Open Defects

    Page(s): 81 - 86
    PDF (680 KB) | HTML

    The defective behaviour of an 8T SRAM cell with open defects is analyzed. Full and resistive open defects have been considered in the electrical characterization of the defective cell. Owing to the similarity between the classical 6T SRAM cell and the 8T cell, only defects affecting the read-port transistors have been considered. It is shown how an open defect in a defective cell may influence the correct operation of a victim cell sharing the same read circuitry. It is also shown that the sequence of bits written to the defective cell prior to a read action can mask the presence of the defect. Different orders of critical resistance have been found depending on the location of the open defect. A 45 nm technology has been used for the illustrative example presented in the work.

  • Using Hardware Performance Counters for Fault Localization

    Page(s): 87 - 92
    PDF (304 KB) | HTML

    In this work, we leverage data collected from hardware performance counters as an abstraction mechanism for program executions and use these abstractions to identify likely causes of failures. Our approach can be summarized as follows: hardware-counter data is collected from both successful and failed executions; the data collected from the successful executions is used to create normal behavior models of the programs; and deviations from these models observed in failed executions are scored and reported as likely causes of failures. The results of our experiments, conducted on three open source projects, suggest that the proposed approach can effectively prioritize the space of likely causes of failures, which can in turn improve the turnaround time for defect fixes.

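    A minimal, hypothetical sketch of the general idea the abstract describes (not the authors' implementation): build a per-counter model of normal behavior from passing runs and rank the counters of a failing run by how far they deviate from that model. The counter names and values below are made up.

        # rank_deviations.py - hypothetical sketch of deviation scoring over counter data.
        from statistics import mean, stdev

        def build_model(passing_runs):
            """passing_runs: list of dicts mapping counter name -> value from successful executions."""
            model = {}
            for counter in passing_runs[0]:
                values = [run[counter] for run in passing_runs]
                model[counter] = (mean(values), stdev(values) or 1.0)  # guard against zero spread
            return model

        def score_failure(model, failing_run):
            """Rank the counters of one failed execution by deviation from the passing-run model."""
            scores = {c: abs(failing_run[c] - mu) / sigma for c, (mu, sigma) in model.items()}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Made-up example: the failing run shows an unusual number of branch misses.
        passing = [{"instructions": 1.00e9, "branch_misses": 2.0e6},
                   {"instructions": 1.02e9, "branch_misses": 2.1e6}]
        failing = {"instructions": 1.01e9, "branch_misses": 9.5e6}
        for counter, z in score_failure(build_model(passing), failing):
            print(counter, round(z, 1))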
  • An Approach for Classifying Program Failures

    Page(s): 93 - 98
    PDF (403 KB) | HTML

    In this work, we leverage data collected from hardware performance counters to automatically group program failures that stem from closely related causes into clusters, which can in turn help developers prioritize failures as well as diagnose their causes. Hardware counters have been used for performance analysis of software systems in the past; by contrast, in this paper they are used as abstraction mechanisms for program executions. The results of our feasibility studies, conducted on two widely used applications, suggest that data collected from hardware counters can be used to reliably classify failures.
