
15th International Symposium on Software Reliability Engineering (ISSRE 2004)

Date: 2-5 Nov. 2004

Displaying Results 1 - 25 of 48
  • [Cover page]

    Publication Year: 2004 , Page(s): c1
    PDF (406 KB) | Freely Available from IEEE
  • 15th International Symposium on Software Reliability Engineering

    Publication Year: 2004
    PDF (460 KB) | Freely Available from IEEE
  • Table of contents

    Publication Year: 2004 , Page(s): v - viii
    PDF (54 KB) | Freely Available from IEEE
  • Preface

    Publication Year: 2004 , Page(s): ix
    PDF (100 KB) | Freely Available from IEEE
  • Welcome from the Program Committee Co-Chairs

    Publication Year: 2004 , Page(s): x
    PDF (97 KB) | Freely Available from IEEE
  • Organizing Committee

    Publication Year: 2004 , Page(s): xi
    PDF (97 KB) | Freely Available from IEEE
  • Program Committee

    Publication Year: 2004 , Page(s): xii
    PDF (107 KB) | Freely Available from IEEE
  • Additional reviewers

    Publication Year: 2004 , Page(s): xiii
    PDF (107 KB) | Freely Available from IEEE
  • Unit testing in practice

    Publication Year: 2004 , Page(s): 3 - 13
    Cited by:  Papers (3)
    PDF (160 KB) | HTML

    Unit testing is a technique that receives a lot of criticism in terms of the amount of time it is perceived to take and how much it costs to perform. However, it is also the most effective means to test individual software components for boundary-value behavior and to ensure that all code has been exercised adequately (e.g. statement, branch or MC/DC coverage). In this paper we examine the available data from three safety-related software projects undertaken by Pi Technology that have made use of unit testing. Additionally, we discuss the different issues that were found when applying the technique at different phases of development and using different methods to generate those tests. In particular, we argue that the perceived costs of unit testing may be exaggerated and that the likely benefits in terms of defect detection are actually quite high in relation to those costs.
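
The abstract above singles out boundary-value behavior as a key target of unit testing. As a minimal sketch of what such a test looks like, the following uses Python's unittest with a hypothetical `clamp` function; the function and its range are invented for illustration, not taken from the paper.

```python
import unittest

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampBoundaryTests(unittest.TestCase):
    """Boundary-value tests: probe just below, at, and just above each
    edge of the valid range, where off-by-one defects cluster."""

    def test_below_lower_bound(self):
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_at_lower_bound(self):
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_interior(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_at_upper_bound(self):
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_above_upper_bound(self):
        self.assertEqual(clamp(11, 0, 10), 10)
```

Run with `python -m unittest`; statement/branch coverage of such tests can then be measured with a coverage tool.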
  • Deriving test sets from partial proofs

    Publication Year: 2004 , Page(s): 14 - 24
    Cited by:  Papers (1)
    PDF (233 KB) | HTML

    Proof-guided testing is intended to enhance test design with information extracted from the argument for correctness. The target application field is the verification of fault-tolerance algorithms, where a complete formal proof is not available. Ideally, testing should be focused on the pending parts of the proof. The approach is experimentally assessed using the example of a group membership protocol (GMP), a complete proof of which has been developed by others in the PVS environment. In order to obtain a partial-proof example, we insert flaws into the PVS specification. Test selection criteria are then derived from the analysis of the reconstructed (now partial) proof. Their efficiency in revealing the flaw is experimentally assessed, yielding encouraging results.
  • A generic method for statistical testing

    Publication Year: 2004 , Page(s): 25 - 34
    Cited by:  Papers (7)
    PDF (224 KB) | HTML

    This paper addresses the problem of selecting finite test sets and automating this selection. Among the methods for doing so, some are deterministic and some are statistical. The kind of statistical testing we consider has been inspired by the work of Thevenod-Fosse and Waeselynck, where the choice of the distribution on the input domain is guided by the structure of the program or the form of its specification. In the present paper, we describe a new generic method for performing statistical testing according to any given graphical description of the behavior of the system under test. This method can be fully automated. Its main originality is that it exploits recent results and tools in combinatorics, specifically in the area of random generation of combinatorial structures. Uniform random generation routines are used for drawing paths from the set of execution paths or traces of the system under test. Then a constraint resolution step is performed, aiming to design a set of test data that activates the generated paths. This approach applies to a number of classical coverage criteria. Moreover, we show how linear programming techniques may help to improve the quality of the test, i.e. the probabilities for the elements to be covered by the test process. The paper presents the method in its generality. Then, in the last section, experimental results on applying it to structural statistical software testing are reported.
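
The method above draws execution paths uniformly at random using combinatorial random-generation tools. The following is a far simpler sketch of the same idea on a toy control-flow graph: it enumerates the finite path set exhaustively and samples uniformly from it, which only scales to tiny graphs. The graph and names are invented; the paper's counting-based generators avoid explicit enumeration.

```python
import random

# Toy control-flow graph as an adjacency list, from "entry" to "exit".
GRAPH = {
    "entry": ["a", "b"],
    "a": ["c"],
    "b": ["c", "exit"],
    "c": ["exit"],
    "exit": [],
}

def all_paths(graph, node="entry", path=None):
    """Enumerate every acyclic execution path from entry to exit."""
    path = (path or []) + [node]
    if node == "exit":
        return [path]
    paths = []
    for nxt in graph[node]:
        if nxt not in path:  # keep the toy example acyclic
            paths.extend(all_paths(graph, nxt, path))
    return paths

def sample_paths(graph, n, seed=0):
    """Draw n paths uniformly at random from the set of execution paths;
    each sampled path would then be passed to constraint resolution to
    find input data that activates it."""
    rng = random.Random(seed)
    paths = all_paths(graph)
    return [rng.choice(paths) for _ in range(n)]
```

Because each path is drawn with equal probability, the chance that a given branch is covered after n draws can be computed directly, which is what the criteria in the paper reason about.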
  • Statistical software testing with parallel modeling: a case study

    Publication Year: 2004 , Page(s): 35 - 44
    PDF (192 KB) | HTML

    Statistical software testing promises to offer a solution to the growing testing burden caused by the ever-increasing complexity of today's software systems. Nevertheless, the complexity of these systems makes it more difficult to provide a model to use as a basis for statistical testing. The flat and hierarchical modeling currently used to create operational profiles leads to enormous models when capturing the usage of these complex systems. This research describes a method to extend formal specification languages, intended for modeling complex systems, with statistical testing components. With this new procedure for defining operational profiles, we can generate statistical test cases from significantly larger models. This sets the stage for conducting future studies comparing statistical testing to other test strategies, such as structural testing.
  • Reliability growth in software products

    Publication Year: 2004 , Page(s): 47 - 53
    Cited by:  Papers (6)
    PDF (136 KB) | HTML

    Most software reliability growth models work under the assumption that the reliability of software grows because the bugs that cause failures are removed from the software. While correcting bugs will improve reliability, another phenomenon has often been observed: the failure rate of a software product, as observed by the user, improves with time irrespective of whether bugs are corrected or not. Consequently, the reliability of a product, as observed by users, varies depending on the length of time they have been using the product. One reason for this reliability growth is that as users gain experience with the product, they learn to use it correctly and find work-arounds for failure-causing situations. Another factor that affects this growth is that following product installation, the user discovers that other actions may be required to properly configure the new product, such as installing new drivers or upgrading other software to a compatible version. In this paper we present a simple model to represent this phenomenon: we assume that the failure rate for a product decays by a factor α per unit time. Applying this failure-rate decay model to data collected on reported failures and the number of units of the product sold, it is possible to determine the initial failure rate, the decay factor, and the steady-state failure rate of a product. The paper provides a number of examples where this model has been applied to data captured from released products.
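
The abstract describes a decay model with an initial rate, a decay factor α, and a steady-state rate. The sketch below is one hedged reading of that description, assuming an exponential-decay form; the paper's exact parameterisation may differ, and all parameter values are invented.

```python
import math

def failure_rate(t, lam0, lam_ss, alpha):
    """Assumed form: the observed failure rate starts at lam0 and decays
    toward the steady-state rate lam_ss by a factor alpha per unit time."""
    return lam_ss + (lam0 - lam_ss) * math.exp(-alpha * t)

def expected_reports(units_sold, lam0, lam_ss, alpha):
    """Expected failure reports per period: units sold in period s fail
    at their age-dependent rate in every later period."""
    horizon = len(units_sold)
    reports = [0.0] * horizon
    for s, units in enumerate(units_sold):
        for t in range(s, horizon):
            reports[t] += units * failure_rate(t - s, lam0, lam_ss, alpha)
    return reports
```

Given observed report counts and sales figures, estimating lam0, alpha and lam_ss then becomes a standard curve-fitting problem, which is roughly the exercise the paper carries out on released-product data.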
  • Reliability estimation for statistical usage testing using Markov chains

    Publication Year: 2004 , Page(s): 54 - 65
    Cited by:  Papers (3)
    PDF (976 KB) | HTML

    Software validation is an important activity for testing whether or not the correct software has been developed. Several testing techniques have been developed, and one of these is statistical usage testing (SUT). The main purpose of SUT is to test a software product from a user's point of view. Hence, usage models are designed and test cases are then developed from the models. Another advantage of SUT is that the reliability of the software can be estimated. In this paper, Markov chains are used to represent the usage models. Several approaches using Markov chains have been applied previously. This paper extends these approaches and presents a new approach to estimating reliability from Markov chains. The reliability estimation is implemented in a new tool for statistical usage testing called MaTeLo. The tool is developed in a joint European project involving six industrial partners and two university partners. The purpose of the tool is to provide an estimate of the reliability and to automatically produce test cases based on usage models described as Markov models.
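
To illustrate how test cases fall out of a Markov usage model, here is a minimal sketch: a random walk over weighted transitions yields one usage-representative test case per traversal. The states, probabilities and function names are invented, and MaTeLo's actual model format is certainly richer.

```python
import random

# Hypothetical usage model: each state maps to (next_state, probability)
# pairs; probabilities out of each state sum to 1.
USAGE_MODEL = {
    "start":    [("browse", 0.7), ("search", 0.3)],
    "browse":   [("search", 0.4), ("checkout", 0.3), ("end", 0.3)],
    "search":   [("browse", 0.5), ("end", 0.5)],
    "checkout": [("end", 1.0)],
}

def generate_test_case(model, seed=None):
    """Walk the Markov chain from 'start' to 'end'; the sequence of
    visited states forms one statistical test case."""
    rng = random.Random(seed)
    state, path = "start", ["start"]
    while state != "end":
        nexts, probs = zip(*model[state])
        state = rng.choices(nexts, weights=probs)[0]
        path.append(state)
    return path
```

Recording which generated walks pass or fail then feeds the reliability estimate, since frequently used paths are exercised proportionally often.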
  • Validation of a methodology for assessing software reliability

    Publication Year: 2004 , Page(s): 66 - 76
    Cited by:  Papers (5)
    PDF (216 KB) | HTML

    Software-based digital systems are progressively replacing analog systems in safety-critical applications. However, the ability to predict their reliability is not well understood and needs further study. A first step towards a systematic resolution of this issue was presented in a recent software engineering measure study, in which a set of software engineering measures was ranked with respect to ability to predict software reliability through an expert-opinion elicitation process. That study also proposed the concept of a reliability prediction system (RePS) to bridge the gap between software engineering measures and software reliability. The research presented in this paper validates the rankings obtained and the concept of RePS proposed in the previous study.
  • Performability modeling of mobile software systems

    Publication Year: 2004 , Page(s): 77 - 88
    Cited by:  Papers (3)
    PDF (568 KB) | HTML

    An increasing number of applications operate in heterogeneous computing environments, often with mobile components. Methodologies are needed that help developers assess the ability of such applications to meet their performance requirements throughout the software life-cycle. In particular, early in the design phases, analysis techniques are critical for assessing the future system's behavior and for evaluating and comparing design alternatives. A performability evaluation is the most appropriate means to assess the system's expected ability to perform, including the effects of component failures and repairs. This paper focuses on model-based analysis of the performability of mobile software systems. We propose a general methodology that starts from design artifacts expressed in a UML-based notation. The inferred performability models are based on the stochastic activity networks notation. The viability of the proposed approach is demonstrated through its application in a case study.
  • Are found defects an indicator of software correctness? An investigation in a controlled case study

    Publication Year: 2004 , Page(s): 91 - 100
    PDF (208 KB) | HTML

    In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects, not the found ones, that impact negatively on software correctness. In order to investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study was launched. 57 sets of 10 different programs from the PSP course are assessed using acceptance test suites for each program. In the analysis, the number of defects found during the acceptance test is compared to the number of defects found during development, code size, share of development time spent on testing, etc. It is concluded from a correlation analysis that 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as follows: 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a "hacker" approach to software development, and 3) more research is needed to elaborate this issue.
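
The correlation analysis mentioned above can be reproduced in miniature. Below is a self-contained Pearson correlation on invented toy data; a real study would also report rank correlations and significance tests, which this sketch omits.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: defects found during development vs. defects
# remaining at acceptance test, for five hypothetical programs.
found = [3, 7, 2, 9, 5]
remaining = [4, 1, 6, 2, 3]
```

A coefficient near zero between `found` and `remaining` would mirror the paper's finding 3): found defects alone are a weak indicator of correctness.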
  • An exploration of software faults and failure behaviour in a large population of programs

    Publication Year: 2004 , Page(s): 101 - 112
    Cited by:  Papers (7)
    PDF (472 KB) | HTML

    A large part of software engineering research suffers from a major problem: there are insufficient data to test software hypotheses or to estimate parameters in models. To obtain statistically significant results, large sets of programs are needed, each set comprising many programs built to the same specification. We have gained access to such a large body of programs (written in C, C++, Java or Pascal), and in this paper we present the results of an exploratory analysis of around 29,000 C programs written to a common specification. The objectives of this study were to characterise the types of fault present in these programs; to characterise how programs are debugged during development; and to assess the effectiveness of diverse programming. The findings are discussed, together with the potential limitations on the realism of the findings.
  • Empirical studies of test case prioritization in a JUnit testing environment

    Publication Year: 2004 , Page(s): 113 - 124
    Cited by:  Patents (1)
    PDF (232 KB) | HTML

    Test case prioritization provides a way to run the test cases with the highest priority earliest. Numerous empirical studies have shown that prioritization can improve a test suite's rate of fault detection, but the extent to which these results generalize is an open question, because the studies have all focused on a single procedural language, C, and a few specific types of test suites. In particular, Java and the JUnit testing framework are being used extensively in practice, yet the effectiveness of prioritization techniques on Java systems tested under JUnit has not been investigated. We have therefore designed and performed a controlled experiment examining whether test case prioritization can be effective on Java programs tested under JUnit, and comparing the results to those achieved in earlier studies. Our analyses show that test case prioritization can significantly improve the rate of fault detection of JUnit test suites, but also reveal differences with respect to previous studies that can be related to the language and testing paradigm.
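
The "rate of fault detection" such studies measure is commonly summarised by the APFD metric from the prioritization literature; the abstract does not name the exact metric, so treat the choice as an assumption. A sketch of its computation:

```python
def apfd(ordering, faults_detected):
    """Average Percentage of Faults Detected for a prioritised test
    ordering: APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n).
    faults_detected maps each test name to the set of faults it reveals."""
    n = len(ordering)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    # 1-based position of the first test that reveals each fault.
    first_pos = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)
```

Orderings that reveal many faults early score closer to 1, so comparing APFD across orderings quantifies the benefit of a prioritization technique.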
  • An empirical study on reliability modeling for diverse software systems

    Publication Year: 2004 , Page(s): 125 - 136
    Cited by:  Papers (3)
    PDF (288 KB) | HTML

    Reliability and fault correlation are two main concerns for design diversity, yet the empirical data available for investigating them are limited. In previous work, we conducted a software project with a real-world application to investigate software testing and fault tolerance for design diversity. Mutants were generated by injecting a single real fault, recorded during the software development phase, into the final versions. In this paper, we perform further analysis and experiments on these mutants to evaluate and investigate the reliability features of diverse software systems. We apply our project data to two different reliability models and estimate reliability bounds for evaluation purposes. We also parameterize fault correlations to predict the reliability of various combinations of versions, and compare three different fault-tolerant software architectures.
  • Boundary coverage criteria for test generation from formal models

    Publication Year: 2004 , Page(s): 139 - 150
    Cited by:  Papers (6)  |  Patents (1)
    PDF (408 KB) | HTML

    This paper proposes a new family of model-based coverage criteria, based on formalizing boundary-value testing heuristics. The new criteria form a hierarchy of data-oriented coverage criteria, and can be applied to any formal notation that uses variables and values. They can be used either to measure the coverage of an existing test set, or to generate tests from a formal model. We give algorithms that can be used to generate tests that satisfy the criteria. These algorithms and criteria have been incorporated into the BZ-TESTING-TOOLS (BZ-TT) tool-set for automated test case generation from B, Z and UML/OCL specifications, and have been used and validated on several industrial applications in the domain of critical software, particularly smart cards and transport systems.
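
As a toy illustration of data-oriented boundary criteria: for a variable constrained to a range, the classic heuristic picks each bound, its neighbours inside and outside the range, and a nominal interior point. The variable ranges and helper names below are invented; BZ-TT derives boundaries from B/Z/OCL predicates, not bare ranges.

```python
def boundary_values(low, high):
    """Boundary-value heuristic for an integer variable in [low, high]:
    each bound, its inside/outside neighbours, and an interior point."""
    interior = (low + high) // 2
    return sorted({low - 1, low, low + 1, interior, high - 1, high, high + 1})

def boundary_test_set(variables):
    """variables maps names to (low, high) ranges; returns one list of
    boundary test values per variable."""
    return {name: boundary_values(lo, hi) for name, (lo, hi) in variables.items()}
```

Measuring how many of these values an existing test set already hits gives the coverage-measurement use of the criteria; emitting the missing ones gives the generation use.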
  • Model-based test driven development of the Tefkat model-transformation engine

    Publication Year: 2004 , Page(s): 151 - 160
    Cited by:  Papers (2)
    PDF (160 KB) | HTML

    Tefkat is an implementation of a rule- and pattern-based engine for the transformation of models defined using the Object Management Group's (OMG) Model-Driven Architecture (MDA). The process for the development of the engine included the concurrent development of a unit test suite for the engine. The test suite is constructed as a number of models, whose elements comprise the test cases, and which are passed to a test harness for processing. The paper discusses the difficulties and opportunities encountered in the process, and draws implications for the broader problem of testing in a model-driven environment, and of using models for testing.
  • Test-adequacy and statistical testing: combining different properties of a test-set

    Publication Year: 2004 , Page(s): 161 - 172
    PDF (240 KB) | HTML

    Dependability assessment of safety-critical or safety-related software components is an important issue within, for example, the nuclear industry, the avionics sector and the military. Statistical testing is one way of quantifying the dependability of a given software product. The use of sector-specific standards with their suggested test criteria is another (nonquantitative) way of aiming to employ only components that are "dependable enough". Ideally, both the acknowledged test criteria and statistical test methods should come into play when assessing software dependability. In the long term, we want to move towards this aim. Thus, in this paper we investigate a model that combines the fault-detection power of a given test-set (a test-adequacy criterion) with the statistical power of the test-set, i.e. the number of statistical tests within the test-set. With this model we aim to draw out of any given test-set, whether devised by a plant engineer or a statistician, the overall contribution it can make to dependability assessment.
  • Plannable test selection criteria for FSMs extracted from operational specifications

    Publication Year: 2004 , Page(s): 173 - 184
    Cited by:  Papers (2)
    PDF (216 KB) | HTML

    Model-based test generation (MBTG) is becoming an area of active research. Several MBTG approaches extract a finite state machine (FSM) from a given model, and use structural (mostly transition) coverage of the extracted FSM as a test selection criterion. In this paper, we demonstrate the inadequacy of structural coverage criteria, and propose a set of test selection criteria for extracted FSMs. Our models are described in terms of operations provided by the system under test (SUT). Each operation is specified as a set of possible results, each with a guard condition and a set of update actions on its parameters and the system state. The proposed test selection criteria are based on (1) mutations of guard conditions and update actions, (2) the concept of a session, which targets errors where the SUT does not commit the updated system state to persistent storage, and (3) 2-way coverage of the independent operations available in a given FSM state. We describe an AI-planning-based algorithm for finding a sequence of operation invocations that satisfies our proposed test selection criteria. We illustrate our test selection criteria, and report the results of a case study comparing the fault-detection capability of our proposed test selection criteria with that of structural criteria.
  • Bypass testing of Web applications

    Publication Year: 2004 , Page(s): 187 - 197
    Cited by:  Papers (12)  |  Patents (2)
    PDF (256 KB) | HTML

    Web software applications are increasingly being deployed in sensitive situations. Web applications are used to transmit, accept and store data that is personal, company-confidential and sensitive. Input validation testing (IVT) checks user inputs to ensure that they conform to the program's requirements, which is particularly important for software that relies on user inputs, including Web applications. A common technique in Web applications is to perform input validation on the client with scripting languages such as JavaScript. An insidious problem with client-side input validation is that end users can bypass this validation. Bypassing validation can cause failures in the software, and can also break the security of Web applications, leading to unauthorized access to data, system failures, invalid purchases and entry of bogus data. We are developing a strategy called bypass testing to create client-side tests for Web applications that intentionally violate explicit and implicit checks on user inputs. This paper describes the strategy, defines specific rules and adequacy criteria for tests, describes a proof-of-concept automated tool, and presents initial empirical results from applying bypass testing.
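
A minimal sketch of the bypass-testing idea: the rule below mirrors a hypothetical client-side JavaScript check, and the test submits rule-violating inputs straight to a server-side handler, as if that check had been stripped. The names, rules and handler are invented, not taken from the paper's tool.

```python
import re

# Client-side rule (normally enforced in JavaScript): quantity is 1-99.
CLIENT_RULES = {"quantity": re.compile(r"^[1-9][0-9]?$")}

def handle_order(form):
    """Hypothetical server-side handler. A robust server re-validates
    every input instead of trusting the client-side checks."""
    qty = form.get("quantity", "")
    if not CLIENT_RULES["quantity"].match(qty):
        return {"status": 400, "error": "invalid quantity"}
    return {"status": 200, "quantity": int(qty)}

def bypass_tests(handler):
    """Submit inputs that violate the client-side rules directly to the
    server, bypassing the JavaScript validation entirely."""
    violations = [{"quantity": "0"}, {"quantity": "-5"},
                  {"quantity": "1; DROP TABLE orders"}, {}]
    return [(form, handler(form)["status"]) for form in violations]
```

A handler that returns 200 for any of these violating inputs fails the bypass test: it was trusting validation that an attacker can simply remove.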