The 18th IEEE International Symposium on Software Reliability (ISSRE '07)

Date: 5-9 Nov. 2007


Displaying Results 1 - 25 of 36
  • The 18th IEEE International Symposium on Software Reliability - Cover

    Publication Year: 2007, Page(s): c1
  • The 18th IEEE International Symposium on Software Reliability - Title page

    Publication Year: 2007, Page(s): i - iii
  • The 18th IEEE International Symposium on Software Reliability - Copyright

    Publication Year: 2007, Page(s): iv
  • The 18th IEEE International Symposium on Software Reliability - TOC

    Publication Year: 2007, Page(s): v - vii
  • Greetings from the General and Program Chairs

    Publication Year: 2007, Page(s): viii
  • Organizing Committee

    Publication Year: 2007, Page(s): ix - x
  • Program Committee

    Publication Year: 2007, Page(s): xi - xii
  • Reviewers

    Publication Year: 2007, Page(s): xiii - xiv
  • Secondary reviewers

    Publication Year: 2007, Page(s): xv - xvi
  • Sensitivity of Website Reliability to Usage Profile Changes

    Publication Year: 2007, Page(s): 3 - 8
    Cited by: Papers (1)

    To measure the reliability of a website from a user's point of view, the uncertainty in how the website is used has to be taken into account. In this paper we investigate the influence of this uncertainty on the reliability estimate for a web server. For this purpose, a session-based Markov model is used to model the usage extracted from the server's logfiles. From these logfiles a complete user profile can be extracted, together with an estimate of the uncertainty in this user profile. This paper investigates the applicability of this kind of Markov model to web server reliability and discusses the difficulties of extracting data from the logfiles. Advantages and disadvantages of the approach are discussed, and the approach is applied to data from a university department's web server to demonstrate its applicability.

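    A rough sketch of the session-based Markov idea above: estimate transition probabilities between pages from logged sessions, and derive a session reliability from per-page failure probabilities. All names, sessions, and probabilities below are hypothetical, not taken from the paper.

        # Hypothetical sketch, not the authors' implementation.
        from collections import defaultdict

        def transition_probs(sessions):
            """Estimate page-to-page transition probabilities from session logs."""
            counts = defaultdict(lambda: defaultdict(int))
            for session in sessions:
                pages = ["ENTRY"] + session + ["EXIT"]
                for src, dst in zip(pages, pages[1:]):
                    counts[src][dst] += 1
            return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                    for src, dsts in counts.items()}

        def session_reliability(session, fail_prob):
            """Probability a session completes without failure, assuming
            independent per-page failure probabilities."""
            r = 1.0
            for page in session:
                r *= 1.0 - fail_prob.get(page, 0.0)
            return r

        sessions = [["index", "search"], ["index", "staff", "search"]]
        print(transition_probs(sessions))
        print(session_reliability(["index", "search"], {"search": 0.01}))
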
  • Measuring Software Reliability in Practice: An Industry Case Study

    Publication Year: 2007, Page(s): 9 - 16

    Software reliability modeling techniques have been touted as a way of measuring and tracking the reliability of software systems. However, a number of issues make it difficult to use and apply these models in practice. In this paper we show some of the challenges and issues that we have encountered in applying these techniques to track and predict the reliability behavior of a networking software system at two different stages of its life cycle. Through our case study we show some of the practical solutions we have adopted to overcome these challenges. We also try to establish a relationship between reliability prediction in the testing phase and field software reliability measurement, in order to derive a systematic tracking approach.

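    The growth-model tracking described above can be sketched with a textbook model; the Goel-Okumoto fit below uses invented weekly failure counts and is not the model or the data from this case study.

        # Fit m(t) = a*(1 - exp(-b*t)) to cumulative failure counts (invented data).
        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)        # test weeks
        failures = np.array([4, 9, 12, 15, 16, 18, 19, 19], dtype=float)

        (a, b), _ = curve_fit(goel_okumoto, t, failures, p0=(20.0, 0.5))
        print(f"estimated total faults a={a:.1f}, detection rate b={b:.2f}")
        print(f"predicted residual faults: {a - goel_okumoto(t[-1], a, b):.1f}")
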
  • Software Reliability Modeling with Test Coverage: Experimentation and Measurement with a Fault-Tolerant Software Project

    Publication Year: 2007, Page(s): 17 - 26
    Cited by: Papers (7)

    As the key factor in software quality, software reliability quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation. Although testing time is an important factor in reliability, it is likely that the prediction accuracy of such models can be further improved by adding other parameters which affect the final software quality. Meanwhile, in software testing, test coverage has been regarded as an indicator for testing completeness and effectiveness in the literature. In this paper, we propose a novel method to integrate time and test coverage measurements together to predict the reliability. The key idea is that failure detection is not only related to the time that the software experiences under testing, but also to what fraction of the code has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into one single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault-tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.

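    One plausible, purely illustrative way to fold coverage into a single mean-value function, in the spirit of the abstract above, is m(t) = a(1 - exp(-b*c(t)*t)); the functional form and parameters below are our assumption, not the paper's actual model.

        import math

        def mean_failures(t, coverage, a=25.0, b=0.4):
            """Expected cumulative failures after t time units of testing that
            reached the given coverage fraction (illustrative form only)."""
            return a * (1.0 - math.exp(-b * coverage * t))

        # Same testing time, growing coverage -> more faults exposed.
        for c in (0.3, 0.6, 0.9):
            print(f"coverage={c:.1f}: m(10)={mean_failures(10, c):.1f}")
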
  • Coordinated Atomic Actions for Dependable Distributed Systems: the Current State in Concepts, Semantics and Verification Means

    Publication Year: 2007, Page(s): 29 - 38
    Cited by: Papers (2)

    Coordinated Atomic Actions (CAAs) were introduced about ten years ago as a conceptual framework for developing fault-tolerant concurrent systems. The work done since then has extended the CAA framework with capabilities to model, verify, and implement concurrent distributed systems following pre-defined development methodologies. As a result, CAAs, compared to other available approaches, offer a rich set of means for engineering dependable systems. Nevertheless, it is sometimes difficult to get a global and analytical view of all the features available, as the concept provides a number of features which need to be applied in combination. The main contribution of this paper is a complete state-of-the-art overview of the work done around CAAs from three perspectives: the definitions of the fundamental concepts, their various semantics, and the means supporting formal verification. This paper helps potential users of CAAs avoid misinterpretation when employing the available features. Finally, it should contribute to a better understanding of the likely directions in which the CAA framework may evolve in the near future.

  • Towards Self-Protecting Enterprise Applications

    Publication Year: 2007, Page(s): 39 - 48
    Cited by: Papers (5)

    Enterprise systems must guarantee high availability and reliability to provide 24/7 services without interruptions and failures. Mechanisms for handling exceptional cases and implementing fault tolerance techniques can reduce failure occurrences and increase dependability. Most such mechanisms address major problems that lead to unexpected service termination or crashes, but do not deal with the many subtle domain-dependent failures that produce incorrect results without terminating or crashing the service. In this paper, we propose a technique for developing self-protecting systems. The technique observes values at relevant program points. When it detects a software failure, it uses the collected information to identify the execution contexts that lead to the failure, and automatically enables mechanisms for preventing future occurrences of failures of the same type. Thus, failures of a given type do not occur again after the first one is detected.

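    A minimal toy of the self-protection loop described above, assuming a wrapper that records observed argument values as the execution context and blocks contexts already seen to fail; every name here is our invention.

        class SelfProtectingGuard:
            def __init__(self):
                self.blocked_contexts = set()

            def call(self, func, *args):
                context = args                  # values observed at the program point
                if context in self.blocked_contexts:
                    raise RuntimeError("context previously led to a failure; blocked")
                try:
                    return func(*args)
                except Exception:
                    self.blocked_contexts.add(context)   # prevent recurrence
                    raise

        guard = SelfProtectingGuard()
        def divide(a, b):
            return a / b

        try:
            guard.call(divide, 1, 0)    # first failure is detected and recorded
        except ZeroDivisionError:
            pass
        try:
            guard.call(divide, 1, 0)    # the same context is now rejected up front
        except RuntimeError as e:
            print(e)
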
  • Reliability Modeling of a 1-Out-Of-2 System: Research with Diverse Off-The-Shelf SQL Database Servers

    Publication Year: 2007, Page(s): 49 - 58
    Cited by: Papers (1)

    Fault tolerance via design diversity is often the only viable way of achieving sufficient dependability levels when using off-the-shelf components. We have previously reported on studies with bug reports of four open-source and commercial off-the-shelf database servers, and with later releases of two of them. The results were very promising for designers of fault-tolerant solutions that wish to employ diverse servers: very few bugs caused failures in more than one server, and none caused failures in more than two. In this paper we detail two approaches we have studied for constructing reliability growth models for a 1-out-of-2 fault-tolerant server which utilize the bug reports. The models presented are of practical significance to system designers wishing to employ diversity with off-the-shelf components, since the bug reports are often the only direct dependability evidence available to them.

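    A back-of-the-envelope sketch of why the 1-out-of-2 pair helps: under the strong simplifying assumptions below, only bugs affecting both servers can fail the pair. The counts and the per-demand trigger probability are invented, not figures from the paper.

        bugs_a_only = 50     # bugs failing server A alone (invented)
        bugs_b_only = 40     # bugs failing server B alone (invented)
        bugs_shared = 2      # bugs observed to affect both servers (invented)

        p_hit = 1e-5         # chance a given demand triggers any one bug
        p_a = (bugs_a_only + bugs_shared) * p_hit
        p_b = (bugs_b_only + bugs_shared) * p_hit
        p_pair = bugs_shared * p_hit    # pair fails only on bugs common to both

        print(f"server A alone: {p_a:.2e}, server B alone: {p_b:.2e}")
        print(f"1-out-of-2 pair: {p_pair:.2e}")
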
  • Corroborating User Assessments of Software Behavior to Facilitate Operational Testing

    Publication Year: 2007, Page(s): 61 - 70
    Cited by: Papers (2)

    Operational or "beta" testing of software has a number of benefits for software vendors and has become common industry practice. However, ordinary users are more likely to overlook or misreport software problems than experienced software testers are. To compensate for this shortcoming, we present a technique called corroboration-based filtering for corroborating user assessments of individual operational executions for which audit information has been captured for possible offline review. Independent assessments concerning similar executions are pooled by automatically clustering together executions with similar execution profiles. Executions are chosen for review based on their user assessments, the size of the cluster each execution belongs to, and whether the cluster has already been confirmed by developers to contain an actual failure. We explain the rationale for this technique, analyze it probabilistically, and present the results of empirically comparing it to alternative techniques. View full abstract»

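    A toy rendition of corroboration-based filtering, assuming executions can be grouped by coarsened execution profiles and reviewed in order of how many users flagged each group; the clustering scheme and data are ours, not the paper's.

        def cluster_by_profile(executions, digits=1):
            """Group executions whose rounded profiles match exactly."""
            clusters = {}
            for exec_id, profile, flagged in executions:
                key = tuple(round(x, digits) for x in profile)
                clusters.setdefault(key, []).append((exec_id, flagged))
            return clusters

        # (execution id, execution profile, did the user flag a problem?)
        executions = [
            ("e1", (0.90, 0.10), True), ("e2", (0.91, 0.12), True),
            ("e3", (0.20, 0.80), False), ("e4", (0.19, 0.81), True),
        ]
        for key, members in cluster_by_profile(executions).items():
            flags = sum(1 for _, f in members if f)
            print(f"cluster {key}: {len(members)} executions, {flags} flagged")
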
  • Requirement Error Abstraction and Classification: A Control Group Replicated Study

    Publication Year: 2007, Page(s): 71 - 80
    Cited by: Papers (1)

    This paper is the second in a series of empirical studies on requirement error abstraction and classification as a quality improvement approach. The requirement error abstraction and classification method supports developers in efficiently identifying the root cause of requirements faults. By uncovering the source of faults, developers can locate and remove additional related faults that may have been overlooked, thereby improving the quality and reliability of the resulting system. This study replicates an earlier study, adding a control group to address a major validity threat. The approach studied includes a process for abstracting errors from faults and provides a requirement error taxonomy for organizing those errors. A unique aspect of this work is the use of research on human cognition to improve the process. The results of the replication are presented and compared with the results from the original study. Overall, the results indicate that the error abstraction and classification approach improves the effectiveness and efficiency of inspectors. The requirement error taxonomy is viewed favorably and provides useful insights into the source of faults. In addition, human cognition research is shown to be an important factor affecting the performance of the inspectors. This study also provides additional evidence to motivate further research.

  • Prioritization of Regression Tests using Singular Value Decomposition with Empirical Change Records

    Publication Year: 2007, Page(s): 81 - 90
    Cited by: Papers (9)

    During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change in the code base can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases. We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50% of test runs and that the highest-priority suggested test found an additional fault 60% of the time.

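    The core computation is easy to sketch on an invented change history: build a files-by-changes incidence matrix, keep the dominant SVD components to expose co-change clusters, then score every file against a new modification vector. The matrix, rank, and scoring below are illustrative only.

        import numpy as np

        # files x changes incidence matrix (1 = file touched in that change)
        A = np.array([[1, 1, 0, 1],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [0, 0, 1, 0]], dtype=float)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 2                                   # dominant co-change patterns
        Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

        new_change = np.array([1, 0, 0, 0])     # a modification touching file 0
        scores = Ak @ Ak.T @ new_change         # affinity of each file to the change
        print(np.argsort(scores)[::-1])         # files whose tests to run first
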
  • Testing Security Policies: Going Beyond Functional Testing

    Publication Year: 2007, Page(s): 93 - 102
    Cited by: Papers (17)

    While important efforts are dedicated to functional system testing, very few works study how to specifically test the security mechanisms that implement a security policy. This paper introduces security policy testing as a specific testing target. We propose two strategies for producing security policy test cases, depending on whether they are built to complement existing functional test cases or independently of them. Indeed, any security policy is strongly connected to system functionality: testing functions exercises many security mechanisms. However, testing functionality is not intended to put security aspects to the test. We thus propose test selection criteria to produce tests from a security policy. To quantify the effectiveness of a set of test cases at detecting security policy flaws, we adapt mutation analysis and define security policy mutation operators. A library case study on a three-tier architecture is used to obtain experimental trends. The results confirm that security must become a specific target of testing to reach a satisfying level of confidence in security mechanisms.

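    A toy sketch of security-policy mutation analysis as motivated above: apply a mutation operator (here, flipping one rule's decision) and check whether any test detects the mutant. The rules, the operator, and the tests are illustrative inventions, not the paper's operators.

        def decide(policy, subject, action):
            for s, a, decision in policy:
                if s == subject and a == action:
                    return decision
            return "deny"                        # default-deny

        def flip_rule(policy, i):
            """Mutation operator: invert the decision of rule i."""
            mutated = list(policy)
            s, a, d = mutated[i]
            mutated[i] = (s, a, "deny" if d == "permit" else "permit")
            return mutated

        policy = [("student", "borrow", "permit"), ("visitor", "borrow", "deny")]
        tests = [(("student", "borrow"), "permit"), (("visitor", "borrow"), "deny")]

        for i in range(len(policy)):
            mutant = flip_rule(policy, i)
            killed = any(decide(mutant, *inp) != want for inp, want in tests)
            print(f"mutant {i}: {'killed' if killed else 'survived'}")
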
  • Integrated Software Vulnerability and Security Functionality Assessment

    Publication Year: 2007, Page(s): 103 - 108
    Cited by: Papers (1)

    Product security is an ongoing challenge for network equipment vendors. In this paper, we present a systematic methodology for software vulnerability assessment and security function verification. Based on this approach, a scalable and adaptable automatic test system was implemented and used to test over a hundred production software releases over the past year. This paper describes the methodology, the framework, and the results.

  • A Comparison between Internal and External Malicious Traffic

    Publication Year: 2007, Page(s): 109 - 114

    This paper empirically compares malicious traffic originating inside an organization (internal traffic) with malicious traffic originating outside an organization (external traffic). Two honeypot target computers were deployed to collect malicious traffic data over a period of fifteen weeks. In the first study, we showed that there was a weak correlation between internal and external traffic based on the number of malicious connections. In the second study, since the type of malicious activity is linked to the port that was targeted, we focused on the most frequently targeted ports and observed that internal malicious traffic often contained different malicious content than external traffic. In the third study, we discovered that the volume of malicious traffic was linked to the day of the week. We showed that internal and external malicious activities differ: while external malicious activity is quite stable over the week, internal traffic varies as a function of the users' activity profile.

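    The first comparison reduces to a correlation over the fifteen weekly counts; the sketch below shows the computation on invented counts, not the paper's data.

        import numpy as np

        internal = np.array([12, 15, 9, 30, 11, 14, 10, 13, 40, 12, 9, 11, 15, 10, 12])
        external = np.array([200, 210, 190, 205, 220, 198, 202, 207, 215, 199,
                             195, 203, 208, 201, 206])

        r = np.corrcoef(internal, external)[0, 1]
        print(f"Pearson correlation: {r:.2f}")   # weak correlation expected
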
  • Automated Oracle Comparators for Testing Web Applications

    Publication Year: 2007, Page(s): 117 - 126
    Cited by: Papers (5)

    Software developers need automated techniques to maintain the correctness of complex, evolving Web applications. While there has been success in automating some of the testing process for this domain, there exists little automated support for verifying that the executed test cases produce expected results. We assist in this tedious task by presenting a suite of automated oracle comparators for testing Web applications. To effectively identify failures, each comparator is specialized to particular characteristics of the possibly nondeterministic Web applications' output in the form of HTML responses. We also describe combinations of comparators designed to achieve both high precision and recall in failure detection and a tool for helping testers to analyze the output of multiple oracles in detail. We present results from an evaluation of the effectiveness and costs of the oracle comparators. We also provide recommendations to testers on applying effective oracle comparators based on their application's characteristics.

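    One comparator from such a suite might look like the sketch below: normalize the parts of an HTML response known to vary across runs (dates, session ids, whitespace) before comparing. The regexes and the notion of what varies are our assumptions for illustration.

        import re

        def normalize(html):
            html = re.sub(r"\d{4}-\d{2}-\d{2}", "<DATE>", html)        # dates
            html = re.sub(r"sessionid=\w+", "sessionid=<SID>", html)   # session ids
            return re.sub(r"\s+", " ", html).strip()                   # whitespace

        def oracle_equal(expected, actual):
            return normalize(expected) == normalize(actual)

        print(oracle_equal("<p>Hi, 2007-11-05 sessionid=abc</p>",
                           "<p>Hi, 2007-11-09  sessionid=xyz</p>"))    # True
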
  • On the Impact of Injection Triggers for OS Robustness Evaluation

    Publication Year: 2007, Page(s): 127 - 136
    Cited by: Papers (6)

    Traditionally, in fault injection-based robustness evaluation of software (specifically of operating systems), faults or errors are injected at specific code locations. This paper studies the sensitivity and accuracy of robustness evaluation results when the timing of injecting faults into the OS is varied. A strategy to guide the triggering of fault injection is proposed, based on the observation that the operational usage profile of a driver shows a high degree of regularity in the calls being made. The concept of call blocks (i.e., distinct sequences of calls made to the driver) can be used to guide injections into different system states, corresponding to the driver operations carried out. A real-world case study compares the effectiveness of the proposed strategy to traditional location-based approaches, demonstrating that significant and useful insights can be gained by modulating the injection instants.

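    The call-block trigger can be illustrated on a toy trace: locate occurrences of a recurring sequence of driver calls and pick injection instants relative to positions inside the block, rather than at a fixed code location. The trace, block, and injection position are invented.

        def find_block_starts(trace, block):
            """Indices where the call block occurs in the call trace."""
            n = len(block)
            return [i for i in range(len(trace) - n + 1) if trace[i:i + n] == block]

        trace = ["open", "ioctl", "read", "close", "open", "ioctl", "read", "close"]
        block = ["open", "ioctl", "read", "close"]

        for start in find_block_starts(trace, block):
            inject_at = start + 1      # e.g. inject while the driver is mid-block
            print(f"trigger injection before call #{inject_at} ({trace[inject_at]})")
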
  • Using Machine Learning to Support Debugging with Tarantula

    Publication Year: 2007, Page(s): 137 - 146
    Cited by: Papers (6)

    Using a specific machine learning technique, this paper proposes a way to identify suspicious statements during debugging. The technique is based on principles similar to Tarantula but addresses its main flaw: its difficulty in dealing with multiple faults, as it assumes that all failing test cases execute the same fault(s). The improvement we present results from using C4.5 decision trees to identify distinct failure conditions based on information about the test cases' inputs and outputs. Failing test cases executing under similar conditions are then assumed to fail due to the same fault(s). Statements are then considered suspicious if they are covered by a large proportion of failing test cases that execute under similar conditions. We report on a case study that demonstrates improvement over the original Tarantula technique in terms of statement ranking. Another contribution of this paper is to show that failure conditions as modeled by a C4.5 decision tree accurately predict failures and can therefore also be used to help debugging.

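    For reference, the Tarantula score the paper refines is suspiciousness(s) = (failed(s)/F) / (failed(s)/F + passed(s)/P), where F and P are the total numbers of failing and passing tests. The sketch below computes it on invented coverage counts and leaves out the C4.5 grouping of failing tests.

        def tarantula(passed_cov, failed_cov, total_passed, total_failed):
            scores = {}
            for s in set(passed_cov) | set(failed_cov):
                f = failed_cov.get(s, 0) / total_failed if total_failed else 0.0
                p = passed_cov.get(s, 0) / total_passed if total_passed else 0.0
                scores[s] = f / (f + p) if (f + p) > 0 else 0.0
            return sorted(scores.items(), key=lambda kv: -kv[1])

        passed_cov = {"s1": 10, "s2": 10, "s3": 2}  # stmt -> passing tests covering it
        failed_cov = {"s2": 1, "s3": 4}             # stmt -> failing tests covering it
        print(tarantula(passed_cov, failed_cov, total_passed=10, total_failed=4))
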
  • Statistical Inference of Computer Virus Propagation Using Non-Homogeneous Poisson Processes

    Publication Year: 2007, Page(s): 149 - 158
    Cited by: Papers (2)

    This paper presents statistical inference of computer virus propagation using non-homogeneous Poisson processes (NHPPs). Under some mathematical assumptions, the number of infected hosts can be modeled by an NHPP. In particular, this paper applies a framework of mixed-type NHPPs, defined as superpositions of NHPPs, to the statistical inference of periodic virus propagation. In numerical experiments, we examine a goodness-of-fit criterion for NHPPs fitted to real virus infection data, and discuss the effectiveness of the model-based prediction approach for computer virus propagation.

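    A mixed-type NHPP is again an NHPP whose mean value function is the sum of its components'; the sketch below superposes an exponential component with a weekly periodic one. Both functional forms and all parameters are illustrative, not the paper's fitted models.

        import math

        def mean_exponential(t, a, b):
            """Mean value function of an exponential NHPP."""
            return a * (1.0 - math.exp(-b * t))

        def mean_periodic(t, c, period=7.0):
            """Mean value function for intensity c*(1 + sin(2*pi*t/period))."""
            return c * (t + (period / (2 * math.pi))
                        * (1 - math.cos(2 * math.pi * t / period)))

        def mean_mixed(t):
            return mean_exponential(t, a=500.0, b=0.05) + mean_periodic(t, c=3.0)

        for day in (7, 14, 28):
            print(f"day {day}: expected cumulative infections ~ {mean_mixed(day):.0f}")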