
IEEE Transactions on Software Engineering

Issue 5 • May 2007

Displaying Results 1 - 11 of 11
  • [Front cover]

    Publication Year: 2007 , Page(s): c1
    PDF (94 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2007 , Page(s): c2
    PDF (79 KB)
    Freely Available from IEEE
  • A Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems

    Publication Year: 2007 , Page(s): 273 - 286
    Cited by:  Papers (25)
    PDF (3210 KB) | HTML

    To contribute to the body of empirical research on fault distributions during development of complex software systems, a replication of a study of Fenton and Ohlsson is conducted. The hypotheses from the original study are investigated using data taken from an environment that differs in terms of system size, project duration, and programming language. We have investigated four sets of hypotheses on data from three successive telecommunications projects: 1) the Pareto principle, that is, a small number of modules contain a majority of the faults (in the replication, the Pareto principle is confirmed), 2) fault persistence between test phases (a high fault incidence in function testing is shown to imply the same in system testing, as well as prerelease versus postrelease fault incidence), 3) the relation between number of faults and lines of code (the size relation from the original study could be neither confirmed nor disproved in the replication), and 4) fault density similarities across test phases and projects (in the replication study, fault densities are confirmed to be similar across projects). Through this replication study, we have contributed to what is known on fault distributions, which seem to be stable across environments.

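    A note on the first hypothesis above: checking a Pareto-style fault distribution is mechanical once per-module fault counts are available. The following is a minimal Python sketch with invented fault counts standing in for real project data; it is not code from the study.

      def pareto_share(fault_counts, module_fraction=0.2):
          """Fraction of all faults found in the top `module_fraction` of modules,
          with modules ranked by descending fault count."""
          ranked = sorted(fault_counts, reverse=True)
          top_n = max(1, round(len(ranked) * module_fraction))
          total = sum(ranked)
          return sum(ranked[:top_n]) / total if total else 0.0

      # Invented per-module fault counts, for illustration only.
      faults = [42, 31, 18, 9, 7, 5, 3, 2, 1, 1, 0, 0, 0, 0, 0]
      print(f"Top 20% of modules hold {pareto_share(faults):.0%} of the faults")
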
  • Techniques for Classifying Executions of Deployed Software to Support Software Engineering Tasks

    Publication Year: 2007 , Page(s): 287 - 304
    Cited by:  Papers (15)
    PDF (1349 KB) | HTML

    There is an increasing interest in techniques that support analysis and measurement of fielded software systems. These techniques typically deploy numerous instrumented instances of a software system, collect execution data when the instances run in the field, and analyze the remotely collected data to better understand the system's in-the-field behavior. One common need for these techniques is the ability to distinguish execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which condition a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). To address the limitations of existing approaches, we have developed three techniques for automatically classifying execution data as belonging to one of several classes. In this paper, we introduce our techniques and apply them to the binary classification of passing and failing behaviors. Our three techniques impose different overheads on program instances and, thus, each is appropriate for different application scenarios. We performed several empirical studies to evaluate and refine our techniques and to investigate the trade-offs among them. Our results show that 1) the first technique can build very accurate models, but requires a complete set of execution data; 2) the second technique produces slightly less accurate models, but needs only a small fraction of the total execution data; and 3) the third technique allows for even further cost reductions by building the models incrementally, but requires some sequential ordering of the software instances' instrumentation.

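    The classification step described in the abstract above can be illustrated with a generic pass/fail classifier trained on per-execution profiles (for example, counts of instrumented events). The sketch below uses scikit-learn's logistic regression as a stand-in and invented feature vectors; it is not one of the paper's three techniques.

      from sklearn.linear_model import LogisticRegression

      # One row per remote execution: counts of instrumented events
      # (branches taken, calls made, ...). Labels: 0 = passing, 1 = failing.
      # All values are invented for illustration.
      profiles = [
          [12, 0, 3, 1], [11, 0, 4, 0], [10, 1, 2, 1],   # passing runs
          [2, 7, 0, 5], [3, 6, 1, 6], [1, 8, 0, 4],      # failing runs
      ]
      labels = [0, 0, 0, 1, 1, 1]

      model = LogisticRegression().fit(profiles, labels)

      # Classify a newly collected, unlabeled execution profile.
      new_profile = [[2, 6, 1, 5]]
      print("failing" if model.predict(new_profile)[0] == 1 else "passing")
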
  • Determining the Proper Number and Price of Software Licenses

    Publication Year: 2007 , Page(s): 305 - 315
    Cited by:  Papers (1)  |  Patents (1)
    PDF (1364 KB) | HTML

    Software houses sell their products by transferring usage licenses of various software components to the customers. Depending on the kind of software, there are several different license types that allow controlled access to services. The two most popular types are the fixed license, which gives access rights for an identified workstation, and the floating license, which restricts the number of simultaneous users to a certain bound. The latter of these types is advantageous when the users do not demand full-time services and occasional lack of access is bearable. The problem of deciding the number of floating licenses is studied in the present paper. Based on the expected usage profile of the software, we calculate the minimal number of licenses that guarantees that the customers get service better than a given lower bound. The problem is studied by using certain queuing models, known as the Erlang loss system, the Erlang delay system, and the Engset model. None of these analytic models consider, however, the transient period that we analyze by means of simulation and by the so-called modified offered load approximation. We also give simple formulas presenting how the number of software licenses needed to keep the probability of nonaccess below a given blocking level grows as a function of the offered load, which is the proportion of the time used in the case that all requests were successful. Results of the study may be used for setting license prices and for determining the proper number of licenses.

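    In the loss-system setting sketched in the abstract above, license sizing reduces to finding the smallest number of floating licenses for which the Erlang-B blocking probability stays below a target. A minimal sketch follows; the offered load and blocking target are arbitrary examples, not values from the paper.

      def erlang_b(load, servers):
          """Erlang-B blocking probability for `servers` licenses and an offered
          load of `load` erlangs, computed with the standard stable recurrence."""
          b = 1.0
          for k in range(1, servers + 1):
              b = load * b / (k + load * b)
          return b

      def min_licenses(load, max_blocking):
          """Smallest license count keeping the blocking probability <= max_blocking."""
          n = 1
          while erlang_b(load, n) > max_blocking:
              n += 1
          return n

      # Example: 30 erlangs of offered demand, at most 1% of requests blocked.
      print(min_licenses(load=30.0, max_blocking=0.01))   # 42 for these numbers

    The recurrence avoids the large factorials in the closed-form Erlang-B expression, so it remains numerically stable even for hundreds of licenses.
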
  • Cross versus Within-Company Cost Estimation Studies: A Systematic Review

    Publication Year: 2007 , Page(s): 316 - 329
    Cited by:  Papers (28)
    PDF (6697 KB)

    The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed, making it impossible to undertake formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., fewer than 20 projects) that used leave-one-out cross validation all found that the within-company model was significantly different (better) from the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different databases or at least different single-company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.

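    Leave-one-out cross validation, the procedure highlighted in the review above, is simple to state in code. The sketch below evaluates an illustrative log-log effort model by MMRE; the project data, model form, and helper names are invented and do not come from any of the reviewed studies.

      import math

      def fit_loglog(projects):
          """Least-squares fit of log(effort) = a + b*log(size); returns (a, b)."""
          xs = [math.log(size) for size, _ in projects]
          ys = [math.log(effort) for _, effort in projects]
          n = len(projects)
          mx, my = sum(xs) / n, sum(ys) / n
          b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
              sum((x - mx) ** 2 for x in xs)
          return my - b * mx, b

      def predict(model, size):
          a, b = model
          return math.exp(a + b * math.log(size))

      def loocv_mmre(projects):
          """Mean magnitude of relative error under leave-one-out cross validation."""
          errors = []
          for i, (size, effort) in enumerate(projects):
              model = fit_loglog(projects[:i] + projects[i + 1:])
              errors.append(abs(predict(model, size) - effort) / effort)
          return sum(errors) / len(errors)

      # Invented within-company projects: (size in KLOC, effort in person-months).
      within = [(10, 24), (25, 60), (40, 95), (12, 30), (60, 150), (8, 20)]
      print(f"within-company LOOCV MMRE: {loocv_mmre(within):.2f}")

    The same loop, fitted instead on another organization's data and evaluated on the held-out project, gives the cross-company comparison point.
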
  • Detecting Arbitrary Stable Properties Using Efficient Snapshots

    Publication Year: 2007 , Page(s): 330 - 346
    Cited by:  Papers (8)
    PDF (5669 KB) | HTML

    A stable property continues to hold in an execution once it becomes true. Detecting arbitrary stable properties efficiently in distributed executions is still an open problem. The known algorithms for detecting arbitrary stable properties and snapshot algorithms used to detect such stable properties suffer from drawbacks such as the following: they incur the overhead of a large number of messages per global snapshot, or alter application message headers, or use inhibition, or use the execution history, or assume a strong property such as causal delivery of messages in the system. We solve the problem of detecting an arbitrary stable property efficiently under the following assumptions: P1) the application messages should not be modified, not even by timestamps or message coloring; P2) no inhibition is allowed; P3) the algorithm should not use the message history; and P4) any process can initiate the algorithm. This paper proposes a family of nonintrusive algorithms requiring 6(n - 1) control messages, where n is the number of processes. A three-phase strategy of uncoordinated observation of local states is used to give a consistent snapshot from which any stable property can be detected. A key feature of our algorithms is that they do not rely on the processes continually and pessimistically reporting their activity. Only the relevant activity that occurs in the thin slice during the algorithm execution needs to be examined.

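    Whatever snapshot algorithm is used, detection itself amounts to evaluating a stable predicate over one consistent global state. The sketch below uses distributed termination (every process passive, every channel empty) as an example stable property; the data structures are invented and this is not the paper's three-phase algorithm.

      from dataclasses import dataclass, field

      @dataclass
      class Snapshot:
          """One consistent global state: per-process local states plus the
          messages recorded as in transit on each channel."""
          passive: dict                                   # process id -> is the process passive?
          in_transit: dict = field(default_factory=dict)  # (src, dst) -> list of messages

      def terminated(snap: Snapshot) -> bool:
          """Termination is stable: once every process is passive and every channel
          is empty, no later event can make the predicate false again."""
          return all(snap.passive.values()) and \
                 all(len(msgs) == 0 for msgs in snap.in_transit.values())

      # Invented snapshot: process 2 is still active and one message is in flight.
      snap = Snapshot(passive={0: True, 1: True, 2: False},
                      in_transit={(0, 2): ["work"], (1, 0): []})
      print(terminated(snap))   # False; detection would rerun on a later snapshot
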
  • The Use of Multilegged Arguments to Increase Confidence in Safety Claims for Software-Based Systems: A Study Based on a BBN Analysis of an Idealized Example

    Publication Year: 2007 , Page(s): 347 - 365
    Cited by:  Papers (18)
    PDF (1430 KB) | HTML

    The work described here concerns the use of so-called multilegged arguments to support dependability claims about software-based systems. The informal justification for the use of multilegged arguments is similar to that used to support the use of multiversion software in pursuit of high reliability or safety. Just as a diverse 1-out-of-2 system might be expected to be more reliable than each of its two component versions, so might a two-legged argument be expected to give greater confidence in the correctness of a dependability claim (for example, a safety claim) than would either of the argument legs alone. Our intention here is to treat these argument structures formally, in particular, by presenting a formal probabilistic treatment of "confidence," which will be used as a measure of efficacy. This will enable claims for the efficacy of the multilegged approach to be made quantitatively, answering questions such as, "How much extra confidence about a system's safety will I have if I add a verification argument leg to an argument leg based upon statistical testing?" For this initial study, we concentrate on a simplified and idealized example of a safety system in which interest centers upon a claim about the probability of failure on demand. Our approach is to build a "Bayesian belief network" (BBN) model of a two-legged argument and manipulate this analytically via parameters that define its node probability tables. The aim here is to obtain greater insight than what is afforded by the more usual BBN treatment, which involves merely numerical manipulation. We show that the addition of a diverse second argument leg can indeed increase confidence in a dependability claim; in a reasonably plausible example, the doubt in the claim is reduced to one-third of the doubt present in the original single leg. However, we also show that there can be some unexpected and counterintuitive subtleties here; for example, an entirely supportive second leg can sometimes undermine an original argument, resulting, overall, in less confidence than what came from this original argument. Our results are neutral on the issue of whether such difficulties will arise in real life, that is, when real experts judge real systems.

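    The effect of adding a second argument leg can be illustrated with an elementary Bayes calculation, a far simpler model than the paper's BBN: start from a prior confidence in the safety claim and update it with each leg's likelihood ratio, treating the legs as conditionally independent (an assumption the paper shows can fail in subtle ways). All numbers below are invented.

      def update(prior, p_support_if_true, p_support_if_false):
          """Posterior confidence in the claim after one supportive argument leg."""
          num = p_support_if_true * prior
          return num / (num + p_support_if_false * (1 - prior))

      prior = 0.90                                     # invented prior confidence
      one_leg = update(prior, 0.95, 0.30)              # leg 1: statistical testing
      two_legs = update(one_leg, 0.90, 0.20)           # leg 2: verification argument

      for label, p in [("prior", prior), ("one leg", one_leg), ("two legs", two_legs)]:
          print(f"{label:>8}: confidence {p:.3f}, doubt {1 - p:.3f}")

    With these invented numbers, the doubt drops from 0.10 to roughly 0.03 after the first leg and below 0.01 after the second; the paper's BBN analysis quantifies gains of this kind and also exposes the cases where dependence between legs removes, or even reverses, them.
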
  • Join the IEEE Computer Society!

    Publication Year: 2007 , Page(s): 366 - 368
    PDF (362 KB)
    Freely Available from IEEE
  • TSE Information for authors

    Publication Year: 2007 , Page(s): c3
    PDF (79 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2007 , Page(s): c4
    PDF (94 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org