IEEE Transactions on Software Engineering

Issue 2 • February 2006

Contents (9 items)
  • [Front cover]

    Page(s): c1
    PDF (133 KB) | Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (87 KB) | Freely Available from IEEE
  • Software defect association mining and defect correction effort prediction

    Page(s): 69 - 82
    PDF (4248 KB)

    Much current software defect prediction work focuses on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect correction effort, helping developers detect software defects and assisting project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data, which covers more than 200 projects over more than 15 years. The results show that, for defect association prediction, accuracy is very high and the false-negative rate is very low. Likewise, for defect correction effort prediction, the accuracy of both defect isolation effort prediction and defect correction effort prediction is also high. We compared the defect correction effort prediction method with other methods (PART, C4.5, and Naive Bayes) and found that accuracy improved by at least 23 percent. We also evaluated the impact of support and confidence levels on prediction accuracy, the false-negative rate, the false-positive rate, and the number of rules, and found that higher support and confidence levels do not necessarily yield higher prediction accuracy, while a sufficient number of rules is a precondition for high prediction accuracy.
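
    As an illustration only (a minimal sketch, not the paper's method or data), the snippet below mines pairwise association rules from a hypothetical set of defect records; the min_support and min_confidence thresholds correspond to the support and confidence levels whose impact the paper evaluates.

```python
from collections import Counter
from itertools import combinations

def mine_pair_rules(transactions, min_support=0.25, min_confidence=0.6):
    """Mine one-item -> one-item association rules from defect records.

    Each transaction is the set of defect types observed together
    (e.g., in one component). Returns (antecedent, consequent,
    support, confidence) tuples.
    """
    n = len(transactions)
    item_counts = Counter(item for t in transactions for item in t)
    pair_counts = Counter(
        pair for t in transactions for pair in combinations(sorted(t), 2)
    )
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue
        # Try the rule in both directions: a -> b and b -> a.
        for ante, cons in ((a, b), (b, a)):
            confidence = count / item_counts[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Hypothetical data: defect types reported together per component.
defects = [
    {"interface", "logic"},
    {"interface", "logic", "data"},
    {"interface", "data"},
    {"logic", "computation"},
]
for ante, cons, sup, conf in mine_pair_rules(defects):
    print(f"{ante} -> {cons}  support={sup:.2f}  confidence={conf:.2f}")
```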

  • Optimal project feature weights in analogy-based cost estimation: improvement and limitations

    Page(s): 83 - 92
    PDF (1800 KB)

    Cost estimation is vital to key software project decisions such as resource allocation and bidding. Analogy-based cost estimation is particularly transparent because it relies on historical information from similar past projects, with similarity determined by comparing the projects' key attributes and features. However, one crucial aspect of the analogy-based method is not yet fully accounted for: the differing impact, or weight, of a project's various features. Current approaches either try to find the dominant features or require experts to weight the features, and neither yields optimal estimation performance. We therefore propose allocating a separate weight to each project feature and finding the optimal weights by extensive search. We test this approach on several real-world data sets and measure the improvements with commonly used quality metrics, finding that it 1) increases estimation accuracy and reliability, 2) reduces the model's volatility and is thus likely to increase its acceptance in practice, and 3) indicates upper limits for analogy-based estimation quality as measured by standard metrics.
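
    A minimal, hypothetical sketch of the idea (not the paper's search procedure or data): each candidate weight vector from a small grid is scored by leave-one-out mean magnitude of relative error (MMRE, one commonly used quality metric), with effort predicted as the mean of the k most similar past projects under a weighted Euclidean distance.

```python
import itertools
import math

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance between two feature vectors."""
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def estimate(target, history, weights, k=2):
    """Predict effort as the mean effort of the k most similar projects."""
    nearest = sorted(
        history, key=lambda p: weighted_distance(target, p[0], weights)
    )[:k]
    return sum(effort for _, effort in nearest) / k

def mmre(history, weights, k=2):
    """Mean magnitude of relative error under leave-one-out evaluation."""
    errors = []
    for i, (feats, actual) in enumerate(history):
        rest = history[:i] + history[i + 1:]
        pred = estimate(feats, rest, weights, k)
        errors.append(abs(pred - actual) / actual)
    return sum(errors) / len(errors)

def search_weights(history, steps=(0.0, 0.5, 1.0), k=2):
    """Exhaustively search a weight grid for the lowest MMRE."""
    n_features = len(history[0][0])
    best = None
    for weights in itertools.product(steps, repeat=n_features):
        if not any(weights):  # skip the all-zero weight vector
            continue
        score = mmre(history, weights, k)
        if best is None or score < best[0]:
            best = (score, weights)
    return best

# Hypothetical history: (normalized features, actual effort in person-months).
projects = [
    ((0.2, 0.8, 0.1), 12.0),
    ((0.3, 0.7, 0.2), 14.0),
    ((0.9, 0.1, 0.8), 40.0),
    ((0.8, 0.2, 0.9), 38.0),
    ((0.5, 0.5, 0.5), 25.0),
]
score, weights = search_weights(projects)
print(f"best weights={weights}  MMRE={score:.3f}")
```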

  • Runtime analysis of atomicity for multithreaded programs

    Page(s): 93 - 110
    PDF (3000 KB)

    Atomicity is a correctness condition for concurrent systems: informally, every concurrent execution of a set of transactions should be equivalent to some serial execution of the same transactions. In multithreaded programs, executions of procedures (or methods) can be regarded as transactions, and correctness in the presence of concurrency typically requires that these transactions be atomic. Tools that automatically detect atomicity violations can uncover subtle errors that are hard to find with traditional debugging and testing techniques. This paper describes two algorithms for runtime detection of atomicity violations and compares their cost and effectiveness. The reduction-based algorithm checks atomicity based on commutativity properties of events in a trace; the block-based algorithm efficiently represents the relevant information about a trace as a set of blocks (i.e., pairs of events plus associated synchronizations) and checks atomicity by comparing each block with the others. To improve the efficiency and accuracy of both algorithms, we incorporate a multilockset algorithm for checking data races, dynamic escape analysis, and happens-before analysis. Experiments show that both algorithms are effective in finding atomicity violations; the block-based algorithm is more accurate but more expensive than the reduction-based algorithm.
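
    As background for one ingredient, here is a deliberately simplified, Eraser-style lockset race check over an event trace. It sketches the general lockset idea, not the paper's multilockset algorithm, and the trace format is invented for illustration.

```python
def lockset_races(trace):
    """Basic lockset race check over an event trace.

    trace: list of (thread, op, target) tuples, where op is
    'acquire'/'release' (target is a lock) or 'read'/'write'
    (target is a shared variable). A variable is flagged when its
    candidate lockset (locks held on every access) becomes empty
    and more than one thread has accessed it.
    """
    held = {}        # thread -> set of locks currently held
    candidates = {}  # variable -> candidate lockset
    accessors = {}   # variable -> set of threads that accessed it
    races = set()
    for thread, op, target in trace:
        locks = held.setdefault(thread, set())
        if op == "acquire":
            locks.add(target)
        elif op == "release":
            locks.discard(target)
        else:  # read or write of a shared variable
            if target not in candidates:
                candidates[target] = set(locks)
            else:
                candidates[target] &= locks
            accessors.setdefault(target, set()).add(thread)
            if not candidates[target] and len(accessors[target]) > 1:
                races.add(target)
    return races

# Hypothetical trace: t1 updates x under lock L, t2 updates x without it.
trace = [
    ("t1", "acquire", "L"),
    ("t1", "write", "x"),
    ("t1", "release", "L"),
    ("t2", "write", "x"),
]
print(lockset_races(trace))  # {'x'}
```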

  • Systematic transformation of functional analysis model into OO design and implementation

    Page(s): 111 - 135
    PDF (8664 KB)

    Functional refinement is beneficial to object-oriented (OO) software development, especially for problems with more complex functions, yet its use in OO development has received little attention. This paper proposes an enhanced data flow diagram (DFD), called the data flow net (DF net), for specifying use-cases through functional decomposition, along with a novel approach that complements existing OO software development methods with functional decomposition for realizing use-cases, especially those with more complex functions. In the requirements analysis stage, the proposed approach realizes use-cases through functional refinement and specifies them as DF nets. In the design and implementation stages, it transforms the DF nets systematically and precisely into an OO design and implementation. The approach is amenable to automation, and a prototype has been developed to support the transformation process. In developing an OO system, some use-cases can be seamlessly realized using the proposed approach while the remaining use-cases in the same target system are realized using any existing OO software development method.
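
    Purely as a hypothetical illustration of what an automated DF-net-to-OO transformation might emit (the paper's actual transformation rules are not reproduced here), the sketch below turns a toy process node into a class skeleton with one handler method per incoming data flow.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One process node in a simplified data flow net."""
    name: str
    inputs: list = field(default_factory=list)   # incoming data flows
    outputs: list = field(default_factory=list)  # outgoing data flows

def to_class_skeleton(process):
    """Emit a class skeleton for one DF-net process: one handler
    method per incoming flow, one attribute per outgoing flow."""
    lines = [f"class {process.name.title().replace(' ', '')}:"]
    for flow in process.inputs:
        lines.append(f"    def handle_{flow}(self, {flow}):")
        lines.append(f"        ...  # transform '{flow}' and update outputs")
    for flow in process.outputs:
        lines.append(f"    {flow} = None  # outgoing data flow")
    return "\n".join(lines)

# Hypothetical DF-net process: validate an order, emit a valid order.
validate = Process("validate order", inputs=["order"], outputs=["valid_order"])
print(to_class_skeleton(validate))
```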

  • [Advertisement]

    Page(s): 136
    PDF (615 KB) | Freely Available from IEEE
  • TSE Information for authors

    Page(s): c3
    PDF (87 KB) | Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (133 KB) | Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  e) system issues, e.g., hardware-software trade-offs; and
  f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org