IEEE Transactions on Software Engineering

Issue 4 • July-Aug. 2008

  • [Front cover]

    Publication Year: 2008, Page(s): c1
    PDF (92 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2008, Page(s): c2
    PDF (76 KB)
    Freely Available from IEEE
  • Introduction to the Special Section on the ACM SIGSOFT Foundations of Software Engineering Conference

    Publication Year: 2008, Page(s): 433
    PDF (36 KB)
    Freely Available from IEEE
  • Asking and Answering Questions during a Programming Change Task

    Publication Year: 2008, Page(s): 434 - 451
    Cited by: Papers (29)
    PDF (4163 KB) | HTML

    Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: 1) What does a programmer need to know about a code base when evolving a software system? 2) How does a programmer go about finding that information? 3) How well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.

  • Evaluating Test Suites and Adequacy Criteria Using Simulation-Based Models of Distributed Systems

    Publication Year: 2008, Page(s): 452 - 470
    Cited by: Papers (4)
    PDF (2917 KB) | HTML

    Test adequacy criteria provide the engineer with guidance on how to populate test suites. While adequacy criteria have long been a focus of research, existing testing methods do not address many of the fundamental characteristics of distributed systems, such as distribution topology, communication failure, and timing. Furthermore, they do not provide the engineer with a means to evaluate the relative effectiveness of different criteria, nor the relative effectiveness of adequate test suites satisfying a given criterion. This paper makes three contributions to the development and use of test adequacy criteria for distributed systems: (1) a testing method based on discrete-event simulations; (2) a fault-based analysis technique for evaluating test suites and adequacy criteria; and (3) a series of case studies that validate the method and technique. The testing method uses a discrete-event simulation as an operational specification of a system, in which the behavioral effects of distribution are explicitly represented. Adequacy criteria and test cases are then defined in terms of this simulation-based specification. The fault-based analysis involves mutation of the simulation-based specification to provide a foil against which test suites and the criteria that formed them can be evaluated. Three distributed systems were used to validate the method and technique, including DNS, the domain name system.

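    The fault-based analysis described in this abstract can be illustrated compactly. The sketch below shows the general mutation-score idea under heavy simplification: a trivial function stands in for the paper's simulation-based specifications, and the mutants and tests are invented for the example.

```python
# Toy illustration of fault-based (mutation) analysis of test suites: mutate a
# specification, then score each suite by the fraction of mutants it kills.
# In the paper, the specification is a discrete-event simulation of a
# distributed system; here a trivial function stands in for brevity.

def spec(x, y):                  # reference behavior
    return max(x, y)

# Hand-written mutants standing in for systematic specification mutation.
mutants = [
    lambda x, y: min(x, y),      # wrong operator
    lambda x, y: x,              # dropped operand
    lambda x, y: max(x, y) + 1,  # off-by-one
]

def mutation_score(test_suite):
    """Fraction of mutants that some test distinguishes from the spec."""
    killed = sum(
        1 for m in mutants
        if any(m(*t) != spec(*t) for t in test_suite)
    )
    return killed / len(mutants)

print(mutation_score([(1, 1)]))                  # 0.33: only off-by-one dies
print(mutation_score([(1, 1), (2, 5), (7, 3)]))  # 1.0: all mutants killed
```
    A suite's mutation score then serves as the yardstick: an adequacy criterion whose adequate suites consistently earn higher scores is, by this measure, more effective.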
  • Analogy-X: Providing Statistical Inference to Analogy-Based Software Cost Estimation

    Publication Year: 2008, Page(s): 471 - 484
    Cited by: Papers (22)
    PDF (2932 KB) | HTML

    Data-intensive analogy has been proposed as a means of software cost estimation, an alternative to other data-intensive methods such as linear regression. Unfortunately, the method has drawbacks: there is no mechanism to assess its appropriateness for a specific dataset, and heuristic algorithms are necessary to select the best set of variables and identify abnormal project cases. We introduce a solution to these problems, called Analogy-X, based upon the Mantel correlation randomization test. We use the strength of correlation between the distance matrix of project features and the distance matrix of known effort values of the dataset. The method is demonstrated using the Desharnais dataset and two random datasets, showing (1) the use of Mantel's correlation to identify whether analogy is appropriate, (2) a stepwise procedure for feature selection, and (3) the use of a leverage statistic for sensitivity analysis that detects abnormal data points. Analogy-X thus provides a sound statistical basis for analogy, removes the need for heuristic search, and greatly improves its algorithmic performance.

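    As a rough sketch of the statistical machinery named here, the Python fragment below runs a Mantel-style randomization test between two distance matrices (pairwise feature distances versus pairwise effort distances). The function name mantel_test and the toy data are assumptions for illustration; this is not the Analogy-X implementation.

```python
# Mantel-style randomization test between two distance matrices: permute the
# rows/columns of one matrix and compare the permuted correlations with the
# observed one. Illustrative only; not the paper's Analogy-X code.
import numpy as np

def mantel_test(d_x, d_y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    n = d_x.shape[0]
    iu = np.triu_indices(n, k=1)   # distances are symmetric; use upper triangle
    x = d_x[iu]

    def corr(d):
        return np.corrcoef(x, d[iu])[0, 1]

    observed = corr(d_y)
    hits = sum(
        corr(d_y[np.ix_(p, p)]) >= observed      # permute rows and cols together
        for p in (rng.permutation(n) for _ in range(n_perm))
    )
    return observed, (hits + 1) / (n_perm + 1)   # one-sided p-value

# Toy data: 5 projects with 3 features and known effort values.
rng = np.random.default_rng(1)
features = rng.random((5, 3))
effort = np.array([10.0, 12.0, 30.0, 28.0, 11.0])
d_feat = np.abs(features[:, None, :] - features[None, :, :]).sum(axis=2)
d_eff = np.abs(effort[:, None] - effort[None, :])
r, p = mantel_test(d_feat, d_eff)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```
    In Analogy-X's terms, a significant positive correlation indicates that projects near each other in feature space really do have similar effort, i.e., that analogy-based estimation is appropriate for the dataset.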
  • Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings

    Publication Year: 2008, Page(s): 485 - 496
    Cited by: Papers (102)
    PDF (3758 KB) | HTML

    Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.

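    A minimal sketch of the kind of comparison such a framework standardizes, assuming scikit-learn and a synthetic stand-in for a defect dataset (the paper's actual study spans 22 classifiers, 10 NASA datasets, and formal statistical tests):

```python
# Cross-validated comparison of two classifiers on a synthetic, imbalanced
# "defect" dataset. AUC is used instead of plain accuracy, echoing the
# abstract's point that accuracy is a poor indicator for defect prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for a module-level metrics dataset: ~80% non-faulty modules.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8],
                           random_state=0)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in classifiers.items():
    aucs = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC = {aucs.mean():.3f} (sd = {aucs.std():.3f})")
```
    On top of such per-dataset scores, the proposed framework would apply statistical tests across classifiers and datasets, rather than raw means, before declaring one classifier superior.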
  • Do Crosscutting Concerns Cause Defects?

    Publication Year: 2008, Page(s): 497 - 515
    Cited by: Papers (71)
    PDF (3141 KB) | HTML

    There is a growing consensus that crosscutting concerns harm code quality. An example of a crosscutting concern is a functional requirement whose implementation is distributed across multiple software modules. We asked the question, "How much does the degree to which a concern is crosscutting affect the number of defects in a program?" We conducted three extensive case studies to help answer this question. All three studies revealed a moderate to strong statistically significant correlation between the degree of scattering and the number of defects. This paper describes the experimental framework we developed to conduct the studies, the metrics we adopted and developed to measure the degree of scattering, the studies we performed, the efforts we undertook to remove experimental and other biases, and the results we obtained. In the process, we have formulated a theory that explains why increased scattering might lead to increased defects.

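    To make the study design concrete, here is a toy sketch correlating a simple scattering measure with defect counts. The concern-to-module mapping and defect numbers are invented, and the paper's degree-of-scattering metrics are more sophisticated than a bare module count.

```python
# Correlate how scattered each concern's implementation is (here: a naive
# count of modules touched) with the number of defects traced to it.
# Invented data; the paper uses richer scattering metrics and real projects.
from scipy.stats import spearmanr

# concern -> (modules its implementation touches, defects traced to it)
concerns = {
    "logging":     (["A", "B", "C", "D"], 9),  # highly scattered
    "undo":        (["B", "C", "D"], 6),
    "persistence": (["A", "B"], 4),
    "parsing":     (["E"], 1),                 # well modularized
}

scattering = [len(modules) for modules, _ in concerns.values()]
defects = [d for _, d in concerns.values()]

rho, p = spearmanr(scattering, defects)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```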
  • An Empirical Study on Views of Importance of Change Impact Analysis Issues

    Publication Year: 2008, Page(s): 516 - 530
    Cited by: Papers (13)
    PDF (2619 KB) | HTML

    Change impact analysis is a change management activity that has previously been studied mainly from a technical perspective; for example, much work focuses on methods for determining the impact of a change. In this paper, we present results from a study on the role of impact analysis in the change management process. In the study, impact analysis issues were prioritised with respect to criticality by software professionals from an organisational perspective and a self-perspective. The software professionals belonged to three organisational levels: operative, tactical, and strategic. Qualitative and statistical analyses with respect to differences between perspectives as well as levels are presented. The results show that important issues for a particular level are tightly related to how the level is defined. Similarly, issues important from an organisational perspective are more holistic than those important from a self-perspective. However, our data indicate that the self-perspective colours the organisational perspective, meaning that personal opinions and attitudes cannot easily be disregarded. In comparing the perspectives and the levels, we visualise the differences in a way that allows us to discuss two classes of issues: high-priority and medium-priority. The most important issues from this point of view concern fundamental aspects of impact analysis and its execution.

  • Enhancing an Application Server to Support Available Components

    Publication Year: 2008, Page(s): 531 - 545
    Cited by: Papers (3)
    PDF (2051 KB) | HTML

    Three-tier middleware architecture is commonly used for hosting enterprise distributed applications. Typically, the application is decomposed into three layers: front end, middle tier, and back end. The front end ("Web server") is responsible for handling user interactions and acts as a client of the middle tier, while the back end provides storage facilities for applications. The middle tier ("application server") is usually the place where all computations are performed. One of the benefits of this architecture is that it allows flexible management of a cluster of computers for performance and scalability; further, availability measures, such as replication, can be introduced in each tier in an application-specific manner. However, incorporating availability measures in a multitier system poses challenging design problems: integrating open, nonproprietary solutions for transparent failover, exactly-once execution of client requests, nonblocking transaction processing, and the ability to work with clusters. This paper describes how replication for availability can be incorporated within the middle and back-end tiers, meeting all of these challenges. The approach requires enhancements to the middle tier only, yet supports replication of both the middle and back-end tiers. The design, implementation, and performance evaluation of such a middle-tier-based replication scheme for multidatabase transactions on a widely deployed open-source application server (JBoss) are presented.

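    Two of the challenges listed here, transparent failover and exactly-once execution, can be sketched in a few lines of Python. All class and method names below are hypothetical and greatly simplified relative to the paper's JBoss-based design.

```python
# Sketch: a client stub retries a request on backup replicas (transparent
# failover), while replicas de-duplicate by request id so a retried request
# is answered from a cache instead of being executed twice (exactly once).
import uuid

class Replica:
    def __init__(self, handlers):
        self.handlers = handlers   # method name -> callable
        self.completed = {}        # request id -> cached result

    def execute(self, request_id, method, *args):
        if request_id in self.completed:        # duplicate after a failover
            return self.completed[request_id]
        result = self.handlers[method](*args)   # perform the real work once
        self.completed[request_id] = result
        return result

class ReplicatedStub:
    def __init__(self, replicas):
        self.replicas = replicas   # ordered list: primary first, then backups

    def invoke(self, method, *args):
        request_id = str(uuid.uuid4())          # stable across retries
        for replica in self.replicas:
            try:
                return replica.execute(request_id, method, *args)
            except ConnectionError:
                continue                        # fail over to the next replica
        raise RuntimeError("all replicas unavailable")

stub = ReplicatedStub([Replica({"add": lambda a, b: a + b})])
print(stub.invoke("add", 2, 3))   # 5
```
    A real system must also keep the completed-request cache consistent across replicas and tie it to transaction boundaries, which is where much of the paper's design effort goes.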
  • Model-Based Adaptation of Behavioral Mismatching Components

    Publication Year: 2008, Page(s): 546 - 563
    Cited by: Papers (42)
    PDF (2731 KB) | HTML

    Component-Based Software Engineering focuses on the reuse of existing software components. In practice, most components cannot be integrated directly into an application-to-be because they are incompatible. Software Adaptation aims at generating, as automatically as possible, adaptors to compensate for mismatch between component interfaces, and is therefore a promising solution for the development of a real market of components promoting software reuse. In this article, we present our approach to software adaptation, which relies on an abstract notation based on synchronous vectors and transition systems for governing adaptation rules. Our proposal is supported by dedicated algorithms that automatically generate adaptor protocols. These algorithms have been implemented in a tool, called Adaptor, that can be used through a user-friendly graphical interface.

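    A toy rendition of the synchronous-vector idea follows: explore the joint behavior of two mismatching protocols under vectors that pair their labels. The transition systems and vectors are invented for illustration, and real adaptor generation is considerably more involved.

```python
# Two components as labeled transition systems: (state, label) -> next state.
# "!" marks an emission and "?" a reception, following common usage.
client = {("c0", "query!"): "c1", ("c1", "reply?"): "c0"}
server = {("s0", "request?"): "s1", ("s1", "answer!"): "s0"}

# Synchronous vectors: which client label the adaptor must bridge to which
# server label (the names mismatch, which is the problem adaptors solve).
vectors = [("query!", "request?"), ("reply?", "answer!")]

def reachable(start=("c0", "s0")):
    """States of the vector-synchronized product reachable from start."""
    seen, stack = {start}, [start]
    while stack:
        c, s = stack.pop()
        for lc, ls in vectors:
            if (c, lc) in client and (s, ls) in server:
                nxt = (client[(c, lc)], server[(s, ls)])
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

print(reachable())   # {('c0', 's0'), ('c1', 's1')}: no deadlock in this toy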
  • Towards Self-Stabilizing Operating Systems

    Publication Year: 2008, Page(s): 564 - 576
    PDF (1559 KB) | HTML

    This work presents several approaches for designing self-stabilizing operating systems. The first approach is based on periodically and automatically reinstalling the operating system and restarting it. The second reinstalls the executable portion of the operating system and uses predicates on the operating system state (the contents of variables) to ensure that the operating system does not diverge from its specifications. The last approach presents an example of a tailored, self-stabilizing, very tiny operating system. Prototypes were developed for the Intel Pentium processor.

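    The second approach, predicate checking with reinstall-and-restart, can be caricatured as a watchdog loop. Everything below (the predicate, the state layout, and the recovery hooks) is a hypothetical sketch, not the paper's kernel-level mechanism.

```python
# Watchdog sketch: periodically evaluate predicates over OS state and, if any
# fails, reinstall the executable portion and restart so the system converges
# back to its specification despite transient memory corruption.
import time

def consistent_frames(state):
    # Invariant: no memory frame is simultaneously free and allocated.
    return not (set(state["free_frames"]) & set(state["allocated_frames"]))

PREDICATES = [consistent_frames]

def stabilization_loop(read_state, reinstall_and_restart, period_s=60):
    while True:
        if not all(pred(read_state()) for pred in PREDICATES):
            reinstall_and_restart()   # reload a clean code image, reset state
        time.sleep(period_s)
```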
  • TSE Information for authors

    Publication Year: 2008, Page(s): c3
    PDF (76 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2008, Page(s): c4
    PDF (92 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
e) system issues, e.g., hardware-software trade-offs; and
f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org