IEEE Transactions on Software Engineering

Issue 8 • Aug 1997

Displaying Results 1 - 6 of 6
  • Semantics guided regression test cost reduction

    Page(s): 498 - 516

    Software maintainers are faced with the task of regression testing: retesting a modified program on an often large number of test cases. The cost of regression testing can be reduced if the size of the program is reduced and if old test cases and results can be reused. Two complementary algorithms for reducing the cost of regression testing are presented. The first produces a program called Differences that captures the semantic change between Certified, a previously tested program, and Modified, a changed version of Certified. It is more efficient to test Differences, because it omits unchanged computations. The program Differences is computed using a combination of program slices. The second algorithm identifies test cases for which Certified and Modified produce the same output, as well as existing test cases that test new components in Modified. The algorithm is based on the notion of common execution patterns. Program components with common execution patterns have the same execution pattern during some call to their procedure. They are computed using a calling context slice. Whereas an interprocedural slice includes the program components necessary to capture all possible executions of a statement, a calling context slice includes only those program components necessary to capture the execution of a statement in a particular calling context. Together, these algorithms make it possible to test Modified by running Differences on a smaller number of test cases, which is more efficient than running Modified on a large number of test cases. A prototype implementation has been built to examine and illustrate these algorithms.
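    The test-reuse idea in this abstract can be made concrete with a minimal sketch. Note that this naive version decides reusability by actually executing both programs on every input, which is exactly the cost the paper's calling-context-slice analysis avoids; all names here (`partition_tests`, the example programs) are illustrative, not from the paper.

    ```python
    # Minimal sketch of the test-reuse idea: test cases on which Certified
    # and Modified produce the same output need not be rerun in full. This
    # naive version compares outputs by executing both programs; the paper
    # instead identifies such tests statically, via calling context slices.

    def partition_tests(certified, modified, test_inputs):
        """Split test inputs into those whose old results can be reused and
        those that must be retested against Modified."""
        reusable, must_retest = [], []
        for t in test_inputs:
            if certified(t) == modified(t):
                reusable.append(t)        # old result still valid
            else:
                must_retest.append(t)     # behavior changed on this input
        return reusable, must_retest

    # Example: Modified changes behavior only for negative inputs.
    certified = lambda x: abs(x)
    modified = lambda x: abs(x) if x >= 0 else abs(x) + 1
    reusable, must_retest = partition_tests(certified, modified, [-2, -1, 0, 1, 2])
    ```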

  • Reply to: “Property-based software engineering measurement”


    L.C. Briand, S. Morasca, and V.R. Basili (ibid., vol. 22, no. 1, pp. 68-85, Jan. 1996) introduced a measurement-theoretic approach to software measurement and criticized, among others, the work of the author, but they misinterpreted his work. The author does not require additive software (complexity) measures, contrary to what Briand, Morasca, and Basili state. The author uses the concept of the extensive structure to show the empirical properties behind software measures. Briand, Morasca, and Basili use the concept of meaningfulness to describe scales and to show that certain scale levels are not excluded by the Weyuker properties. However, they do not consider that scales and scale types are different things.

  • A model for software development effort and cost estimation

    Page(s): 485 - 497

    Several algorithmic models have been proposed to estimate software costs and other management parameters. Early prediction of completion time is essential for proper advance planning and for averting the possible failure of a project. L.H. Putnam's (1978) SLIM (Software LIfecycle Management) model offers a fairly reliable method that is used extensively to predict project completion times and manpower requirements as the project evolves. However, the nature of the Norden/Rayleigh curve used by Putnam renders it unreliable during the initial phases of the project, especially in projects involving a fast manpower buildup, as is the case with most software projects. In this paper, we propose a model that improves early prediction considerably over the Putnam model. The model's improved performance is proved analytically and also demonstrated on simulated data.
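    The Norden/Rayleigh staffing curve that the abstract criticizes has a standard closed form, sketched below. This is the classical curve underlying Putnam's SLIM, not the improved model the paper proposes; the parameter names (`K` for total life-cycle effort, `td` for time of peak staffing) follow common usage and are assumptions here.

    ```python
    import math

    # Standard Norden/Rayleigh staffing curve used in Putnam's SLIM model.
    # K is total life-cycle effort, td the time of peak staffing, and
    # a = 1 / (2 * td^2) the shape parameter.

    def rayleigh_manpower(t, K, td):
        """Instantaneous staffing m(t) = 2*K*a*t*exp(-a*t^2)."""
        a = 1.0 / (2.0 * td ** 2)
        return 2.0 * K * a * t * math.exp(-a * t ** 2)

    def cumulative_effort(t, K, td):
        """Effort expended by time t: E(t) = K * (1 - exp(-a*t^2))."""
        a = 1.0 / (2.0 * td ** 2)
        return K * (1.0 - math.exp(-a * t ** 2))
    ```

    Since m(0) = 0 and m(t) rises only gradually at first, the curve cannot represent the fast initial manpower buildup of most software projects, which is the unreliability in early phases that the abstract points out.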

  • On parallelization of static scheduling algorithms

    Page(s): 517 - 528

    Most static algorithms that schedule parallel programs represented by macro dataflow graphs are sequential. This paper discusses the essential issues pertaining to the parallelization of static scheduling and presents two efficient parallel scheduling algorithms. The proposed algorithms have been implemented on an Intel Paragon machine and their performance has been evaluated. These algorithms produce high-quality schedules and are much faster than existing sequential and parallel algorithms.
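    The abstract does not describe the two algorithms themselves, so the sketch below shows only the kind of sequential baseline such work parallelizes: list scheduling of a macro dataflow (task) graph, where the highest-priority ready task goes to the earliest-available processor, with priority given by the task's bottom level (longest path to an exit node). The data representation and function names are illustrative assumptions.

    ```python
    # Sequential list scheduling of a task graph on num_procs processors.
    # Communication costs are ignored for simplicity.

    def list_schedule(tasks, deps, weight, num_procs):
        """tasks: list of task ids; deps: {task: set of predecessors};
        weight: {task: execution time}. Returns {task: (proc, start, finish)}."""
        # Priority = bottom level: longest path from the task to an exit node.
        blevel = {}
        def bl(t):
            if t not in blevel:
                succs = [s for s in tasks if t in deps.get(s, set())]
                blevel[t] = weight[t] + max((bl(s) for s in succs), default=0)
            return blevel[t]

        proc_free = [0.0] * num_procs       # earliest free time per processor
        finish = {}                         # finish time of scheduled tasks
        schedule = {}
        unscheduled = set(tasks)
        while unscheduled:
            # A task is ready once all of its predecessors have finished.
            ready = [t for t in unscheduled
                     if all(d in finish for d in deps.get(t, set()))]
            t = max(ready, key=bl)                       # highest bottom level first
            p = min(range(num_procs), key=lambda i: proc_free[i])
            start = max([proc_free[p]] + [finish[d] for d in deps.get(t, set())])
            finish[t] = start + weight[t]
            proc_free[p] = finish[t]
            schedule[t] = (p, start, finish[t])
            unscheduled.remove(t)
        return schedule
    ```

    On a diamond-shaped graph (A before B and C, both before D) with unit weights and two processors, this yields the optimal makespan of 3.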

  • ADTEST: a test data generation suite for Ada software systems

    Page(s): 473 - 484

    This paper presents the design of ADTEST (ADa TESTing), a software system for generating test data for programs developed in Ada83. The key feature of this system is that the problem of test data generation is treated entirely as a numerical optimization problem; as a consequence, the method does not suffer from the difficulties commonly found in symbolic execution systems, such as those associated with input-variable-dependent loops, array references, and module calls. Instead, program instrumentation is used to solve a set of path constraints without explicitly knowing their form. The system supports not only the generation of integer and real data types, but also non-numerical discrete types such as characters and enumerated types. The system has been tested on large Ada programs (60,000 lines of code) and found to reduce the effort required to test programs as well as to increase test coverage.
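    Treating test data generation as numerical optimization can be illustrated with a generic branch-distance sketch (in Python, not the Ada-based ADTEST system itself): to cover the true branch of a predicate, minimize a distance function that is zero exactly when the predicate holds, using a simple derivative-free search. Function names and the example predicate are assumptions for illustration.

    ```python
    # To cover the true branch of `if x*x - 5*x + 6 == 0`, minimize a
    # branch distance that is zero exactly when the branch is taken.

    def branch_distance(x):
        """Distance to satisfying the predicate x*x - 5*x + 6 == 0."""
        return abs(x * x - 5 * x + 6)

    def minimize(f, x0, step=1.0, max_iter=1000):
        """Simple hill-climbing search: move in whichever direction reduces
        f, shrinking the step when neither neighbor improves."""
        x = x0
        for _ in range(max_iter):
            if f(x) == 0:
                break
            best = min([x - step, x + step], key=f)
            if f(best) < f(x):
                x = best
            else:
                step /= 2.0
                if step < 1e-9:
                    break
        return x

    x = minimize(branch_distance, x0=10.0)   # converges to a root (x = 2 or 3)
    ```

    The search needs only the ability to evaluate the distance at candidate inputs, which is the sense in which instrumentation can solve path constraints "without explicitly knowing their form."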

  • On the statistical analysis of the number of errors remaining in a software design document after inspection

    Page(s): 529 - 532

    Sometimes, complex software systems fail because of faults introduced in the requirements and design stages of the development process. Having several reviewers examine requirements and design documents can remove some of these faults, but often a few remain undetected until the software is developed. In this paper, we propose a procedure for estimating the number of faults that remain undiscovered. The main advantage of our procedure is that we do not need the standard assumption of independence among reviewers.
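    To make the estimation problem concrete, here is the classical two-reviewer capture-recapture (Lincoln-Petersen) estimate. Note that this baseline does assume reviewer independence, which is precisely the assumption the paper's procedure avoids, so it is shown only as the standard starting point; the function names are illustrative.

    ```python
    # Classical capture-recapture estimate for two independent reviewers:
    # if reviewer 1 finds n1 faults, reviewer 2 finds n2, and `overlap` are
    # found by both, the estimated total is N = n1 * n2 / overlap.

    def estimate_total_faults(n1, n2, overlap):
        """Lincoln-Petersen estimate of the total number of faults."""
        if overlap == 0:
            raise ValueError("no overlap: estimator undefined")
        return n1 * n2 / overlap

    def estimate_remaining(n1, n2, overlap):
        """Estimated faults still undiscovered after both reviews."""
        found = n1 + n2 - overlap          # distinct faults actually found
        return estimate_total_faults(n1, n2, overlap) - found

    # e.g. reviewer 1 finds 20 faults, reviewer 2 finds 15, 10 in common:
    # estimated total = 30, distinct found = 25, so about 5 remain.
    ```

    When reviewers are not independent (e.g. they both tend to miss the same subtle faults), this estimator is biased, which motivates procedures like the paper's that drop the independence assumption.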


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
d) tools and environments, e.g., specific tools, integrated tool environments, including the associated architectures, databases, and parallel and distributed processing issues;
e) system issues, e.g., hardware-software trade-offs; and
f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org