IEEE Transactions on Software Engineering

Issue 8 • Aug. 1998

  • Introduction to the Special Section

    Publication Year: 1998, Page(s): 585
    PDF (107 KB) | Freely available from IEEE
  • Cost-effective analysis of in-place software processes

    Publication Year: 1998, Page(s): 650–663
    Cited by: Papers (16) | Patents (2)
    PDF (724 KB)

    Process studies and improvement efforts typically call for new instrumentation on the process in order to collect the data they have deemed necessary. This can be intrusive and expensive, and resistance to the extra workload often foils the study before it begins. The result is neither interesting new knowledge nor an improved process. In many organizations, however, extensive historical process and product data already exist. Can these existing data be used to empirically explore what process factors might be affecting the outcome of the process? If they can, organizations would have a cost-effective method for quantitatively, if not causally, understanding their process and its relationship to the product. We present a case study that analyzes an in-place industrial process and takes advantage of existing data sources. In doing this, we also illustrate and propose a methodology for such exploratory empirical studies. The case study makes use of several readily available repositories of process data in the industrial organization. Our results show that readily available data can be used to correlate both simple aggregate metrics and complex process metrics with defects in the product. Through the case study, we give evidence supporting the claim that exploratory empirical studies can provide significant results and benefits while being cost-effective in their demands on the organization.

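    To make the flavor of such an analysis concrete, the sketch below correlates a simple aggregate metric with per-module defect counts. It is a minimal illustration, not the paper's actual method: the metric choice and all numbers are fabricated.

```python
# Minimal sketch: correlate a readily available process metric with
# defect counts per module. All data here is fabricated for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module records mined from existing repositories:
# change-request counts (a simple aggregate metric) and defects found.
change_requests = [3, 11, 7, 2, 19, 5, 14, 8]
defects         = [1,  6, 4, 0, 11, 2,  9, 5]

print(f"correlation(change requests, defects) = "
      f"{pearson(change_requests, defects):.2f}")
```
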
  • Evaluating testing methods by delivered reliability

    Publication Year: 1998, Page(s): 586–601
    Cited by: Papers (45)
    PDF (208 KB)

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.

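    The toy simulation below illustrates the kind of comparison the paper formalizes analytically; the subdomain model, usage profile, defect locations, and test budget are all invented for illustration.

```python
# Toy model (not the authors' formulation): a program is a set of input
# subdomains, some containing a defect. Debug testing samples subdomains
# uniformly; operational testing samples them by the usage profile. A
# test in a defective subdomain finds the defect, which is then removed;
# delivered unreliability is the usage mass of defects still remaining.
import random

random.seed(1)

profile   = [0.70, 0.15, 0.10, 0.04, 0.01]  # operational profile
defective = {1, 4}                          # subdomains containing defects
BUDGET    = 20                              # number of tests

def delivered_unreliability(sampler):
    remaining = set(defective)
    for _ in range(BUDGET):
        remaining.discard(sampler())
    return sum(profile[d] for d in remaining)

uniform     = lambda: random.randrange(len(profile))
operational = lambda: random.choices(range(len(profile)), profile)[0]

TRIALS = 2000
for name, s in [("debug (uniform)", uniform), ("operational", operational)]:
    avg = sum(delivered_unreliability(s) for _ in range(TRIALS)) / TRIALS
    print(f"{name:16s} mean delivered unreliability = {avg:.4f}")
```
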
  • Analyzing partially-implemented real-time systems

    Publication Year: 1998, Page(s): 602–614
    Cited by: Papers (3)
    PDF (452 KB)

    Most analysis methods for real-time systems assume that all the components of the system are at roughly the same stage of development and can be expressed in a single notation, such as a specification or programming language. There are, however, many situations in which developers would benefit from tools that could analyze partially-implemented systems: those for which some components are given only as high-level specifications while others are fully implemented in a programming language. In this paper, we propose a method for analyzing such partially-implemented real-time systems. We consider real-time concurrent systems for which some components are implemented in Ada and some are partially specified using regular expressions and graphical interval logic (GIL), a real-time temporal logic. We show how to construct models of the partially-implemented systems that account for such properties as run-time overhead and scheduling of processes, yet support tractable analysis of nontrivial programs. The approach can be fully automated, and we illustrate it by analyzing a small example.

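    As a loose illustration of reasoning about a system in which some components exist only as specifications, the sketch below composes specified latency intervals with measured ones to bound an end-to-end deadline. The components, numbers, and pipeline structure are hypothetical; the paper's actual method builds state-space models from Ada code and GIL formulas.

```python
# Hypothetical sketch: unimplemented components contribute their
# specified (min, max) latency in ms; implemented ones contribute
# measured bounds. Check a deadline over the composed pipeline.
specified = {
    "sensor_stub":  (1.0, 4.0),    # exists only as a timing spec
    "planner_stub": (2.0, 10.0),
}
implemented = {
    "filter":   (0.5, 0.8),        # bounds measured from real code
    "actuator": (1.2, 1.5),
}
latencies = {**specified, **implemented}

pipeline = ["sensor_stub", "filter", "planner_stub", "actuator"]
DEADLINE = 18.0  # ms

lo = sum(latencies[stage][0] for stage in pipeline)
hi = sum(latencies[stage][1] for stage in pipeline)
verdict = "meets" if hi <= DEADLINE else "may miss"
print(f"end-to-end latency in [{lo}, {hi}] ms -> "
      f"{verdict} {DEADLINE} ms deadline")
```
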
  • Factors that impact implementing a system development methodology

    Publication Year: 1998, Page(s): 640–649
    Cited by: Papers (22)
    PDF (108 KB)

    This paper presents the findings of empirical research from 61 companies, mostly from the USA, conducted to identify the factors that may impact implementation of a system development methodology (SDM). The study uses a survey instrument to identify the SDM implementation factors. The survey focused on the perspective of the primary constituents: functional managers, information systems managers, system personnel, and external consultants. The study uses an exploratory factor analysis that identifies five factors important to implementing an SDM: organizational SDM transition, functional management involvement/support, SDM transition, the use of models, and external support. The research findings have important implications for further research and for the practice of system development. For researchers, they point to important measures in the implementation and use of SDMs that may be further verified and extended in subsequent research. For practitioners, they provide a general guide to the important aspects to consider in the implementation and use of an SDM.

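    For readers unfamiliar with the technique, the sketch below runs an exploratory factor analysis of the kind the study applies, using scikit-learn on synthetic stand-in data (the real study analyzed survey responses from 61 companies).

```python
# Exploratory factor analysis on synthetic survey-like data.
# Assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for Likert-scale items: 61 respondents, 15 items,
# driven by a few latent factors plus noise.
latent = rng.normal(size=(61, 5))
loadings = rng.normal(size=(5, 15))
X = latent @ loadings + rng.normal(scale=0.5, size=(61, 15))

fa = FactorAnalysis(n_components=5, random_state=0).fit(X)

# Items loading most strongly on each extracted factor.
for k, row in enumerate(fa.components_):
    top = np.argsort(-np.abs(row))[:3]
    print(f"factor {k}: top items {top.tolist()}")
```
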
  • Efficient distributed detection of conjunctions of local predicates

    Publication Year: 1998, Page(s): 664–677
    Cited by: Papers (16)
    PDF (480 KB)

    Global predicate detection is a fundamental problem in distributed systems and finds applications in many domains such as testing and debugging distributed programs. This paper presents an efficient distributed algorithm to detect conjunctive-form global predicates in distributed systems. The algorithm detects the first consistent global state that satisfies the predicate even if the predicate is unstable. Unlike previously proposed run-time predicate detection algorithms, our algorithm does not require the exchange of control messages during the normal computation. All the necessary information to detect predicates is piggybacked on computation messages of application programs. The algorithm is distributed because the predicate detection efforts as well as the necessary information are equally distributed among the processes. We prove the correctness of the algorithm and compare its performance, in terms of message, storage, and computational complexities, with that of previously proposed run-time predicate detection algorithms.

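    The sketch below shows the core consistency test that conjunctive predicate detection rests on, not the paper's distributed algorithm itself (which piggybacks the needed timestamps on application messages): a conjunction of local predicates holds in some consistent global state iff one local state per process can be chosen so that all are pairwise concurrent in the vector-clock order. The timestamps are hypothetical.

```python
# Detect whether a conjunction of local predicates can hold in a
# consistent global state, given vector timestamps of the local states
# in which each process's predicate held. Timestamps are hypothetical.
from itertools import product

def leq(u, v):
    """u happened-before-or-equals v in vector-clock order."""
    return all(a <= b for a, b in zip(u, v))

def concurrent(u, v):
    return not leq(u, v) and not leq(v, u)

candidates = [
    [(2, 0, 0), (5, 1, 0)],  # process 0's predicate-holding states
    [(0, 3, 1), (2, 6, 1)],  # process 1
    [(0, 0, 2), (1, 3, 4)],  # process 2
]

for combo in product(*candidates):
    if all(concurrent(u, v)
           for i, u in enumerate(combo) for v in combo[i + 1:]):
        print("conjunction holds in consistent state:", combo)
        break
else:
    print("no consistent global state satisfies the conjunction")
```
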
  • Communication metrics for software development

    Publication Year: 1998, Page(s): 615–628
    Cited by: Papers (19)
    PDF (236 KB)

    This paper presents empirical evidence that metrics on communication artifacts generated by groupware tools can be used to gain significant insight into the development process that produced them. We describe a test-bed for developing and testing communication metrics: a senior-level software engineering project course at Carnegie Mellon University, in which we conducted several studies and experiments from 1991-1996 with more than 400 participants. Such a test-bed is an ideal environment for empirical software engineering, providing sufficient realism while allowing for controlled observation of important project parameters. We describe three proof-of-concept experiments to illustrate the value of communication metrics in software development projects. Finally, we propose a statistical framework based on structural equations for validating these communication metrics.

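    As a minimal example of a communication metric, the sketch below counts messages per participant and per thread from a hypothetical groupware archive; the studies described in the paper derive richer metrics from real tool archives.

```python
# Simple communication metrics over (author, thread) pairs extracted
# from a message archive. The log below is fabricated for illustration.
from collections import Counter

messages = [
    ("alice", "requirements"), ("bob", "requirements"),
    ("alice", "design"), ("carol", "design"), ("carol", "design"),
    ("bob", "testing"), ("alice", "requirements"),
]

by_author = Counter(author for author, _ in messages)
by_thread = Counter(thread for _, thread in messages)

print("messages per participant:", dict(by_author))
print("messages per thread:     ", dict(by_thread))

# A crude balance indicator: share of traffic from the top sender.
top_share = by_author.most_common(1)[0][1] / len(messages)
print(f"top-sender share of traffic: {top_share:.0%}")
```
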
  • Managerial use of metrics for object-oriented software: an exploratory analysis

    Publication Year: 1998, Page(s): 629–639
    Cited by: Papers (84)
    PDF (268 KB)

    With the increasing use of object-oriented methods in new software development, there is a growing need to both document and improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practising managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures, such as size in lines of code, and after controlling for the effects of individual developers.

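    To make the metrics concrete, here is a small sketch computing three of the Chidamber-Kemerer metrics (WMC with unit complexity weights, DIT, and NOC) over a toy class model; real tools extract such models from source code.

```python
# Three Chidamber-Kemerer metrics over a hypothetical class model.
classes = {
    # name: {"parent": parent name or None, "methods": [...]}
    "Shape":  {"parent": None,     "methods": ["area", "draw"]},
    "Circle": {"parent": "Shape",  "methods": ["area", "radius"]},
    "Square": {"parent": "Shape",  "methods": ["area", "side"]},
    "Disc":   {"parent": "Circle", "methods": ["thickness"]},
}

def wmc(name):
    """Weighted Methods per Class, with each method weighted 1."""
    return len(classes[name]["methods"])

def dit(name):
    """Depth of Inheritance Tree: distance to the root class."""
    depth, parent = 0, classes[name]["parent"]
    while parent is not None:
        depth, parent = depth + 1, classes[parent]["parent"]
    return depth

def noc(name):
    """Number of Children: classes that directly inherit from name."""
    return sum(1 for c in classes.values() if c["parent"] == name)

for name in classes:
    print(f"{name:6s} WMC={wmc(name)} DIT={dit(name)} NOC={noc(name)}")
```
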

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  • tools and environments, e.g., specific tools, integrated tool environments, including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and a comprehensive review of the historical development of one particular area of interest.

Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org