
IEEE Transactions on Software Engineering

Volume 29, Issue 7 • July 2003

  • Comments on "The confounding effect of class size on the validity of object-oriented metrics"

    Page(s): 670 - 672

    It has been proposed by El Emam et al. (ibid. vol. 27 (7), 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics. We take issue with this perspective since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs.

  • Knowledge-based repository scheme for storing and retrieving business components: a theoretical design and an empirical analysis

    Page(s): 649 - 664

    Component-based development (CBD) promises to reduce the complexity and cost of software development and maintenance through reuse. For CBD to be successful, a vibrant market for commercial business components is essential. One of the key requirements of an active market for business components is an effective scheme for classifying and describing them at various levels of detail, as well as a corresponding repository for storing and retrieving these components. Such a scheme needs to support various constituents such as business users, managers, and application assemblers. The scheme and repository should help users and managers to select components that match their requirements and aid application assemblers in identifying components most compatible with their deployment environment (such as the platform) and system inputs (such as data types). Drawing from the concepts of group technology and the software reuse paradigm, this paper proposes a scheme for classifying and describing business components and the design of a knowledge-based repository for their storage and retrieval. The proposed scheme is implemented in a prototype repository. The effectiveness of the prototype and the underlying classification and coding scheme is assessed empirically through controlled experiments. Results support the assertion that the scheme is effective in enhancing the users' and analysts' ability to find the needed business components.

  • An empirical investigation of the influence of a type of side effects on program comprehension

    Page(s): 665 - 670

    This paper reports the results of a study on the impact of a type of side effect (SE) upon program comprehension. We applied a crossover design on different tests involving fragments of C code that include increment and decrement operators. Each test had an SE version and a side-effect-free counterpart. The variables measured in the treatments were the number of correct answers and the time spent in answering. The results show that the side-effect operators considered significantly reduce performance in comprehension-related tasks, providing empirical justification for the belief that side effects are harmful.

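    As a concrete illustration of the kind of material the study compares, here is a minimal, hypothetical C fragment pair in the spirit of the experiment: a side-effect (SE) version that embeds an increment operator inside an expression, and a side-effect-free counterpart computing the same result. The fragments are invented for this listing, not taken from the paper.

        #include <stdio.h>

        /* SE version: the increment is embedded in the subscript
         * expression, so the reader must track the array access and
         * the mutation of i within a single statement. */
        static int sum_se(const int *a, int n)
        {
            int sum = 0, i = 0;
            while (i < n)
                sum += a[i++];   /* read a[i], then increment i */
            return sum;
        }

        /* Side-effect-free counterpart: the mutation of i is hoisted
         * into its own loop clause. */
        static int sum_sef(const int *a, int n)
        {
            int sum = 0;
            for (int i = 0; i < n; i++)
                sum += a[i];
            return sum;
        }

        int main(void)
        {
            int a[] = { 3, 1, 4, 1, 5 };
            printf("%d %d\n", sum_se(a, 5), sum_sef(a, 5));   /* 14 14 */
            return 0;
        }
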
  • A safe algorithm for resolving OR deadlocks

    Page(s): 608 - 622

    Deadlocks in the OR model are usually resolved by aborting a deadlocked process. Prior algorithms for the same model sometimes abort nodes needlessly, wasting computing resources. This paper presents a new deadlock resolution algorithm for the OR model that satisfies the following correctness criteria: (Safety) the algorithm does not resolve false deadlocks; (Liveness) the algorithm resolves all deadlocks in finite time. The communication cost of the algorithm is similar to that of previous nonsafe proposals. The theoretical cost has been validated by simulation. In addition, different algorithm initiation alternatives have been analyzed in order to reduce the latency of deadlock resolution.

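    The resolution algorithm itself is not reproduced in the abstract. The hedged C sketch below only illustrates the standard OR-model deadlock condition that such algorithms build on: a blocked process, which resumes as soon as any one of its requests is granted, is deadlocked exactly when no active process is reachable from it in the wait-for graph. The four-process graph is made up for the example.

        #include <stdio.h>

        #define N 4   /* processes 0..3 in a made-up wait-for graph */

        /* wait_for[i][j] != 0: blocked process i waits for a grant
         * from j. In the OR model, i resumes as soon as ANY such j
         * grants.                                                  */
        static const int wait_for[N][N] = {
            {0,1,0,0},   /* 0 waits for 1      */
            {1,0,0,0},   /* 1 waits for 0      */
            {1,0,0,1},   /* 2 waits for 0 or 3 */
            {0,0,0,0},   /* 3 is active        */
        };
        static const int blocked[N] = { 1, 1, 1, 0 };

        /* A blocked process is deadlocked iff it cannot reach any
         * active process along wait-for edges.                     */
        static int reaches_active(int v, int seen[N])
        {
            if (!blocked[v]) return 1;   /* active: may still grant */
            if (seen[v]) return 0;
            seen[v] = 1;
            for (int w = 0; w < N; w++)
                if (wait_for[v][w] && reaches_active(w, seen))
                    return 1;
            return 0;
        }

        int main(void)
        {
            for (int v = 0; v < N; v++) {
                int seen[N] = { 0 };
                if (blocked[v] && !reaches_active(v, seen))
                    printf("process %d is deadlocked\n", v);
            }
            /* Prints 0 and 1: they wait only on each other. Process 2
             * also waits on 0, but may yet be granted by active 3.  */
            return 0;
        }
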
  • General test result checking with log file analysis

    Page(s): 634 - 648

    We describe and apply a lightweight formal method for checking test results. The method assumes that the software under test writes a text log file; this log file is then analyzed by a program to see if it reveals failures. We suggest a state-machine-based formalism for specifying the log file analyzer programs and describe a language and implementation based on that formalism. We report on empirical studies of the application of log file analysis to random testing of units. We describe the results of experiments done to compare the performance and effectiveness of random unit testing with coverage checking and log file analysis to other unit testing procedures. The experiments suggest that writing a formal log file analyzer and using random testing is competitive with other formal and informal methods for unit testing.

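    The paper's analyzer specification language is not shown in the abstract. As a rough illustration of the state-machine idea, here is a hypothetical C analyzer that scans a log and reports a failure whenever a BEGIN record is not properly matched by an END record; the log format, the pattern strings, and the file name test.log are invented for the example.

        #include <stdio.h>
        #include <string.h>

        /* States of a tiny analyzer: outside or inside a transaction. */
        enum state { IDLE, IN_TXN };

        int main(void)
        {
            FILE *log = fopen("test.log", "r");
            if (!log) { perror("test.log"); return 1; }

            enum state s = IDLE;
            char line[512];
            int lineno = 0, failures = 0;

            while (fgets(line, sizeof line, log)) {
                lineno++;
                if (strstr(line, "BEGIN")) {
                    if (s == IN_TXN) {        /* nested BEGIN: illegal */
                        printf("line %d: BEGIN inside transaction\n", lineno);
                        failures++;
                    }
                    s = IN_TXN;
                } else if (strstr(line, "END")) {
                    if (s == IDLE) {          /* END without BEGIN */
                        printf("line %d: END outside transaction\n", lineno);
                        failures++;
                    }
                    s = IDLE;
                }
            }
            if (s == IN_TXN) {                /* log ended mid-transaction */
                printf("end of log: unmatched BEGIN\n");
                failures++;
            }
            fclose(log);
            return failures ? 1 : 0;          /* nonzero exit = test failed */
        }
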
  • An investigation of graph-based class integration test order strategies

    Page(s): 594 - 607

    The issue of ordering class integration in the context of integration testing has been discussed by a number of researchers. More specifically, strategies have been proposed to generate a test order while minimizing stubbing. Recent papers have addressed the problem of deriving an integration order in the presence of dependency cycles in the class diagram. Such dependencies represent a practical problem as they make any topological ordering of classes impossible. Three main approaches, aimed at "breaking" cycles, have been proposed. The first one was proposed by Tai and Daniels (1999) and is based on assigning a higher-level order according to aggregation and inheritance relationships and a lower-level order according to associations. The second one was proposed by Le Traon et al. (2000) and is based on identifying strongly connected components in the dependency graph. The third one was proposed by Briand et al. (2000); it combines some of the principles of the two previous approaches and addresses some of their shortcomings (e.g., the first approach may result in unnecessary stubbing whereas the second may lead to breaking cycles by "removing" aggregation or inheritance dependencies, thus leading to complex stubbing). This paper reviews these strategies (principles are described, advantages and drawbacks are precisely investigated) and provides both analytical and empirical comparisons based on five case studies.

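    None of the three strategies is spelled out in the abstract. The sketch below illustrates only the shared first step of the Le Traon et al. style of approach: computing the strongly connected components (SCCs) of the class dependency graph, since every dependency cycle lies within an SCC and stubbing decisions are then confined to components. The five-class graph is made up; the SCC computation is Tarjan's algorithm.

        #include <stdio.h>

        #define N 5   /* classes 0..4 in a made-up dependency graph */

        /* dep[i][j] != 0 means class i depends on class j. */
        static const int dep[N][N] = {
            {0,1,0,0,0},   /* 0 -> 1                       */
            {0,0,1,0,0},   /* 1 -> 2                       */
            {1,0,0,1,0},   /* 2 -> 0 (cycle 0-1-2), 2 -> 3 */
            {0,0,0,0,1},   /* 3 -> 4                       */
            {0,0,0,0,0},
        };

        static int idx[N], low[N], onstk[N], stk[N], sp, counter;

        static void strongconnect(int v)   /* Tarjan's SCC algorithm */
        {
            idx[v] = low[v] = ++counter;
            stk[sp++] = v;
            onstk[v] = 1;
            for (int w = 0; w < N; w++) {
                if (!dep[v][w]) continue;
                if (idx[w] == 0) {              /* unvisited: recurse */
                    strongconnect(w);
                    if (low[w] < low[v]) low[v] = low[w];
                } else if (onstk[w] && idx[w] < low[v]) {
                    low[v] = idx[w];            /* back edge into stack */
                }
            }
            if (low[v] == idx[v]) {             /* v roots an SCC */
                int w;
                printf("SCC:");
                do {
                    w = stk[--sp];
                    onstk[w] = 0;
                    printf(" %d", w);
                } while (w != v);
                printf("\n");
            }
        }

        int main(void)
        {
            /* SCCs print dependencies-first: {4}, {3}, then {2 1 0},
             * so integrating in print order needs stubs only inside
             * the cyclic component. */
            for (int v = 0; v < N; v++)
                if (idx[v] == 0)
                    strongconnect(v);
            return 0;
        }
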
  • Inference of message sequence charts

    Page(s): 623 - 633

    Software designers draw message sequence charts (MSCs) for early modeling of the individual behaviors they expect from the concurrent system under design. Can they be sure that precisely the behaviors they have described are realizable by some implementation of the components of the concurrent system? If so, can we automatically synthesize concurrent state machines realizing the given MSCs? If, on the other hand, other unspecified and possibly unwanted scenarios are "implied" by their MSCs, can the software designer be automatically warned and provided the implied MSCs? In this paper, we provide a framework in which all these questions are answered positively. We first describe the formal framework within which one can derive implied MSCs and then provide polynomial-time algorithms for implication, realizability, and synthesis.

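    The formal framework is developed in the paper itself. The hypothetical C sketch below only illustrates the "implied MSC" phenomenon with the classic two-process crossing example: a scenario that appears in neither given MSC, but whose per-process projections each match a behavior the process already exhibits in one of them, so any implementation built from those local behaviors also admits it. The event encoding ("!m" for send, "?m" for receive) is invented for the example.

        #include <stdio.h>
        #include <string.h>

        /* Local projections of two given MSCs onto processes P (0)
         * and Q (1). "!m" = send message m, "?m" = receive m.     */
        static const char *msc1[2] = { "!x ?y", "?x !y" };  /* P sends first */
        static const char *msc2[2] = { "?y !x", "!y ?x" };  /* Q sends first */

        /* Does process p's projection beh occur in a given MSC?   */
        static int known(int p, const char *beh)
        {
            return strcmp(beh, msc1[p]) == 0 || strcmp(beh, msc2[p]) == 0;
        }

        int main(void)
        {
            /* Candidate scenario: both processes send before they
             * receive, so the messages cross. As a whole it equals
             * neither MSC1 nor MSC2 ...                            */
            const char *cand[2] = { "!x ?y", "!y ?x" };

            /* ... yet P acts exactly as in MSC1 and Q exactly as in
             * MSC2, so state machines realizing MSC1 and MSC2 also
             * admit it: an implied MSC the designer is warned about. */
            if (known(0, cand[0]) && known(1, cand[1]))
                printf("crossing scenario is implied by the given MSCs\n");
            return 0;
        }
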
  • A choice relation framework for supporting category-partition test case generation

    Page(s): 577 - 593

    We describe in this paper a choice relation framework for supporting category-partition test case generation. We capture the constraints among various values (or ranges of values) of the parameters and environment conditions identified from the specification, known formally as choices. We express these constraints in terms of relations among choices and combinations of choices, known formally as test frames. We propose a theoretical backbone and techniques for consistency checks and automatic deductions of relations. Based on the theory, algorithms have been developed for generating test frames from the relations. These test frames can then be used as the basis for generating test cases. Our algorithms take into consideration the resource constraints specified by software testers, thus maintaining the effectiveness of the test frames (and hence test cases) generated.

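    The choice relation framework itself is defined in the paper. As a loose illustration of category-partition generation, this hypothetical C sketch enumerates the combinations of choices for two made-up categories and prunes the combination ruled out by a constraint; each surviving combination is a test frame from which a concrete test case can be written.

        #include <stdio.h>

        /* Two made-up categories and their choices. */
        static const char *file_state[] = { "absent", "empty", "non-empty" };
        static const char *open_mode[]  = { "read", "write" };

        /* A constraint among choices: reading an absent file is
         * excluded from the ordinary frames.                     */
        static int consistent(int fs, int om)
        {
            return !(fs == 0 && om == 0);   /* absent + read */
        }

        int main(void)
        {
            int frame = 0;
            for (int fs = 0; fs < 3; fs++)
                for (int om = 0; om < 2; om++)
                    if (consistent(fs, om))
                        printf("frame %d: file=%s, mode=%s\n",
                               ++frame, file_state[fs], open_mode[om]);
            return 0;   /* prints 5 frames */
        }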

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
e) system issues, e.g., hardware-software trade-offs; and
f) state-of-the-art surveys that provide a synthesis and a comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org