IEEE Transactions on Software Engineering

Issue 10 • October 2001

  • EMERALDS: a small-memory real-time microkernel

    Page(s): 909 - 928

    EMERALDS (Extensible Microkernel for Embedded, ReAL-time, Distributed Systems) is a real-time microkernel designed for small-memory embedded applications. These applications must run on slow (15-25 MHz) processors with just 32-128 kbytes of memory, either to keep production costs down in mass-produced systems or to keep weight and power consumption low. To be feasible for such applications, the OS must not only be small in size (less than 20 kbytes) but must also have low-overhead kernel services. Unlike commercial embedded OSs, which rely on carefully optimized code to achieve efficiency, EMERALDS takes the approach of redesigning the basic OS services of task scheduling, synchronization, communication, and the system call mechanism by exploiting characteristics found in small-memory embedded systems, such as small code size and a priori knowledge of task execution and communication patterns. With these new schemes, the overheads of various OS services are reduced by 20-40 percent without compromising any OS functionality.

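A hedged sketch of the kind of redesign the abstract alludes to: if task rates and deadlines are known a priori, the task set can be split offline into a small dynamic-priority (EDF) group and a cheaper fixed-priority group, so most scheduler invocations scan only a short list. This is an illustration of the general idea, not EMERALDS code; the Task fields and the split are invented.

```python
# Illustrative only: a two-level ready-task picker. Assumes the task set was
# partitioned offline (using a priori knowledge of rates/deadlines) into an
# EDF-scheduled set and a fixed-priority set. Not EMERALDS's actual scheduler.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float = 0.0   # absolute deadline (EDF set only)
    priority: int = 0       # static priority (fixed-priority set only)
    ready: bool = False

def pick_next(edf_set: list[Task], fp_set: list[Task]) -> Task | None:
    """Time-critical EDF tasks always win; otherwise fall back to the
    highest-priority ready task in the fixed-priority set."""
    ready_edf = [t for t in edf_set if t.ready]
    if ready_edf:
        return min(ready_edf, key=lambda t: t.deadline)  # earliest deadline
    ready_fp = [t for t in fp_set if t.ready]
    if ready_fp:
        return max(ready_fp, key=lambda t: t.priority)   # highest priority
    return None
```

The point of such a split is that the expensive decision (which tasks belong in which group) happens offline, keeping each runtime scheduling decision cheap enough for a 15-25 MHz processor.
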
  • Performance evaluation of mobile processes via abstract machines

    Page(s): 867 - 889

    We use a structural operational semantics that drives the inference of quantitative measures of system evolution. The transitions of the system are labeled, and we assign rates to them by looking only at these labels. The rates reflect the possibly distributed architecture on which applications run. We then map transition systems to Markov chains, and performance evaluation is carried out using standard tools. As a working example, we compare the performance of a conventional uniprocessor with a prefetch pipeline machine. We also consider two case studies from the literature involving mobile computation to show that our framework is feasible.

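The recipe in the abstract (label transitions, assign rates per label, build a Markov chain, analyze with standard tools) can be sketched concretely. The states, transitions, and rate table below are invented for illustration, and NumPy stands in for the "standard tools"; this is not the authors' machinery.

```python
# Hedged sketch: labelled transition system -> continuous-time Markov chain.
# Rates are assigned by looking only at transition labels, as in the abstract.
import numpy as np

states = ["idle", "fetch", "exec"]                  # hypothetical LTS states
transitions = [                                     # (source, label, target)
    ("idle", "req", "fetch"),
    ("fetch", "load", "exec"),
    ("exec", "done", "idle"),
]
rate_of = {"req": 2.0, "load": 5.0, "done": 1.0}    # one rate per label

n = len(states)
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((n, n))                                # CTMC generator matrix
for src, label, dst in transitions:
    Q[idx[src], idx[dst]] += rate_of[label]
np.fill_diagonal(Q, -Q.sum(axis=1))                 # generator rows sum to 0

# Steady-state distribution: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(states, pi.round(4))))               # long-run state probabilities
```

From the steady-state probabilities, throughput-style measures (e.g., the long-run rate of "done" transitions) follow by multiplying state probabilities by exit rates.
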
  • Prioritizing test cases for regression testing

    Page(s): 929 - 948

    Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.

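A minimal sketch of the first two technique families the abstract names: ordering by total coverage and greedily by additional (not-yet-covered) coverage. It assumes per-test coverage sets are already available; the data is invented, and the paper's fault-exposing-potential techniques are not reproduced here.

```python
# Hedged sketch of coverage-based test prioritization; not the authors' code.
coverage = {                        # test -> set of covered code components
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2, 3, 4, 5},
}

def total_order(cov):
    """'Total' technique: sort by raw number of covered components."""
    return sorted(cov, key=lambda t: len(cov[t]), reverse=True)

def additional_order(cov):
    """'Additional' technique: greedily pick the test covering the most
    still-uncovered components; reset coverage once nothing new is added."""
    remaining, covered, order = dict(cov), set(), []
    while remaining:
        gain = {t: len(remaining[t] - covered) for t in remaining}
        best = max(gain, key=gain.get)
        if gain[best] == 0:
            if not covered:                  # leftover tests cover nothing
                order.extend(remaining)
                break
            covered = set()                  # full coverage reached: reset
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(total_order(coverage))        # ['t4', 't1', 't2', 't3']
print(additional_order(coverage))   # also ['t4', 't1', 't2', 't3'] here
```

On richer suites the two orderings diverge; the paper's experiments measure which orderings detect faults fastest.
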
  • Software cost estimation with incomplete data

    Page(s): 890 - 908

    The construction of software cost estimation models remains an active topic of research. The basic premise of cost modeling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases and may be detrimental to the accuracy of cost estimation models. We describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modeling. Three techniques are evaluated: listwise deletion, mean imputation, and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well, with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, it will not necessarily provide the best performance. Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardization.

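The technique the abstract singles out as the consistent best performer (hot-deck imputation with Euclidean distance and z-score standardization) can be sketched directly. The project data below is invented, and this is an illustration of the method, not the authors' simulation code.

```python
# Hedged sketch of hot-deck imputation: for each incomplete row, z-score
# standardize the observed features, find the nearest complete "donor" row
# by Euclidean distance, and copy the donor's values into the gaps.
import numpy as np

# rows = projects, columns = e.g. [size, team, effort]; np.nan = missing
data = np.array([
    [120.0,  8.0, 310.0],
    [ 45.0,  3.0,  95.0],
    [200.0, 12.0, np.nan],   # effort missing for this project
    [ 60.0,  4.0, 120.0],
])

def hot_deck_impute(X):
    X = X.copy()
    complete = ~np.isnan(X).any(axis=1)          # donor pool: fully observed rows
    for i in np.where(~complete)[0]:
        obs = ~np.isnan(X[i])                    # features observed in recipient
        donors = X[complete][:, obs]
        mu, sd = donors.mean(axis=0), donors.std(axis=0)
        sd[sd == 0] = 1.0                        # guard against zero variance
        z_donors = (donors - mu) / sd            # z-score standardization
        z_i = (X[i, obs] - mu) / sd
        nearest = np.argmin(np.linalg.norm(z_donors - z_i, axis=1))
        X[i, ~obs] = X[complete][nearest, ~obs]  # copy the donor's values
    return X

print(hot_deck_impute(data))   # row 3 inherits effort from its nearest donor
```

Standardizing first matters because otherwise the feature with the largest scale (here, size) would dominate the distance.
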
  • On comparisons of random, partition, and proportional partition testing

    Page(s): 949 - 960

    Early studies of random versus partition testing used the probability of detecting at least one failure as a measure of test effectiveness and indicated that partition testing is not significantly more effective than random testing. More recent studies have focused on proportional partition testing because a proportional allocation of the test cases (according to the probabilities of the subdomains) can guarantee that partition testing will perform at least as well as random testing. We show that this goal for partition testing is not a worthwhile one: guaranteeing that partition testing has at least as high a probability of detecting a failure comes at the expense of decreasing its relative advantage over random testing. We then discuss other problems with previous studies and show that failure to include important factors (cost, relative effectiveness) can lead to misleading results.

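The effectiveness measure used by the early studies, and the proportional-allocation guarantee the abstract critiques, can both be made concrete with a small numerical sketch. The subdomain probabilities and failure rates below are invented; with exact proportional allocation, the partition-testing figure is provably at least the random-testing one.

```python
# Hedged sketch: probability of detecting at least one failure under random
# testing versus proportional partition testing. Inputs are invented.
import math

p     = [0.5, 0.3, 0.2]      # operational probability of each subdomain
theta = [0.01, 0.0, 0.05]    # failure rate within each subdomain
n = 100                      # total number of test cases

# Random testing: each test lands in subdomain i with probability p[i].
theta_overall = sum(pi * ti for pi, ti in zip(p, theta))
p_random = 1 - (1 - theta_overall) ** n

# Proportional partition testing: allocate n_i = n * p_i tests per subdomain.
alloc = [round(n * pi) for pi in p]
p_partition = 1 - math.prod((1 - ti) ** ni for ti, ni in zip(theta, alloc))

print(f"random:    {p_random:.4f}")     # ~0.7794 with these inputs
print(f"partition: {p_partition:.4f}")  # ~0.7831 -- at least as high
```

The abstract's point is visible even in this toy case: the guarantee holds, but the margin over random testing is small, which is the diminished relative advantage the authors argue makes proportional allocation an unrewarding goal.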

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  • tools and environments, e.g., specific tools, integrated tool environments, including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org