IEEE Transactions on Software Engineering

Issue 3 • May 1983

  • IEEE Transactions on Software Engineering - Table of contents

    Page(s): c1
  • IEEE Computer Society

    Page(s): c2
  • Editorial

    Page(s): 217
  • State-of-the-Art Issues in Distributed Databases

    Page(s): 218
  • A Formal Model of Crash Recovery in a Distributed System

    Page(s): 219 - 228

    A formal model for atomic commit protocols for a distributed database system is introduced. The model is used to prove existence results about resilient protocols for site failures that do not partition the network and then for partitioned networks. For site failures, a pessimistic recovery technique, called independent recovery, is introduced and the class of failures for which resilient protocols exist is identified. For partitioned networks, two cases are studied: the pessimistic case in which messages are lost, and the optimistic case in which no messages are lost. In all cases, fundamental limitations on the resiliency of protocols are derived.

  • Dynamic Rematerialization: Processing Distributed Queries Using Redundant Data

    Page(s): 228 - 232

    In this paper an approach to processing distributed queries that makes explicit use of redundant data is proposed. The basic idea is to focus on the dynamics of materialization, defined as the collection of data and partial results available for processing at any given time, as query processing proceeds. In this framework the role of data redundancy in maximizing parallelism and minimizing data movement is clarified. What results is not only the discovery of new algorithms but an improved framework for their evaluation.

  • Analyzing Concurrency Control Algorithms When User and System Operations Differ

    Page(s): 233 - 239

    Concurrency control algorithms for database systems are usually regarded as methods for synchronizing Read and Write operations. Such methods are judged to be correct if they only produce serializable executions. However, Reads and Writes are sometimes inaccurate models of the operations executed by a database system. In such cases, serializability does not capture all aspects of concurrency control executions. To capture these aspects, we describe a proof schema for analyzing concurrency control correctness. We illustrate the proof schema by presenting two new concurrency algorithms for distributed database systems.

  • Detection of Mutual Inconsistency in Distributed Systems

    Page(s): 240 - 247

    Many distributed systems are now being developed to provide users with convenient access to data via some kind of communications network. In many cases it is desirable to keep the system functioning even when it is partitioned by network failures. A serious problem in this context is how one can support redundant copies of resources such as files (for the sake of reliability) while simultaneously monitoring their mutual consistency (the equality of multiple copies). This is difficult since network failures can lead to inconsistency, and disrupt attempts at maintaining consistency. In fact, even the detection of inconsistent copies is a nontrivial problem. Naive methods either 1) compare the multiple copies entirely or 2) perform simple tests which will diagnose some consistent copies as inconsistent. Here a new approach, involving version vectors and origin points, is presented and shown to detect single-file, multiple-copy mutual inconsistency effectively. The approach has been used in the design of LOCUS, a local network operating system at UCLA.

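    A minimal sketch of the version-vector comparison that underlies this kind of detection (hypothetical Python, not the LOCUS implementation): each copy carries a vector of per-site update counts, and a conflict is flagged when neither vector dominates the other.

        def compare_versions(va, vb):
            """Compare two version vectors (dicts mapping site -> update count).

            Returns "equal", "a_dominates", "b_dominates", or "conflict".
            A site missing from a vector is treated as a count of zero.
            """
            sites = set(va) | set(vb)
            a_ge = all(va.get(s, 0) >= vb.get(s, 0) for s in sites)
            b_ge = all(vb.get(s, 0) >= va.get(s, 0) for s in sites)
            if a_ge and b_ge:
                return "equal"
            if a_ge:
                return "a_dominates"   # copy A has seen every update copy B has
            if b_ge:
                return "b_dominates"
            return "conflict"          # concurrent updates, e.g., across a partition

        # Both partitions updated the file independently -> mutual inconsistency.
        print(compare_versions({"s1": 2, "s2": 1}, {"s1": 1, "s2": 2}))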
  • Input–Output Tools: A Language Facility for Interactive and Real-Time Systems

    Page(s): 247 - 259

    A conceptual model is discussed which allows the hierarchic definition of high-level input driven objects, called input-output tools, from any set of basic input primitives. An input-output tool is defined as a named object. Its most important elements are the input rule, output rule, internal tool definitions, and a tool body consisting of executable statements. The input rule contains an expression with tool designators as operands and with operators allowing for sequencing, selection, interleaving, and repetition. Input rules are similar in appearance to production rules in grammars. The input expression specifies one or more input sequences, or input patterns, in terms of tool designators. An input parser tries, at run-time, to match (physical) input tokens against active input sequences. If a match between an input token and a tool designator is found, the corresponding tool body is executed, and the output is generated according to specifications in the tool body. The control structures in the input expression allow a variety of input patterns from any number of sources. Tool definitions may occur in-line or be stored in a library. All tools are ultimately encompassed in one tool representing the program.

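    A toy sketch of the run-time matching idea (hypothetical Python; the names Tool and dispatch and the token format are illustrative, not the paper's notation): a tool pairs an input pattern with a body, and a dispatcher fires the body once incoming tokens complete the pattern.

        class Tool:
            """A toy input-output tool: an input pattern plus a body run on a match."""
            def __init__(self, name, pattern, body):
                self.name = name
                self.pattern = pattern   # expected sequence of token kinds
                self.body = body         # callable(matched tokens) -> output

        def dispatch(tools, token_stream):
            """Match incoming (kind, value) tokens against the active tools' patterns."""
            outputs = []
            partial = {t.name: [] for t in tools}    # tokens matched so far per tool
            for token in token_stream:
                kind, _ = token
                for tool in tools:
                    matched = partial[tool.name]
                    if tool.pattern[len(matched)] == kind:
                        matched.append(token)
                        if len(matched) == len(tool.pattern):   # input rule satisfied
                            outputs.append(tool.body(matched))
                            partial[tool.name] = []
                    else:
                        # restart the pattern; the token may begin a new match
                        partial[tool.name] = [token] if tool.pattern[0] == kind else []
            return outputs

        # A "move" tool fires on the sequence: button press, position, button release.
        move = Tool("move", ["press", "pos", "release"],
                    lambda toks: "moved to {}".format(toks[1][1]))
        print(dispatch([move], [("press", None), ("pos", (3, 4)), ("release", None)]))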
  • File Structures, Program Structures, and Attributed Grammars

    Page(s): 260 - 266

    A language for defining sequential file structures, characterized as nested sequences of records having in common certain keys and types, is presented. "Input schemata" are defined as program skeletons that contain all the necessary control structure to process a specified file. A method for obtaining an input schema from the corresponding file structure definition is given. The method is based on attributed grammars, and has been implemented in the programming language PROLOG. This constitutes a formalization of some aspects of the data-directed program design method of Jackson and Warnier. Examples of applications of this method to business data processing problems such as file updating and report generation are given.

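    A rough sketch of the correspondence between a file structure and its input schema, assuming a file that is a sequence of groups of records sharing a key (hypothetical Python example; the paper derives such skeletons from attributed grammars and implements the method in PROLOG).

        import itertools

        def process_file(records, key):
            """Program skeleton whose control structure mirrors the file structure:
            file = sequence of groups; group = consecutive records sharing a key."""
            for group_key, group in itertools.groupby(records, key=key):
                total = 0                     # per-group processing
                for record in group:          # inner loop mirrors the record sequence
                    total += record["amount"]
                print("group {}: total {}".format(group_key, total))

        # A tiny sorted transaction file; groups are runs of equal account keys.
        process_file(
            [{"account": "A", "amount": 10}, {"account": "A", "amount": 5},
             {"account": "B", "amount": 7}],
            key=lambda r: r["account"],
        )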
  • Compilation of Nonprocedural Specifications into Computer Programs

    Page(s): 267 - 279

    The paper describes the compilation of a program specification, written in the very high level nonprocedural MODEL language, into an object program in a procedural language (PL/1 or Cobol). Nonprocedural programming languages are descriptive and devoid of procedural controls. They are therefore easier to use and require less programming skill than procedural languages. The MODEL language is briefly presented and illustrated, followed by a description of the compilation process. An important early phase in the compilation is the representation of the specification by a dependency graph, denoted as array graph, which expresses the data flow interdependencies between statements. Two classes of algorithms which utilize this graph are next described. The first class checks various completeness, nonambiguity, and consistency aspects of the specification. Upon detecting any problems, the system attempts some automatic corrective measures, which are reported to the user, or, when no correction appears reasonable, it reports the error and solicits a modification from the user. The second class of algorithms produces an intermediate design of an object program in a language-independent form. Finally, PL/1 or Cobol code is generated.

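    A minimal sketch of the dependency-graph idea, assuming each nonprocedural statement defines one variable in terms of others (hypothetical Python; the variable names and checks are illustrative, not the MODEL compiler's): undefined references are reported, and a topological order of the graph yields an evaluation order for the generated program.

        from graphlib import TopologicalSorter

        # Each nonprocedural statement: defined variable -> variables it depends on.
        spec = {
            "gross": ["hours", "rate"],
            "tax":   ["gross"],
            "net":   ["gross", "tax"],
        }
        inputs = {"hours", "rate"}          # supplied by input files

        # Completeness check: every referenced variable is defined or is an input.
        undefined = {v for deps in spec.values() for v in deps} - set(spec) - inputs
        if undefined:
            print("undefined variables:", undefined)

        # A topological order of the dependency (array) graph gives the execution
        # order of the object program; a cycle would raise CycleError.
        print("evaluation order:", list(TopologicalSorter(spec).static_order()))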
  • A Diagrammatic Notation for Abstract Syntax and Abstract Structured Objects

    Page(s): 280 - 289

    The concept of abstract syntax is commonly applied to the formal specification of programming language semantics and is also useful in the broader context of software design. This paper proposes a scheme for the representation of abstract syntax specifications in the form of easily understood charts. The structuring primitives considered are those of scalar enumeration, heterogeneous aggregation, homogeneous lists and sets, disjunction, and partial functions. Secondarily, a related system of charts for depicting particular objects with a given structure is proposed. Examples are given to illustrate the use of these diagrammatic notations in the contexts of language description and software design and documentation.

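    The structuring primitives listed above map naturally onto ordinary type definitions; a small sketch of one abstract syntax fragment (hypothetical Python dataclasses; the paper itself proposes charts, not code).

        from dataclasses import dataclass
        from enum import Enum
        from typing import List, Union

        class BinOp(Enum):                 # scalar enumeration
            ADD = "+"
            MUL = "*"

        @dataclass
        class Num:                         # heterogeneous aggregation
            value: int

        @dataclass
        class Var:
            name: str

        @dataclass
        class BinExpr:
            op: BinOp
            left: "Expr"
            right: "Expr"

        Expr = Union[Num, Var, BinExpr]    # disjunction of alternatives

        @dataclass
        class Assign:
            target: Var
            expr: Expr

        @dataclass
        class Program:
            body: List[Assign]             # homogeneous list

        # The abstract object for the statement "x := 2 * y":
        print(Program([Assign(Var("x"), BinExpr(BinOp.MUL, Num(2), Var("y")))]))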
  • The Design for a Secure System Based on Program Analysis

    Page(s): 289 - 299

    This paper describes the design of a prototype experimental secure operating system kernel called xsl that supports compile-time enforcement of an information flow policy. The security model chosen is an extension of Feiertag's model modified to state requirements in terms of program analysis functions. A prototype flow analyzer for Pascal programs, based on Denning's model, has been designed and implemented for incorporation into xsl. In addition, a flow analyzer based on London's model has also been designed and implemented. Both kinds of enforcement are supported in xsl. Both program analyzers use an intermediate code program representation, originally designed for code optimization. Implementation of the flow analyzers is in Euclid, with the remainder of xsl in Pascal.

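    A minimal sketch of a lattice-style explicit-flow check in the spirit of Denning's model (hypothetical Python; the two-level lattice, variable names, and statement encoding are illustrative, far simpler than the analyzers described).

        # Security classes form a two-level lattice: LOW may flow to HIGH, not back.
        LOW, HIGH = 0, 1
        labels = {"public_total": LOW, "salary": HIGH, "report": HIGH}

        # Each statement: (assigned variable, variables its right-hand side reads).
        statements = [
            ("report", ["salary", "public_total"]),   # HIGH := f(HIGH, LOW) -> allowed
            ("public_total", ["salary"]),             # LOW  := f(HIGH)      -> violation
        ]

        def check_flows(statements, labels):
            """The target's class must dominate the join (here, max) of the classes
            of everything the assigned value is derived from."""
            return [(target, sources) for target, sources in statements
                    if max(labels[s] for s in sources) > labels[target]]

        print(check_flows(statements, labels))   # [('public_total', ['salary'])]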
  • VSWS: The Variable-Interval Sampled Working Set Policy

    Page(s): 299 - 305

    A local variable-size memory policy called the variable-interval sampled working set (VSWS) policy is described. The results of trace-driven simulation experiments reported here show that VSWS has a static performance comparable to those of the working set (WS) and sampled working set (SWS) policies, a dynamic performance better than those of WS, SWS, and the page fault frequency (PFF) policy, and similar to that of the damped working set (DWS) policy. Furthermore, VSWS generally causes substantially fewer process suspensions than SWS, and is less expensive to implement than WS or DWS, since it requires the same hardware support as SWS and PFF. The sampling overhead of VSWS is comparable to that of SWS and lower than that of PFF.

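    A rough sketch of a variable-interval sampled working set simulation (hypothetical Python; the parameter names M, L, Q and the exact sampling rule are assumptions for illustration, not the paper's definition): samples are at least M and at most L references apart, taken early once Q faults have accumulated, and pages unreferenced since the last sample are dropped.

        def vsws_simulate(references, M, L, Q):
            """Count faults under a toy variable-interval sampled working set policy."""
            resident, referenced = set(), set()
            faults = since_sample = faults_since_sample = 0
            for page in references:
                since_sample += 1
                if page not in resident:
                    faults += 1
                    faults_since_sample += 1
                    resident.add(page)
                referenced.add(page)
                if since_sample >= L or (faults_since_sample >= Q and since_sample >= M):
                    resident &= referenced        # evict pages unused in the interval
                    referenced = set()
                    since_sample = faults_since_sample = 0
            return faults, sorted(resident)

        print(vsws_simulate([1, 2, 1, 3, 4, 2, 1, 5, 1, 2], M=2, L=6, Q=3))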
  • The Study of a New Perfect Hash Scheme

    Page(s): 305 - 313

    A new approach is proposed for the design of perfect hash functions. The algorithms developed can be effectively applied to key sets of large size. The basic ideas employed in the construction are rehash and segmentation. Analytic results are given which are applicable when problem sizes are small. Extensive experiments have been performed to test the approach for problems of larger size.

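    The abstract names rehash and segmentation as the basic ideas; a toy sketch in that spirit (hypothetical Python, not the paper's construction or analysis): keys are segmented by a first-level hash, and each segment is rehashed with successive salts until its keys occupy distinct slots of a private sub-table.

        def _h(key, salt, m):
            """Simple salted string hash into range(m)."""
            value = salt
            for ch in key:
                value = (value * 131 + ord(ch)) % 1_000_000_007
            return value % m

        def build_perfect_hash(keys, segments=4):
            """Toy perfect hash: segment the keys, then rehash each segment until
            its keys land in distinct slots of a sub-table of quadratic size."""
            buckets = [[] for _ in range(segments)]
            for key in keys:
                buckets[_h(key, 0, segments)].append(key)
            tables, salts = [], []
            for bucket in buckets:
                size = max(1, len(bucket) ** 2)
                salt = 1
                while len({_h(k, salt, size) for k in bucket}) < len(bucket):
                    salt += 1                     # rehash with the next salt
                table = [None] * size
                for k in bucket:
                    table[_h(k, salt, size)] = k
                tables.append(table)
                salts.append(salt)
            return tables, salts

        def lookup(key, tables, salts, segments=4):
            seg = _h(key, 0, segments)
            table = tables[seg]
            return table[_h(key, salts[seg], len(table))] == key

        keys = ["begin", "end", "if", "then", "else", "while", "do", "var"]
        tables, salts = build_perfect_hash(keys)
        print(all(lookup(k, tables, salts) for k in keys))   # every key found, no probing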
  • Nonsensitive Data and Approximate Transactions

    Page(s): 314 - 322

    A methodology has been proposed for solving database problems requiring only approximate solutions. Data items are classified as sensitive and nonsensitive. An approximate transaction modifies only the nonsensitive data items which need not satisfy strong consistency constraints, and provides results only up to a degree of approximation. Further, it is shown that such an approach improves the performance in situations where transaction conflicts are frequent. Additionally, the methodology provides users as well as data managers with mechanisms to control the precision of the computation, preserving the qualitative characteristics of the data items.

  • Optimal Release Time of Computer Software

    Page(s): 323 - 327

    A decision procedure to determine when computer software should be released is described. This procedure is based upon the cost-benefit for the entire company that has developed the software. This differs from the common practice of only minimizing the repair costs for the data processing division. Decision rules are given to determine at what time the system should be released based upon the results of testing the software. Necessary and sufficient conditions are identified which determine when the system should be released (immediately, before the deadline, at the deadline, or after the deadline). No assumptions are made about the relationship between any of the model's parameters. The model can be used whether the software was developed by a first or second party. The case where future costs are discounted is also considered.

  • Combining Testing with Formal Specifications: A Case Study

    Page(s): 328 - 335

    This paper describes our experience specifying, implementing, and validating a record-oriented text editor similar to one discussed in [7]. Algebraic axioms served as the specification notation; and the implementation was tested with a compiler-based system that uses the axioms to test implementations with a finite collection of test cases. Formal specifications were sometimes difficult to produce, but helped reveal errors during unit testing. Thorough exercising of the implementations by the specifications resulted in few errors persisting until integration.

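    A small sketch of axiom-driven testing for a much simpler cursor-addressed buffer (hypothetical Python; the axioms and operations are illustrative, not the record-oriented editor or the compiler-based system of the paper).

        # Implementation under test: a cursor-addressed buffer, represented (text, cursor).
        def empty():
            return ("", 0)

        def insert(buf, ch):
            text, cur = buf
            return (text[:cur] + ch + text[cur:], cur + 1)

        def delete(buf):
            text, cur = buf
            return (text[:cur - 1] + text[cur:], cur - 1) if cur else buf

        def left(buf):
            text, cur = buf
            return (text, max(cur - 1, 0))

        def right(buf):
            text, cur = buf
            return (text, min(cur + 1, len(text)))

        # Algebraic axioms phrased as executable equations over the operations.
        axioms = [
            ("delete(insert(b, c)) = b",
             lambda b, c: delete(insert(b, c)) == b),
            ("right(left(insert(b, c))) = insert(b, c)",
             lambda b, c: right(left(insert(b, c))) == insert(b, c)),
            ("delete(empty()) = empty()",
             lambda b, c: delete(empty()) == empty()),
        ]

        # A finite collection of test points, in the spirit of axiom-driven unit testing.
        test_buffers = [empty(), insert(empty(), "a"), left(insert(insert(empty(), "a"), "b"))]
        for name, axiom in axioms:
            ok = all(axiom(b, c) for b in test_buffers for c in "xy")
            print(("PASS " if ok else "FAIL ") + name)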
  • Testing for Perturbations of Program Statements

    Page(s): 335 - 346

    Many testing methods require the selection of a set of paths on which tests are to be conducted. Errors in arithmetic expressions within program statements can be represented as perturbing functions added to the correct expression. It is then possible to derive the set of errors in a chosen functional class which cannot possibly be detected using a given test path. For example, test paths which pass through an assignment statement "X := f(Y)" are incapable of revealing if the expression "X - f(Y)" has been added to later statements. In general, there are an infinite number of such undetectable error perturbations for any test path. However, when the chosen functional class of error expressions is a vector space, a finite characterization of all undetectable expressions can be found for one test path, or for combined testing along several paths. An analysis of the undetected perturbations for sequential programs operating on integers and real numbers is presented which permits the detection of multinomial error terms. The reduction of the space of potential undetected errors is proposed as a criterion for test path selection.

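    The assignment example above can be made concrete: once "X := f(Y)" has executed on a path, any later expression perturbed by a multiple of X - f(Y) evaluates identically along that path, so no test confined to the path can expose it (hypothetical Python illustration, not the paper's vector-space analysis).

        def correct_path(y):
            x = y * y              # X := f(Y), so X - Y*Y == 0 from here on
            return x + 3 * y       # intended expression

        def perturbed_path(y, c=7):
            x = y * y
            return x + 3 * y + c * (x - y * y)   # error term vanishes on this path

        # No test input driven down this path can distinguish the two versions.
        print(all(correct_path(y) == perturbed_path(y) for y in range(-100, 101)))  # True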
  • A Data Flow Oriented Program Testing Strategy

    Page(s): 347 - 354

    Some properties of a program's data flow can be used to guide program testing. The presented approach aims to exercise use-definition chains that appear in the program. Two such data-oriented testing strategies are proposed; the first involves checking liveness of every definition of a variable at the point(s) of its possible use; the second deals with liveness of vectors of variables treated as arguments to an instruction or program block. Reliability of these strategies is discussed with respect to a program containing an error.

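    A minimal sketch of enumerating the use-definition chains such a strategy would try to exercise, assuming straight-line code given as (defined variable, used variables) pairs (hypothetical Python; the paper's liveness-based strategies are not reproduced).

        def def_use_chains(statements):
            """statements: list of (defined_var, [used_vars]) in program order.
            Returns (definition line, use line, variable) triples a data-flow
            oriented test suite would try to exercise."""
            chains = []
            last_def = {}                    # variable -> line of its latest definition
            for line, (defined, used) in enumerate(statements, start=1):
                for var in used:
                    if var in last_def:
                        chains.append((last_def[var], line, var))
                if defined is not None:
                    last_def[defined] = line
            return chains

        program = [
            ("x", []),            # 1: x := input
            ("y", ["x"]),         # 2: y := x + 1    (uses x defined at 1)
            ("x", ["y"]),         # 3: x := y * 2    (uses y defined at 2)
            (None, ["x", "y"]),   # 4: print(x, y)   (uses defs at 3 and 2)
        ]
        print(def_use_chains(program))  # [(1, 2, 'x'), (2, 3, 'y'), (3, 4, 'x'), (2, 4, 'y')]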
  • A Framework for Software Fault Tolerance in Real-Time Systems

    Page(s): 355 - 364

    Real-time systems often have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be expensive and the cost is exacerbated for systems which utilize concurrent processes. The concurrency present in most real-time systems and the further difficulties introduced by timing constraints suggest that providing tolerance for software faults may be inordinately expensive or complex. We believe that this need not be the case, and propose a straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time systems. The approach takes advantage of the structure of real-time systems to simplify error recovery, and a classification scheme for errors is introduced. Responses to each type of error are proposed which allow service to be maintained.

  • The Noisy Substring Matching Problem

    Page(s): 365 - 370

    Let T(U) be the set of words in the dictionary H which contain U as a substring. The problem considered here is the estimation of the set T(U) when U is not known, but Y, a noisy version of U, is available. The suggested set estimate S*(Y) of T(U) is a proper subset of H such that its every element contains at least one substring which resembles Y most according to the Levenshtein metric. The proposed algorithm for the computation of S*(Y) requires cubic time. The algorithm uses the recursively computable dissimilarity measure Dk(X, Y), termed the kth distance between two strings X and Y, which is a dissimilarity measure between Y and a certain subset of the set of contiguous substrings of X. Another estimate of T(U), namely SM(Y), is also suggested. The accuracy of SM(Y) is only slightly less than that of S*(Y), but the computation time of SM(Y) is substantially less than that of S*(Y). Experimental results involving 1900 noisy substrings and dictionaries which are subsets of the 1023 most common English words [11] indicate that the accuracy of the estimate S*(Y) is around 99 percent and that of SM(Y) is about 98 percent.

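    The core matching step can be approximated with a standard dynamic program: the Levenshtein distance between the noisy pattern Y and the best-matching contiguous substring of a dictionary word (hypothetical Python; the paper's Dk measure and the estimates S*(Y) and SM(Y) are not reproduced).

        def best_substring_distance(y, x):
            """Minimum Levenshtein distance between y and any contiguous substring of x.
            Standard O(|y|*|x|) DP with a free starting position in x."""
            prev = [0] * (len(x) + 1)                 # matching may start anywhere in x
            for i, yc in enumerate(y, start=1):
                curr = [i] + [0] * len(x)
                for j, xc in enumerate(x, start=1):
                    curr[j] = min(prev[j - 1] + (yc != xc),   # substitute / match
                                  prev[j] + 1,                # delete from y
                                  curr[j - 1] + 1)            # insert into y
                prev = curr
            return min(prev)                          # matching may end anywhere in x

        def estimate(noisy, dictionary):
            """Words whose best-matching substring is closest to the noisy pattern."""
            scored = [(best_substring_distance(noisy, word), word) for word in dictionary]
            best = min(d for d, _ in scored)
            return [word for d, word in scored if d == best]

        print(estimate("formasion", ["information", "transformation", "format", "nation"]))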
  • Comments on "Protocols for Deadlock Detection in Distributed Database Systems"

    Page(s): 371

    The two-phase deadlock detection protocol in the above paper detects false deadlocks. This is contrary to what the authors claim. The false detection of deadlocks is shown using a counterexample.

  • 1982 Referees List

    Page(s): 372
  • IEEE Computer Society Publications

    Page(s): 372-a

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:
  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurements and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-off; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org