IEEE Transactions on Software Engineering

Issue 6 • Nov. 1984

  • [Front cover]

    Publication Year: 1984 , Page(s): c1
  • Table of contents

    Publication Year: 1984 , Page(s): nil1
  • A Note from the Editor-in-Chief

    Publication Year: 1984 , Page(s): 613
  • Representative instances and γ-acyclic relational schemes

    Publication Year: 1984 , Page(s): 614 - 618
    Cited by:  Papers (2)

    In this paper, we study the conditions under which a pairwise consistent relational database ≪R,r≫ has a universal/representative instance L. If R is γ-acyclic and r satisfies all existence constraints, then it is possible to construct a universal instance L, using unmarked nulls, whose total projections onto R yield exactly the relations in r. We show that L is in fact a representative instance under a set of functional dependencies if R satisfies the following mild additional condition: for any functional dependency X → A, where A is a single attribute, whenever XA is contained in two relation schemes R and R' of R, it follows that R ∩ R' is a relation scheme of R, having X as one of its keys.
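
    As a toy illustration of the construction (a minimal Python sketch under assumed schemes and tuples, not the paper's algorithm), a universal instance can be assembled by padding each tuple with unmarked nulls; the total projections, i.e., the projected rows containing no nulls, then return exactly the original relations:

        # Hypothetical attribute universe, relation schemes, and tuples.
        ATTRS = ["A", "B", "C"]
        schemes = {"R1": ["A", "B"], "R2": ["B", "C"]}
        relations = {"R1": [(1, 2), (3, 4)], "R2": [(2, 5)]}

        # Pad every tuple out to ATTRS, marking missing attributes
        # with None (an unmarked null).
        universal = []
        for name, scheme in schemes.items():
            for t in relations[name]:
                row = {a: None for a in ATTRS}
                row.update(dict(zip(scheme, t)))
                universal.append(row)

        def total_projection(instance, scheme):
            """Project onto scheme, keeping only rows with no nulls there."""
            out = set()
            for row in instance:
                vals = tuple(row[a] for a in scheme)
                if None not in vals:
                    out.add(vals)
            return out

        # The total projections recover the stored relations exactly.
        for name, scheme in schemes.items():
            assert total_projection(universal, scheme) == set(relations[name])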

  • Knowledge Representation for Model Management Systems

    Publication Year: 1984 , Page(s): 619 - 628
    Cited by:  Papers (100)  |  Patents (7)

    This paper examines the concept of a model management system, what its functions are, and how they are to be achieved in a decision support context. The central issue is model representation which involves knowledge representation and knowledge management within a database environment. The model abstraction structure is introduced as a vehicle for model representation which supports both heuristic and deterministic inferencing as well as the conceptual/external schema notion familiar to database management. The model abstraction is seen as a special instance of the frame construct in artificial intelligence. Model management systems are characterized as frame-systems and a database implementation of this approach is described.
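
    Since the abstract rests on the frame construct, a minimal sketch of that idea may help (hypothetical frames and slot names, not the paper's model abstraction structure): a frame is a bundle of slots, and a slot left unfilled is inherited from a more generic frame along an a-kind-of link.

        # Hypothetical frame system: each frame maps slot names to values;
        # "a_kind_of" links a frame to its more generic parent.
        frames = {
            "MODEL": {"inputs": None, "outputs": None, "solver": "generic"},
            "LP-MODEL": {"a_kind_of": "MODEL", "solver": "simplex"},
            "BLENDING": {"a_kind_of": "LP-MODEL", "inputs": ["ores"],
                         "outputs": ["alloy"]},
        }

        def get_slot(frame, slot):
            """Look up a slot, walking a-kind-of links for inherited values."""
            while frame is not None:
                f = frames[frame]
                if f.get(slot) is not None:
                    return f[slot]
                frame = f.get("a_kind_of")
            return None

        print(get_slot("BLENDING", "solver"))   # 'simplex', inherited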

  • View Definition and Generalization for Database Integration in a Multidatabase System

    Publication Year: 1984 , Page(s): 628 - 645
    Cited by:  Papers (108)  |  Patents (2)

    Access to a heterogeneous distributed collection of databases can be simplified by providing users with a logically integrated interface or global view. There are two aspects to database integration. Firstly, the local schemas may model objects and relationships differently and, secondly, the databases may contain mutually inconsistent data. This paper identifies several kinds of structural and data inconsistencies that might exist. It describes a versatile view definition facility for the functional data model and illustrates the use of this facility for resolving inconsistencies. In particular, the concept of generalization is extended to this model, and its importance to database integration is emphasized. The query modification algorithm for the relational model is extended to the semantically richer functional data model with generalization.

  • Site Initialization, Recovery, and Backup in a Distributed Database System

    Publication Year: 1984 , Page(s): 645 - 650
    Cited by:  Papers (17)  |  Patents (8)

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all intersite synchronization.

  • A Methodology for Data Schema Integration in the Entity Relationship Model

    Publication Year: 1984 , Page(s): 650 - 664
    Cited by:  Papers (66)

    The conceptual design of databases is usually seen as divided into two steps: view modeling, during which user requirements are formally expressed by means of several user-oriented conceptual schemata, and schema integration, whose goal is to merge these schemata into a unique global conceptual schema. This paper describes a methodology for schema integration. An enriched entity-relationship model is chosen as the data model. The integration process consists of three steps: first, several types of conflicts between the different user schemata are detected and resolved; second, the schemata are merged into a draft integrated schema, which is, third, enriched and restructured according to specific goals.

  • A Scheme for Batch Verification of Integrity Assertions in a Database System

    Publication Year: 1984 , Page(s): 664 - 680
    Cited by:  Papers (5)  |  Patents (1)

    A database management system can ensure the semantic integrity of a database via an integrity control subsystem. A technique for implementing such a subsystem is proposed. After a database is updated by transactions, its integrity must be verified by evaluating a set of semantic integrity assertions. Evaluating an integrity assertion requires a number of database pages to be transferred from secondary storage to fast memory. Since certain pages may be required for the evaluation of different integrity assertions, the order in which the integrity assertions are evaluated determines the total number of pages fetched from secondary storage. Hence, the evaluation schedule determines the cost of the database verification process. We show that the search for an optimal schedule is an NP-hard problem. Four approximation algorithms that find suboptimal schedules are proposed. They are based on exploiting the intersections among the sets of pages required for the evaluation of different integrity assertions. The theoretical worst-case behaviors of these algorithms are studied. Finally, the algorithms are compared via a simulation study to a naive, random-order verification approach. The methods proposed for minimizing the cost of batch integrity verification also apply to other problems that can be abstracted to the directed traveling salesman optimization problem; for example, they are applicable to multiple-query optimization and to concurrency control via predicate locks.
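
    The paper's four approximation algorithms are not reproduced in the abstract; the sketch below is only a generic nearest-neighbor heuristic in the same spirit, under the simplifying assumption that only the previously evaluated assertion's pages remain buffered (the page sets are hypothetical):

        # Evaluating assertion j right after i lets buffered pages be
        # reused, so the "distance" from i to j is the number of pages
        # of j not already resident: |pages[j] - pages[i]|.
        pages = {
            "IA1": {1, 2},
            "IA2": {5, 6},
            "IA3": {1, 2, 3},
            "IA4": {5, 6, 7},
        }

        def greedy_schedule(pages):
            remaining = set(pages)
            # Start from the assertion needing the fewest pages.
            cur = min(sorted(remaining), key=lambda a: len(pages[a]))
            order, fetched = [cur], len(pages[cur])
            remaining.remove(cur)
            while remaining:
                nxt = min(sorted(remaining),
                          key=lambda a: len(pages[a] - pages[cur]))
                fetched += len(pages[nxt] - pages[cur])
                order.append(nxt)
                remaining.remove(nxt)
                cur = nxt
            return order, fetched

        print(greedy_schedule(pages))
        # (['IA1', 'IA3', 'IA2', 'IA4'], 6) versus 10 fetches for the
        # naive order IA1, IA2, IA3, IA4 under the same buffer model.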

  • On the Optimal Selection of Multilist Database Structures

    Publication Year: 1984 , Page(s): 681 - 687
    Cited by:  Papers (2)

    The optimal selection of secondary indexes requires the quantitative evaluation of the performance of a number of candidate secondary indexes in order to determine the particular combination of indexes that satisfies the anticipated user transactions at minimal cost. Previous studies determine the optimal selection by assuming that the cost of satisfying a query using a secondary index is not affected by the existence of other indexes in the database. This assumption is realistic when the inverted file organization is used to organize secondary indexes, mainly because inverted files do not alter the size of the file. However, the assumption is not valid for the next most popular method of structuring secondary indexes, the multilist organization, because each multilist increases the size of the file. This paper studies the secondary index selection problem under the assumption that the multilist organization is used to structure secondary indexes and develops a dynamic programming algorithm to solve it. The practical significance of the study lies in the fact that multilists can be easily implemented on network databases.
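
    A brute-force rendering of the interaction (an illustration with hypothetical sizes, not the paper's dynamic program) makes the point concrete: each selected multilist enlarges the file, so the benefit of one index depends on which others are chosen, and the costs are not separable as they are for inverted files.

        from itertools import combinations

        BASE_SIZE = 1000                 # hypothetical file size in pages
        overhead = {"I1": 100, "I2": 250, "I3": 60}  # growth per multilist
        # Each query: (usable index, fraction of file read when it is used).
        queries = [("I1", 0.05), ("I2", 0.10), ("I3", 0.02), (None, 1.0)]

        def total_cost(selected):
            size = BASE_SIZE + sum(overhead[i] for i in selected)
            cost = 0.0
            for idx, frac in queries:
                # Use the index if selected; otherwise scan the whole file.
                cost += size * (frac if idx in selected else 1.0)
            return cost

        best = min(
            (frozenset(c) for r in range(len(overhead) + 1)
             for c in combinations(overhead, r)),
            key=total_cost,
        )
        print(sorted(best), total_cost(best))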

  • Criteria for Software Reliability Model Comparisons

    Publication Year: 1984 , Page(s): 687 - 691
    Cited by:  Papers (33)

    A set of criteria is proposed for the comparison of software reliability models. The intention is to provide a logically organized basis for determining the superior models and for the presentation of model characteristics. It is hoped that in the future, a software manager will be able to more easily select the model most suitable for his/her requirements from among the preferred ones.

  • Evaluation of Error Recovery Blocks Used for Cooperating Processes

    Publication Year: 1984 , Page(s): 692 - 700
    Cited by:  Papers (18)  |  Patents (4)

    Three alternatives for implementing recovery blocks (RB's) are conceivable for backward error recovery in concurrent processing: asynchronous, synchronous, and pseudorecovery point implementations. Asynchronous RB's are based on the concept of maximum autonomy for each of the concurrent processes. Consequently, RB's in a process are established independently of other processes, and unbounded rollback propagations become a serious problem. To avoid unbounded rollback propagations completely, it is necessary to synchronize the establishment of recovery blocks in all cooperating processes; process autonomy is sacrificed and processes are forced to wait for commitments from others to establish a recovery line, leading to inefficient time utilization. As a compromise between asynchronous and synchronous RB's, we propose inserting pseudorecovery points (PRP's) so that unbounded rollback propagations may be avoided while maintaining process autonomy. We developed probabilistic models for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. With these models we estimate 1) the interval between two successive recovery lines for asynchronous RB's, 2) the mean loss in computation power for the synchronized method, and 3) the additional overhead and rollback distance when PRP's are used.

  • Dependability Evaluation of Software Systems in Operation

    Publication Year: 1984 , Page(s): 701 - 714
    Cited by:  Papers (69)

    This paper deals with evaluation of the dependability (considered as a generic term, whose main measures are reliability, availability, and maintainability) of software systems during their operational life, in contrast to most of the work performed up to now, devoted mainly to development and validation phases. The failure process due to design faults, and the behavior of a software system up to the first failure and during its life cycle are successively examined. An approximate model is derived which enables one to account for the failures due to the design faults in a simple way when evaluating a system's dependability. This model is then used for evaluating the dependability of 1) a software system tolerating design faults, and 2) a computing system with respect to physical and design faults.
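
    For readers unfamiliar with the terms, the three measures reduce to standard textbook definitions under exponential assumptions (this is not the paper's approximate model, and the rates below are hypothetical):

        # Failure rate lam and restoration rate mu (per hour); then
        # MTTF = 1/lam, MTTR = 1/mu, and steady-state availability
        # A = MTTF / (MTTF + MTTR).
        lam = 1e-4
        mu = 0.5

        mttf = 1.0 / lam
        mttr = 1.0 / mu
        availability = mttf / (mttf + mttr)
        print(f"MTTF={mttf:.0f}h  MTTR={mttr:.0f}h  A={availability:.6f}")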

  • A Successful Software Development

    Publication Year: 1984 , Page(s): 714 - 727
    Cited by:  Papers (6)

    In 1980, System Development Corporation (SDC) delivered software for a modern air defense system (ADS) for a foreign country. Development of the ADS software was a successful SDC project in which all products were delivered within budget and within an ambitious 25-month schedule. This paper describes SDC's approach and experience in developing the ADS software. SDC's software development approach included the first-time use of an off-the-shelf operating system for a major air defense system, the application of a selective set of modern software development techniques, and the use of a matrix management structure. SDC's successful application on ADS of a commercial operating system, a higher order language, a Program Design Language, a Program Production Library, structured walk-throughs, structured programming techniques, incremental build implementation and test procedures, interactive development, and word processing is described. A discussion of the advantages realized and difficulties encountered in the ADS matrix management structure is presented. The paper concludes with a summary of how SDC will develop software on future projects as a result of its experience on ADS.

  • A Methodology for Collecting Valid Software Engineering Data

    Publication Year: 1984 , Page(s): 728 - 738
    Cited by:  Papers (222)

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50 percent of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  • A Concurrent General Purpose Operator Interface

    Publication Year: 1984 , Page(s): 738 - 748
    Cited by:  Papers (1)

    Compact interactive control consoles are replacing traditional control rooms as operator interfaces for physical processes. In the first major application of concurrent programming outside the area of operating systems, this paper presents a design for a general purpose operator interface which uses a color graphics terminal with a touch-sensitive screen as the control console. Operators interact with a process through a collection of application-dependent displays generated interactively by users familiar with the physical process. The use of concurrent programming results in a straightforward and reliable design which may easily be extended to support multiple devices of varying types in the control console. An implementation of the operator interface in Concurrent Pascal, currently in progress, is also discussed.

  • An Industrial Software Engineering Retraining Course: Development Considerations and Lessons Learned

    Publication Year: 1984 , Page(s): 748 - 755
    Cited by:  Papers (2)

    Israel Aircraft Industries has recently been conducting a novel six-month intensive course to retrain practicing engineers to become software engineers working on embedded computer systems. The first course was concluded in January 1982 and the second course began in November 1982. This paper describes the objectives, educational philosophy, course content, and practical experience of the first course. It also describes how the second course was modified as a result of the lessons learned from the successes and failures of the first course.

  • Real-Time Execution Monitoring

    Publication Year: 1984 , Page(s): 756 - 764
    Cited by:  Papers (36)  |  Patents (6)

    Today's programming methodology emphasizes the study of static aspects of programs. In practice, however, monitoring a program in execution, i.e., monitoring a process, is routinely done by any programmer whose task it is to produce a reliable piece of software. There are two reasons why one might want to examine the dynamic aspects of a program: first, to evaluate the performance of a program, and hence to assess its overall behavior; and second, to demonstrate the presence of programming errors, isolate erroneous program code, and correct it. This latter task is commonly called ``debugging a program'' and requires a detailed insight into the innards of a program being executed. Today, many computer systems are being used to measure and control real-world processes. The pace of execution of these systems and their control programs is therefore bound to timing constraints imposed by the real-world process. As a step towards solving the problems associated with execution monitoring of real-time programs, we develop a set of appropriate concepts and define the basic requirements for a real-time monitoring facility. As a test case for the theoretical treatment of the topic, we design hardware and software for an experimental real-time monitoring system and describe its implementation.

  • Monitoring for Deadlock and Blocking in Ada Tasking

    Publication Year: 1984 , Page(s): 764 - 777
    Cited by:  Papers (17)

    We present a deadlock monitoring algorithm for Ada tasking programs which is based on transforming the source program. The transformations introduce a new task called the monitor, which receives information from all other tasks about their tasking activities. The monitor detects deadlocks consisting of circular entry calls as well as some noncircular blocking situations. The correctness of the program transformations is formulated and proved using an operational state graph model of tasking. The main issue in the correctness proof is to show that the deadlock monitor algorithm works correctly without having simultaneous information about the state of the program. In the course of this work, we have developed some useful techniques for programming tasking applications, such as a method for uniformly introducing task identifiers. We argue that the ease of finding and justifying program transformations is a good test of the generality and uniformity of a programming language. The complexity of the full Ada language makes it difficult to safely apply transformational methods to arbitrary programs. We discuss several problems with the current semantics of Ada's tasks.
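
    The underlying check such a monitor performs can be pictured as cycle detection in a wait-for graph (a hedged sketch, not the paper's transformation-based Ada monitor; the task names are hypothetical). Each blocked task points to the task whose entry it is calling:

        def find_cycle(wait_for):
            """Return a cyclic chain of blocked tasks, or None."""
            for start in wait_for:
                seen, cur = [], start
                while cur in wait_for:       # follow blocked-on edges
                    if cur in seen:
                        return seen[seen.index(cur):]
                    seen.append(cur)
                    cur = wait_for[cur]
            return None

        # T1 calls an entry of T2, T2 of T3, and T3 of T1: deadlock.
        print(find_cycle({"T1": "T2", "T2": "T3", "T3": "T1"}))
        print(find_cycle({"T1": "T2", "T2": "T3"}))   # None: no cycle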

  • Concurrent Maintenance of Binary Search Trees

    Publication Year: 1984 , Page(s): 777 - 784
    Cited by:  Papers (7)  |  Patents (4)

    The problem of providing efficient concurrent access for independent processes to a dynamic search structure is the topic of this paper. We develop concurrent algorithms for search, update, insert, and delete in a simple variation of binary search trees, called external trees. The algorithm for deletion, which is usually the most difficult operation, is relatively easy in this data structure. The advantages of the data structure and the algorithms are that they are simple, flexible, and efficient, so that they can be used as a part in the design of more complicated concurrent algorithms where maintaining a dynamic search structure is necessary. In order to increase the efficiency of the algorithms we introduce maintenance processes that independently reorganize the data structure and relieve the user processes of nonurgent operations. We also discuss questions of transactions in a dynamic environment and replicated copies of the data structure.
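
    A sequential sketch of the data structure itself (locking and the maintenance processes omitted; an assumed rendering, not the paper's code) shows why deletion is easy in an external tree: all keys live in leaves, internal nodes only route searches, and removing a leaf simply replaces its parent with the leaf's sibling.

        class Node:
            def __init__(self, key, left=None, right=None):
                self.key, self.left, self.right = key, left, right
            def is_leaf(self):
                return self.left is None

        def insert(root, key):
            if root is None:
                return Node(key)
            if root.is_leaf():
                lo, hi = sorted((root.key, key))
                return Node(lo, Node(lo), Node(hi))  # router = smaller key
            if key <= root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return root

        def delete(root, key):
            """Return the subtree with key removed; no rebalancing needed."""
            if root is None:
                return None
            if root.is_leaf():
                return None if root.key == key else root
            side = "left" if key <= root.key else "right"
            new = delete(getattr(root, side), key)
            if new is None:              # a leaf went away: its parent
                return root.right if side == "left" else root.left
            setattr(root, side, new)     # ...is replaced by the sibling
            return root

        t = None
        for k in [5, 2, 8, 6]:
            t = insert(t, k)
        t = delete(t, 8)                 # detaches leaf 8, keeps leaf 6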

  • Counting Paths: Nondeterminism as Linear Algebra

    Publication Year: 1984 , Page(s): 785 - 794

    Nondeterminism is considered to be ignorance about the actual state transition sequence performed during a computation. The number of distinct potential paths from state i to state j forms a matrix [n_ij]. The behavior of a nondeterministic program is defined to be this multiplicity matrix of the state transitions. The standard programming constructs have behaviors defined in terms of the behaviors of their constituents using matrix addition and multiplication only. The spectral radius of the matrix assigned to an iterating component characterizes its convergence. The spectral radius is shown to be either 0 or else ≥ 1. The program converges iff the spectral radius is zero, diverges deterministically iff the spectral radius is one, and has a proper nondeterministic divergence iff the spectral radius exceeds one. If the machine has an infinite number of states, the characterization of convergence is given graph-theoretically. The spectral radii of synchronous and interleaved parallel noncommunicating systems are easily computed in terms of the spectral radii of the components.
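
    A worked rendering with numpy (the two-state program and its matrices are hypothetical) shows the algebra: sequencing multiplies multiplicity matrices, nondeterministic choice adds them, and the spectral radius of an iterated component's matrix classifies its convergence:

        import numpy as np

        choice_a = np.array([[0, 1], [0, 0]])  # one path from state 0 to 1
        choice_b = np.array([[0, 1], [1, 0]])

        seq = choice_a @ choice_b   # behavior of "a then b"
        alt = choice_a + choice_b   # behavior of "a or b"

        def classify(m):
            """Convergence verdict for a loop whose body has matrix m."""
            r = max(abs(np.linalg.eigvals(m)))
            if r < 1e-9:
                return "converges"
            if r <= 1 + 1e-9:
                return "diverges deterministically"
            return "proper nondeterministic divergence"

        print(classify(choice_a))  # radius 0: converges
        print(classify(seq))       # radius 1: diverges deterministically
        print(classify(alt))       # radius sqrt(2): nondeterministic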

  • On Required Element Testing

    Publication Year: 1984 , Page(s): 795 - 803
    Cited by:  Papers (78)  |  Patents (2)

    In this paper we introduce two classes of program testing strategies that consist of specifying a set of required elements for the program and then covering those elements with appropriate test inputs. In general, a required element has a structural and a functional component and is covered by a test case if the test case causes the features specified in the structural component to be executed under the conditions specified in the functional component. Data flow analysis is used to specify the structural component and data flow interactions are used as a basis for developing the functional component. The strategies are illustrated with examples and some experimental evaluations of their effectiveness are presented.
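
    The data-flow ingredient can be made concrete with a small sketch (hypothetical control-flow graph and def/use sets, not the paper's tooling): a def-use pair is a definition of a variable that reaches a use along a definition-clear path.

        cfg  = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}
        defs = {1: {"x"}, 3: {"x"}}      # node -> variables defined
        uses = {4: {"x"}, 5: {"x"}}      # node -> variables used

        def du_pairs(cfg, defs, uses):
            pairs = set()
            for d, dvars in defs.items():
                for v in dvars:
                    stack, seen = list(cfg[d]), set()
                    while stack:
                        n = stack.pop()
                        if n in seen:
                            continue
                        seen.add(n)
                        if v in uses.get(n, ()):
                            pairs.add((d, n, v))
                        if v not in defs.get(n, ()):
                            stack.extend(cfg[n])  # stop at redefinitions
            return pairs

        print(sorted(du_pairs(cfg, defs, uses)))
        # [(1, 4, 'x'), (1, 5, 'x'), (3, 5, 'x')]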

  • A Concurrency Measure

    Publication Year: 1984 , Page(s): 804 - 810

    With recent advances in technology and the availability of microprocessors and minicomputers, parallel and distributed processing is gaining widespread acceptance. In such systems, resources are shared among a number of processes, and accesses to the resources must be synchronized in order to guarantee proper operation of the system. In this work, a measure called maximal compatibility is developed to quantify the degree of concurrency (parallelism) a synchronization policy achieves. A set of accesses is compatible if it contains only accesses that are permitted to occur simultaneously. A policy is maximally compatible if it allows every compatible set of accesses to occur simultaneously and if the maximum number of requests is always satisfied without allowing incompatible accesses to occur simultaneously.
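
    The definition can be exercised on a toy access set (a hedged illustration, not the paper's formalism): given a symmetric compatibility relation, the maximal compatible sets are exactly the groups a maximally compatible policy must be able to grant at once.

        from itertools import combinations

        accesses = ["r1", "r2", "w"]     # two reads and a write
        compatible = {("r1", "r2")}      # reads share; writes exclude

        def is_compatible(group):
            return all((a, b) in compatible or (b, a) in compatible
                       for a, b in combinations(group, 2))

        sets = [set(c) for r in range(1, len(accesses) + 1)
                for c in combinations(accesses, r) if is_compatible(c)]
        maximal = [s for s in sets if not any(s < t for t in sets)]
        print(maximal)  # [{'w'}, {'r1', 'r2'}] (element order may vary)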

  • Selectors: High-Level Resource Schedulers

    Publication Year: 1984 , Page(s): 810 - 825
    Cited by:  Papers (2)

    Resource sharing problems can be described in terms of three basically independent modular components: the constraints the resource places upon sharing because of physical limitations and consistency requirements; the desired ordering of resource requests to achieve efficiency, either efficiency of resource utilization or efficiency of the processes making the requests; and modifications to the ordering to prevent starvation of processes waiting for requests that might otherwise never receive service. A high-level nonprocedural language for specifying these components of resource sharing problems is described. General deadlock and starvation properties of selectors are proven. Solutions to several classic resource sharing problems are shown to illustrate the expressiveness of this language. Proof techniques for this high-level language are introduced to show how to prove that particular selectors are or are not deadlock and starvation free.
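
    The three components map naturally onto three small functions, as in this hedged readers/writers sketch in plain Python (the aging threshold and request fields are assumptions; the paper's nonprocedural notation is not reproduced):

        from dataclasses import dataclass

        @dataclass
        class Request:
            kind: str       # "read" or "write"
            arrival: int    # arrival sequence number
            age: int = 0    # scheduling passes spent waiting

        AGE_LIMIT = 5       # hypothetical anti-starvation threshold

        def constraint(active, req):
            """Consistency limits: readers may share, a writer is exclusive."""
            if req.kind == "read":
                return all(a.kind == "read" for a in active)
            return not active

        def order(pending):
            """Efficiency ordering: prefer readers, then oldest first."""
            return sorted(pending, key=lambda r: (r.kind != "read", r.arrival))

        def select(active, pending):
            """Next request to grant; aged requests override the ordering."""
            starved = [r for r in pending if r.age >= AGE_LIMIT]
            for r in order(starved) or order(pending):
                if constraint(active, r):
                    return r
            return None     # nothing grantable now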

  • An Algebraic Specification of HDLC Procedures and Its Verification

    Publication Year: 1984 , Page(s): 825 - 836
    Cited by:  Papers (11)

    It is well known that algebraic specification methods are promising for specifying programs and for verifying their various properties formally. In this paper, an algebraic specification of information transfer procedures of high-level data link control (HDLC) procedures is presented and some of the main properties of the specification are shown. First, we introduce abstract states, state transition functions, and output functions corresponding to elementary notions extracted from the description of HDLC procedures in ISO 3309-1979 (E) and ISO 4335-1979 (E). Second, we show axioms which represent the relations between the values of functions before and after the state transitions. Then, it is proved that the specification is ``consistent,'' ``sufficiently complete,'' and ``nonredundant.'' Also it is shown that an implementation which realizes the specification is naturally derived. In the last section, verification of various properties of HDLC procedures is formulated in the same framework as the algebraic specification, and some verification examples are presented.
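
    A toy in the same spirit (hedged: modulo-8 send/receive state variables as in HDLC, but not the ISO procedures or the paper's axioms) shows what "state transition functions plus axioms relating values before and after transitions" means in executable form:

        MOD = 8                     # modulo-8 sequence numbering

        def send_i(state):
            """Transmit an I-frame: N(S) takes V(S), then V(S) advances."""
            s = dict(state)
            s["vs"] = (s["vs"] + 1) % MOD
            return s, {"ns": state["vs"], "nr": state["vr"]}

        def recv_i(state, frame):
            """Accept an in-sequence I-frame: V(R) advances, else unchanged."""
            s = dict(state)
            if frame["ns"] == s["vr"]:
                s["vr"] = (s["vr"] + 1) % MOD
            return s

        # Axiom-style property: an in-sequence send followed by a receive
        # advances the receiver's V(R) by exactly one (mod 8).
        sender = {"vs": 3, "vr": 0}
        receiver = {"vs": 0, "vr": 3}
        sender2, frame = send_i(sender)
        receiver2 = recv_i(receiver, frame)
        assert receiver2["vr"] == (receiver["vr"] + 1) % MOD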


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:
  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, and standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.


Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org