
Software Engineering, IEEE Transactions on

Issue 4 • April 1989


Displaying Results 1 - 12 of 12
  • Time-by-example query language for historical databases

    Page(s): 464 - 478
    PDF (1522 KB)

    The authors propose a graphical query language, Time-by-Example (TBE), which has suitable constructs for interacting with historical relational databases in a natural way. TBE is user-friendly. It follows the graphical, two-dimensional approach of such previous languages as Query-by-Example (QBE), Aggregation-by-Example (ABE), and Summary-Table-by-Example (STBE). TBE also uses the hierarchical window (subquery) concept of ABE and STBE. TBE manipulates triple-valued (set-triple-valued) attributes and historical relations. Set-theoretic expressions are used to deal with time intervals. The BNF specification for TBE is given.

  • Necessary and sufficient ergodicity condition for open synchronized queueing networks

    Page(s): 367 - 380
    PDF (1068 KB)

    A necessary and sufficient ergodicity condition for complex open queueing systems is given. The queueing networks considered belong to a particular class of unbounded Markov stochastic Petri nets. These systems can include synchronization features like fork and join arrivals and departures, and feedback between behavior of different queues. Grouped and correlated arrivals and departures are also allowed. An example and a proof of the ergodicity results are presented.

  • Stochastic Petri net representation of discrete event simulations

    Page(s): 381 - 393
    PDF (1241 KB)

    In the context of discrete event simulation, the marking of a stochastic Petri net (SPN) corresponds to the state of the underlying stochastic process of the simulation, and the firing of a transition corresponds to the occurrence of an event. A study is made of the modeling power of SPNs with timed and immediate transitions, showing that such Petri nets provide a general framework for simulation. The principal result is that for any (finite or) countable state GSMP (generalized semi-Markov process) there exists an SPN having a marking process that mimics the GSMP in the sense that the two processes (and their underlying general state-space Markov chains) have the same finite-dimensional distributions.
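    The SPN-as-simulation view can be sketched concretely. The toy below is illustrative only (not taken from the paper): the marking of a single place is the simulation state, a timed "arrive" transition adds a token, a timed "serve" transition is enabled only when the place is marked, and the transition with the smallest sampled delay fires as the next event.

    ```python
    import random

    def simulate_spn(horizon, arrival_rate, service_rate, seed=0):
        """Fire timed transitions of a one-place SPN until `horizon`;
        return the final marking (number of tokens in the place)."""
        rng = random.Random(seed)
        t, queue = 0.0, 0  # simulated clock and marking
        while True:
            # each enabled timed transition samples an exponential delay
            delays = {"arrive": rng.expovariate(arrival_rate)}
            if queue > 0:  # "serve" enabled only when the place is marked
                delays["serve"] = rng.expovariate(service_rate)
            fired = min(delays, key=delays.get)  # smallest delay fires
            t += delays[fired]
            if t > horizon:
                return queue
            queue += 1 if fired == "arrive" else -1

    print(simulate_spn(100.0, 1.0, 2.0))  # final marking of the queue place
    ```

    The marking process here mimics an M/M/1 queue, a (very) special case of the GSMP correspondence the abstract describes.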

  • Stochastic Petri net analysis of a replicated file system

    Page(s): 394 - 401
    PDF (822 KB)

    The authors present a stochastic Petri net model of a replicated file system in a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date) as well as failure and repair of hosts where copies or witnesses reside is presented. The number of copies and witnesses is not fixed, but is a parameter of the model. Two different majority protocols are examined.
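    As a rough illustration of the voting idea (a minimal sketch under assumed semantics, not the paper's model), witnesses can vote toward a majority even though only copies hold data:

    ```python
    def majority_read_ok(up_copies, up_witnesses, total_replicas):
        """up_copies: reachable current file copies;
        up_witnesses: reachable witnesses (status only, no data);
        total_replicas: copies + witnesses in the whole system."""
        quorum = total_replicas // 2 + 1
        has_majority = up_copies + up_witnesses >= quorum
        # a majority of witnesses alone cannot serve the read:
        # at least one reachable current copy must supply the data
        return has_majority and up_copies >= 1

    # three copies + two witnesses (total 5, so quorum is 3)
    print(majority_read_ok(2, 1, 5))  # True: quorum met, data available
    print(majority_read_ok(0, 3, 5))  # False: quorum, but no data copy up
    ```

    Witnesses reduce storage and update cost while still contributing votes, which is the overhead reduction the abstract mentions.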

  • A macroscopic profile of program compilation and linking

    Page(s): 427 - 436
    PDF (616 KB)

    To profile the changes made to programs during development and maintenance, the authors have instrumented the 'make' utility that is used to compile and link programs. With minor modifications, they have used 'make' to find out how much time programmers spend waiting for compiling and linking, how many modules are compiled each time a program is linked, and the change in size of the compiled modules. Measurements show that most programs are relinked after only one or two modules are recompiled, and that over 90% of all recompilations yield object code that is less than 100 bytes larger in size. The authors are using these results to guide the design of an incremental programming environment, particularly with respect to an incremental linker.
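    The measurement idea can be approximated without modifying 'make' itself. The sketch below is hypothetical (the authors instrumented 'make' directly): it wraps one compiler invocation, timing it and reporting how much the object file grew.

    ```python
    import os
    import subprocess
    import sys
    import time

    def timed_compile(cc_argv, obj_path):
        """Run one compiler invocation; report wall time and object growth."""
        old = os.path.getsize(obj_path) if os.path.exists(obj_path) else 0
        start = time.monotonic()
        rc = subprocess.run(cc_argv).returncode
        elapsed = time.monotonic() - start
        new = os.path.getsize(obj_path) if os.path.exists(obj_path) else 0
        print(f"{obj_path}: {elapsed:.2f}s, size delta {new - old:+d} bytes")
        return rc

    # demo with a stand-in "compiler" that just writes a 64-byte object file
    rc = timed_compile([sys.executable, "-c",
                        "open('demo.o', 'wb').write(b'x' * 64)"], "demo.o")
    ```

    Aggregating such records per link step would reproduce the paper's macroscopic profile: modules recompiled per relink, wait time, and size deltas.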

  • Integrated concurrency-coherency controls for multisystem data sharing

    Page(s): 437 - 448
    PDF (1274 KB)

    The authors propose an integrated control mechanism and analyze the performance gain due to its use. An extension to the data sharing system structure is examined in which a shared intermediate memory is used for buffering and for early commit processing. Read-write-synchronization and write-serialization problems arise. The authors show how the integrated concurrency protocol can be used to overcome both problems. A queueing model is used to quantify the performance improvement. Although using intermediate memory as a buffering device produces a moderate performance benefit, the analysis shows that more substantial gains can be realized when this technique is combined with the use of an integrated concurrency-coherency control protocol.

  • A theory of attributed equivalence in databases with application to schema integration

    Page(s): 449 - 463
    PDF (1408 KB)

    The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set. This common foundation is based on the basic principle of integrating attributes. Any pair of objects whose identifying attributes can be integrated can themselves be integrated. Several definitions of attribute equivalence are presented. These definitions can be used to specify the exact nature of the relationship between a pair of attributes. Based on these definitions, several strategies for attribute integration are presented and evaluated.

  • Performance characterization of quorum-consensus algorithms for replicated data

    Page(s): 492 - 496
    PDF (529 KB)

    The authors develop a model and define performance measures for a replicated data system that makes use of a quorum-consensus algorithm to maintain consistency. They consider two measures: the proportion of successfully completed transactions in systems where a transaction aborts if data is not available, and the mean response time in systems where a transaction waits until data becomes available. Based on the model, the authors show that for some quorum assignment there is an optimal degree of replication beyond which performance degrades. There exist other quorum assignments which have no optimal degree of replication. The authors also derive optimal read and write quorums which maximize the proportion of successful transactions.
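    For background, the quorum constraints and the availability trade-off the abstract alludes to can be sketched as follows. The constraints `r + w > n` and `2w > n` are the usual Gifford-style quorum-consensus rules, assumed here rather than taken from the paper:

    ```python
    from math import comb

    def valid_quorum(n, r, w):
        """Read quorum r and write quorum w over n replicas must make
        every read see the latest write (r + w > n) and serialize
        concurrent writes (2w > n)."""
        return r + w > n and 2 * w > n

    def availability(n, quorum, p_up):
        """Probability that at least `quorum` of n independent replicas
        are up, each up with probability p_up (binomial tail)."""
        return sum(comb(n, k) * p_up**k * (1 - p_up)**(n - k)
                   for k in range(quorum, n + 1))

    print(valid_quorum(5, 3, 3))              # True
    print(round(availability(5, 3, 0.9), 4))  # ≈ 0.9914
    ```

    Varying `n` for a fixed assignment rule shows how an operation's success probability can peak at some replication degree, the kind of optimum the authors analyze.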

  • Comments, with reply, on "Axiomatizing software test data adequacy" by E.J. Weyuker

    Page(s): 496 - 501
    PDF (531 KB)

    E.J. Weyuker (ibid., vol.SE-12, p.1128-38, Dec. 1986) recently proposed a set of properties which should be satisfied by any reasonable criterion used to claim that a computer program has been adequately tested. The author called these properties 'axioms'. She also evaluated several well-known testing strategies with respect to these properties, and concluded that some of the commonly used strategies failed to satisfy several of the properties. The commenters question both the fundamental nature of the properties and the precision with which they are presented, and illustrate how a number of ideas in E.J. Weyuker's paper can be simplified and clarified through greater precision and a more consistent set of definitions. They also reanalyze the testing strategies after accounting for these inconsistencies. The strategies tend to work better as a result of this reanalysis. The author rebuts the commenters' arguments.

  • A generalized expert system for database design

    Page(s): 479 - 491
    PDF (1216 KB)

    Generalized Expert System for Database Design (GESDD) is a compound expert system made up of two parts: (1) an expert system for generating methodologies for database design, called ESGM; and (2) an expert system for database design, called ESDD. ESGM provides a tool for the database design expert to specify different design methodologies or to modify existing ones. The database designer uses ESDD in conjunction with one of these methodologies to design a database starting from the requirement specification phase and producing a logical schema in one of the well-known data models, namely, the hierarchical data model, the network data model, or the relational data model. The system is evolutive in the sense that an existing methodology can be modified or a novel methodology can be added to the existing ones. GESDD is a menu-driven system and is coded in Prolog.

  • Formal methods for protocol testing: a detailed study

    Page(s): 413 - 426
    PDF (1220 KB)

    The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of the fault coverage of the four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification relies heavily on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by the T-method has a poor fault detection capability, whereas test sequences produced by the U-, D-, and W-methods have comparable (and superior to that of the T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out.
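    The T-method referred to above generates a transition tour: an input sequence exercising every transition of the specification FSM at least once (which is why it detects output faults but not transfer faults well). A greedy sketch for a toy two-state machine follows; it is illustrative only, and practical generators use Chinese-postman-style tours rather than this naive walk.

    ```python
    def transition_tour(fsm, start):
        """fsm: {state: {input: next_state}}. Returns an input sequence
        covering every (state, input) transition at least once.
        Greedy walk; terminates on this toy strongly connected machine."""
        untested = {(s, i) for s in fsm for i in fsm[s]}
        state, tour = start, []
        while untested:
            # prefer an untested input at the current state, else any move
            choices = [i for i in fsm[state] if (state, i) in untested]
            i = (choices or list(fsm[state]))[0]
            untested.discard((state, i))
            tour.append(i)
            state = fsm[state][i]
        return tour

    fsm = {"A": {"a": "B", "b": "A"}, "B": {"a": "A", "b": "B"}}
    tour = transition_tour(fsm, "A")
    print(tour)  # a short input sequence covering all four transitions
    ```

    Replaying `tour` against an implementation and comparing outputs is the T-method check; the U-, D-, and W-methods add state-identification sequences after each transition, which is where their extra fault coverage comes from.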

  • Distributed system software design paradigm with application to computer networks

    Page(s): 402 - 412
    PDF (924 KB)

    A paradigm for the system and software design of distributed systems is presented with application to an actual large-scale computer network involving both local area networks and a wide area network. A number of design principles are offered with particular reference to how they can be applied to the design of distributed systems. The author's major point is an explanation of how to make design decisions about distributed systems in a way which will enhance maintainability and understandability of the software and, at the same time, result in good system performance. The aim is to recognize the implications for software quality of various decisions which must be made in the process of specifying a distributed system.


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include: a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models; b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurements and evaluation of various aspects of the process and product; c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards; d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues; e) system issues, e.g., hardware-software trade-off; and f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org