IEEE Transactions on Software Engineering

Issue 1 • Jan 1989

12 articles in this issue
  • Deduction graphs: an algorithm and applications

    Publication Year: 1989, Page(s):60 - 67
    Cited by:  Papers (5)

    A deduction graph (DG) for logically deducing a new functional dependency (FD) or function-free Horn formula (extended from Horn clauses) from a subset of given FDs or function-free headed Horn clauses in a relational database or rule-based expert system is defined. An algorithm with polynomial time complexity for constructing a DG based on a number of rules is designed. Applications of DGs t...
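    The DG construction itself is specific to the paper, but the FD-implication question it addresses is classically decided with the attribute-closure procedure; the Python sketch below, with a hypothetical schema and FDs, illustrates only that standard check, not the paper's deduction-graph algorithm.

        # Standard attribute-closure test: do the given FDs imply X -> Y?
        # Hypothetical schema R(A, B, C) with FDs A -> B and B -> C; this is
        # the textbook polynomial-time check, not the paper's DG construction.

        def closure(attrs, fds):
            """Return the closure of attribute set `attrs` under the FDs."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        def implies(fds, lhs, rhs):
            """True if the FDs logically imply the FD lhs -> rhs."""
            return set(rhs) <= closure(lhs, fds)

        fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
        print(implies(fds, {"A"}, {"C"}))   # True: A -> C is derivable
        print(implies(fds, {"C"}, {"A"}))   # False: C -> A is not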

  • A simplified framework for reduction in strength

    Publication Year: 1989, Page(s):86 - 92

    Reduction in strength is a traditional transformation for speeding up loop execution on sequential processors. The inverse transformation, induction variable substitution, can also speed up loops by decreasing register requirements, although it is typically a normalizing step in the detection of array dependences by parallelizing compilers. The author presents a simple framework for performing the...
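    To recall what the transformation does (a textbook illustration with hypothetical values, not the author's framework), a multiplication recomputed on every iteration is replaced by an addition to a new induction variable:

        # Classic reduction in strength: the per-iteration multiply i * STRIDE
        # is replaced by an induction variable that is updated by addition.
        STRIDE, N = 8, 10

        # Before: offset recomputed with a multiplication each iteration.
        offsets_before = [i * STRIDE for i in range(N)]

        # After: the multiply is reduced to an addition.
        offsets_after = []
        offset = 0
        for i in range(N):
            offsets_after.append(offset)
            offset += STRIDE

        assert offsets_before == offsets_after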

  • Optimizing joins in fragmented database systems on a broadcast local network

    Publication Year: 1989, Page(s):26 - 38
    Cited by:  Papers (12)

    The problem of optimizing joins between two fragmented relations on a broadcast local network is analyzed. Data redundancy is considered. Semantic information associated with fragments is used to eliminate unnecessary processing. More than one physical copy of a fragment may be used in a strategy to achieve greater parallelism. Join-analysis graphs are introduced to represent joins on two ...
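    As a rough illustration of semantic elimination (a hypothetical sketch, not the paper's join-analysis graphs or broadcast-network cost model): if each fragment carries a predicate such as a value range on the join attribute, fragment pairs whose ranges are disjoint cannot produce join tuples and can be skipped.

        # Hypothetical fragments of R and S, each tagged with a (lo, hi) range
        # on the join attribute; pairs with disjoint ranges are skipped.
        r_fragments = {
            "R1": ((0, 49),  [(10, "a"), (42, "b")]),
            "R2": ((50, 99), [(60, "c")]),
        }
        s_fragments = {
            "S1": ((0, 49),  [(42, "x")]),
            "S2": ((50, 99), [(60, "y"), (75, "z")]),
        }

        def ranges_overlap(a, b):
            return a[0] <= b[1] and b[0] <= a[1]

        result = []
        for r_range, r_tuples in r_fragments.values():
            for s_range, s_tuples in s_fragments.values():
                if not ranges_overlap(r_range, s_range):
                    continue          # semantic elimination: no possible matches
                result += [(rk, rv, sv) for rk, rv in r_tuples
                                        for sk, sv in s_tuples if rk == sk]

        print(result)                 # [(42, 'b', 'x'), (60, 'c', 'y')]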

  • Resilient objects in broadband networks

    Publication Year: 1989, Page(s):68 - 72
    Cited by:  Papers (7)  |  Patents (2)

    An object is said to be resilient if operations on the object can be performed even if some nodes of the network fail. To support resiliency, copies of the objects are stored on different nodes, and access to different copies is coordinated. The properties of broadcast networks are utilized to devise a distributed scheme for implementing resilient objects. All the copies of an object are equivalen...

  • A dynamic voting scheme in distributed systems

    Publication Year: 1989, Page(s):93 - 97
    Cited by:  Papers (9)

    A dynamic weighted voting scheme for consistency and recovery control of replicated files in distributed systems is presented. The purpose of a replicated file is to improve the availability of a logical file in the presence of site failures and network partitions. The accessible physical copies of a replicated file will be mutually consistent and behave as a single copy. The recovery scheme requi...
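    The static core of weighted voting is simple to sketch: an operation is allowed only in a partition holding a strict majority of the votes, and version numbers identify the current copy. The hypothetical Python sketch below shows just that quorum test and omits the paper's dynamic vote adjustment and recovery protocol.

        # Simplified (static) weighted voting over copies of a replicated file.
        # A majority of votes guarantees any two operations share at least one
        # up-to-date copy; sites, weights, and versions here are hypothetical.
        copies = {
            "s1": {"votes": 2, "version": 3, "data": "v3"},
            "s2": {"votes": 1, "version": 3, "data": "v3"},
            "s3": {"votes": 1, "version": 2, "data": "v2"},
        }
        TOTAL_VOTES = sum(c["votes"] for c in copies.values())

        def read(reachable):
            votes = sum(copies[s]["votes"] for s in reachable)
            if 2 * votes <= TOTAL_VOTES:
                raise RuntimeError("no quorum: this partition cannot serve reads")
            latest = max(reachable, key=lambda s: copies[s]["version"])
            return copies[latest]["data"]

        print(read({"s1", "s3"}))     # 3 of 4 votes: quorum, returns "v3"
        # read({"s3"}) would raise: 1 of 4 votes is not a majority.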

  • A pessimistic consistency control algorithm for replicated files which achieves high availability

    Publication Year: 1989, Page(s):39 - 46
    Cited by:  Papers (15)  |  Patents (2)

    A consistency control algorithm is described for managing replicated files in the face of network partitioning due to node or communication link failures. It adopts a pessimistic approach in that mutual consistency among copies of a file is maintained by permitting files to be accessed only in a single partition at any given time. The algorithm simplifies the Davcev-Burkhard dynamic voting algorit...

  • Some inference rules for integer arithmetic for verification of flowchart programs on integers

    Publication Year: 1989, Page(s):1 - 9
    Cited by:  Papers (8)

    Significant modifications of the first-order rules have been developed so that they can be applied directly to algebraic expressions. The importance and implication of normalization of formulas in any theorem prover are discussed. It is shown how the properties of the domain of discourse have been taken care of either by the normalizer or by the inference rules proposed. Using a nontrivial example...

  • The delay due to dynamic two-phase locking

    Publication Year: 1989, Page(s):72 - 82
    Cited by:  Papers (6)

    An analytic formula for the delay due to two-phase locking is developed in terms of mean values for the input parameters using an open queuing network model in equilibrium. The results of simulations, using various realistic probability distributions governing the number of locks that transactions request, are presented to validate the formula. Reasonably good accuracy is achieved for gamma distri...

  • Data and time abstraction techniques for analyzing multilevel concurrent systems

    Publication Year: 1989, Page(s):47 - 59
    Cited by:  Papers (2)

    It is argued that the design and analysis of a concurrent system can be made simpler and more intuitive if execution times of abstract operations are arbitrarily but systematically defined. This technique (time abstraction) is complementary to data abstraction and is more effective when used in combination with data abstraction. As examples, a bounded-buffer monitor and a multilevel concurrency sc...
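    For reference, a bounded-buffer monitor of the kind used as the paper's example can be written with a lock and two condition variables; the Python sketch below is the generic construction, not the paper's specification or its time-abstraction analysis.

        # A minimal bounded-buffer monitor: one lock for mutual exclusion and
        # two conditions, "not full" for producers and "not empty" for consumers.
        import threading
        from collections import deque

        class BoundedBuffer:
            def __init__(self, capacity):
                self._items = deque()
                self._capacity = capacity
                lock = threading.Lock()
                self._not_full = threading.Condition(lock)
                self._not_empty = threading.Condition(lock)

            def put(self, item):
                with self._not_full:
                    while len(self._items) == self._capacity:
                        self._not_full.wait()
                    self._items.append(item)
                    self._not_empty.notify()

            def get(self):
                with self._not_empty:
                    while not self._items:
                        self._not_empty.wait()
                    item = self._items.popleft()
                    self._not_full.notify()
                    return item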

  • A modified priority based probe algorithm for distributed deadlock detection and resolution

    Publication Year: 1989, Page(s):10 - 17
    Cited by:  Papers (45)  |  Patents (5)

    A modified, priority-based probe algorithm for deadlock detection and resolution in distributed database systems is presented. Various examples are used to show that the original priority-based algorithm, presented by M.K. Sinha and N. Natarajan (1985), either fails to detect deadlocks or reports deadlocks that do not exist in many situations. A modified algorithm that eliminates these problems is ...
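    The underlying probe idea (propagate a probe along wait-for edges and report a deadlock if it returns to its initiator) can be sketched as follows; priorities here are used only to choose a victim, as a hypothetical illustration rather than the paper's corrected priority rules.

        # Illustrative probe over a wait-for graph: a deadlock is reported when
        # a probe started at `initiator` comes back to it.  Transactions and
        # priorities below are hypothetical.
        wait_for = {
            "T1": {"T2"},
            "T2": {"T3"},
            "T3": {"T1"},             # T1 -> T2 -> T3 -> T1 is a deadlock cycle
            "T4": set(),
        }
        priority = {"T1": 3, "T2": 1, "T3": 2, "T4": 4}

        def detect(initiator):
            """Return the cycle containing `initiator`, or None."""
            stack, seen = [(initiator, [initiator])], set()
            while stack:
                node, path = stack.pop()
                for nxt in wait_for.get(node, ()):
                    if nxt == initiator:
                        return path            # probe returned: deadlock
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append((nxt, path + [nxt]))
            return None

        cycle = detect("T1")
        if cycle:
            victim = min(cycle, key=lambda t: priority[t])
            print("deadlock among", cycle, "- abort lowest priority", victim)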

  • An empirical study of a model for program error prediction

    Publication Year: 1989, Page(s):82 - 86
    Cited by:  Papers (28)

    A model is presented for estimating the number of errors remaining in a program at the beginning of the testing phase of development. The relationships between the errors occurring in a program and the various factors that affect software development, such as programmer skill, are statistically analyzed. The model is then derived using the factors identified as significant in the analysis. On the b...

  • Heuristics for join processing using nonclustered indexes

    Publication Year: 1989, Page(s):18 - 25
    Cited by:  Papers (8)  |  Patents (2)

    The author examines join processing when the access paths available are nonclustered indexes on the joining attribute(s) for both relations involved in the join. He uses a bipartite graph model to represent the pages from the two relations that contain tuples to be joined. The minimization of the number of page accesses needed to compute a join in the author's database environment is explored from...
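    The page-level view behind the bipartite-graph model can be illustrated directly: the nonclustered indexes identify which R page must meet which S page, and with a small buffer the number of page accesses depends on the order in which those page pairs are visited. The Python sketch below uses hypothetical page pairs and is not the author's heuristics.

        # Each pair names an R page and an S page that hold matching tuples.
        # With a one-page buffer per relation, a page is fetched again whenever
        # the pair sequence switches away from it and back.
        pairs = [("r1", "s1"), ("r2", "s1"), ("r1", "s2"), ("r2", "s2")]
        good_order = [("r1", "s1"), ("r1", "s2"), ("r2", "s2"), ("r2", "s1")]

        def fetches(order):
            cur_r = cur_s = None
            count = 0
            for r, s in order:
                if r != cur_r:
                    cur_r, count = r, count + 1
                if s != cur_s:
                    cur_s, count = s, count + 1
            return count

        print(fetches(pairs))         # 6 fetches in the order given above
        print(fetches(good_order))    # 5 fetches after reordering the pairs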


Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include:

  • development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models;
  • assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurement and evaluation of various aspects of the process and product;
  • software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards;
  • tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues;
  • system issues, e.g., hardware-software trade-offs; and
  • state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.

Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tse-eic@computer.org