
IEEE Transactions on Computers

Issue 9 • September 1992


Displaying Results 1 - 16 of 16
  • Task allocation for maximizing reliability of distributed computer systems

    Publication Year: 1992 , Page(s): 1156 - 1168
    Cited by:  Papers (47)  |  Patents (9)

    For distributed systems, system reliability is defined as the probability that the system can run an entire task successfully. When the system's hardware configuration is fixed, the system reliability is mainly dependent on the software design. The task allocation problem is addressed with the goal of maximizing the system reliability. A quantitative problem model, algorithms for optimal and suboptimal solutions, and simulation results are provided and discussed.

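    As a rough companion to the abstract above, the sketch below computes the reliability of one task assignment under a simple exponential-failure model (processors fail at rate lam[k] while busy, links at rate mu[(k, l)] while carrying traffic) and finds the best assignment by brute force. The model and all names (reliability, best_assignment, exec_cost, comm_cost) are illustrative assumptions, not the paper's exact formulation or algorithms.

        from itertools import product
        from math import exp

        def reliability(assign, exec_cost, comm_cost, lam, mu):
            """Reliability of one assignment: every processor must stay up while
            executing its tasks, and every link must stay up while carrying the
            traffic between tasks split across its two endpoints (assumed model)."""
            r = 1.0
            for k in lam:                                 # processors
                busy = sum(exec_cost[t][k] for t, p in assign.items() if p == k)
                r *= exp(-lam[k] * busy)
            for (t1, t2), c in comm_cost.items():         # communicating task pairs
                k, l = assign[t1], assign[t2]
                if k != l:
                    r *= exp(-mu[(k, l)] * c)             # mu given per ordered processor pair
            return r

        def best_assignment(tasks, procs, exec_cost, comm_cost, lam, mu):
            """Exhaustive search over all assignments; only sensible for tiny instances.
            The paper gives smarter optimal and heuristic algorithms."""
            best, best_r = None, -1.0
            for choice in product(procs, repeat=len(tasks)):
                assign = dict(zip(tasks, choice))
                r = reliability(assign, exec_cost, comm_cost, lam, mu)
                if r > best_r:
                    best, best_r = assign, r
            return best, best_r
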
  • An approximate method for the performance analysis of PLAYTHROUGH rings

    Publication Year: 1992 , Page(s): 1137 - 1155
    Cited by:  Papers (1)

    Analytical models are presented and shown to approximate adequately simulation results for average message queuing time, service time, and control frame round trip time on ring-topology local area networks. These LANs use a PLAYTHROUGH protocol, a data link layer medium access control protocol that achieves concurrent transfer of multiple messages of arbitrary length. The analytical predictions of data message service time and control frame round trip time are used in a queuing system model of average message waiting times versus throughput for this class of multiserver circuit-switched ring under assumptions of uniform and symmetric traffic and a shortest-outbound-distance-first service discipline at each node. The analytical models are validated using simulation results. The analysis includes both the effects of competing traffic originating at other nodes on the ring and the effects of the medium access control mechanism overhead on the waiting times experienced by messages arriving at an arbitrarily chosen node.

  • Foresighted instruction scheduling under timing constraints

    Publication Year: 1992 , Page(s): 1169 - 1172
    Cited by:  Papers (2)

    When the arcs of a data dependency graph are annotated with minimum and maximum timing information, new scheduling algorithms are required. Foresighted compaction is a list scheduling technique in which lookahead is used in making decisions. Foresighted compaction is very effective in reducing the failures inherent in greedy compaction algorithms.

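    For context, a minimal greedy list scheduler is sketched below (unit-latency operations, one operation issued per cycle, acyclic dependences). It has no lookahead at all; the paper's foresighted variant additionally consults the minimum/maximum timing annotations before committing an operation. All names here are illustrative.

        def list_schedule(ops, preds, priority):
            """ops: operation names; preds[op]: set of operations that must be issued
            in an earlier cycle; priority[op]: larger means more urgent.
            Returns a mapping op -> issue cycle (single issue, unit latency)."""
            scheduled, remaining, cycle = {}, set(ops), 0
            while remaining:
                ready = [op for op in remaining
                         if all(p in scheduled and scheduled[p] < cycle
                                for p in preds.get(op, ()))]
                if ready:
                    op = max(ready, key=lambda o: priority[o])   # greedy choice, no lookahead
                    scheduled[op] = cycle
                    remaining.remove(op)
                cycle += 1
            return scheduled
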
  • Exact parametric analysis of stochastic Petri nets

    Publication Year: 1992 , Page(s): 1176 - 1180
    Cited by:  Papers (3)

    An algorithm for exact parametric analysis of stochastic Petri nets is presented. The algorithm is derived from the theory of decomposition and aggregation of Markov chains. The transition rate of interest is confined to a diagonal submatrix of the associated Markov chain by row and column permutations. Every time a new value is assigned to the transition, only a smaller Markov chain is analyzed. As a result, the computational cost is greatly reduced.

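    The parametric method avoids re-solving the full Markov chain for every value of the varied transition rate. For orientation only, the sketch below shows the baseline computation being saved: solving pi Q = 0 with sum(pi) = 1 for the stationary distribution of the whole chain once a rate value is plugged into the generator. It is not the decomposition/aggregation algorithm itself.

        import numpy as np

        def ctmc_stationary(Q):
            """Stationary distribution of a CTMC with generator matrix Q
            (rows sum to zero): solve pi Q = 0 subject to sum(pi) = 1."""
            n = Q.shape[0]
            A = np.vstack([Q.T, np.ones(n)])   # append the normalization equation
            b = np.zeros(n + 1)
            b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi
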
  • On the complexity of search algorithms

    Publication Year: 1992 , Page(s): 1172 - 1176
    Cited by:  Patents (1)

    The average complexity for searching a record in a sorted file of records that are stored on a tape is analyzed for four search algorithms, namely, sequential search, binary search, Fibonacci search, and a modified version of Fibonacci search. The theoretical results are consistent with the recent simulation results by S. Nishihara and N. Nishino (1987). The results show that sequential search, Fibonacci search, and modified Fibonacci search are all better than binary search on a tape.

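    Since the abstract compares well-known searching methods, here is a textbook Fibonacci search on an in-memory sorted sequence; the paper's contribution is the average-cost analysis when the records sit on tape, where the probe pattern shown here translates into tape movement. This is the standard algorithm, not the modified version studied in the paper.

        def fibonacci_search(arr, target):
            """Classic Fibonacci search on a sorted sequence; returns an index or -1."""
            n = len(arr)
            f2, f1 = 0, 1                 # F(k-2), F(k-1)
            f = f2 + f1                   # F(k): smallest Fibonacci number >= n
            while f < n:
                f2, f1 = f1, f
                f = f2 + f1
            offset = -1                   # everything at or below offset is excluded
            while f > 1:
                i = min(offset + f2, n - 1)
                if arr[i] < target:       # discard the lower part of the window
                    f, f1, f2 = f1, f2, f1 - f2
                    offset = i
                elif arr[i] > target:     # discard the upper part of the window
                    f, f1, f2 = f2, f1 - f2, f2 - (f1 - f2)
                else:
                    return i
            if f1 and offset + 1 < n and arr[offset + 1] == target:
                return offset + 1
            return -1
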
  • Fast addition of large integers

    Publication Year: 1992 , Page(s): 1069 - 1077
    Cited by:  Papers (1)

    The basic computational model of a massively parallel processor is discussed, and three massively parallel algorithms using carry-lookahead techniques for binary addition of large integers are presented. It is shown how performance can be improved by exploiting the average-case behavior of large n-bit additions and the asymmetry of the computation time of two particular operations. Even better performance is obtained by grouping multiple bits per processor. Performance measurements of all the algorithms are presented and discussed.

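    A small sketch of the carry-lookahead idea the abstract builds on: per-bit generate/propagate signals and the carry recurrence c[i+1] = g[i] OR (p[i] AND c[i]). The recurrence is evaluated serially here; the paper's algorithms evaluate it in parallel across many processors and exploit average-case behavior, which this sketch does not attempt.

        def cla_add(a_bits, b_bits):
            """Adds two equal-length bit lists (least-significant bit first) and
            returns the sum bits followed by the final carry-out bit."""
            g = [a & b for a, b in zip(a_bits, b_bits)]   # generate
            p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate
            c = [0]                                       # carry into each position
            for gi, pi in zip(g, p):
                c.append(gi | (pi & c[-1]))
            s = [pi ^ ci for pi, ci in zip(p, c)]
            return s + [c[-1]]
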
  • Optimal partitioning of cache memory

    Publication Year: 1992 , Page(s): 1054 - 1068
    Cited by:  Papers (33)  |  Patents (7)

    A model for studying the optimal allocation of cache memory among two or more competing processes is developed and used to show that, for the examples studied, the least recently used (LRU) replacement strategy produces cache allocations that are very close to optimal. It is also shown that when program behavior changes, LRU replacement moves quickly toward the steady-state allocation if it is far from optimal, but converges slowly as the allocation approaches the steady-state allocation. An efficient combinatorial algorithm for determining the optimal steady-state allocation, which, in theory, could be used to reduce the length of the transient, is described. The algorithm generalizes to multilevel cache memories. For multiprogrammed systems, a cache-replacement policy better than LRU replacement is given. The policy increases the memory available to the running process until the allocation reaches a threshold time, beyond which the replacement policy does not increase the cache memory allocated to the running process.

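    A minimal sketch of the LRU replacement policy that the paper's optimal allocations are compared against (a single cache of fixed capacity; the class name and interface are illustrative):

        from collections import OrderedDict

        class LRUCache:
            """Fixed-capacity cache with least-recently-used eviction."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = OrderedDict()

            def access(self, key):
                """Returns True on a hit, False on a miss (the key is then cached)."""
                if key in self.store:
                    self.store.move_to_end(key)      # mark as most recently used
                    return True
                if len(self.store) >= self.capacity:
                    self.store.popitem(last=False)   # evict the least recently used
                self.store[key] = None
                return False
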
  • Generalized transforms for multiple valued circuits and their fault detection

    Publication Year: 1992 , Page(s): 1101 - 1109
    Cited by:  Papers (7)

    Simple transforms for obtaining canonical representations of multiple-valued (MV) functions in polarity k, k ∈ {0, 1, ..., p^n - 1}, are presented, where p and n denote the radix and the number of variables of a function. The coefficients in a canonical representation are called spectral coefficients. Various relationships between the functional values of a function and its spectral coefficients are given. Fault detection in an arbitrary MV network is considered using test patterns and spectral techniques. Upper bounds on the number of test patterns for detection of stuck-at and bridging faults at the input lines are shown to be pn and n-1, respectively. Fault detection by spectral techniques is based on the number of spectral coefficients affected by a fault, and hence it is independent of the technology used for construction of networks and the type of fault. Test set generation for detection of any fault in {E}, where {E} denotes all faults in the network, is given. An upper bound on the number of test patterns required to detect all faults in {E} is obtained.

  • Necessary and sufficient conditions on block codes correcting/detecting errors of various types

    Publication Year: 1992 , Page(s): 1189 - 1193
    Cited by:  Papers (4)

    Necessary and sufficient conditions are given for block codes to be capable of correcting up to t1 symmetric errors, up to t2 unidirectional errors, and up to t3 asymmetric errors, as well as detecting from t1+1 up to d1 symmetric errors that are not of the unidirectional type, from t2+1 up to d2 unidirectional errors that are not of the asymmetric type, and from t3+1 up to d3 asymmetric errors. Many known conditions on block codes concerning error correction and/or detection appear as special cases of this general result. Further, some codes turn out to have stronger error correcting/detecting capabilities than they were originally designed for.

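    Two classical special cases that such general conditions subsume, stated here for orientation only (standard coding-theory facts, not the paper's theorem):

        % A code C corrects up to t symmetric errors iff its minimum Hamming
        % distance satisfies
        d_{\min}(C) \ge 2t + 1 .
        % A well-known condition for t-symmetric-error correction combined with
        % detection of all unidirectional errors (t-EC/AUED), with N(X, Y) the
        % number of positions in which X has a 1 and Y has a 0, is
        N(X, Y) \ge t + 1 \quad\text{and}\quad N(Y, X) \ge t + 1
        \qquad \text{for all distinct codewords } X, Y \in C .
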
  • ELM-a fast addition algorithm discovered by a program

    Publication Year: 1992 , Page(s): 1181 - 1184
    Cited by:  Papers (14)  |  Patents (1)

    A new addition algorithm, ELM, is presented. This algorithm makes use of a tree of simple processors and requires O(log n) time, where n is the number of bits in the augend and addend. The sum itself is computed in one pass through the tree. This algorithm was discovered by a VLSI CAD tool, FACTOR, developed for use in synthesizing CMOS VLSI circuits.

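    To illustrate O(log n) tree addition in general, the sketch below uses a generic parallel-prefix scheme over generate/propagate pairs; it is not the ELM algorithm the abstract announces, only a well-known scheme in the same complexity class.

        def prefix_add(a_bits, b_bits):
            """Logarithmic-depth addition via a parallel-prefix (scan) over
            (generate, propagate) pairs; bits are least-significant first."""
            n = len(a_bits)
            g = [a & b for a, b in zip(a_bits, b_bits)]
            p = [a ^ b for a, b in zip(a_bits, b_bits)]
            G, P = g[:], p[:]
            d = 1
            while d < n:                       # O(log n) combining levels
                G2, P2 = G[:], P[:]
                for i in range(d, n):          # combines at one level are independent
                    G2[i] = G[i] | (P[i] & G[i - d])
                    P2[i] = P[i] & P[i - d]
                G, P = G2, P2
                d *= 2
            carries = [0] + G[:-1]             # carry into each bit position
            s = [p[i] ^ carries[i] for i in range(n)]
            return s + [G[-1]]                 # sum bits plus final carry-out
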
  • Relating the cyclic behavior of linear and intrainverted feedback shift registers

    Publication Year: 1992 , Page(s): 1088 - 1100
    Cited by:  Papers (1)

    Feedback shift registers (FSRs) are sometimes implemented with inversions between stages to improve their testability and their ability to locate faults. These intrainverted FSRs (IFSRs) can be realized with less overhead than standard linear feedback shift registers (LFSRs). It is shown how to relate the cyclic behavior of the LFSR and the corresponding IFSR, based on the same feedback polynomial, so that IFSRs can be designed to exploit the inherent implementation advantages while exhibiting the well-known behavior of LFSRs. In particular, it is shown that the cyclic and serial output behavior of LFSRs can be emulated by IFSRs when loaded with the appropriate initial states for most feedback shift register lengths and feedback polynomials. How the initial state for the IFSR can be derived, given the feedback polynomial and the initial state of the desired cycle in the LFSR, is described. Conditions under which such mapping of behavior cannot be guaranteed are given.

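    A small reference sketch of the standard (non-inverted) LFSR behavior that IFSRs are shown to emulate, using one concrete primitive feedback polynomial as an assumed example:

        def lfsr_sequence(seed, steps):
            """Fibonacci-style LFSR for the primitive feedback polynomial
            x^4 + x^3 + 1, i.e. the recurrence a[k+4] = a[k+3] XOR a[k].
            seed is a list of four bits; any nonzero seed cycles through all
            15 nonzero states."""
            reg = list(seed)                 # reg[0] is the next output bit
            out = []
            for _ in range(steps):
                out.append(reg[0])
                new_bit = reg[3] ^ reg[0]    # taps given by the feedback polynomial
                reg = reg[1:] + [new_bit]
            return out

        # e.g. lfsr_sequence([1, 0, 0, 0], 15) yields one full period of length 15
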
  • Synergistic fault-tolerance for memory chips

    Publication Year: 1992 , Page(s): 1078 - 1087
    Cited by:  Papers (39)  |  Patents (3)

    The discovery of a principle of synergistic fault tolerance is described, and it is shown analytically why it occurs. The performance of its hardware implementation, in the form of a VLSI memory chip, is reported. An analysis of the error-correction scheme implemented in the hardware is presented, and limitations to the use of error-correcting codes for fault tolerance are explained. Methods for circumventing these limitations with the use of redundant circuits are discussed, analyzing the effect of bitline and wordline redundancy. The result of the analysis shows how the combination of error-correcting codes with redundant circuitry results in a fault-tolerance synergism.

  • Efficient diagnosis of multiprocessor systems under probabilistic models

    Publication Year: 1992 , Page(s): 1126 - 1136
    Cited by:  Papers (19)

    The problem of fault diagnosis in multiprocessor systems is considered under a probabilistic fault model. The focus is on minimizing the number of tests that must be conducted to correctly diagnose the state of every processor in the system with high probability. A diagnosis algorithm that can correctly diagnose these states with probability approaching one, in a class of systems performing slightly more than a linear number of tests, is presented. A nearly matching lower bound on the number of tests required to achieve correct diagnosis in arbitrary systems is proved. Lower and upper bounds on the number of tests required for regular systems are presented. A class of regular systems which includes hypercubes is shown to be correctly diagnosable with high probability. In all cases, the number of tests required under this probabilistic model is shown to be significantly less than under a bounded-size fault set model. These results represent a substantial improvement in the performance of system-level diagnosis techniques.

  • Detailed modeling and reliability analysis of fault-tolerant processor arrays

    Publication Year: 1992 , Page(s): 1193 - 1200
    Cited by:  Papers (5)

    A method for the generation of detailed models of fault-tolerant processor arrays, based on stochastic Petri nets (SPNs), is presented. A compact SPN model of the array associates with each transition a set of attributes that includes a discrete probability distribution. Depending on the type of component and the reconfiguration scheme, these probabilities are determined using simulation or closed-form expressions and correspond to the survival of the array given that a number of components required by the reconfiguration process are faulty.

  • On the complexity of two circle strongly connecting problems

    Publication Year: 1992 , Page(s): 1185 - 1188

    Given n demand points in the plane, the circle strongly connecting problem (CSCP) is to locate n circles in the plane, each centered at a different demand point, and to determine the radius of each circle such that the corresponding digraph G = (V, E), in which a vertex νi in V stands for the point pi and a directed edge ⟨νi, νj⟩ is in E if and only if pj is located within the circle of pi, is strongly connected, and the sum of the radii of these n circles is minimal. The constrained circle strongly connecting problem is similar to the CSCP except that the points are given in the plane with a set of obstacles, and a directed edge ⟨νi, νj⟩ is in E if and only if pj is located within the circle of pi and no obstacles exist between them. It is proven that both these geometric problems are NP-hard. An O(n log n) approximation algorithm that can produce a solution no greater than twice an optimal one is also proposed.

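    The sketch below builds the digraph exactly as defined above for the unconstrained CSCP (an edge i -> j exists iff point j lies inside the circle around point i) and tests strong connectivity. It evaluates a candidate set of radii rather than constructing one; the function names are illustrative.

        from math import hypot

        def strongly_connected(points, radii):
            """points: list of (x, y); radii[i]: radius of the circle at points[i].
            Returns True iff the induced digraph is strongly connected."""
            n = len(points)
            adj = [[j for j in range(n) if j != i and
                    hypot(points[i][0] - points[j][0],
                          points[i][1] - points[j][1]) <= radii[i]]
                   for i in range(n)]
            radj = [[] for _ in range(n)]          # reverse graph
            for i in range(n):
                for j in adj[i]:
                    radj[j].append(i)

            def reaches_all(graph):
                seen, stack = {0}, [0]             # DFS from vertex 0
                while stack:
                    u = stack.pop()
                    for v in graph[u]:
                        if v not in seen:
                            seen.add(v)
                            stack.append(v)
                return len(seen) == n

            return reaches_all(adj) and reaches_all(radj)
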
  • A performance modeling and evaluation of the Cambridge Fast Ring

    Publication Year: 1992 , Page(s): 1110 - 1125

    Performance of the Cambridge Fast Ring (CFR), a high-speed slotted ring with normal slots, is studied. It is shown that the CFR can be represented by a multiqueue multiple cyclic server model with a 1-limited service discipline and with a restriction that only one server at a time can be serving a queue. Exact necessary and sufficient stability conditions are stated. An approximate analytic M/G/1 vacation model, in which analysis concentrates on one station while the others are represented by a vacation period, is developed to estimate the expected message waiting times. It is shown that the model is accurate and usable over a wide range of parameters. A performance evaluation of the CFR based on this model is presented. The performance is compared to that of a variant which does not restrict the number of slots a station may simultaneously use.

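    For orientation, the standard mean-waiting-time decomposition for an M/G/1 queue with multiple vacations, the kind of building block such a vacation model rests on (the paper's own expressions for the CFR may differ):

        E[W] \;=\; \frac{\lambda\, E[S^2]}{2\,(1-\rho)} \;+\; \frac{E[V^2]}{2\, E[V]},
        \qquad \rho = \lambda\, E[S] < 1,
        % where S is the message service time, V the vacation period, and
        % lambda the message arrival rate.
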

Aims & Scope

The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field.


Meet Our Editors

Editor-in-Chief
Paolo Montuschi
Politecnico di Torino
Dipartimento di Automatica e Informatica
Corso Duca degli Abruzzi 24 
10129 Torino - Italy
e-mail: pmo@computer.org