IEEE Transactions on Computers

Issue 6 • June 2000

  • Guest editors' introduction

    Publication Year: 2000 , Page(s): 529 - 531
    Freely Available from IEEE
  • Procedures for static compaction of test sequences for synchronous sequential circuits

    Publication Year: 2000 , Page(s): 596 - 607
    Cited by:  Papers (4)

    We propose three static compaction techniques for test sequences of synchronous sequential circuits. We apply the proposed techniques to test sequences generated for benchmark circuits by various test generation procedures. The results show that the test sequences generated by all the test generation procedures considered can be significantly compacted. The compacted sequences thus have shorter test application times and smaller memory requirements. As a by-product, the fault coverage is sometimes increased as well. Additionally, the ability to significantly reduce the length of the test sequences indicates that it may be possible to reduce test generation time if superfluous input vectors are not generated.

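    Illustrative sketch (not from the paper): the core idea behind vector-omission static compaction can be expressed in a few lines of Python. The compact() function and the fault_coverage() simulator hook below are hypothetical placeholders; the paper's three procedures are more elaborate.

        # Generic omission-based static compaction (illustration only).
        # fault_coverage(sequence) is assumed to return the coverage achieved by
        # applying 'sequence' to the circuit under test via a fault simulator.
        def compact(sequence, fault_coverage):
            """Greedily drop input vectors that do not reduce fault coverage."""
            target = fault_coverage(sequence)                 # coverage of the full sequence
            i = 0
            while i < len(sequence):
                candidate = sequence[:i] + sequence[i + 1:]   # omit vector i
                if fault_coverage(candidate) >= target:       # coverage preserved (or improved)
                    sequence = candidate                      # keep the shorter sequence
                else:
                    i += 1                                    # vector i is needed; keep it
            return sequence
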
  • Incorporating yield enhancement into the floorplanning process

    Publication Year: 2000 , Page(s): 532 - 541
    Cited by:  Papers (4)

    The traditional goals of the floorplanning process for a new integrated circuit have been minimizing the total chip area and reducing the routing cost, i.e., the total length of the interconnecting wires. Recently, it has been shown that, for certain types of chips, the floorplan can affect the yield of the chip as well. Consequently, it becomes desirable to consider the expected yield, in addition to the cost of routing, when selecting a floorplan. The goal of this paper is to investigate the two seemingly unrelated, and often conflicting, objectives of yield enhancement and routing complexity minimization. We analyze the possible trade-offs between the two and then present a constructive algorithm for incorporating yield enhancement as a secondary objective into the floorplanning process, with the main objective still being the minimization of the overall routing costs.

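    Illustrative sketch (not the paper's constructive algorithm): one simple way to treat yield as a secondary objective is to rank candidate floorplans by routing cost and let an estimated yield break near-ties. The candidate set and the routing_cost()/estimated_yield() estimators below are hypothetical placeholders.

        # Pick a floorplan with routing cost as the primary objective and
        # estimated yield as a secondary, tie-breaking objective (illustration only).
        def choose_floorplan(candidates, routing_cost, estimated_yield, tolerance=0.02):
            best_cost = min(routing_cost(f) for f in candidates)
            # Keep floorplans whose routing cost is within 'tolerance' of the best...
            near_optimal = [f for f in candidates
                            if routing_cost(f) <= best_cost * (1 + tolerance)]
            # ...and among those, prefer the one with the highest estimated yield.
            return max(near_optimal, key=estimated_yield)
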
  • Continuous learning automata solutions to the capacity assignment problem

    Publication Year: 2000 , Page(s): 608 - 620
    Cited by:  Papers (29)

    The Capacity Assignment (CA) problem focuses on finding the best possible set of capacities for the links that satisfies the traffic requirements in a prioritized network while minimizing the cost. Most approaches consider a single class of packets flowing through the network, but, in reality, different classes of packets with different packet lengths and priorities are transmitted over the network. In this paper, we assume that the traffic consists of different classes of packets with different average packet lengths and priorities. We shall look at three different solutions to this problem. K. Maruyama and D.T. Tang (1977) proposed a single algorithm composed of several elementary heuristic procedures. A. Levi and C. Ersoy (1994) introduced a simulated annealing approach that produced substantially better results. In this paper, we introduce a new method which uses continuous learning automata to solve the problem. Our new schemes produce superior results when compared with either of the previous solutions and are, to our knowledge, currently the best known solutions.

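    Illustrative sketch (not the continuous scheme introduced in the paper): a toy linear reward-inaction automaton that picks one link's capacity from a discrete menu. The CapacityAutomaton class, its learning_rate, and the reward policy are hypothetical; they only illustrate the learning-automata idea.

        import random

        # One automaton per link: it keeps a probability over candidate capacities
        # and reinforces a choice whenever the environment rewards it.
        class CapacityAutomaton:
            def __init__(self, capacities, learning_rate=0.05):
                self.capacities = capacities
                self.rate = learning_rate
                self.probs = [1.0 / len(capacities)] * len(capacities)

            def choose(self):
                """Sample a capacity index according to the current probabilities."""
                return random.choices(range(len(self.capacities)), weights=self.probs)[0]

            def reward(self, chosen):
                """Reward-inaction update: reinforce 'chosen', do nothing on penalty."""
                for i in range(len(self.probs)):
                    if i == chosen:
                        self.probs[i] += self.rate * (1.0 - self.probs[i])
                    else:
                        self.probs[i] -= self.rate * self.probs[i]

    A network-level loop would choose a capacity for every link, evaluate the cost and per-class delays, and reward only the automata whose choices met the constraints.
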
  • Fault-tolerant processor arrays based on the 1½-track switches with flexible spare distributions

    Publication Year: 2000 , Page(s): 542 - 552
    Cited by:  Papers (22)

    A mesh-connected processor array consists of many similar processing elements (PEs) and can perform both parallel and pipeline processing. When implementing an array with a large number of processors, fault-tolerance measures are necessary to enhance the (fabrication-time) yield and the (run-time) reliability. In this paper, we propose a fault-tolerant reconfigurable processor array using single-track switches, as in Kung et al.'s model. The reconfiguration process in our model is also based on the concept of the “compensation path” used in Kung et al.'s method. In our model, spare PEs are not necessarily placed around the array, but can be placed more flexibly within the array by changing the connections between spare PEs and nonspare PEs while retaining the connections among nonspare PEs in the same manner as in Kung et al.'s model. The proposed model has the desirable property that the physical distances between logically adjacent PEs in the reconfigured array are bounded by a constant, that is, independent of the array size. We show that the hardware overhead of the proposed model is a little greater than that of Kung et al.'s model, while its yield is much better.

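    Illustrative sketch (not the paper's 1½-track-switch model or its constant-distance guarantee): the “compensation path” idea can be pictured as shifting the role of a faulty PE along a path of healthy neighbours until a spare is reached. The grid model and spare placement below are hypothetical.

        from collections import deque

        def compensation_path(rows, cols, source, spares, faulty):
            """Shortest path of healthy PEs from the faulty 'source' PE to any spare."""
            frontier = deque([(source, [source])])
            visited = {source}
            while frontier:
                (r, c), path = frontier.popleft()
                if (r, c) in spares:
                    return path          # each PE on the path takes over its predecessor's role
                for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    nr, nc = nxt
                    if 0 <= nr < rows and 0 <= nc < cols and nxt not in visited and nxt not in faulty:
                        visited.add(nxt)
                        frontier.append((nxt, path + [nxt]))
            return None                  # no spare reachable: reconfiguration fails
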
  • Fault-tolerant Newton-Raphson and Goldschmidt dividers using time shared TMR

    Publication Year: 2000 , Page(s): 588 - 595
    Cited by:  Papers (6)

    Iterative division algorithms based on multiplication are popular because they are fast and may utilize an already existing hardware multiplier. Two popular methods based on multiplication are the Newton-Raphson and Goldschmidt algorithms. To achieve concurrent error correction, Time Shared Triple Modular Redundancy (TSTMR) may be applied to both kinds of dividers. The hardware multiplier is divided into thirds, and the rest of the divider logic is replicated around each part to provide three independent dividers. While this reduces the size of the fault-tolerant dividers compared with traditional TMR, latency may be increased. However, both division algorithms can be modified to use lower precision multiplications during the early iterations. This saves multiply cycles and, hence, produces a faster divider.

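    Illustrative sketch (software only): the two multiplication-based iterations named above, assuming the divisor has been normalised into [0.5, 1) and using the common linear seed 48/17 - 32/17*d as the initial reciprocal estimate. The TSTMR partitioning of the multiplier and the reduced-precision early iterations are hardware details not modelled here.

        def newton_raphson_divide(n, d, iterations=4):
            """Approximate n/d by refining x ~ 1/d with x := x * (2 - d * x)."""
            x = 48.0 / 17.0 - 32.0 / 17.0 * d    # initial reciprocal estimate
            for _ in range(iterations):
                x = x * (2.0 - d * x)            # error squares on every iteration
            return n * x

        def goldschmidt_divide(n, d, iterations=4):
            """Approximate n/d by driving the denominator toward 1."""
            f = 48.0 / 17.0 - 32.0 / 17.0 * d    # same style of initial estimate
            num, den = n * f, d * f
            for _ in range(iterations):
                f = 2.0 - den                    # correction factor
                num, den = num * f, den * f      # denominator converges to 1
            return num                           # numerator converges to n/d

    For example, newton_raphson_divide(0.6, 0.75) and goldschmidt_divide(0.6, 0.75) both return roughly 0.8.
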
  • Self-checking detection and diagnosis of transient, delay, and crosstalk faults affecting bus lines

    Publication Year: 2000 , Page(s): 560 - 574
    Cited by:  Papers (38)  |  Patents (4)

    We present a self-checking detection and diagnosis scheme for transient, delay, and crosstalk faults affecting bus lines of synchronous systems. Faults that are likely to result in the connected logic sampling incorrect bus data are detected on-line. The position of the affected line(s) within the considered bus is identified and properly encoded. The proposed scheme is self-checking with respect to a realistic set of possible internal faults, including node stuck-ats, transistor stuck-ons, transistor stuck-opens, resistive bridgings, transient faults, delay faults, and crosstalk faults.

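    Illustrative sketch (behavioural only): the scheme's outward behaviour, namely detecting that some bus lines were sampled incorrectly and encoding their positions, can be mimicked in software by comparing the driven and sampled words. The paper implements this with self-checking hardware checkers, which this snippet does not model.

        def diagnose_bus(driven, sampled, width):
            """Return (error_detected, affected_lines) for one bus transfer."""
            syndrome = driven ^ sampled          # bit i set means line i was corrupted
            affected = [i for i in range(width) if (syndrome >> i) & 1]
            return bool(syndrome), affected

        # Example: driving 0b1010 but sampling 0b1110 reports line 2 as affected.
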
  • On the adaptation of Viterbi algorithm for diagnosis of multiple bridging faults

    Publication Year: 2000 , Page(s): 575 - 587
    Cited by:  Papers (7)

    This paper proposes a very efficient method to diagnose multiple bridging faults. This method is based on differential, or Delta IDDQ, probabilistic signatures, as well as on the Viterbi algorithm, which is mainly used in telecommunication systems for error correction. The proposed method can be seen as a significant improvement over an existing one based on maximum likelihood estimation. The use of the (adapted) Viterbi algorithm allows us to take into account additional information not considered previously. The existing and the proposed methods are first described. Then, simulation and experimental results are presented to validate the concept in the context of double faults. Bounds on the false diagnosis probability are also provided, estimating the number of test/diagnosis vectors required to reach a given diagnosis reliability for a given number of gates. The bounds show that this probability decreases exponentially with the number of test vectors and that, for a given value of this probability, the number of vectors required is O(log2(G)), where G is the number of gates.

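    Illustrative sketch (the textbook algorithm only): the standard Viterbi recursion the paper adapts, written over a generic discrete hidden-state model. The fault-candidate states and Delta IDDQ signature probabilities used in the paper are not modelled here; states, start_p, trans_p, and emit_p are hypothetical inputs.

        import math

        def viterbi(states, start_p, trans_p, emit_p, observations):
            """Return the most likely state sequence for the given observations."""
            def log(p):
                return math.log(p) if p > 0 else float("-inf")

            # Best log-score of any path ending in each state, plus back-pointers.
            score = {s: log(start_p[s]) + log(emit_p[s][observations[0]]) for s in states}
            back = []
            for obs in observations[1:]:
                prev, score, pointers = score, {}, {}
                for t in states:
                    best = max(states, key=lambda s: prev[s] + log(trans_p[s][t]))
                    score[t] = prev[best] + log(trans_p[best][t]) + log(emit_p[t][obs])
                    pointers[t] = best
                back.append(pointers)
            # Trace the winning path backwards from the best final state.
            last = max(states, key=lambda s: score[s])
            path = [last]
            for pointers in reversed(back):
                path.append(pointers[path[-1]])
            return list(reversed(path))
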
  • An efficient reconfiguration algorithm for degradable VLSI/WSI arrays

    Publication Year: 2000 , Page(s): 553 - 559
    Cited by:  Papers (27)  |  Patents (1)

    This paper considers the problem of reconfiguring two-dimensional degradable VLSI/WSI arrays under the constraint of row and column rerouting. The goal of the reconfiguration problem is to derive a fault-free subarray T from the defective host array such that the dimensions of T are larger than some specified minimum. This problem has been shown to be NP-complete under various switching and routing constraints. However, we show that a special case of the reconfiguration problem is optimally solvable in linear time. Using this result, a new fast and efficient reconfiguration algorithm is proposed. Empirical study shows that the new algorithm indeed produces good results in terms of the percentages of harvest and degradation of VLSI/WSI arrays.

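    Illustrative sketch (not the paper's algorithm): a simplified greedy column rerouting under the assumption that PEs chosen in adjacent rows may differ by at most one physical column. The healthy[][] grid and the distance-1 constraint are stand-ins; the paper's row selection and optimality analysis are not reproduced.

        def greedy_column_rerouting(healthy):
            """Greedily build fault-free logical columns, one healthy PE per row."""
            rows, cols = len(healthy), len(healthy[0])
            used = [[False] * cols for _ in range(rows)]
            logical_columns = []
            for start in range(cols):                    # try to start a column in row 0
                if not healthy[0][start] or used[0][start]:
                    continue
                column, prev = [(0, start)], start
                for r in range(1, rows):
                    # leftmost unused healthy PE within distance 1 of the previous choice
                    options = [c for c in (prev - 1, prev, prev + 1)
                               if 0 <= c < cols and healthy[r][c] and not used[r][c]]
                    if not options:
                        column = None                    # this logical column cannot be completed
                        break
                    prev = min(options)
                    column.append((r, prev))
                if column:
                    for r, c in column:
                        used[r][c] = True
                    logical_columns.append(column)
            return logical_columns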

Aims & Scope

The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field.

Meet Our Editors

Editor-in-Chief
Paolo Montuschi
Politecnico di Torino
Dipartimento di Automatica e Informatica
Corso Duca degli Abruzzi 24 
10129 Torino - Italy
e-mail: pmo@computer.org