IBM Journal of Research and Development

Issue 2 • March 1984

  • Error-Correcting Codes for Semiconductor Memory Applications: A State-of-the-Art Review

    Publication Year: 1984, Page(s): 124-134
    Cited by: Papers (168) | Patents (40)
    PDF (842 KB)

    This paper presents a state-of-the-art review of error-correcting codes for computer semiconductor memory applications. The construction of four classes of error-correcting codes appropriate for semiconductor memory designs is described, and for each class of codes the number of check bits required for commonly used data lengths is provided. The implementation aspects of error correction and error detection are also discussed, and certain algorithms useful in extending the error-correcting capability for the correction of soft errors such as α-particle-induced errors are examined in some detail.

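    As a quick companion to the check-bit tables such a review provides, the short sketch below computes, from the standard Hamming bound 2^r >= k + r + 1, the number of check bits r a single-error-correcting (SEC) code needs for common data lengths, plus one extra bit for double-error detection (SEC-DED). This is textbook coding arithmetic, not the paper's own tabulation:

        # Check bits required for SEC and SEC-DED codes at common data
        # lengths, using the Hamming bound 2**r >= k + r + 1.
        def sec_check_bits(k: int) -> int:
            r = 1
            while 2 ** r < k + r + 1:
                r += 1
            return r

        for k in (8, 16, 32, 64, 128):
            r = sec_check_bits(k)
            print(f"data bits {k:>3}: SEC needs {r}, SEC-DED needs {r + 1}")

    For 64 data bits this yields 8 check bits for SEC-DED, the familiar (72, 64) configuration used in many memory designs.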
  • Table of Contents

    Publication Year: 1984, Page(s): 125
    PDF (92 KB)
    Freely Available from IEEE
  • An Introduction to Arithmetic Coding

    Publication Year: 1984, Page(s): 135-149
    Cited by: Papers (132) | Patents (69)
    PDF (1157 KB)

    Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval of the number line between 0 and 1, and retains one of the partitions as the new interval. Thus, the algorithm successively deals with smaller intervals, and the code string, viewed as a magnitude, lies in each of the nested intervals. The data string is recovered by using magnitude comparisons on the code string to recreate how the encoder must have successively partitioned and retained each nested subinterval. Arithmetic coding differs considerably from the more familiar compression coding techniques, such as prefix (Huffman) codes. Also, it should not be confused with error control coding, whose object is to detect and correct errors in computer operations. This paper presents the key notions of arithmetic compression coding by means of simple examples.

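    The interval-partitioning recursion the abstract describes is concrete enough to sketch directly. The toy coder below assumes a fixed three-symbol alphabet with made-up probabilities and uses exact rationals with no bit-level renormalization, so it illustrates the idea rather than a practical implementation:

        # Toy symbolwise arithmetic coder: each recursion partitions the
        # current interval in proportion to the symbol probabilities and
        # retains one partition as the new interval.
        from fractions import Fraction

        PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

        def interval(sym):
            # Cumulative-probability subinterval assigned to sym in [0, 1).
            lo = Fraction(0)
            for s, p in PROBS.items():
                if s == sym:
                    return lo, lo + p
                lo += p
            raise KeyError(sym)

        def encode(data):
            low, width = Fraction(0), Fraction(1)
            for sym in data:
                s_lo, s_hi = interval(sym)
                low, width = low + width * s_lo, width * (s_hi - s_lo)
            return low  # any value in [low, low + width) identifies the data

        def decode(code, n):
            out, low, width = [], Fraction(0), Fraction(1)
            for _ in range(n):
                target = (code - low) / width  # the magnitude comparison
                for sym in PROBS:
                    s_lo, s_hi = interval(sym)
                    if s_lo <= target < s_hi:
                        out.append(sym)
                        low, width = low + width * s_lo, width * (s_hi - s_lo)
                        break
            return "".join(out)

        msg = "abca"
        assert decode(encode(msg), len(msg)) == msg

    Decoding mirrors encoding: a magnitude comparison locates the partition containing the code value, recovering one symbol per recursion, exactly as the abstract outlines.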
  • A Universal Reed-Solomon Decoder

    Publication Year: 1984, Page(s): 150-158
    Cited by: Papers (24) | Patents (3)
    PDF (637 KB)

    Two architectures for universal Reed-Solomon decoders are given. These decoders, called time-domain decoders, work directly on the raw data word as received without the usual syndrome calculation or power-sum-symmetric functions. Up to the limitations of the working registers, the decoders can decode any Reed-Solomon codeword or BCH codeword in the presence of both errors and erasures. Provision is also made for decoding extended codes and shortened codes.

  • Implementation and Evaluation of a (b,k)-Adjacent Error-Correcting/Detecting Scheme for Supercomputer Systems

    Publication Year: 1984, Page(s): 159-169
    Cited by: Papers (1) | Patents (1)
    PDF (964 KB)

    This paper describes a coding scheme developed for a specific supercomputer architecture and structure. The code considered is a shortened (b,k)-adjacent single-error-correcting double-error probabilistic-detecting code with b=5, k=1, and code group width = 4. An evaluation of the probabilistic double-error-detection capability of the code was performed for different organizations of the coding/decoding strategies for the codewords. This led to the selection of a system organization encompassing the traditional feature of memory data error protection and also providing for the detection of major addressing errors that may result from faults affecting the interconnection network communication modules. The cost of implementation is a limited amount of extra hardware and a negligible degradation in the double-error-detection properties of the code.

  • Fault Alignment Exclusion for Memory Using Address Permutation

    Publication Year: 1984, Page(s): 170-176
    Cited by: Papers (3)
    PDF (580 KB)

    A significant improvement in memory fault tolerance, beyond what is already provided by the use of an appropriate error-correcting code (ECC), can be achieved by electronic chip swapping, without any compromise of data integrity as large numbers of faults are allowed to accumulate. Since most large and medium-sized semiconductor memories are organized so that each bit position of the system word (ECC codeword) is fed from a different chip, and quite often from a different array card, or at least from distinct partitions of an array card, the various bit positions have separate address circuitry on the array cards. This fact is important, and can be exploited to provide effective address permutation capability, which allows the realignment of faults which would otherwise have caused an uncorrectable multiple error in an ECC codeword. When faults occur in a codeword to produce an uncorrectable error (UE), the addressing within one of the error bit position array cards can be altered using simple EX-OR circuitry and storage latches. The content of the latches is computed using a fault map of the memory together with an algorithm. These techniques are referred to as Fault Alignment Exclusion (FAE) using address permutation. Practical considerations as to the complexity of the fault map, the number of storage latches per bit position, and the overall effectiveness of the permutation to disperse the expected numbers of errors are presented in this paper.

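    The EX-OR permutation mechanism is simple enough to demonstrate with a toy model. In the sketch below the word size, fault map, and latch values are all hypothetical, and the latch contents are found by exhaustive search rather than by the fault-map algorithm the paper describes:

        # Toy fault alignment exclusion (FAE) by address permutation.
        # Each ECC bit position is fed by its own chip; the fault map
        # lists faulty cell addresses per bit position. XORing one
        # position's addresses with a latch value realigns its faults
        # so that no word address sees two faulty bits at once.
        from itertools import product

        NUM_WORDS = 16  # 4-bit word addresses in this toy memory

        fault_map = {0: {3, 9}, 1: {9, 12}}  # two faults align at address 9
        latches = {0: 0, 1: 0}               # per-position XOR latch values

        def aligned_ues(fault_map, latches):
            """Word addresses where two or more permuted faults align."""
            hits = {}
            for pos, faults in fault_map.items():
                for addr in faults:
                    hits.setdefault(addr ^ latches[pos], set()).add(pos)
            return {a for a, ps in hits.items() if len(ps) > 1}

        assert aligned_ues(fault_map, latches) == {9}  # uncorrectable word

        # Find latch values that exclude all alignments (exhaustively here).
        for l0, l1 in product(range(NUM_WORDS), repeat=2):
            if not aligned_ues(fault_map, {0: l0, 1: l1}):
                latches = {0: l0, 1: l1}
                break
        assert not aligned_ues(fault_map, latches)  # faults dispersed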
  • Fault-Tolerant Design Techniques for Semiconductor Memory Applications

    Publication Year: 1984, Page(s): 177-183
    Cited by: Papers (13) | Patents (8)
    PDF (609 KB)

    Advances in semiconductor memory technology towards higher-density and higher-performance memory chips have created new reliability challenges for the memory system designer. An example would be the multiple-bit-per-chip organization with the chip outputs used in the same word. This design structure would be prone to uncorrectable errors with conventionally implemented single-error-correcting double-error-detecting codes. With these newer chips, memory system designers will have to give special attention not only to the types of failures but to ways of minimizing the system impact of reliability defects. In this paper, a number of design approaches are presented for minimizing the effects of chip failures through the use of organizational techniques and through enhancements to conventional error checking and correction facilities. The fault-tolerant design techniques described are compatible with most existing memory designs. An evaluative comparison of these techniques is included, and their application and utility are discussed.

  • Fault-Tolerant Memory Simulator

    Publication Year: 1984, Page(s): 184-195
    Cited by: Papers (7) | Patents (1)
    PDF (920 KB)

    Memory systems in modern computers employ a variety of methods to achieve fault tolerance, such as single- or double-error correction, page deallocation, or the use of spare chips or cells. Such methods ensure that the failure rate of the system is considerably less than the sum of the failure rates of the components. However, these methods also complicate the task of evaluating system reliability. The memory reliability function is too complex to handle analytically, and we must turn to Monte Carlo methods. This paper describes the Fault-Tolerant Memory Simulator (FTMS), an interactive APL program which uses Monte Carlo simulation to evaluate the reliability of fault-tolerant memory systems.

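    A minimal Monte Carlo sketch of the kind of question such a simulator answers follows; the failure model and all parameters are assumed for illustration, and this is of course not FTMS itself, which is an interactive APL program:

        # Chips feeding one ECC word fail independently over a mission;
        # a SEC code tolerates one bad bit per word, and spare chips
        # absorb the earliest failures before that. Estimate P(UE).
        import math
        import random

        CHIPS = 72                                # one chip per bit of a (72, 64) word
        FAIL_PROB = 1 - math.exp(-1e-7 * 50_000)  # per-chip failure probability (assumed)
        TRIALS = 100_000

        def uncorrectable(rng, spares):
            failures = sum(rng.random() < FAIL_PROB for _ in range(CHIPS))
            # Spares replace the first failed chips; SEC tolerates one
            # more, so two failures beyond the spares are uncorrectable.
            return failures - spares >= 2

        rng = random.Random(1)
        for spares in (0, 1, 2):
            p_ue = sum(uncorrectable(rng, spares) for _ in range(TRIALS)) / TRIALS
            print(f"spares={spares}: estimated P(UE over mission) = {p_ue:.5f}")

    Even this toy model exhibits the behavior the abstract notes: each added layer of fault tolerance pushes the system failure probability well below the sum of the component failure rates, while making the reliability function harder to treat analytically.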
  • A General-Purpose Memory Reliability Simulator

    Publication Year: 1984, Page(s): 196-205
    Cited by: Papers (7) | Patents (1)
    PDF (820 KB)

    With rapid advances in computer memory capacity and performance, coupled with corresponding increases in the expense of field service calls, memory reliability and optimal maintenance strategies have become more and more important in terms of customer satisfaction and field service cost. At the same time, significant improvements in error correction and recovery over recent years have made the prediction of uncorrectable error and field service frequency much more difficult. This paper describes a Monte Carlo simulator which can predict uncorrectable error rates and field-replaceable-unit replacement rates for a wide range of memory architectures and under a variety of maintenance strategies. The model provides valuable information for performing sensitivity studies of intrinsic failure rates for memory components, for performing tradeoff studies of alternative storage module and card organizations, for evaluating system functions, and for establishing optimum maintenance strategies.

  • Analysis of Correctable Errors in the IBM 3380 Disk File

    Publication Year: 1984, Page(s): 206-211
    Cited by: Papers (29) | Patents (2)
    PDF (568 KB)

    A method of analyzing the correctable errors in disk files is presented. It allows one to infer the most probable error in the encoded-data stream given only the unencoded readback and error-correction information. This method is applied to the errors observed in seven months of operation of four IBM 3380 head-disk assemblies. It is shown that nearly all the observed errors can be explained as single-bit errors at the input to the channel decoder. About 90 percent of the errors were related to imperfections in the disk surfaces. The remaining 10 percent were mostly due to heads which were unusually susceptible to random noise-induced errors.

  • Iterative Exhaustive Pattern Generation for Logic Testing

    Publication Year: 1984, Page(s): 212-219
    Cited by: Papers (18) | Patents (2)
    PDF (615 KB)

    Exhaustive pattern logic testing schemes provide all possible input patterns with respect to an output in the set of test patterns. This paper is concerned with the problem that arises when this is to be done simultaneously with respect to a number of outputs, using a single test set. More specifically, in this paper we describe an iterative procedure for generating a test set consisting of n-dimensional vectors which exhaustively covers all k-subspaces simultaneously, i.e., the projections of the n-dimensional vectors in the test set onto any input subset of a specified size k contain all possible patterns of k-tuples. For any given k, we first find an appropriate N (N > k) and generate an efficient N-dimensional test set for exhaustive coverage of all k-subspaces. We next develop a constructive procedure to expand the corresponding test matrix (formed by taking the test vectors as its rows) such that a test set of N²-dimensional vectors exhaustively covering the same k-subspaces is obtained. This procedure may be repeated to cover arbitrarily large n (n = N^(2^i) after i iterations), while keeping the same k. It is shown that the size of the test set obtained this way grows proportionally to (log n)^(log (q+1)), where q is a function of k only and is approximated (bounded closely from below) by k²/4 in binary cases. This approach applies to nonbinary cases as well, except that the value of q in an r-ary case is approximated by a number lying between k²/4 and k²/2.

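    The coverage property itself is easy to state in code. The brute-force checker below is illustrative only (the paper's contribution is constructing large such test sets iteratively, not checking them); the even-weight code on four inputs is a classic small example:

        # A test set of n-bit vectors covers all k-subspaces exhaustively
        # if its projection onto every choice of k columns contains all
        # 2**k possible k-tuples.
        from itertools import combinations

        def covers_all_k_subspaces(tests, n, k):
            for cols in combinations(range(n), k):
                seen = {tuple(t[c] for c in cols) for t in tests}
                if len(seen) < 2 ** k:
                    return False
            return True

        # The 8 even-weight 4-bit vectors cover every 3-subspace of n = 4
        # inputs with half of the 16 tests fully exhaustive testing needs.
        tests = [[(b >> i) & 1 for i in range(4)]
                 for b in range(16) if bin(b).count("1") % 2 == 0]
        assert covers_all_k_subspaces(tests, n=4, k=3)
        assert not covers_all_k_subspaces(tests, n=4, k=4)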
  • Recent Papers by IBM Authors

    Publication Year: 1984, Page(s): 220-225
    PDF (507 KB)

    Reprints of the papers listed here may usually be obtained by writing directly to the authors. The authors' IBM divisions or groups are identified as follows: CHQ is Corporate Headquarters; CPD, Communication Products Division; DSD, Data Systems Division; FED, Field Engineering Division; FSD, Federal Systems Division; GPD, General Products Division; GSD, General Systems Division; GTD, General Technology Division; IPD, Information Products Division; ISG, Information Systems Group; IS&CG, Information Systems & Communications Group; IS&TG, Information Systems & Technology Group; NAD, National Accounts Division; NMD, National Marketing Division; RES, Research Division; SPD, System Products Division; and SRI, Systems Research Institute. Papers are listed alphabetically by authors.

  • Recent Books by IBM Authors

    Publication Year: 1984, Page(s): 226
    PDF (97 KB)
    Freely Available from IEEE
  • Recent IBM Patents

    Publication Year: 1984, Page(s): 227-230
    PDF (234 KB)
    Freely Available from IEEE

Aims & Scope

The IBM Journal of Research and Development is a peer-reviewed technical journal, published bimonthly, which features the work of authors in the science, technology and engineering of information systems.


Meet Our Editors

Editor-in-Chief
Clifford A. Pickover
IBM T. J. Watson Research Center