IEEE Transactions on Computers

Issue 1 • Jan. 1970

  • [Front cover]

    Page(s): c1
  • IEEE Computer Group

  • [Breaker page]

  • The Extended Resolution Digital Differential Analyzer: A New Computing Structure for Solving Differential Equations

    Page(s): 1 - 9

    In conventional digital differential analyzers (DDAs), the word length used for the transmission of information between integrators is restricted to at most a single magnitude bit and a sign bit. This restriction seriously limits integrator frequency response and has been largely responsible for the failure of DDAs to achieve widespread acceptance as general-purpose differential analyzers. In this paper it is shown that DDA speed and accuracy can be greatly improved by using increment word lengths approximately one-half the length of the integrand registers, provided that integration formulas more accurate than Euler integration are used. The programming of such machines for the solution of both linear and nonlinear differential equations is discussed, and a quantitative evaluation of the performance improvement is presented. An effort is also made to isolate the principal difficulties in hardware implementation that result from extending the integrator increment resolution.
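
    As an illustration of the central idea (a sketch under assumed register widths, not the paper's hardware), the following Python fragment integrates dy/dt = -y with an integrator whose transmitted increment is a multi-bit word rather than a single sign-magnitude bit, using a trapezoidal rather than Euler update.

      # Illustrative fixed-point DDA integrator with a multi-bit ("extended
      # resolution") output increment and a trapezoidal update, solving
      # dy/dt = -y, y(0) = 1.  Register widths and step size are assumed.
      import math

      FRAC_BITS = 20                  # width of the integrand register (assumed)
      INC_BITS = 14                   # width of the transmitted increment word (assumed)
      ONE = 1 << FRAC_BITS

      def dda_run(n_steps, dt):
          y = ONE                     # integrand register, y(0) = 1.0 in fixed point
          dy_prev = 0                 # previous output increment (trapezoidal term)
          residue = 0.0               # sub-increment remainder carried between steps
          for _ in range(n_steps):
              # Trapezoidal estimate of the average integrand over the step.
              wanted = -dt * (y + 0.5 * dy_prev) + residue
              dy = int(round(wanted))
              limit = 1 << (INC_BITS - 1)
              dy = max(-limit, min(limit - 1, dy))   # clamp to the increment word
              residue = wanted - dy                  # keep what did not fit
              y += dy
              dy_prev = dy
          return y / ONE

      dt = 1.0 / 256                  # iteration interval (assumed)
      print("DDA estimate of exp(-1):", dda_run(256, dt))
      print("true value             :", math.exp(-1.0))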

  • An Interactive Computer Approach to Tolerance Analysis

    Page(s): 10 - 16

    An interactive technique for statistical tolerance analysis has been developed for a computer with a graphic display. The computer program, called TAP, provides for random perturbation of parameter values, repeated evaluation of system performance, and display of distribution histograms. The interactive capability of the program enables the designer to introduce modifications to the system so that the relative effects of these changes can be quickly evaluated on the graphic display. The program has been implemented on a computer facility consisting of a CDC 3300 digital computer, an ITT digital display scope, and an EAI 8800 analog computer. Two applications are discussed to illustrate the adaptability of the technique to either linear or nonlinear problems. The digital simulation of an active circuit is used to show how interaction with the graphic display saves time in evaluating alternative designs. A hybrid simulation of an equalizer for a digital transmission system illustrates that complex time-domain problems can be economically analyzed using TAP. In addition, it is shown how TAP is used to measure the reproducibility of an analog simulation.
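
    The statistical core of such an analysis is easy to sketch. In the Python fragment below, the RC low-pass circuit, the tolerance values, and the text histogram are illustrative assumptions; none of it is taken from TAP itself.

      # Monte Carlo tolerance analysis in the spirit of TAP (not the original
      # program): perturb parameters within their tolerances, re-evaluate a
      # performance measure, and summarize the resulting distribution.  The
      # RC low-pass circuit and the 5% / 10% tolerances are assumptions.
      import math
      import random

      def cutoff_hz(r_ohms, c_farads):
          """Performance measure: -3 dB cutoff of a first-order RC low-pass."""
          return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

      NOMINAL = {"R": 1.0e3, "C": 0.1e-6}     # nominal values (assumed)
      TOL = {"R": 0.05, "C": 0.10}            # fractional tolerances (assumed)
      TRIALS = 5000

      samples = []
      for _ in range(TRIALS):
          r = NOMINAL["R"] * (1.0 + random.uniform(-TOL["R"], TOL["R"]))
          c = NOMINAL["C"] * (1.0 + random.uniform(-TOL["C"], TOL["C"]))
          samples.append(cutoff_hz(r, c))

      # Crude text histogram standing in for TAP's graphic display.
      lo, hi, BINS = min(samples), max(samples), 10
      counts = [0] * BINS
      for s in samples:
          counts[min(int((s - lo) / (hi - lo) * BINS), BINS - 1)] += 1
      for i, n in enumerate(counts):
          left = lo + (hi - lo) * i / BINS
          print(f"{left:10.1f} Hz | {'#' * (60 * n // max(counts))}")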

  • A Generalized Technique for Spectral Analysis

    Page(s): 16 - 25

    A technique is presented for implementing a class of orthogonal transformations in on the order of pN log_p N operations. The technique is due to Good [1] and yields a fast Fourier transform, a fast Hadamard transform, and a variety of other orthogonal decompositions. It is shown how the Kronecker product can be mathematically defined and efficiently implemented using a matrix factorization method. A generalized spectral analysis is suggested, and a variety of examples are presented displaying various properties of the possible decompositions. Finally, an eigenvalue presentation is provided as a possible means of characterizing some of the transforms with similar parameters.
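
    For p = 2 the factorization reduces to the familiar fast Walsh-Hadamard transform. The sketch below illustrates the N log_2 N butterfly structure only; it is not the paper's general p-ary formulation.

      # Fast Walsh-Hadamard transform: the p = 2, Kronecker-factored case of
      # the class discussed, computed in N log2 N additions and subtractions
      # instead of the N**2 of a direct matrix multiply.  Sketch only; the
      # paper's general p-ary formulation is not reproduced here.
      def fwht(x):
          """In-place transform; len(x) must be a power of two."""
          n = len(x)
          assert n and n & (n - 1) == 0, "length must be a power of two"
          h = 1
          while h < n:
              for start in range(0, n, 2 * h):
                  for i in range(start, start + h):
                      a, b = x[i], x[i + h]
                      x[i], x[i + h] = a + b, a - b   # 2-point Hadamard butterfly
              h *= 2
          return x

      data = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
      spectrum = fwht(data[:])
      recovered = [v / len(data) for v in fwht(spectrum[:])]  # inverse = FWHT / N
      print(spectrum)
      print(recovered)   # matches the original sequence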

  • Computer Simulation of Pulse Propagation Through a Periodic Loaded Transmission Line

    Page(s): 25 - 33

    In view of the speed of today's hardware components, the wiring between IC pins can no longer be regarded as a ``short-circuit jumper plus some capacitance'' but must be treated as a length of transmission line. The central problem in the design of any equipment employing a number of ICs is thus to predict the extent of the gradual deterioration of waveforms as pulses propagate along a transmission line loaded with many lumped-constant loads spaced along its length. A simulation technique for this problem is introduced in the paper; it is more practical than those previously described.
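
    A minimal sketch of this kind of simulation can be built by discretizing the line into LC sections and stepping the telegrapher's equations with a leapfrog update. The fragment below is such a textbook scheme with assumed element values and load spacing; it is not the technique proposed in the paper.

      # Leapfrog (finite-difference) simulation of a pulse on a lossless line
      # modelled as a ladder of LC sections with a lumped capacitive load
      # every few sections.  This is a textbook scheme used only to sketch
      # the problem; all element values, the load spacing, and the source
      # edge are assumed.
      NSEC = 200                     # LC sections modelling the line
      L_SEC = 2.5e-9                 # series inductance per section (H)
      C_SEC = 1.0e-12                # shunt capacitance per section (F)
      C_LOAD = 5.0e-12               # lumped load (e.g. an IC input) at each tap
      LOAD_EVERY = 20                # sections between loads
      DT = 10.0e-12                  # time step (s), below the section delay
      STEPS = 2500

      v = [0.0] * (NSEC + 1)         # node voltages
      i = [0.0] * NSEC               # branch (section) currents
      cap = [C_SEC + (C_LOAD if 0 < k and k % LOAD_EVERY == 0 else 0.0)
             for k in range(NSEC + 1)]

      def source(t):
          return min(t / 100e-12, 1.0)   # 1 V step with a 100 ps linear rise

      far_end = []
      for step in range(STEPS):
          v[0] = source(step * DT)                      # ideal source at the near end
          for k in range(NSEC):                         # update branch currents
              i[k] += DT / L_SEC * (v[k] - v[k + 1])
          for k in range(1, NSEC):                      # update interior node voltages
              v[k] += DT / cap[k] * (i[k - 1] - i[k])
          v[NSEC] += DT / cap[NSEC] * i[NSEC - 1]       # open-circuited far end
          far_end.append(v[NSEC])

      # Samples of the far-end voltage: edge degradation and ringing caused
      # by the periodic loads show up as the pulse arrives.
      for step in range(0, STEPS, 250):
          print(f"{step * DT * 1e9:6.2f} ns  {far_end[step]:6.3f} V")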

  • An Error-Detecting Binary Adder: A Hardware-Shared Implementation

    Page(s): 34 - 38

    A design for a binary adder-checker system that employs residue codes to detect any error resulting from a single fixed fault is presented. In an adder, special functional relationships must exist regardless of the particular logical realization. Consequently, for adders with either serial or parallel carry propagation, the worst possible error can be described precisely. Certain residue codes may then be used to detect that error by means of a simple checking algorithm with a minimum of extra circuitry.
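
    The flavor of a residue check can be shown in a few lines. The mod-3 code below is a common low-cost choice used purely for illustration; it is not necessarily the code or checker developed in the paper.

      # Residue-code check of an adder: the residue of the sum (including the
      # carry out) must equal the sum of the operand residues mod m.  The
      # modulus 3 used here is a common low-cost choice for illustration and
      # is not necessarily the code analyzed in the paper.
      M = 3

      def checked_add(a, b, width=16, fault_bit=None):
          """Add two width-bit numbers, optionally flipping one sum bit to
          model a fault, and report whether the residue check passes."""
          total = a + b
          s = total & ((1 << width) - 1)       # adder output
          carry_out = total >> width
          if fault_bit is not None:
              s ^= 1 << fault_bit              # inject a single-bit error
          expected = (a % M + b % M) % M
          actual = (s + (carry_out << width)) % M
          return s, actual == expected

      # A flipped sum bit changes the value by +/- 2**k, never a multiple of
      # 3, so the mod-3 check fails whenever such an error is injected.
      print(checked_add(0x1234, 0x0F0F))               # (sum, True): no fault
      print(checked_add(0x1234, 0x0F0F, fault_bit=5))  # (sum, False): error caught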

  • A Scheme for Synchronizing High-Speed Logic: Part I

    Page(s): 39 - 47

    In this paper we concern ourselves with the problem of obtaining high-sequence-rate sequential machines: machines constructed from realistic devices that operate at an input sequence rate independent of the machine's complexity. To accomplish this we have only to show a construction that realizes acceptably synchronous devices from badly timed devices with restricted fan-in and fan-out. Once a complete set of synchronous devices is obtained, the results of Arden [1] and Arthurs [2] apply, and we know that any finite-state machine has a realization using these devices that accepts input sequence members at a rate characteristic of the set of devices, not of the machine. The technique we propose for achieving this result is to produce a lattice of interconnected clock pulse sources called clock pulse propagators (CPPs). These devices generate clock pulses that are acceptably synchronized with respect to the outputs of neighboring CPPs but are not required to be in synchronization with some machine-wide standard, as in current practice. Once it is established that such a network is possible, techniques already known can be applied in the utilization of the clock pulses to synchronize logic and signals. Part I of the paper concerns the analysis of CPP networks, and Part II covers the synthesis of sequential machines using CPP networks as clocking sources.
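
    Purely as a conceptual illustration of locally referenced clocking (and not a model of the CPP circuit itself), the sketch below lets each node of a lattice repeatedly pull its clock phase toward the average of its immediate neighbors; adjacent nodes come into close agreement without any machine-wide standard.

      # Conceptual illustration only (not the CPP circuit): each node of a
      # lattice of local clock sources repeatedly nudges its phase toward the
      # average of its immediate neighbours.  Adjacent nodes end up closely
      # aligned even though there is no machine-wide clock standard.  All
      # numbers are assumptions.
      import random

      ROWS, COLS, GAIN, ITERS = 8, 8, 0.4, 50
      phase = [[random.uniform(-1.0, 1.0) for _ in range(COLS)] for _ in range(ROWS)]

      def neighbours(r, c):
          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
              rr, cc = r + dr, c + dc
              if 0 <= rr < ROWS and 0 <= cc < COLS:
                  yield rr, cc

      for _ in range(ITERS):
          nxt = [[0.0] * COLS for _ in range(ROWS)]
          for r in range(ROWS):
              for c in range(COLS):
                  nbrs = list(neighbours(r, c))
                  avg = sum(phase[rr][cc] for rr, cc in nbrs) / len(nbrs)
                  nxt[r][c] = phase[r][c] + GAIN * (avg - phase[r][c])
          phase = nxt

      # Worst skew between any two adjacent nodes after settling.
      skew = max(abs(phase[r][c] - phase[rr][cc])
                 for r in range(ROWS) for c in range(COLS)
                 for rr, cc in neighbours(r, c))
      print(f"maximum neighbour-to-neighbour skew: {skew:.4f}")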

  • The Organization of High-Speed Memory for Parallel Block Transfer of Data

    Page(s): 47 - 53

    This paper describes the organization of a multimodule memory designed to facilitate parallel block transfers. All modules are assumed to be identical, and an individual module can fetch or store no more than one word or word group during any single memory cycle. Parallel block transfers are made possible by a device called the memory circulator and by organizing the memory in a particular way. The memory circulator consists of a bank of interconnected registers, one for each memory module, together with control circuitry. The memory system is organized so that ascending logical addresses are distributed cyclically among the modules. If there are 2^b modules, an individual word is accessed by using the least significant b bits of a memory address to select a module and the remaining bits to select an address within that module. The memory circulator can load and store a contiguous block of 2^b words by selecting all modules and broadcasting a single address to them. A contiguous block can be displaced in memory by a multiple of 2^b words by broadcasting different load and store addresses for the block. The circulator control circuitry includes a masking capability so that blocks smaller than 2^b words can be moved in this fashion. When the displacement of a block transfer is not a multiple of 2^b, a physical circulation of the data in the memory circulator registers is required.
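
    The addressing scheme itself is easy to model. The sketch below is a functional model with assumed sizes, not the hardware circulator: it splits a logical address into module-select and in-module fields and shows the rotation needed when a block's displacement is not a multiple of 2^b.

      # Functional model (assumed sizes) of the interleaved addressing the
      # abstract describes: with 2**B identical modules, the low B address
      # bits select a module and the remaining bits select a word within it,
      # so an aligned 2**B-word block spans all modules; displacements that
      # are not multiples of 2**B require rotating the circulator data.
      B = 3                                    # 2**B = 8 memory modules
      NMODULES = 1 << B
      WORDS_PER_MODULE = 16                    # assumed module size
      modules = [[0] * WORDS_PER_MODULE for _ in range(NMODULES)]

      def split(addr):
          """Logical address -> (module number, address within the module)."""
          return addr & (NMODULES - 1), addr >> B

      def write_word(addr, value):
          m, a = split(addr)
          modules[m][a] = value

      def read_word(addr):
          m, a = split(addr)
          return modules[m][a]

      def read_block(base):
          """Fetch 2**B contiguous words at once: one in-module address is
          broadcast and every module is read in parallel (a comprehension here)."""
          assert base % NMODULES == 0, "source block must be aligned"
          a = base >> B
          return [modules[m][a] for m in range(NMODULES)]

      def move_block(src, dst):
          """Store an aligned block at an arbitrary destination address.  If
          the displacement is a multiple of 2**B only the broadcast address
          changes; otherwise the circulator must rotate the data first."""
          data = read_block(src)               # word j of the block is in module j
          shift = dst & (NMODULES - 1)         # rotation the circulator applies
          base = dst >> B
          for j, value in enumerate(data):
              module = (j + shift) & (NMODULES - 1)
              within = base + (1 if j + shift >= NMODULES else 0)
              modules[module][within] = value

      for addr in range(NMODULES):             # fill addresses 0..7
          write_word(addr, 100 + addr)
      move_block(0, 19)                        # displacement of 19: rotation needed
      print([read_word(19 + j) for j in range(NMODULES)])   # -> [100, ..., 107]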

  • Iteratively Realized Sequential Circuits

    Page(s): 54 - 66

    Synthesis techniques are presented for realizing an arbitrary synchronous flow table in the form of an array of identical modules interconnected in a regular pattern. Several types of structures and their corresponding modules are considered, and a relationship between these structures and earlier work on combinational circuits is shown.

  • Systematic Procedures for Realizing Synchronous Sequential Machines Using Flip-Flop Memory: Part II

    Page(s): 66 - 73

    This paper is Part II of a two-part study of systematic procedures for realizing synchronous sequential machines using flip-flop memory. In this study the methods of Dolotta and McCluskey and of Weiner and Smith are generalized so that they can be used to obtain directly good realizations of machines using flip-flop memory. In Part I the generalizations were simple and straightforward and required minimal changes to the basic methods. For machines using trigger flip-flop memory, or a combination of trigger and set-reset or trigger and J-K flip-flops, these generalizations usually yield significantly better realizations than those obtained from the ungeneralized versions of the methods combined with methods for transforming the resulting next-state functions into flip-flop input functions. Keeping the changes to these methods minimal was achieved at the expense of requiring that the inputs to each of the set-reset and J-K flip-flops be complementary. In the present paper, further generalizations for obtaining even better realizations using trigger, set-reset, and J-K flip-flops are developed by dropping this restriction.

  • A Logic-in-Memory Computer

    Page(s): 73 - 78

    If, as presently projected, the cost of microelectronic arrays in the future tends to reflect the number of pins on an array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced ``cache'' memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect of the cache and its control mechanism on the computer system is to make the main memory appear to have all of the processing capabilities, and almost the same performance, as the cache. Operations within the array are naturally organized as operations on blocks of data called ``sectors.'' Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders-of-magnitude increases in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained at a comparatively small dollar cost.
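
    A rough functional model of the sector-level programming view (with an assumed sector size and operation set, not the machine described in the paper) might look like the following.

      # Rough functional model, assumptions throughout: sector-level
      # arithmetic and associative search happen "inside" the array rather
      # than one word at a time through the CPU.  Here simple loops stand in
      # for the parallel in-array logic.
      SECTOR_WORDS = 16   # assumed sector size

      class LogicInMemoryArray:
          def __init__(self, n_sectors):
              self.sectors = [[0] * SECTOR_WORDS for _ in range(n_sectors)]

          def sector_add(self, dst, src_a, src_b):
              """Element-by-element add of two sectors."""
              a, b = self.sectors[src_a], self.sectors[src_b]
              self.sectors[dst] = [x + y for x, y in zip(a, b)]

          def search_equal(self, sector, key):
              """Associative search: positions in a sector matching key."""
              return [i for i, w in enumerate(self.sectors[sector]) if w == key]

      mem = LogicInMemoryArray(n_sectors=4)
      mem.sectors[0] = list(range(SECTOR_WORDS))
      mem.sectors[1] = [10] * SECTOR_WORDS
      mem.sector_add(2, 0, 1)            # sector 2 := sector 0 + sector 1
      print(mem.sectors[2])
      print(mem.search_equal(2, 15))     # -> [5]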

  • A Modified Matrix Algorithm for Determining the Complete Connection Matrix of a Switching Network

    Page(s): 78 - 79

    An efficient matrix algorithm is described which enables one to determine the complete connection matrix in only two steps.

  • A Simple Convergent Algorithm for Rapid Solution of Polynomial Equations

    Page(s): 79 - 80

    Extensions to a straightforward, always-convergent method for solving polynomial equations, given in a previous paper, are considered. The extensions consist of additional simple calculations and logic instructions that considerably improve the convergence rate when multiple roots exist or when roots are close together. It is believed that, in terms of simplicity and convergence properties, the approach is more efficient than presently available methods.

  • Mathematical ``Lower Bounds'' and the Logic Circuit Designer

    Page(s): 80 - 81

    The use of published theorems on least times to perform arithmetic operations as aids in optimizing logic circuit designs is discussed. An illustrative example is presented involving the optimum maximum fan-in of circuits in a binary adder.

  • Contributors

    Page(s): 82 - 84
  • Abstracts of Current Computer Literature

    Page(s): 85 - 94
  • Descriptor-in-Context Index

    Page(s): 94 - 99
  • Identifier index

    Page(s): 99 - 100
  • Author index

    Page(s): 100
  • Information for authors


Aims & Scope

The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field.

Meet Our Editors

Editor-in-Chief
Albert Y. Zomaya
School of Information Technologies
Building J12
The University of Sydney
Sydney, NSW 2006, Australia
http://www.cs.usyd.edu.au/~zomaya
albert.zomaya@sydney.edu.au