
IEEE Transactions on Neural Networks

Issue 3 • May 1992

Displaying Results 1 - 17 of 17
  • Analysis and verification of an analog VLSI incremental outer-product learning system

    Publication Year: 1992 , Page(s): 488 - 497
    Cited by:  Papers (7)

    An architecture is described for the microelectronic implementation of arbitrary outer-product learning rules in analog floating-gate CMOS matrix-vector multiplier networks. The weights are stored permanently on floating gates and are updated under uniform UV illumination with a general incremental analog four-quadrant outer-product learning scheme, performed locally on-chip by a single transistor per matrix element on average. From the mechanism of floating-gate relaxation under UV radiation, the authors derive the learning parameters and their dependence on the illumination level and circuit parameters. It is shown that the weight increments consist of two parts: one term contains the outer product of two externally applied learning vectors; the other represents a uniform weight decay, with a time constant originating from the floating-gate relaxation. The authors address the implementation of supervised and unsupervised learning algorithms, with emphasis on the delta rule. Experimental results from a simple implementation of the delta rule on an 8×7 linear network are included.
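The two-part weight increment described above can be sketched numerically. This is a minimal NumPy illustration of the update form (outer product plus uniform decay); eta and lam are placeholder learning and decay constants, not the parameters derived in the paper.

```python
import numpy as np

def outer_product_update(W, x, y, eta=0.01, lam=0.001):
    """One weight increment: an outer product of the two externally
    applied learning vectors plus a uniform weight-decay term.
    eta and lam are illustrative values, not the chip's parameters."""
    return W + eta * np.outer(x, y) - lam * W

# An 8x7 weight matrix, matching the size of the reported
# delta-rule experiment.
W = np.zeros((8, 7))
W = outer_product_update(W, np.ones(8), np.ones(7))
```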

  • Application of the ANNA neural network chip to high-speed character recognition

    Publication Year: 1992 , Page(s): 498 - 505
    Cited by:  Papers (35)

    A neural network with 136000 connections for recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The chip is capable of processing 1000 characters/s. The recognition system has essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.

  • A VLSI neural processor for image data compression using self-organization networks

    Publication Year: 1992 , Page(s): 506 - 518
    Cited by:  Papers (61)  |  Patents (3)

    An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm. The performance of this self-organization network and that of a conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a paralleled vector quantizer, which achieves O(1) time complexity for each quantization vector. A mixed-signal design technique, with analog circuitry to perform the neural computation and digital circuitry to process multiple-bit address information, is used. A prototype chip for a 25-D adaptive vector quantizer of 64 code words was designed, fabricated, and tested. It occupies a silicon area of 4.6 mm×6.8 mm in a 2.0 μm scalable CMOS technology and provides a computing capability as high as 3.2 billion connections/s. The experimental results for the chip and the winner-take-all circuit test structure are presented.
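One frequency-sensitive self-organization step of the kind the abstract describes can be sketched in software. This is an illustrative behavioral model, not the chip's circuitry: it uses the standard frequency-sensitive competitive-learning form, where each codeword's distortion is scaled by its win count so under-used codewords stay competitive. The codebook size and dimensionality match the reported 64-codeword, 25-D prototype; the learning rate is arbitrary.

```python
import numpy as np

def fscl_step(codebook, counts, x, lr=0.1):
    """Frequency-sensitive competitive learning: scale each codeword's
    squared distance by its usage count, move the winner toward the
    input, and increment the winner's count."""
    d = np.sum((codebook - x) ** 2, axis=1) * counts
    w = int(np.argmin(d))
    codebook[w] += lr * (x - codebook[w])
    counts[w] += 1
    return w

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 25))   # 64 codewords, 25-D vectors
counts = np.ones(64)
w = fscl_step(codebook, counts, rng.normal(size=25))
```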

  • A scalable optoelectronic neural system using free-space optical interconnects

    Publication Year: 1992 , Page(s): 404 - 413
    Cited by:  Papers (11)  |  Patents (1)

    The design of a scalable, fully connected 3-D optoelectronic neural system that uses free-space optical interconnects with silicon-VLSI-based hybrid optoelectronic circuits is proposed. The system design uses a hardware-efficient combination of pulsewidth-modulating optoelectronic neurons and pulse-amplitude-modulating electronic synapses. Low-area, high-linear-dynamic-range analog synapse and neuron circuits are proposed. SPICE circuit simulations and an experimental demonstration of the free-space optical interconnection system are included.

  • Comparison of floating gate neural network memory cells in standard VLSI CMOS technology

    Publication Year: 1992 , Page(s): 347 - 353
    Cited by:  Papers (8)

    Several floating gate MOSFET structures, for potential use as analog memory elements in neural networks, have been fabricated in a standard 2 μm double-polysilicon CMOS process. Their physical and programming characteristics are compared with each other and with similar structures reported in the literature. None of the circuits under consideration require special fabrication techniques. The criteria used to determine the structure most suitable for neural network memory applications include the symmetry of charging and discharging characteristics, programming voltage magnitudes, the area required, and the effectiveness of geometric field enhancement techniques. This work provides a layout for an analog neural network memory based on previously unexplored criteria and results. The authors have found that the best designs (a) use the poly1-to-poly2 oxide for injection; (b) need not utilize 'field enhancement' techniques; (c) use the poly1-to-diffusion oxide for a coupling capacitor; and (d) size capacitor ratios to provide a wide range of possible programming voltages.

  • Analog CMOS implementation of a multilayer perceptron with nonlinear synapses

    Publication Year: 1992 , Page(s): 457 - 465
    Cited by:  Papers (36)  |  Patents (1)

    A neurocomputer based on a high-density analog integrated circuit developed in a 3 μm CMOS technology has been built. The 1.6 mm×2.4 mm chip contains 18 neurons and 161 synapses in three layers, and provides 16 inputs and 4 outputs. The weights are stored on storage capacitors of the synapses. A formalization of the error back-propagation algorithm which allows the use of very small nonlinear synapses is shown. The influence of offset voltages in the synapses on the circuit performance is analyzed. Some experimental results are reported and discussed.

  • The design, fabrication, and test of a new VLSI hybrid analog-digital neural processing element

    Publication Year: 1992 , Page(s): 363 - 374
    Cited by:  Papers (9)  |  Patents (3)

    A hybrid analog-digital neural processing element with the time-dependent behavior of biological neurons has been developed. The hybrid processing element is designed for VLSI implementation and offers the best attributes of both analog and digital computation. Custom VLSI layout reduces the layout area of the processing element, which in turn increases the expected network density. The hybrid processing element operates at the nanosecond time scale, which enables it to produce real-time solutions to complex spatiotemporal problems found in high-speed signal processing applications. VLSI prototype chips have been designed, fabricated, and tested with encouraging results. Systems utilizing the time-dependent behavior of the hybrid processing element have been simulated and are currently in the fabrication process. Future applications are also discussed.

  • Lneuro 1.0: a piece of hardware LEGO for building neural network systems

    Publication Year: 1992 , Page(s): 414 - 422
    Cited by:  Papers (24)  |  Patents (24)

    Neural network simulations on a parallel architecture are reported. The architecture is scalable and flexible enough to be useful for simulating various kinds of networks and paradigms. The computing device is based on an existing coarse-grain parallel framework (INMOS transputers), improved with finer-grain parallel abilities through VLSI chips, and is called the Lneuro 1.0 (for LEP neuromimetic) circuit. The modular architecture of the circuit makes it possible to build various kinds of boards to match the expected range of applications or to increase the power of the system by adding more hardware. The resulting machine remains reconfigurable to accommodate a specific problem to some extent. A small-scale machine has been realized using 16 Lneuros to experimentally test the behavior of this architecture. Results are presented on an integer version of Kohonen feature maps. The speedup factor increases regularly with the number of clusters involved (to a factor of 80). Some ways to improve this family of neural network simulation machines are also investigated.
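For reference, one update of a Kohonen feature map, the algorithm the Lneuro machine was benchmarked on, can be sketched in floating point. This does not reproduce the integer version run on the hardware; the map is 1-D, and the learning rate and neighborhood radius are illustrative choices.

```python
import numpy as np

def som_step(weights, x, lr=0.5, radius=2):
    """One Kohonen feature-map update on a 1-D map: find the
    best-matching unit (minimum squared distance), then pull it and
    its grid neighbours toward the input."""
    bmu = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))
    for i in range(weights.shape[0]):
        if abs(i - bmu) <= radius:
            weights[i] += lr * (x - weights[i])
    return bmu

weights = np.zeros((10, 3))       # 10 map units, 3-D inputs
bmu = som_step(weights, np.ones(3))
```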

  • Voiced-speech representation by an analog silicon model of the auditory periphery

    Publication Year: 1992 , Page(s): 477 - 487
    Cited by:  Papers (40)

    An analog CMOS integration of a model for the auditory periphery is presented. The model consists of middle ear, basilar membrane, and hair cell/synapse modules which are derived from neurophysiological studies. The circuit realization of each module is discussed, and experimental data of each module's response to sinusoidal excitation are given. The nonlinear speech processing capabilities of the system are demonstrated using the voiced syllable |ba|. The multichannel output of the silicon model corresponds to the time-varying instantaneous firing rates of auditory nerve fibers that have different characteristic frequencies. These outputs are similar to the physiologically obtained responses. The actual implementation uses subthreshold CMOS technology and analog continuous-time circuits, resulting in a real-time, micropower device with potential applications as a preprocessor of auditory stimuli.

  • VLSI implementation of synaptic weighting and summing in pulse coded neural-type cells

    Publication Year: 1992 , Page(s): 394 - 403
    Cited by:  Papers (28)

    The hardware realization of synaptic weighting and summing using pulse-coded neural-type cells (NTCs) is presented. The basic information-processing element (the NTC) encodes information as pulse duty cycles using voltage-controlled resistors, for which a pulse duty cycle modulation technique is proposed. Summation is executed by a simple capacitor circuit acting as a current integrator. Layouts and measurements on a fabricated integrated design are included.
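The duty-cycle weighting and capacitive summation can be modeled behaviorally in a few lines. This sketch assumes ideal pulse trains and treats the integrating capacitor as a simple time average, which is only a software approximation of the circuit; the train length is arbitrary.

```python
import numpy as np

def duty_cycle_train(value, n=1000):
    """Encode a value in [0, 1] as the duty cycle of a pulse train:
    high for the first value*n samples of the period."""
    high = int(round(value * n))
    return np.concatenate([np.ones(high), np.zeros(n - high)])

def weighted_sum(inputs, weights, n=1000):
    """Each input gates a current proportional to its weight; an
    integrator averaging the summed current recovers sum(w_i * x_i)."""
    total = np.zeros(n)
    for x, w in zip(inputs, weights):
        total += w * duty_cycle_train(x, n)
    return total.mean()

s = weighted_sum([0.5, 0.25], [1.0, 2.0])   # 0.5*1.0 + 0.25*2.0 = 1.0
```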

  • A modular CMOS design of a Hamming network

    Publication Year: 1992 , Page(s): 444 - 456
    Cited by:  Papers (34)  |  Patents (1)

    A modular design approach for the CMOS implementation of a Hamming network is proposed. The Hamming network is an optimum minimum error classifier for binary patterns and is very suitable for a VLSI implementation due to its primarily feedforward structure. First, a modular chip that contains an array of N×M exclusive-NOR transconductors computes the matching scores between M encoded exemplar patterns (with N elements per exemplar) and an unknown input pattern. Then, a winner-take-all (WTA) circuit selects the exemplar pattern that most resembles the input pattern. By interconnecting multiple modular chips, the number and size of the patterns that can be stored in the network can be easily expanded. Measured experimental results are given to illustrate the performance and limitations of the hardware implementations of the Hamming network.
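The two stages described above (exclusive-NOR matching scores followed by winner-take-all) can be sketched in software; the exemplar patterns below are made up for illustration.

```python
import numpy as np

def hamming_classify(exemplars, x):
    """Matching score = number of agreeing bits (an XNOR count)
    between the input and each stored exemplar; the winner-take-all
    step picks the highest score, i.e. minimum Hamming distance."""
    scores = np.sum(exemplars == x, axis=1)
    return int(np.argmax(scores)), scores

exemplars = np.array([[1, 1, 1, 1],
                      [0, 0, 0, 0],
                      [1, 0, 1, 0]])
winner, scores = hamming_classify(exemplars, np.array([1, 1, 1, 1]))
```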

  • An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control

    Publication Year: 1992 , Page(s): 354 - 362
    Cited by:  Papers (9)

    A neural network IC based on dynamic charge injection is described. The hardware design is space- and power-efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.

  • An analog implementation of discrete-time cellular neural networks

    Publication Year: 1992 , Page(s): 466 - 476
    Cited by:  Papers (56)  |  Patents (1)

    An analog circuit structure for the realization of discrete-time cellular neural networks (DTCNNs) is introduced. The computation is done by a balanced clocked circuit based on the idea of conductance multipliers and operational transconductance amplifiers. The circuit is proposed for a one-neighborhood on a hexagonal grid, but can also be modified to larger neighborhoods and/or other grid topologies. A layout was designed for a standard CMOS process, and the corresponding HSPICE simulation results are given. A test chip containing 16 cells was fabricated, and measurements of the transfer characteristics are provided. The functional behavior is demonstrated for a simple example.

  • Functional abilities of a stochastic logic neural network

    Publication Year: 1992 , Page(s): 434 - 443
    Cited by:  Papers (35)

    The authors have studied the information-processing ability of stochastic logic neural networks, which constitute one of the pulse-coded artificial neural network families. These networks realize pseudoanalog performance with local learning rules using digital circuits, and are therefore well suited to silicon technology. The synaptic weights and the outputs of neurons in stochastic logic are represented by stochastic pulse sequences. The limited range of the synaptic weights reduces the coding noise and suppresses the degradation of memory storage capacity. To study the effect of the coding noise on an optimization problem, the authors simulate a probabilistic Hopfield model (Gaussian machine) which has a continuous neuron output function and probabilistic behavior. A proper choice of the coding-noise amplitude and scheduling improves the network's solutions of the traveling salesman problem (TSP). These results suggest that stochastic logic may be useful for implementing probabilistic as well as deterministic dynamics.
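The core stochastic-logic trick can be illustrated in a few lines: a value in [0, 1] becomes a random pulse stream whose probability of being high equals the value, and a bitwise AND of two independent streams has a duty cycle near the product of the encoded values. The stream length and seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def pulse_stream(p, n=100_000):
    """Encode a value in [0, 1] as a random pulse sequence whose
    probability of being high at each tick equals the value."""
    return rng.random(n) < p

# Multiplication in stochastic logic is a bitwise AND of independent
# streams; the duty cycle of the result approximates the product.
a = pulse_stream(0.6)
b = pulse_stream(0.5)
product = np.mean(a & b)   # close to 0.6 * 0.5 = 0.3, up to coding noise
```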

  • Integrated pulse stream neural networks: results, issues, and pointers

    Publication Year: 1992 , Page(s): 385 - 393
    Cited by:  Papers (52)  |  Patents (1)

    Results from working analog VLSI implementations of two different pulse stream neural network forms are reported. The circuits are rendered relatively invariant to processing variations, and the problem of cascadability of synapses to form large systems is addressed. A strategy for interchip communication of large numbers of neural states has been implemented in silicon and results are presented. The circuits demonstrated confront many of the issues that blight massively parallel analog systems, and offer solutions.

  • The TInMANN VLSI chip

    Publication Year: 1992 , Page(s): 375 - 384
    Cited by:  Papers (22)

    A massively parallel, all-digital, stochastic architecture, TInMANN, that acts as a Kohonen self-organizing feature map is described. A VLSI design is shown for a TInMANN neuron which fits within a small, inexpensive MOSIS TinyChip frame, yet which can be configured to build networks of arbitrary size. The neuron operates at a speed of 15 MHz, making it capable of processing 195000 three-dimensional training examples per second. Three man-months were required to synthesize the neuron and its associated level-sensitive scan logic using the OASIS silicon compiler. The ease of synthesis allowed many performance trade-offs to be examined, while the automatic testability features of the compiler helped the designers achieve 100% fault coverage of the chip. These factors served to create a fast, dense, and reliable neural chip.

  • The Mod 2 Neurocomputer system design

    Publication Year: 1992 , Page(s): 423 - 433
    Cited by:  Papers (1)  |  Patents (1)

    The Mod 2 Neurocomputer, the latest in a series of neurocomputing systems at the Naval Air Warfare Center Weapons Division, is a neural network processing system incorporating individual neural networks as subsystems in a layered hierarchical architecture. The Mod 2 is designed to support parallel processing of image data at sensor (real-time) rates. Basic concepts implemented in the Mod 2 are (1) maintaining data representations as frames of data processed as a whole at each layer, (2) a general interconnect design supporting data transfer requirements such as generation of parallel pathways, fan-up/fan-down, and feedforward and feedback, and (3) a neuroprocessing block supporting several neural network paradigms. The basis for the system implementation is the Intel 80170NX neural network processor. Examples are given of the implementation strategy for neural substructures such as the multilayer perceptron and temporal and spatiotemporal image processing, as well as the implementation of a multifunction processing system.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
