
IEEE Transactions on Neural Networks

Issue 3 • May 1993


16 articles in this issue
  • A feedforward artificial neural network based on quantum effect vector-matrix multipliers

    Page(s): 427 - 433

    The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.

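    As background, the layer computation that a vector-matrix multiplier accelerates can be sketched in a few lines; the hard-threshold binary neurons and the {-1, 0, +1} tri-state weight values below are illustrative assumptions, not the paper's quantum-effect circuit.

        import numpy as np

        # Tri-state synapse weights in {-1, 0, +1}; each row drives one binary neuron.
        W = np.array([[ 1,  1],     # both inputs weighted +1
                      [-1,  0]])    # inverting synapse on the first input

        def layer(x, W, theta):
            """One feedforward layer: a vector-matrix multiply collects the
            weighted inputs, then a hard threshold gives a binary output."""
            return (W @ x >= theta).astype(int)

        x = np.array([1, 0])
        print(layer(x, W, theta=np.array([2, 0])))  # row 0: AND(x0, x1); row 1: NOT(x0)
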
  • UV-activated conductances allow for multiple time scale learning

    Page(s): 434 - 440

    Ultraviolet (UV) photoinjection of electrons through SiO₂ provides a convenient and simple method for programming analog, nonvolatile memories in CMOS circuits. The time scales involved in the UV programming process span several orders of magnitude in programming rate, making them well suited to multiple-time-scale learning algorithms. The method requires no special processing technology. Measured characteristics of the UV photoinjection devices and experimental results from a synapse circuit built using these devices are presented. This synapse circuit includes a continuously adjustable weight, an electronic learn/hold control, and slow forgetting dynamics, while allowing an unimpeded multiplication operation at all times.

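    As a rough behavioral picture (not the measured device physics), a synapse with a learn/hold control and slow forgetting can be modeled with two widely separated time constants; the values and linear dynamics below are placeholders.

        def synapse_step(w, target, learn, dt, tau_learn=1.0, tau_forget=1e4):
            """Toy analog weight dynamics: fast adaptation toward a target
            while 'learn' is asserted, plus a slow, always-on decay.  The
            ratio tau_forget / tau_learn gives the multiple time scales."""
            if learn:
                w += (target - w) * dt / tau_learn    # fast programming
            return w - w * dt / tau_forget            # slow forgetting
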
  • An analog CMOS chip set for neural networks with arbitrary topologies

    Page(s): 441 - 444

    An analog CMOS chip set for implementations of artificial neural networks (ANNs) has been fabricated and tested. The chip set consists of two cascadable chips: a neuron chip and a synapse chip. Neurons on the neuron chips can be interconnected arbitrarily via synapses on the synapse chips, thus implementing an ANN with arbitrary topology. The neuron test chip contains an array of 4 neurons with well-defined hyperbolic tangent activation functions, which are implemented using parasitic lateral bipolar transistors. The synapse test chip is a cascadable 4×4 matrix-vector multiplier with variable matrix elements of 10-bit resolution. The propagation delay of the test chips was measured to be 2.6 μs per layer.

  • An adaptive neural processing node

    Page(s): 413 - 426

    The design and test results for two analog adaptive VLSI processing chips are described. These chips use pulse-coded signals for communication between processing nodes and analog weights for information storage. The weight-modification rule, implemented on chip, uses concepts developed by E. Oja (1982) and later extended by T. Leen et al. (1989) and T. Sanger (1989). Experimental results demonstrate that the network produces linearly separable outputs that correspond to dominant features of the inputs. Such representations allow for efficient additional neural processing. The adaptation rule also includes a small number of fixed inputs and a variable lateral inhibition mechanism. Experimental results from the first chip show the operation of the function blocks that make up a single processing node: the forward transfer function, weight modification, and inhibition. Experimental results from the second chip show the ability of an array of processing elements to extract important features from the input data.

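    The Oja (1982) rule cited above has a standard textbook form; a minimal sketch (the learning rate and synthetic data are placeholders), under which the weight vector converges to the dominant feature, the principal component, of the inputs:

        import numpy as np

        def oja_step(w, x, eta=0.01):
            """One step of Oja's rule: Hebbian growth with an implicit weight
            decay, so w converges to the input data's principal component."""
            y = w @ x                       # linear neuron output
            return w + eta * y * (x - y * w)

        rng = np.random.default_rng(0)
        w = rng.normal(size=3)
        for _ in range(5000):
            x = rng.multivariate_normal([0, 0, 0], [[3, 1, 0], [1, 2, 0], [0, 0, 1]])
            w = oja_step(w, x)
        print(w / np.linalg.norm(w))        # ~ dominant eigenvector of the covariance
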
  • A charge-based on-chip adaptation Kohonen neural network

    Page(s): 462 - 469

    A charge-based Kohonen neural network circuit with on-chip synapse adaptation is proposed. The approach offers low power dissipation and high density, owing to its charge-transfer mechanism and novel compact device configurations. The prototype chip, which contains 12×10 synapses at a density of 190 synapses/mm², was fabricated in a 2-μm standard CMOS technology. Experimental results from the prototype chip demonstrate successful unsupervised learning and classification, as theoretically predicted.

  • Analog implementation of a Kohonen map with on-chip learning

    Page(s): 456 - 461

    Kohonen maps are self-organizing neural networks that classify and quantize n-dimensional data onto a one- or two-dimensional array of neurons. Most applications of Kohonen maps use simulations on conventional computers, possibly coupled to hardware accelerators or dedicated neural computers. The small number of distinct operations involved in the combined learning and classification process, however, makes the Kohonen model particularly suited to a dedicated VLSI implementation that takes full advantage of the parallelism and speed obtainable on chip. A fully analog implementation of a one-dimensional Kohonen map, with on-chip learning and refreshing of on-chip analog synaptic weights, is proposed. The small number of transistors in each cell allows a high degree of parallelism in the operations, which greatly improves computation speed compared to other implementations. The storage of analog synaptic weights, based on the principle of current copiers, is emphasized, and it is shown that this technique can be used successfully for the realization of VLSI Kohonen maps.

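    For reference, the combined learning-and-classification loop of a one-dimensional Kohonen map really is small; a software sketch (map size, rates, and neighborhood width are illustrative, not the chip's parameters):

        import numpy as np

        def kohonen_step(W, x, eta=0.1, sigma=2.0):
            """One learning step: find the best-matching neuron (classification),
            then pull it and its grid neighbors toward the input (learning)."""
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # winner index
            d = np.arange(len(W)) - bmu                      # distance along the 1-D map
            h = np.exp(-d**2 / (2 * sigma**2))               # neighborhood function
            return W + eta * h[:, None] * (x - W)

        rng = np.random.default_rng(1)
        W = rng.uniform(size=(16, 2))                        # 16 neurons quantizing 2-D data
        for _ in range(2000):
            W = kohonen_step(W, rng.uniform(size=2))
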
  • Silicon retina with correlation-based, velocity-tuned pixels

    Page(s): 529 - 541

    A functional two-dimensional silicon retina that computes a complete set of local direction-selective outputs is reported. The chip's motion computation uses unidirectional delay lines as tuned filters for moving edges. Photoreceptors detect local changes in image intensity, and the outputs from these photoreceptors are coupled into the delay line, where they propagate with a particular speed in one direction. If the velocity of the moving edges matches that of the delay line, the signal on the delay line is reinforced. The output of each pixel is the power in the delay-line signal, computed within each pixel. This power computation provides the essential nonlinearity for velocity selectivity. The delay-line architecture differs from the usual pairwise correlation models in that motion information is aggregated over an extended spatiotemporal range; as a result, the detectors are sensitive to motion over a wide range of spatial frequencies. The design of functional one- and two-dimensional silicon retinas with direction-selective, velocity-tuned pixels is described. Pixels with three hexagonal directions of motion selectivity are approximately (225 μm)² in area in a 2-μm CMOS technology and consume less than 5 μW of power.

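    A toy calculation shows why the delay-line output is velocity tuned; the pulse shape and units here are invented for illustration. Edges moving at the line's propagation speed reach the end of the line in coincidence, so the power of the summed signal peaks at the tuned speed:

        import numpy as np

        def output_power(edge_times, positions, v_line, width=0.05):
            """Each photoreceptor at position p injects a pulse into the delay
            line when it sees an edge; the pulse reaches the end of the line
            after (L - p) / v_line.  Coincident arrivals reinforce, so the
            integrated squared signal (power) is largest at the tuned speed."""
            L = max(positions)
            arrivals = np.array([t + (L - p) / v_line
                                 for t, p in zip(edge_times, positions)])
            ts = np.linspace(arrivals.min() - 1, arrivals.max() + 1, 4000)
            pulses = np.exp(-((ts[:, None] - arrivals) ** 2) / (2 * width**2))
            return np.trapz(pulses.sum(axis=1) ** 2, ts)

        pos = np.arange(5.0)
        for v in (0.5, 1.0, 2.0):   # stimulus speeds; the line is tuned to 1.0
            print(v, output_power(edge_times=pos / v, positions=pos, v_line=1.0))
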
  • A single 1.5-V digital chip for a 10⁶-synapse neural network

    Page(s): 387 - 393

    A digital-chip architecture for a 10⁶-synapse neural network is proposed. It runs on a 1.5-V dry cell to allow its use in portable equipment. An on-chip DRAM cell array stores synapse weights digitally to provide easy programmability and automatic refreshing. A pitch-matched interconnection and a combinational unit circuit for summing products allow a tight layout. A dynamic data-transfer circuit and the 1.5-V operation of the entire chip reduce power dissipation, yet the parallel processing provides high speed even at the 1.5-V supply. A power dissipation of 75 mW and a processing speed of 1.37 giga connections per second are predicted for the chip. The memory and the processing circuits can be integrated on a 15.4-mm×18.6-mm chip by using a 0.5-μm CMOS design rule. A scaled-down version of the chip that has an 8-kb DRAM cell array was fabricated, and its operation was confirmed.

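    Taking the abstract's own figures, the stated throughput and the synapse count together imply on the order of a thousand full-network evaluations per second:

        synapses = 10**6        # on-chip synapse count
        cps      = 1.37e9       # stated connections per second
        print(cps / synapses)   # = 1370.0 full-network passes per second
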
  • A multilevel neural network for A/D conversion

    Page(s): 470 - 483

    A multilevel neuron is introduced, and its use in a neural network multilevel A/D converter is shown. An energy function suited to multilevel neural networks is defined, for which local-minima problems in A/D conversion are removed by modifying the method proposed by B.W. Lee and B.J. Sheu (1989, 1991). This energy function extends others in the sense that it allows more than two discrete levels in the neuron output and threshold settings. It is shown how to build and implement multilevel nonlinearities, and a way of implementing a multilevel neural network for A/D conversion that takes advantage of BiCMOS technologies is demonstrated. Computer simulations are included to illustrate how the design functions, and measurements of the individual-component VLSI chips for multilevel A/D conversion are presented to show how each component operates.

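    For orientation, the well-known two-level energy function for neural A/D conversion (Tank and Hopfield, 1986), which multilevel formulations such as this one generalize, can be written in LaTeX form, for analog input x and neuron outputs V_i in [0, 1], as:

        E = \frac{1}{2}\Bigl(x - \sum_{i=0}^{N-1} 2^{i} V_i\Bigr)^{2}
            - \sum_{i=0}^{N-1} 2^{2i-1}\, V_i (V_i - 1)

    The second sum cancels the neuron self-connection terms that arise from expanding the square and pushes each V_i toward 0 or 1, so the minimum encodes x as an N-bit binary word; the multilevel extension allows each neuron more than two discrete output levels.
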
  • A CMOS analog adaptive BAM with on-chip learning and weight refreshing

    Page(s): 445 - 455

    The transconductance-mode (T-mode) approach to analog continuous-time neural network hardware is extended to include on-chip Hebbian learning and on-chip analog weight storage. The demonstration vehicle is a 5+5-neuron bidirectional associative memory (BAM) prototype fabricated in a standard 2-μm double-metal double-polysilicon CMOS process. Mismatches and nonidealities in learning neural hardware are often assumed to be noncritical if on-chip learning is available, because they will be implicitly compensated; however, mismatches in the learning circuits themselves cannot always be compensated. This mismatch is especially important if the learning circuits use transistors operating in weak inversion. The authors estimate the expected mismatch between learning circuits in the BAM network prototype and evaluate its effect on learning performance, using theoretical computations and Monte Carlo HSPICE simulations. These theoretical predictions are verified against experimentally measured results on the test-vehicle prototype.

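    The underlying BAM scheme is classical Hebbian outer-product learning with bidirectional recall (Kosko); a software sketch sized like the 5+5 prototype (the bipolar pattern pair itself is made up):

        import numpy as np

        def bam_train(pairs):
            """Hebbian (outer-product) BAM weights: W = sum_k a_k b_k^T,
            for bipolar {-1, +1} pattern pairs."""
            return sum(np.outer(a, b) for a, b in pairs)

        def bam_recall(W, a, steps=10):
            """Bidirectional recall: iterate b = sgn(W^T a), a = sgn(W b)."""
            for _ in range(steps):
                b = np.sign(W.T @ a)
                a = np.sign(W @ b)
            return a, b

        pairs = [(np.array([1, -1, 1, -1, 1]), np.array([1, 1, -1, -1, 1]))]
        W = bam_train(pairs)
        print(bam_recall(W, np.array([1, -1, 1, -1, -1])))  # noisy probe -> stored pair
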
  • A fuzzy inference engine in nonlinear analog mode and its application to a fuzzy logic control

    Page(s): 496 - 522

    In this tutorial, the utility of a fuzzy system is demonstrated through a broad overview, emphasizing analog-mode hardware, along with a discussion of the author's original work. First, the difference between deterministic words and fuzzy words is explained, as well as fuzzy logic itself. The description of a system by mathematical equations, linguistic rules, or parameter distributions (e.g., neural networks) is discussed. Fuzzy inference and defuzzification algorithms are presented, and their hardware implementation is discussed. The fuzzy logic controller was used to stabilize a glass of wine balanced on a fingertip and a mouse moving around on a plate at the tip of an inverted pendulum.

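    The inference-plus-defuzzification pipeline the tutorial covers can be sketched generically; below is a minimal min-max (Mamdani-style) inference with centroid defuzzification, using made-up triangular membership functions rather than the author's hardware rules:

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with corners a < b < c."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

        def mamdani(x_in, rules, y):
            """Min-max fuzzy inference, then centroid defuzzification.
            Each rule pairs an antecedent mf with a consequent mf over y."""
            agg = np.zeros_like(y)
            for ant, cons in rules:
                firing = ant(x_in)                              # rule firing strength
                agg = np.maximum(agg, np.minimum(firing, cons)) # clip, then aggregate
            return np.trapz(agg * y, y) / np.trapz(agg, y)      # centroid

        y = np.linspace(-1, 1, 201)
        rules = [(lambda x: tri(x, -2, -1, 0), tri(y, -1, -0.5, 0)),  # "if input negative..."
                 (lambda x: tri(x,  0,  1, 2), tri(y,  0,  0.5, 1))]  # "if input positive..."
        print(mamdani(0.8, rules, y))
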
  • A generic systolic array building block for neural networks with on-chip learning

    Page(s): 400 - 407

    Neural networks require VLSI implementations for on-board systems. Size and real-time considerations show that on-chip learning is necessary for a large range of applications. A flexible digital design is preferred here to more compact analog or optical realizations. As opposed to many current implementations, the two-dimensional systolic array system presented is an attempt to define a novel computer architecture inspired by neurobiology: it is composed of generic building blocks for basic operations rather than predefined neural models. A full-custom VLSI design of a first prototype has demonstrated the efficacy of this approach, and a complete board dedicated to Hopfield's model has been designed using these building blocks. Beyond the specific application presented, the underlying principles can be used to design efficient hardware for most neural network models.

  • A programmable analog VLSI neural network processor for communication receivers

    Page(s): 484 - 495

    An analog VLSI neural network processor was designed and fabricated for communication receiver applications. It does not require prior estimation of the channel characteristics. A powerful channel equalizer was implemented with this processor chip configured as a four-layered perceptron network. The compact synapse cell is realized with an enhanced wide-range Gilbert multiplier circuit. The output neuron consists of a linear current-to-voltage converter and a sigmoid function generator with a controllable voltage gain. Network training is performed by the modified Kalman neuro-filtering algorithm to speed up convergence for communication channels with intersymbol interference and white Gaussian noise. The learning process is carried out on a companion DSP board, which also stores the synapse weights for later use by the chip. The VLSI neural network processor chip occupies a silicon area of 4.6 mm×6.8 mm and was fabricated in a 2-μm double-polysilicon CMOS technology. System analysis and experimental results are presented.

  • The design of a neuro-microprocessor

    Page(s): 394 - 399

    The architecture of a neuro-microprocessor is presented. This processor was designed using the results of careful analysis of a set of applications and extensive simulation of moderate-precision arithmetic for back-propagation networks. Simulated performance results and test-chip results for the processor are presented. This work is an important intermediate step in the development of a connectionist network supercomputer.

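    "Moderate-precision arithmetic" typically means fixed-point weights and activations; a sketch of the kind of quantization such simulations evaluate (the 16-bit word and 8-bit fraction here are assumptions, not the processor's actual widths):

        import numpy as np

        def quantize(x, frac_bits=8, word_bits=16):
            """Round to a two's-complement fixed-point grid: 'word_bits'
            total, 'frac_bits' fractional -- the widths are placeholders."""
            scale = 2 ** frac_bits
            lo, hi = -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1
            return np.clip(np.round(x * scale), lo, hi) / scale

        w = np.array([0.123456, -1.5, 3.9999])
        print(quantize(w))   # weights as a fixed-point datapath would store them
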
  • Silicon auditory processors as computer peripherals

    Page(s): 523 - 528

    Several research groups are implementing analog integrated-circuit models of biological auditory processing. The outputs of these circuit models have taken several forms, including video format for monitor display, simple scanned output for oscilloscope display, and parallel analog outputs suitable for data-acquisition systems. Here, an alternative output method for silicon auditory models, suitable for direct interface to digital computers, is described. As a prototype of this method, an integrated-circuit model of temporal adaptation in the auditory nerve, functioning as a peripheral to a workstation running Unix, is presented. Data are given from a working hybrid system that includes the auditory model, a digital interface, and asynchronous software; this system produces a real-time X-window display of the response of the auditory nerve model.

  • The pRAM: an adaptive VLSI chip

    Page(s): 408 - 412

    The pRAM (probabilistic RAM) is a nonlinear stochastic device with neuron-like behavior. The pRAM is realizable in hardware, and the third-generation VLSI pRAM chip is described. This chip is adaptive, since learning algorithms using reinforcement training have been incorporated on-chip; it is also adaptive with respect to the interconnections between neurons. Results are presented from a small net of pRAMs performing a pattern-recognition task using reinforcement training.

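    The pRAM model itself is compact: the binary input vector addresses a RAM location holding a firing probability, and the output is a random bit drawn with that probability. A minimal sketch (sizes and initialization are arbitrary):

        import numpy as np

        class PRAM:
            """Probabilistic RAM neuron: an n-bit input addresses one of 2**n
            memory locations, each storing a firing probability; the output
            is a Bernoulli sample of the addressed probability."""
            def __init__(self, n, rng=None):
                self.rng = rng or np.random.default_rng()
                self.alpha = self.rng.uniform(size=2 ** n)  # one probability per address
            def __call__(self, bits):
                u = int("".join(map(str, bits)), 2)         # input pattern -> RAM address
                return int(self.rng.random() < self.alpha[u])

        p = PRAM(n=3)
        print([p([1, 0, 1]) for _ in range(10)])            # stochastic, neuron-like firing
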

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
