
IEEE Transactions on Neural Networks

Issue 2 • March 1991

  • Comments on "parallel algorithms for finding a near-maximum independent set of a circle graph" [with reply]

    Page(s): 328 - 329

    The author refers to the work of Y. Takefuji et al. (see ibid., vol. 1, pp. 263-267, Sept. 1990), which is concerned with the problem of RNA secondary structure prediction, and draws the reader's attention to his own model and experiments in training neural networks on small tRNA subsequences. The author acknowledges that Takefuji et al. outline an elegant way to map the problem onto neural architectures, but suggests that such mappings can be augmented with empirical knowledge (e.g., free-energy values of base pairs and substructures) and with the ability to learn. In their reply, Y. Takefuji and K.-C. Lee hold that the necessity of a learning capability for RNA secondary structure prediction is questionable. They believe that the task is to build a robust parallel algorithm that takes more thermodynamic properties into account in the model.

  • Pulse-stream VLSI neural networks mixing analog and digital techniques

    Page(s): 193 - 204

    The pulse-stream technique, which represents neural states as sequences of pulses, is reviewed. Several general issues are raised, and generic methods appraised, for pulsed encoding, arithmetic, and intercommunication schemes. Two contrasting synapse designs are presented and compared. The first is based on a fully analog computational form in which the only digital component is the signaling mechanism itself: asynchronous, pulse-rate-encoded digital voltage pulses. In this circuit, multiplication occurs in the voltage/current domain. The second design uses more conventional digital memory for weight storage, with synapse circuits based on pulse stretching. Integrated circuits implementing up to 15,000 analog, fully programmable synaptic connections are described. A demonstrator project is presented in which a small robot localization network is implemented using asynchronous, analog, pulse-stream devices.
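
    The arithmetic behind pulse-rate encoding is easy to illustrate in software. Below is a minimal sketch, assuming a stochastic pulse train gated by a signal whose duty cycle encodes the weight; this gating scheme is an illustrative assumption, not the synapse circuits of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def pulse_multiply(rate, weight, ticks=100_000):
            # presynaptic pulse stream: a pulse per tick with probability `rate`
            pre = rng.random(ticks) < rate
            # weight expressed as the duty cycle of a gating signal (assumption)
            gate = rng.random(ticks) < weight
            # the rate of the surviving pulses approximates rate * weight
            return (pre & gate).mean()

        print(pulse_multiply(0.6, 0.5))   # ~ 0.30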

  • Neural networks-then and now

    Page(s): 316 - 318

    The emergence of neural networks as a significant subdiscipline with corresponding attempts at application to engineering problems is traced back to the 1960s, when Frank Rosenblatt, a Cornell University psychologist, showed by mathematical analysis, digital computer simulation, and experiments with special-purpose parallel analog systems that neural networks with variable-weight connections could be trained to classify spatial patterns into prespecified categories. In his attempts to provide biologically plausible explanations of the function of the central nervous system, he investigated relatively simple networks that were amenable to analysis and more complex networks whose behavior could be predicted only in terms of gross characteristics. He assembled a sizable group involving theoreticians, experimentalists, technologists, and, later, biologists. His work caught the imagination of the press and led to the wave of febrile activity that subsided at the end of that decade.

  • Adaptive nearest neighbor pattern classification

    Page(s): 318 - 322

    A variant of nearest-neighbor (NN) pattern classification and supervised learning by learning vector quantization (LVQ) is described. The decision surface mapping method (DSM) is a fast supervised learning algorithm and is a member of the LVQ family of algorithms. A relatively small number of prototypes are selected from a training set of correctly classified samples. The training set is then used to adapt these prototypes to map the decision surface separating the classes. This algorithm is compared with NN pattern classification, learning vector quantization, and a two-layer perceptron trained by error backpropagation. When the class boundaries are sharply defined (i.e., no classification error in the training set), the DSM algorithm outperforms these methods with respect to error rates, learning rates, and the number of prototypes required to describe class boundaries.
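
    The abstract does not spell out the DSM update; the following is a minimal LVQ-style sketch, assuming the common scheme of adapting prototypes only on misclassification (the learning rate, schedule, and exact correction rule are assumptions here, not the published algorithm).

        import numpy as np

        def dsm_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
            # prototypes: initial codebook vectors; proto_labels: their classes (array)
            P = prototypes.copy()
            for _ in range(epochs):
                for x, label in zip(X, y):
                    dist = np.linalg.norm(P - x, axis=1)
                    winner = np.argmin(dist)
                    if proto_labels[winner] != label:   # adapt only when wrong
                        # push the offending prototype away from the sample
                        P[winner] -= lr * (x - P[winner])
                        # pull the nearest correct-class prototype toward it
                        same = np.where(proto_labels == label)[0]
                        target = same[np.argmin(dist[same])]
                        P[target] += lr * (x - P[target])
            return P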

  • Multiscale optimization in neural nets

    Page(s): 263 - 274

    One way to speed up convergence in a large optimization problem is to introduce a smaller, approximate version of the problem at a coarser scale and to alternate between relaxation steps for the fine-scale and coarse-scale problems. Such an optimization method for neural networks governed by quite general objective functions is presented. At the coarse scale, there is a smaller approximating neural net which, like the original net, is nonlinear and has a nonquadratic objective function. The transitions and information flow from fine to coarse scale and back do not disrupt the optimization, and the user need only specify a partition of the original fine-scale variables. Thus, the method can be applied easily to many problems and networks. There is generally about a fivefold improvement in estimated cost under the multiscale method. In the networks to which it was applied, a nontrivial speedup by a constant factor of between two and five was observed, independent of problem size. Further improvements in computational cost are very likely to be available, especially for problem-specific multiscale neural net methods.
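
    A toy rendering of the fine/coarse alternation is given below, assuming that each coarse variable is a single correction shared by one block of the user-specified partition; the paper's coarse-scale net is itself nonlinear with a nonquadratic objective, which this sketch does not reproduce.

        import numpy as np

        def multiscale_minimize(grad, x, blocks, lr=0.01,
                                outer=50, fine=5, coarse=5):
            # blocks: list of index arrays partitioning the variables of x
            for _ in range(outer):
                for _ in range(fine):        # fine-scale relaxation
                    x -= lr * grad(x)
                c = np.zeros(len(blocks))    # one coarse variable per block
                for _ in range(coarse):      # coarse-scale relaxation
                    xc = x.copy()
                    for k, idx in enumerate(blocks):
                        xc[idx] += c[k]      # shared correction per block
                    g = grad(xc)
                    c -= lr * np.array([g[idx].sum() for idx in blocks])
                for k, idx in enumerate(blocks):
                    x[idx] += c[k]           # inject the coarse correction
            return x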

  • Neuromorphic learning of continuous-valued mappings from noise-corrupted data

    Page(s): 294 - 301

    The effect of noise on the learning performance of the backpropagation algorithm is analyzed. A selective sampling of the training set is proposed to maximize the learning of control laws by backpropagation when the data have been corrupted by noise. The training scheme is applied to the nonlinear control of a cart-pole system in the presence of noise. The neural computation provides the neurocontroller with good noise-filtering properties. In the presence of plant noise, the neurocontroller is found to be more stable than the teacher. A novel perspective on the application of neural network technology to control engineering is presented.

  • VLSI implementation of ART1 memories

    Page(s): 214 - 221

    A hardware implementation of long-term memory and short-term memory for binary-input adaptive resonance theory (ART1) neural networks is presented. This implementation is based on the chemical-electrical interactions in real neurons that are known to control the release of chemical materials from the axon, which in turn modulate the conductances of synapses. An axon-synapse-tree structure is introduced to achieve bottom-up long-term memory. The tree is realized by voltage modulation of synapse conductances. VLSI circuits are developed to realize the different functions of ART memories.

  • A simple neuron servo

    Page(s): 248 - 251

    A simple servo controller built from components having neuronlike features is described. This VLSI servo controller uses pulses for control and is orders of magnitude smaller than a conventional system. The basic circuit elements are described. A key element is a component with neuronlike capability that takes voltages as inputs and generates a pulse train as output. It is shown how the circuits are combined to form a proportional-derivative controller. The advantage of using a pulsed output representation to improve slow-speed operation of a friction-limited system is demonstrated. The utility of exploiting parallelism, aggregation, and redundancy to improve system-level performance given imprecise low-level components is discussed. Experimental results illustrate the properties of the system compared with conventional controllers.

  • Current-mode subthreshold MOS circuits for analog VLSI neural systems

    Page(s): 205 - 213

    An overview of the current-mode approach for designing analog VLSI neural systems in subthreshold CMOS technology is presented. Emphasis is given to design techniques at the device level using the current-controlled current conveyor and the translinear principle. Circuits for associative memory and silicon retina systems are used as examples. The design methodology and how it relates to actual biological microcircuits are discussed.

  • A real-time neural system for color constancy

    Page(s): 237 - 247

    A neural network approach to the problem of color constancy is presented. Various algorithms based on Land's retinex theory are discussed with respect to neurobiological parallels, computational efficiency, and suitability for VLSI implementation. The efficiency of one algorithm is improved by the application of resistive grids and is tested in computer simulations; the simulations make clear the strengths and weaknesses of the algorithm. A novel extension to the algorithm is developed to address its weaknesses. An electronic system that is based on the original algorithm and that operates at video rates was built using subthreshold analog CMOS VLSI resistive grids. The system displays color constancy abilities and qualitatively mimics aspects of human color perception.
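
    For the resistive-grid idea, a minimal single-channel sketch follows, assuming the common retinex-style approximation in which the grid relaxes the log image toward a smooth illuminant estimate that is then subtracted; the coupling and leak constants are illustrative assumptions, not the paper's circuit values.

        import numpy as np

        def retinex_grid(log_img, coupling=0.2, leak=0.02, steps=300):
            # s relaxes on a resistive grid: neighbor coupling smooths it,
            # a weak leak toward the input keeps the estimate anchored
            s = log_img.copy()
            for _ in range(steps):
                lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                       np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4.0 * s)
                s += coupling * lap + leak * (log_img - s)
            return log_img - s    # approximate log reflectance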

  • K-winner networks

    Page(s): 310 - 315

    A special class of mutually inhibitory networks is analyzed, and parameters for reliable K-winner performance are presented. The network dynamics are modeled using interactive activation, and results are compared with the sigmoid model. For equal external inputs, network parameters that select the units with the larger initial activations (the network converges to the nearest stable state) are derived. Conversely, for equal initial activations, networks that select the units with larger external inputs (the network converges to the lowest energy stable state) are derived. When initial activations are mixed with external inputs, anomalous behavior results. These discrepancies are analyzed with several examples. Restrictions on initial states are derived which ensure accurate K-winner performance when unequal external inputs are used.
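
    The mutual-inhibition dynamics can be sketched directly, assuming a leaky interactive-activation update with activations clipped to [0, 1]; the parameter window quoted in the comments is illustrative, whereas the paper derives exact conditions.

        import numpy as np

        def k_winner(ext, a0, beta, steps=500, dt=0.05):
            # each unit is driven by its external input and inhibited by
            # the summed activity of all other units
            a = a0.copy()
            for _ in range(steps):
                inhib = beta * (a.sum() - a)
                a = np.clip(a + dt * (ext - inhib - a), 0.0, 1.0)
            return a

        # illustrative window: if ext > 1 + beta*(K-1) for the K strongest
        # inputs and ext < beta*K for the rest, exactly K units saturate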

  • Gradient methods for the optimization of dynamical systems containing neural networks

    Page(s): 252 - 262

    An extension of the backpropagation method, termed dynamic backpropagation, which can be applied in a straightforward manner for the optimization of the weights (parameters) of multilayer neural networks is discussed. The method is based on the fact that gradient methods used in linear dynamical systems can be combined with backpropagation methods for neural networks to obtain the gradient of a performance index of nonlinear dynamical systems. The method can be applied to any complex system which can be expressed as the interconnection of linear dynamical systems and multilayer neural networks. To facilitate the practical implementation of the proposed method, emphasis is placed on the diagrammatic representation of the system which generates the gradient of the performance function.
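
    As a toy instance of the idea, consider a scalar tanh controller in feedback with a linear plant and a quadratic performance index; the forward sensitivity S = dx/dw plays the role of the gradient-generating system in the paper's diagrams. The plant, controller, and cost below are assumptions chosen for illustration.

        import numpy as np

        def dynamic_backprop_gradient(A, b, w, x0, T):
            # plant: x(k+1) = A x(k) + b u(k); controller: u = tanh(w . x)
            # cost: J = 0.5 * sum_k ||x(k)||^2; returns J and dJ/dw
            n = len(x0)
            x, S = x0.copy(), np.zeros((n, n))        # S[i, j] = dx_i / dw_j
            J, g = 0.0, np.zeros(n)
            for _ in range(T):
                J += 0.5 * (x @ x)
                g += S.T @ x                          # dJ/dw += (dx/dw)^T x
                u = np.tanh(w @ x)
                du_dw = (1.0 - u**2) * (x + S.T @ w)  # total derivative of u
                S = A @ S + np.outer(b, du_dw)        # sensitivity propagation
                x = A @ x + b * u
            return J, g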

  • A tree-structured adaptive network for function approximation in high-dimensional spaces

    Page(s): 285 - 293

    Nonlinear function approximation is often solved by finding a set of coefficients for a finite number of fixed nonlinear basis functions. However, if the input data are drawn from a high-dimensional space, the number of required basis functions grows exponentially with dimension, leading many to suggest the use of adaptive nonlinear basis functions whose parameters can be determined by iterative methods. The author proposes a technique based on the idea that for most of the data, only a few dimensions of the input may be necessary to compute the desired output function. Additional input dimensions are incorporated only where needed. The learning procedure grows a tree whose structure depends upon the input data and the function to be approximated. This technique has a fast learning algorithm with no local minima once the network shape is fixed, and it can be used to reduce the number of required measurements in situations where there is a cost associated with sensing. Three examples are given: controlling the dynamics of a simulated planar two-joint robot arm, predicting the dynamics of the chaotic Mackey-Glass equation, and predicting pixel values in real images from pixel values above and to the left.
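
    A toy tree-growing sketch in this spirit appears below: fit a constant, and only where the residual error is large, split on the single most helpful input dimension and recurse, so extra dimensions are consulted only where the function demands them. This is a generic recursive-partitioning sketch, not the author's exact network or learning rule.

        import numpy as np

        def grow_tree(X, y, tol=0.01, max_depth=6, depth=0):
            c = y.mean()
            if ((y - c) ** 2).mean() < tol or depth >= max_depth or len(y) < 4:
                return ('leaf', c)
            best = None
            for dim in range(X.shape[1]):        # try one dimension at a time
                t = np.median(X[:, dim])
                left = X[:, dim] <= t
                if left.all() or (~left).all():
                    continue
                e = (((y[left] - y[left].mean()) ** 2).sum() +
                     ((y[~left] - y[~left].mean()) ** 2).sum()) / len(y)
                if best is None or e < best[0]:
                    best = (e, dim, t, left)
            if best is None:
                return ('leaf', c)
            _, dim, t, left = best               # recurse only where needed
            return ('split', dim, t,
                    grow_tree(X[left], y[left], tol, max_depth, depth + 1),
                    grow_tree(X[~left], y[~left], tol, max_depth, depth + 1))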

  • Robust stability analysis of adaptation algorithms for single perceptron

    Page(s): 325 - 328

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for a single perceptron is addressed. The case in which the same input pattern is presented in each adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust; however, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
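
    The modification can be pictured as a decaying gain. A minimal sketch, assuming a normalized Widrow-Hoff rule with 1/t reduction factors on a fixed input pattern; the paper's exact schedule and bounds differ.

        import numpy as np

        def robust_adapt(x, targets, w0, gamma=1.0):
            # same pattern x every cycle; targets[t] are noisy desired outputs
            w = w0.copy()
            xnorm = x @ x
            for t, d in enumerate(targets):
                mu = gamma / (1.0 + t)       # time-varying reduction factor (assumed decay)
                e = d - w @ x                # error on the fixed pattern
                w = w + mu * e * x / xnorm   # normalized Widrow-Hoff step
            return w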

  • Orthogonal least squares learning algorithm for radial basis function networks

    Page(s): 302 - 309

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first randomly choosing some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks; in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output, and the procedure does not suffer from numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
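
    The selection step is concrete enough to sketch. Below, each candidate center is a training point, its regressor is orthogonalized against those already chosen, and the candidate explaining the most remaining output energy is kept; the Gaussian width and the candidate pool are assumptions of this sketch.

        import numpy as np

        def ols_select_centers(X, d, width, n_centers):
            # candidate regressors: one Gaussian column per training point
            P = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                       / (2.0 * width ** 2))
            selected, Q = [], []
            for _ in range(n_centers):
                best_j, best_q, best_err = None, None, -1.0
                for j in range(P.shape[1]):
                    if j in selected:
                        continue
                    q = P[:, j].copy()
                    for qk in Q:                 # Gram-Schmidt vs. chosen regressors
                        q -= (qk @ P[:, j]) / (qk @ qk) * qk
                    qq = q @ q
                    if qq < 1e-12:               # skip near-collinear candidates
                        continue
                    err = (q @ d) ** 2 / qq      # output energy this center explains
                    if err > best_err:
                        best_j, best_q, best_err = j, q, err
                selected.append(best_j)
                Q.append(best_q)
            return selected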

  • Recurrent correlation associative memories

    Page(s): 275 - 284

    A model for a class of high-capacity associative memories is presented. Since they are based on two-layer recurrent neural networks and their operations depend on the correlation measure, these associative memories are called recurrent correlation associative memories (RCAMs). The RCAMs are shown to be asymptotically stable in both synchronous and asynchronous (sequential) update modes as long as their weighting functions are continuous and monotone nondecreasing. In particular, a high-capacity RCAM named the exponential correlation associative memory (ECAM) is proposed. The asymptotic storage capacity of the ECAM scales exponentially with the length of memory patterns, and it meets the ultimate upper bound for the capacity of associative memories. The asymptotic storage capacity of the ECAM with limited dynamic range in its exponentiation nodes is found to be proportional to that dynamic range. Design and fabrication of a 3-μm CMOS ECAM chip is reported. The prototype chip can store 32 24-bit memory patterns, and it performs one associative recall operation every 3 μs or faster. An application of the ECAM chip to vector quantization is also described.
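
    The ECAM recall dynamics are compact enough to state directly. A sketch, assuming bipolar patterns and an exponentiation gain a; the chip realizes the same update with a limited dynamic range in the exponentiation nodes.

        import numpy as np

        def ecam_recall(x, U, a=1.0, steps=20):
            # U: stored +/-1 patterns, one per row; x: +/-1 probe vector
            for _ in range(steps):
                weights = np.exp(a * (U @ x))              # exponential of correlations
                x_new = np.where(weights @ U >= 0, 1, -1)  # weighted vote, thresholded
                if np.array_equal(x_new, x):               # fixed point reached
                    break
                x = x_new
            return x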

  • Analog VLSI model of binaural hearing

    Page(s): 230 - 236

    The stereausis model of biological auditory processing is proposed as a representation that encodes both binaural and spectral information in a unified framework. A working analog VLSI chip that implements this model of early auditory processing in the brain is described. The chip is a 100,000-transistor integrated circuit that computes the stereausis representation in real time, using continuous-time analog processing. The chip receives two audio inputs, representing sound entering the two ears, computes the stereausis representation, and generates output signals that can directly drive a color CRT display. Outputs from the chip for a variety of artificial and speech stimuli are shown.

  • A programmable analog neural network processor

    Page(s): 222 - 229

    An analog neural network breadboard consisting of 256 neurons and 2048 programmable synaptic weights of 5 bits each is constructed and tested. The heart of the processor is an array of custom-programmable synapse (resistor) chips on a reconfigurable neuron board. The analog bandwidth of the system is 90 kHz. The breadboard is used to demonstrate the application of neural network learning to the problem of real-time adaptive mirror control. The processor controls 21 actuators of an adaptive mirror with a step-response settling time of 5 ms. The demonstration verified that it is possible to modify the control law of the high-speed analog loop using neural network training without stopping the control loop.

  • Performance and generalization of the classification figure of merit criterion function

    Page(s): 322 - 325

    A criterion function for training neural networks, the classification figure of merit (CFM), introduced by J.B. Hampshire and A.H. Waibel (IEEE Trans. Neural Networks, vol. 1, pp. 216-218, June 1990), is studied. It is shown that this criterion function has some highly desirable properties. CFM has optimal training-set performance, which is related (but not equivalent) to its monotonicity. However, there is no reason to expect generalization with this criterion function to be substantially better than that of the standard criterion functions. It is nonetheless preferable to use this criterion function because its ability to find classifiers that classify the training set well will also lead to improved test-set performance after training with a suitably detailed training set.
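
    CFM is a monotonic, sigmoidal function of the margins between the correct output and its competitors. The sketch below averages a sigmoid of each margin; the parameterization (alpha, beta, zeta) is treated as an assumption rather than the exact published form.

        import numpy as np

        def cfm(z, c, alpha=1.0, beta=4.0, zeta=0.0):
            # z: network outputs; c: index of the correct class
            margins = z[c] - np.delete(z, c)   # margin over each wrong class
            # sigmoid of each margin, averaged (parameterization assumed)
            return np.mean(alpha / (1.0 + np.exp(-beta * margins + zeta)))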


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
