Volume 6 Issue 1 • Jan. 1995
-
Locally excitatory globally inhibitory oscillator networks
Publication Year: 1995, Page(s):283 - 286
Cited by: Papers (141)
A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. In the network, an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. Compute...
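A relaxation oscillator with two time scales, as described in this abstract, can be illustrated with a minimal numerical sketch. This is an assumption-laden toy in the spirit of the Terman-Wang oscillators used by LEGION, not the paper's exact equations: the cubic nullcline, the time-scale parameter eps, and the constant input I are all illustrative choices.

```python
import numpy as np

# Minimal sketch of one relaxation oscillator with two time scales.
# The cubic nullcline, eps, and the input I are assumptions, not the
# paper's exact model.
def simulate(I=0.8, eps=0.05, dt=0.05, steps=4000):
    x, y = -1.5, 2.0                      # fast and slow variables
    trace = np.empty(steps)
    for k in range(steps):
        dx = 3 * x - x**3 + 2 - y + I     # fast excitatory variable
        dy = eps * (5 * (1 + np.tanh(x / 0.1)) - y)  # slow recovery
        x += dt * dx
        y += dt * dy
        trace[k] = x
    return trace

xs = simulate()
# xs alternates between an active phase (x near +2) and a silent phase
# (x near -2) instead of settling to a fixed point.
```

Because eps is small, the fast variable x jumps between the two outer branches of the cubic nullcline while the slow recovery variable y drifts, which is what produces the active/silent phases the abstract refers to.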
-
Existence and uniqueness results for neural network approximations
Publication Year: 1995, Page(s):2 - 13
Cited by: Papers (29)
Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approxima...
-
Improving the performance of Kanerva's associate memory
Publication Year: 1995, Page(s):125 - 130
Cited by: Papers (9)
A parallel associative memory first proposed by Kanerva (1988) is discussed. The major appeal of this memory is its ability to be trained very rapidly. A discrepancy between Kanerva's theoretical calculation of capacity and the actual capacity is demonstrated experimentally and a corrected theory is offered. A modified method of reading from memory is suggested which results in a capacity nearly t...
-
A general mean-based iterative winner-take-all neural network
Publication Year: 1995, Page(s):14 - 24
Cited by: Papers (15)
In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulatio...
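The idea of a mean-based iterative WTA competition can be sketched as follows. This is a hedged illustration, not the paper's exact update rule: each still-active unit competes against the mean of the active activations, and units pushed to zero or below drop out until one winner remains.

```python
import numpy as np

# Hedged sketch of a mean-based iterative winner-take-all.  The
# paper's exact update rule may differ; this only illustrates the
# statistical-mean competition idea.
def mean_wta(activations, max_iters=100):
    a = np.asarray(activations, dtype=float).copy()
    active = a > 0
    for _ in range(max_iters):
        if active.sum() <= 1:
            break
        a[active] -= a[active].mean()   # subtract the running mean
        active &= a > 0                 # losers are switched off
    return np.flatnonzero(active)

winners = mean_wta([0.2, 0.9, 0.5, 0.7])
# Only the index of the largest initial activation survives.
```

Subtracting the mean preserves the ordering of the active units, so with distinct inputs the maximum always stays above the mean and survives every round.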
-
Neighborhood sequential and random training techniques for CMAC
Publication Year: 1995, Page(s):196 - 202
Cited by: Papers (45)
An adaptive control algorithm based on Albus' CMAC (Cerebellar Model Articulation Controller) was studied, with emphasis on how to train CMAC systems. Two training techniques, neighborhood sequential training and random training, have been devised. These techniques were used to generate mathematical functions, and both methods successfully circumvented the training interference resulting from CMAC's...
-
Robust principal component analysis by self-organizing rules based on statistical physics approach
Publication Year: 1995, Page(s):131 - 143
Cited by: Papers (100) | Patents (1)
This paper applies statistical physics to the problem of robust principal component analysis (PCA). The commonly used PCA learning rules are first related to energy functions. These functions are generalized by adding a binary decision field with a given prior distribution so that outliers in the data are dealt with explicitly in order to make PCA robust. Each of the generalized energy functions i...
-
Fuzzy multi-layer perceptron, inferencing and rule generation
Publication Year: 1995, Page(s):51 - 63
Cited by: Papers (98) | Patents (2)
A connectionist expert system model, based on a fuzzy version of the multilayer perceptron developed by the authors, is proposed. It infers the output class membership value(s) of an input pattern and also generates a measure of certainty expressing confidence in the decision. The model is capable of querying the user for the more important input feature information, if and when required, in case ...
-
Approximation capability in C(R¯n) by multilayer feedforward networks and related problems
Publication Year: 1995, Page(s):25 - 30
Cited by: Papers (100)
In this paper, we investigate the capability of approximating functions in C(R¯n) by three-layered neural networks with a sigmoidal function in the hidden layer. It is found that the boundedness condition on the sigmoidal function plays an essential role in the approximation, in contrast to the continuity or monotonicity condition. We point out that in order to prove the neural ne...
-
Fast neural net simulation with a DSP processor array
Publication Year: 1995, Page(s):203 - 213
Cited by: Papers (26)
This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-b floating-point precision. This is equal t...
-
Diagonal recurrent neural networks for dynamic systems control
Publication Year: 1995, Page(s):144 - 156
Cited by: Papers (458)
A new neural paradigm called diagonal recurrent neural network (DRNN) is presented. The architecture of DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, and the hidden layer comprises self-recurrent neurons. Two DRNN's are utilized in a control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller ...
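The defining feature of the DRNN described above is that the hidden-layer recurrence matrix is diagonal, so each hidden neuron feeds back only to itself. A minimal forward-pass sketch follows; the layer sizes, random weights, and tanh activation are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Illustrative forward pass of a diagonal recurrent network: the
# recurrence is a vector of self-recurrent weights rather than a full
# hidden-to-hidden matrix.  Sizes and weights are assumptions.
rng = np.random.default_rng(0)
n_in, n_hid = 2, 5
W_in = rng.normal(size=(n_hid, n_in))   # input-to-hidden weights
w_d = rng.normal(size=n_hid)            # diagonal self-recurrent weights
w_out = rng.normal(size=n_hid)          # hidden-to-output weights

def step(x, h_prev):
    # Elementwise self-feedback: w_d * h_prev replaces W_rec @ h_prev.
    h = np.tanh(W_in @ x + w_d * h_prev)
    return w_out @ h, h

h = np.zeros(n_hid)
for x in ([1.0, 0.0], [0.0, 1.0]):
    y, h = step(np.array(x), h)
```

Replacing the full recurrent matrix with a diagonal one cuts the recurrent parameter count from n_hid² to n_hid, which is the simplification that makes the DRNN cheaper to train than a fully connected recurrent net.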
-
Back-propagation network and its configuration for blood vessel detection in angiograms
Publication Year: 1995, Page(s):64 - 72
Cited by: Papers (65) | Patents (1)
A neural-network classifier for detecting vascular structures in angiograms was developed. The classifier consisted of a multilayer feedforward network operating on an image window, in which the center pixel was classified using gray-scale information within the window. The network was trained by using the backpropagation algorithm with the momentum term. Based on this image segmentation problem, the effect of changing ...
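The "momentum term" mentioned in this abstract is the standard addition to gradient descent in which a fraction of the previous weight change is carried over into the current one. A generic sketch of the update (not the paper's specific network) on a toy quadratic:

```python
# Generic backpropagation momentum update (not the paper's network):
#   v <- mu * v - lr * grad;  w <- w + v
def momentum_step(w, grad, v, lr=0.1, mu=0.9):
    v = mu * v - lr * grad
    return w + v, v

# Toy check: minimize f(w) = w**2, whose gradient is 2*w.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v)
```

The velocity v accumulates a moving average of past gradients, which damps oscillation across steep directions and accelerates progress along shallow ones.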
-
Asymptotic level density in topological feature maps
Publication Year: 1995, Page(s):230 - 236
Cited by: Papers (29)
The Kohonen algorithm entails a topology conserving mapping of an input pattern space X⊂Rn, characterized by an a priori probability distribution P(x), x∈X, onto a discrete lattice of neurons r with virtual positions wr∈X. Extending results obtained by Ritter (1991), the authors show in the one-dimensional case for an arbitrary monotonically decreasing neighborhood...
-
An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer
Publication Year: 1995, Page(s):31 - 42
Cited by: Papers (100) | Patents (2)
Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation as a steepest descent approach is too slow for...
-
Adaptive detection of small sinusoidal signals in non-Gaussian noise using an RBF neural network
Publication Year: 1995, Page(s):214 - 219
Cited by: Papers (14)
This paper addresses the application of locally optimum (LO) signal detection techniques to environments in which the noise density is not known a priori. For small signal levels, the LO detection rule is shown to involve a nonlinearity which depends on the noise density. The estimation of the noise density is a major part of the computational burden of LO detection rules. In this paper, adaptive ...
-
Optimal adaptive k-means algorithm with dynamic adjustment of learning rate
Publication Year: 1995, Page(s):157 - 169
Cited by: Papers (110) | Patents (3)
Adaptive k-means clustering algorithms have been used in several artificial neural network architectures, such as radial basis function networks or feature-map classifiers, for a competitive partitioning of the input domain. This paper presents an enhancement of the traditional k-means algorithm. It approximates an optimal clustering solution with an efficient adaptive learning rate, which renders...
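Adaptive k-means of the kind discussed here updates only the winning centre for each sample, with a learning rate that shrinks as the centre accumulates points. The sketch below uses the classic per-cluster schedule eta_k = 1/n_k; the paper's dynamic adjustment is more elaborate, and the one-dimensional two-cluster data and initial centres are assumptions.

```python
import numpy as np

# Sketch of adaptive (online) k-means with per-cluster learning rate
# eta_k = 1/n_k.  Data and initial centres are illustrative.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-5, 0.5, 200), rng.normal(5, 0.5, 200)])
rng.shuffle(data)

centres = np.array([-1.0, 1.0])
counts = np.zeros(2)
for x in data:
    k = np.argmin(np.abs(centres - x))           # competitive winner
    counts[k] += 1
    centres[k] += (x - centres[k]) / counts[k]   # eta_k = 1 / n_k
```

With eta_k = 1/n_k each centre is exactly the running mean of the samples it has won so far, which is why this schedule is often called "optimal" for stationary data.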
-
Learning capability assessment and feature space optimization for higher-order neural networks
Publication Year: 1995, Page(s):267 - 272
Cited by: Papers (10)
A technique for evaluating the learning capability and optimizing the feature space of a class of higher-order neural networks is presented. It is shown that supervised learning can be posed as an optimization problem in which inequality constraints are used to code the information contained in the training patterns and to specify the degree of accuracy expected from the neural network. The approa...
-
High speed paper currency recognition by neural networks
Publication Year: 1995, Page(s):73 - 77
Cited by: Papers (50) | Patents (1)
In this paper a new technique is proposed to improve the recognition ability and the transaction speed in classifying Japanese and US paper currency. Two types of data sets, time series data and Fourier power spectra, are used in this study. In both cases, they are directly used as inputs to the neural network. Furthermore, we also introduce a new evaluation method of recognition ability. Meanwhile, ...
-
The geometrical learning of binary neural networks
Publication Year: 1995, Page(s):237 - 247
Cited by: Papers (71)
In this paper, the learning algorithm called expand-and-truncate learning (ETL) is proposed to train multilayer binary neural networks (BNN) with guaranteed convergence for any binary-to-binary mapping. The most significant contribution of this paper is the development of a learning algorithm for three-layer BNN which guarantees the convergence, automatically determining a required number of neuro...
-
Efficient classification for multiclass problems using modular neural networks
Publication Year: 1995, Page(s):117 - 124
Cited by: Papers (214) | Patents (1)
The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the backpropagation algorithm. While backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors increase in the first iteration. Furthermore, the magnitudes of subsequen...
-
Gradient descent learning algorithm overview: a general dynamical systems perspective
Publication Year: 1995, Page(s):182 - 195
Cited by: Papers (76) | Patents (1)
This paper gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurren...
-
Canonical piecewise-linear networks
Publication Year: 1995, Page(s):43 - 50
Cited by: Papers (26)
In this paper, mapping networks will be considered from the viewpoint of the piecewise-linear (PWL) approximation. The so-called canonical representation plays a kernel role in the PWL representation theory. While this theory has been researched intensively in the context of mathematics and circuit simulation, little has been seen in the research area about the theoretical aspect of neural netwo...
-
Random noise effects in pulse-mode digital multilayer neural networks
Publication Year: 1995, Page(s):220 - 229
Cited by: Papers (27)
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are...
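The core trick of stochastic computing, as used in pulse-mode networks like the one above, is that a value p in [0, 1] encoded as a random pulse stream with P(pulse) = p can be multiplied by another independent stream with a single AND gate. A software sketch (stream length and values are illustrative assumptions):

```python
import numpy as np

# Stochastic-computing sketch: an AND gate on two independent
# Bernoulli pulse streams computes their product in probability.
# Stream length n trades precision for computation time.
rng = np.random.default_rng(7)
n = 100_000
a, b = 0.6, 0.5
stream_a = rng.random(n) < a         # pulse stream encoding a
stream_b = rng.random(n) < b         # pulse stream encoding b
product = (stream_a & stream_b).mean()   # approximates a * b
```

Since the streams are independent, P(both pulses high) = a·b, and averaging the AND output over the stream recovers the product, with error shrinking like 1/sqrt(n). This is also the source of the "random noise effects" the paper analyzes.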
-
Limitations of neural networks for solving traveling salesman problems
Publication Year: 1995, Page(s):280 - 282
Cited by: Papers (19)
Feedback neural networks enjoy considerable popularity as a means of approximately solving combinatorial optimization problems. It is now well established how to map problems onto networks so that invalid solutions are never found. It is not as clear how the networks' solutions compare in terms of quality with those obtained using other optimization techniques; such issues are addressed in this pa...
-
A single-iteration threshold Hamming network
Publication Year: 1995, Page(s):261 - 266
Cited by: Papers (10)
We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns, each being a binary vector of length n. It is shown that the activation function of the memory neurons in the original Hamming network may be replaced by a simple threshold function. By judiciously determining the threshold value, the “winner-take-a...
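The single-iteration idea can be sketched as follows: each memory neuron measures the Hamming distance between its stored pattern and the input and fires iff that distance falls below a fixed threshold, replacing the winner-take-all iterations with one step. The sizes, threshold value, and random memories below are assumptions for illustration only.

```python
import numpy as np

# Sketch of a single-iteration threshold Hamming network.  The
# threshold theta and the random stored patterns are assumptions.
rng = np.random.default_rng(3)
m, n = 8, 64
memories = rng.integers(0, 2, size=(m, n))   # m binary patterns of length n

def classify(x, theta=16):
    dists = (memories != x).sum(axis=1)   # Hamming distance to each memory
    return np.flatnonzero(dists < theta)  # one thresholding step

probe = memories[2].copy()
probe[:5] ^= 1                            # distort 5 of 64 bits
fired = classify(probe)
```

For random patterns the distance from the probe to a non-matching memory concentrates around n/2, so a threshold well below n/2 but above the expected distortion level lets the correct neuron fire alone with high probability, which is the trade-off the paper quantifies.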
-
Analysis and synthesis of a class of discrete-time neural networks with multilevel threshold neurons
Publication Year: 1995, Page(s):105 - 116
Cited by: Papers (32)
In contrast to the usual types of neural networks which utilize two states for each neuron, a class of synchronous discrete-time neural networks with multilevel threshold neurons is developed. A qualitative analysis and a synthesis procedure for the class of neural networks considered constitute the principal contributions of this paper. The applicability of the present class of neural networks is...
Aims & Scope
IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, which disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.
This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.