IEEE Transactions on Neural Networks

Issue 5 • Sep 1995

Displaying Results 1 - 25 of 32
  • Spatio-temporal feature maps using gated neuronal architecture

    Publication Year: 1995, Page(s):1119 - 1131
    Cited by:  Papers (5)

    In this paper, Kohonen's self-organizing feature map is modified by a novel technique of allowing the neurons in the feature map to compete in a selective manner. The selective competition is achieved by grating the N-dimensional feature space using a spatial frequency and setting a criterion for the neurons to compete based on the region in which the input pattern resides. The spatial grating and... (A background sketch of the standard Kohonen SOM update follows this listing.)

  • On the local minima free condition of backpropagation learning

    Publication Year: 1995, Page(s):1300 - 1303
    Cited by:  Papers (23)

    It is shown that if there are P noncoincident input patterns to learn and a two-layered feedforward neural network having P-1 sigmoidal hidden neurons and one dummy hidden neuron is used for the learning, then any suboptimal equilibrium point of the corresponding error surface is unstable in the sense of Lyapunov. This result leads to a sufficient local minima free condition for the backpropagation...

  • Architecture and statistical model of a pulse-mode digital multilayer neural network

    Publication Year: 1995, Page(s):1109 - 1118
    Cited by:  Papers (20)

    A new architecture and a statistical model for a pulse-mode digital multilayer neural network (DMNN) are presented. Algebraic neural operations are replaced by stochastic processes using pseudo-random pulse sequences. Synaptic weights and neuron states are represented as probabilities and estimated as average rates of pulse occurrences in corresponding pulse sequences. A statistical model of error...

  • A synthesis procedure for brain-state-in-a-box neural networks

    Publication Year: 1995, Page(s):1071 - 1080
    Cited by:  Papers (30)

    In this paper, some new qualitative properties of discrete-time neural networks based on the “brain-state-in-a-box” model are presented. These properties concern both the characterization of equilibrium points and the global dynamical behavior. Next, the analysis results are used as guidelines in developing an efficient synthesis procedure for networks that function as associative memo...

  • A neuro-genetic controller for nonminimum phase systems

    Publication Year: 1995, Page(s):1297 - 1300
    Cited by:  Papers (15)

    This paper investigates a neurocontroller for nonminimum phase systems which is trained off-line with a genetic algorithm (GA) and is combined in parallel with a conventional linear controller of proportional plus integral plus derivative (PID) type. Training this kind of neuro-genetic controller provides a solution under a given global evaluation function, which is devised based on the desired...

  • Symmetry constraints for feedforward network models of gradient systems

    Publication Year: 1995, Page(s):1249 - 1254
    Cited by:  Papers (1)

    This paper concerns the use of a priori information on the symmetry of cross differentials available for problems that seek to approximate the gradient of a differentiable function. We derive the appropriate network constraints to incorporate the symmetry information, show that the constraints do not reduce the universal approximation capabilities of feedforward networks, and demonstrate how the c...

  • X-tron: an incremental connectionist model for category perception

    Publication Year: 1995, Page(s):1091 - 1108
    Cited by:  Papers (4)

    A connectionist model for categorization (self-organization), even in the presence of multiple or mixed patterns, is presented. During self-organization, the network automatically adjusts the number of nodes in the hidden and output layers, depending on the complexity or nature of overlap between the patterns. An ambiguity measure is given based on how well the features are being interpreted b...

  • Training neural nets with the reactive tabu search

    Publication Year: 1995, Page(s):1185 - 1200
    Cited by:  Papers (50)  |  Patents (1)

    In this paper the task of training subsymbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the reactive tabu search (RTS). An iterative optimization process based on a “modified local search” component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of...

  • Two-dimensional spatio-temporal dynamics of analog image processing neural networks

    Publication Year: 1995, Page(s):1148 - 1164
    Cited by:  Papers (6)

    A typical analog image-processing neural network consists of a 2D array of simple processing elements. When it is implemented with CMOS LSI, two dynamics issues naturally arise: (1) parasitic capacitors of MOS transistors induce temporal dynamics. Since a processed image is given as the stable equilibrium point of temporal dynamics, a temporally unstable chip is unusable; and (2) because of the ar...

  • Parallel, self-organizing, hierarchical neural networks with continuous inputs and outputs

    Publication Year: 1995, Page(s):1037 - 1044
    Cited by:  Papers (16)

    Parallel, self-organizing, hierarchical neural networks (PSHNN's) are multistage networks in which stages operate in parallel rather than in series during testing. Each stage can be any particular type of network. Previous PSHNN's assume quantized, say, binary outputs. A new type of PSHNN is discussed such that the outputs are allowed to be continuous-valued. The performance of the resulting netwo...

  • On the application of orthogonal transformation for the design and analysis of feedforward networks

    Publication Year: 1995, Page(s):1061 - 1070
    Cited by:  Papers (30)

    Orthogonal transformation, which can lead to compaction of information, has been used in two ways to optimize the size of feedforward networks: 1) through the selection of an optimum set of time-domain inputs and an optimum set of links and nodes within a neural network (NN); and 2) through the orthogonalization of the data to be used in NN's, in the case of processes with periodicity. The proposed ...

  • Eigenstructure bidirectional associative memory: an effective synthesis procedure

    Publication Year: 1995, Page(s):1293 - 1297
    Cited by:  Papers (3)

    We propose a computationally efficient synthesis procedure for a class of bidirectional associative memories. Networks are described by a system of first-order ordinary difference equations which are defined on a closed hypercube of the state-space with solutions extended to the corner of the hypercube. The proposed algorithm possesses several advantages since it is possible: 1) to exert control o...

  • Vector mapping with a nonlinear electronic layer for distributed neural networks

    Publication Year: 1995, Page(s):1245 - 1248
    Cited by:  Papers (1)  |  Patents (1)

    Describes a new approach for obtaining neural network functionality using fully distributed electronic transport rather than lumped electronic circuit elements. For this, vector mapping abilities of a two-dimensional nonlinear inhomogeneous layer are analyzed. This layer is modeled as an inhomogeneous inversion layer in a multiterminal field effect semiconductor device. The author gives computed r...

  • A comparison of the von Mises and Gaussian basis functions for approximating spherical acoustic scatter

    Publication Year: 1995, Page(s):1284 - 1287
    Cited by:  Papers (12)

    This paper compares the approximation accuracy of two basis functions that share a common radial basis function (RBF) neural network architecture used for approximating a known function on the unit sphere. The basis function types considered are a new spherical basis function, the von Mises function, and the now well-known Gaussian basis function. Gradient descent learning rules were appli... (A generic evaluation of both basis types is sketched after this listing.)

  • The upper bound neural network and a class of consistent labeling problems

    Publication Year: 1995, Page(s):1132 - 1139

    The upper bound neural network (UBNN) is proposed for solving a class of consistent labeling problems (CLP). Crossbar switching is used as an illustration. The set of stable attractors of the dynamical system is identical to the set of feasible solutions to the problem. CLP is a general class of NP-complete problems intersecting artificial intelligence, symbolic logic, and operatio...

  • Measure fields for function approximation

    Publication Year: 1995, Page(s):1081 - 1090
    Cited by:  Papers (3)

    The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: 1) the computation of the locally smooth models, and hence, the segmentation of the data into classes that consist of the sets of points best approximated by each model; 2) the computation of the normalized discriminant functions for each induced class (which may...

  • An adaptive learning algorithm for principal component analysis

    Publication Year: 1995, Page(s):1255 - 1263
    Cited by:  Papers (32)

    Principal component analysis (PCA) is one of the most general purpose feature extraction methods. A variety of learning algorithms for PCA has been proposed. Many conventional algorithms, however, will either diverge or converge very slowly if learning rate parameters are not properly chosen. In this paper, an adaptive learning algorithm (ALA) for PCA is proposed. By adaptively selecting the learn... (A sketch of the fixed-rate baseline, Oja's rule, follows this listing.)

  • A skeleton and neural network-based approach for identifying cosmetic surface flaws

    Publication Year: 1995, Page(s):1201 - 1211
    Cited by:  Papers (6)

    This paper introduces an approach to cosmetic surface flaw identification that is essentially invariant to changes in workpiece orientation and position while being efficient in the use of computer memory. Visual binary images of workpieces are characterized according to the number of pixels in progressive subskeleton iterations. Those subskeletons are constructed using a modified Zhou skeleton tr...

  • Self-association and Hebbian learning in linear neural networks

    Publication Year: 1995, Page(s):1165 - 1184
    Cited by:  Papers (8)

    Studies Hebbian learning in linear neural networks with emphasis on the self-association information principle. This criterion, in one-layer networks, leads to the space of the principal components and can be generalized to arbitrary architectures. The self-association paradigm appears to be very promising because it accounts for the fundamental features of Hebbian synaptic learning and generalize...

  • Tolerance to analog hardware of on-chip learning in backpropagation networks

    Publication Year: 1995, Page(s):1045 - 1052
    Cited by:  Papers (43)

    In this paper we present results of simulations performed assuming both forward and backward computation are done on-chip using analog components. Aspects of analog hardware studied are component variability, limited voltage ranges, components (multipliers) that only approximate the computations in the backpropagation algorithm, and capacitive weight decay. It is shown that backpropagation network...

  • Optimization neural network for solving flow problems

    Publication Year: 1995, Page(s):1287 - 1291
    Cited by:  Papers (1)

    This paper describes a neural network for solving flow problems, which are of interest in many areas of application such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, i...

  • A method for improving classification reliability of multilayer perceptrons

    Publication Year: 1995, Page(s):1140 - 1147
    Cited by:  Papers (31)

    Criteria for evaluating the classification reliability of a neural classifier and for accordingly making a reject option are proposed. Such an option, implemented by means of two rules which can be applied independently of topology, size, and training algorithms of the neural classifier, allows one to improve the classification reliability. It is assumed that a performance function P is defined wh...

  • Two digital circuits for a fully parallel stochastic neural network

    Publication Year: 1995, Page(s):1264 - 1268
    Cited by:  Papers (12)

    This paper presents two digital circuits that allow the implementation of a fully parallel stochastic Hopfield neural network (SHNN). In a parallel SHNN with n neurons, the n×n stochastic signals s_ij, which pulse with probabilities proportional to the synapse inputs, are simultaneously available. The proposed circuits calculate the summation of the stochastic input pulses to neuron i(... (A generic sketch of pulse-rate arithmetic follows this listing.)

  • Gradient calculations for dynamic recurrent neural networks: a survey

    Publication Year: 1995, Page(s):1212 - 1228
    Cited by:  Papers (256)

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The authors discuss fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward ...

  • A note on self-organizing semantic maps

    Publication Year: 1995, Page(s):1029 - 1036
    Cited by:  Papers (13)

    This paper discusses Kohonen's self-organizing semantic map (SOSM). We show that augmentation and normalization of numerical feature data as recommended for the SOSM is entirely unnecessary to obtain semantic maps that exhibit semantic similarities between objects represented by the data. Visual displays of a small data set of 13 animals based on principal components, Sammon's algorithm, and Kohon...

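Several papers in this issue build on Kohonen's self-organizing feature map (the spatio-temporal feature map paper and the semantic map note). As background only, here is a minimal sketch of the standard SOM competitive update in Python/NumPy; the map size, learning rate, and Gaussian neighborhood width are illustrative assumptions, and the selective-competition mechanism of the listed paper is not reproduced.

    # Minimal sketch of the standard Kohonen SOM update (background only).
    import numpy as np

    rng = np.random.default_rng(0)
    map_h, map_w, dim = 10, 10, 3              # illustrative map size and input dimension
    weights = rng.random((map_h, map_w, dim))  # codebook vector for every map unit
    grid = np.stack(np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij"), axis=-1)

    def som_step(x, weights, lr=0.1, sigma=2.0):
        """One update: find the best-matching unit, then pull neighboring
        units toward the input with a Gaussian neighborhood function."""
        dists = np.linalg.norm(weights - x, axis=-1)           # distance of every unit to x
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))             # neighborhood weights
        weights += lr * h[..., None] * (x - weights)           # move units toward x
        return bmu

    for _ in range(1000):                                      # toy training loop on random inputs
        som_step(rng.random(dim), weights)
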
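For the von Mises versus Gaussian basis-function comparison, the sketch below evaluates both basis types at a point on the unit sphere. The concentration kappa, the width sigma, and the random centers are illustrative assumptions; the truncated abstract does not specify the authors' normalization or training details.

    # Sketch contrasting a von Mises-style spherical basis with a Gaussian RBF
    # on unit-sphere inputs (illustrative parameter values).
    import numpy as np

    def von_mises_basis(x, centers, kappa=5.0):
        """von Mises-type basis: exp(kappa * cos(angle to the center)).
        For unit vectors the cosine is just the dot product."""
        return np.exp(kappa * (centers @ x))

    def gaussian_basis(x, centers, sigma=0.5):
        """Standard Gaussian RBF: exp(-||x - c||^2 / (2 sigma^2))."""
        d2 = np.sum((centers - x) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(1)
    centers = rng.normal(size=(8, 3))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)  # centers on the unit sphere
    x = np.array([0.0, 0.0, 1.0])                              # query point on the sphere

    phi_vm = von_mises_basis(x, centers)   # one activation per center
    phi_g = gaussian_basis(x, centers)
    # An RBF network output would be a weighted sum of either vector,
    # e.g. y = w @ phi_vm, with w fitted by gradient descent as in the paper.
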
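For the adaptive PCA learning paper, the sketch below shows the textbook fixed-rate baseline, Oja's single-neuron rule; the fixed learning rate eta is exactly what that paper replaces with an adaptive choice, and the toy data and constants here are assumptions for illustration.

    # Single-neuron PCA with Oja's rule (fixed learning rate baseline).
    import numpy as np

    rng = np.random.default_rng(2)
    # Toy 2D data whose leading principal direction is at 45 degrees to the axes.
    base = rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])
    theta = np.pi / 4
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    data = base @ rot.T

    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    eta = 0.001                        # fixed learning rate (the listed ALA adapts this)
    for x in data:
        y = w @ x                      # neuron output
        w += eta * y * (x - y * w)     # Oja's rule: Hebbian term with implicit normalization

    print(w)  # roughly +/-(0.71, 0.71): the leading eigenvector of the data covariance
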
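For the pulse-mode and stochastic Hopfield network papers, the sketch below illustrates generic pulse-rate arithmetic: values in [0, 1] carried as Bernoulli pulse streams are recovered as average pulse rates, multiplied with an AND of independent streams, and summed by counting pulses arriving at a neuron. The stream length and example values are arbitrary, and the specific circuits of those papers are not reproduced.

    # Generic pulse-mode (stochastic computing) arithmetic with Bernoulli streams.
    import numpy as np

    rng = np.random.default_rng(3)
    T = 100_000                        # stream length; longer streams give lower variance

    a, b = 0.3, 0.7
    sa = rng.random(T) < a             # pulse stream encoding the value a
    sb = rng.random(T) < b             # independent stream encoding b

    print(sa.mean())                   # ~0.3: the value is recovered as a pulse rate
    print((sa & sb).mean())            # ~0.21: AND of independent streams estimates a * b

    # Summation of pulses arriving at one neuron: with a stream per synapse,
    # the average per-step pulse count estimates the sum of the encoded inputs.
    p = np.array([0.2, 0.5, 0.8])      # synaptic input values for one neuron
    streams = rng.random((T, p.size)) < p
    print(streams.sum(axis=1).mean())  # ~1.5 == p.sum()
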

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
