
IEEE Transactions on Neural Networks

Issue 2 • June 1990

  • Standardization of neural network terminology

    Publication Year: 1990, Page(s): 244-245
    Cited by: Papers (2)

    Outlined are the initial activities of an ad hoc standards committee established by the IEEE Neural Networks Council to pursue the standardization of neural network terminology. A proposed list of frequently used terms to be considered by the committee is presented, along with several proposed definitions.

  • Perceptron-based learning algorithms

    Publication Year: 1990, Page(s): 179-191
    Cited by: Papers (72) | Patents (1)

    A key task for connectionist research is the development and analysis of learning algorithms. An examination is made of several supervised learning algorithms for single-cell and network models. The heart of these algorithms is the pocket algorithm, a modification of perceptron learning that remains well-behaved with nonseparable training data, even if the data are noisy and contradictory. Features of these algorithms include speed (they are fast enough to handle large sets of training data), network scaling properties (network methods scale up almost as well as single-cell models when the number of inputs is increased), analytic tractability (upper bounds on classification error are derivable), online learning (some variants can learn continually, without referring to previous data), and winner-take-all or choice groups (the algorithms can be adapted to select one out of a number of possible classifications). These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems.
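
    For orientation, here is a minimal sketch of the pocket idea in Python; the function name, the streak-based pocket test, and the random visiting schedule are illustrative assumptions rather than the paper's exact variants.

        import numpy as np

        def pocket_algorithm(X, y, epochs=100, seed=None):
            # Pocket variant of perceptron learning (a sketch, not the paper's code).
            # X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
            # The "pocket" keeps the weight vector with the longest run of
            # consecutive correct classifications seen so far, which keeps the
            # procedure well-behaved even on noisy, nonseparable data.
            rng = np.random.default_rng(seed)
            n, d = X.shape
            Xb = np.hstack([X, np.ones((n, 1))])    # absorb the bias into the weights
            w = np.zeros(d + 1)                     # current perceptron weights
            pocket_w, run, best_run = w.copy(), 0, 0
            for _ in range(epochs * n):
                i = rng.integers(n)                 # visit a random training example
                if y[i] * (Xb[i] @ w) > 0:          # correctly classified
                    run += 1
                    if run > best_run:              # longer streak: update the pocket
                        best_run, pocket_w = run, w.copy()
                else:                               # mistake: ordinary perceptron step
                    w = w + y[i] * Xb[i]
                    run = 0
            return pocket_w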

  • A novel objective function for improved phoneme recognition using time-delay neural networks

    Publication Year: 1990, Page(s): 216-228
    Cited by: Papers (48)

    Single-speaker and multispeaker recognition results are presented for the voiced stop consonants /b,d,g/ using time-delay neural networks (TDNNs) with a number of enhancements, including a new objective function for training these networks. The new objective function, called the classification figure of merit (CFM), differs markedly from the traditional mean-squared-error (MSE) objective function and the related cross-entropy (CE) objective function. Where the MSE and CE objective functions seek to minimize the difference between each output node and its ideal activation, the CFM function seeks to maximize the difference between the output activation of the node representing the correct classification and the activations of the nodes representing incorrect classifications. A simple arbitration mechanism is used with all three objective functions to achieve a median 30% reduction in the number of misclassifications when compared to TDNNs trained with the traditional MSE back-propagation objective function alone.
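
    As a rough illustration of such a margin-style objective, here is a Python sketch; the sigmoid form and the parameters alpha, beta, and zeta are assumptions for illustration, not necessarily the paper's exact formulation.

        import numpy as np

        def cfm_objective(outputs, correct, alpha=1.0, beta=4.0, zeta=0.0):
            # CFM-style objective (sketch; parameter names and exact sigmoid
            # form are assumptions). Rather than pushing every output toward
            # an ideal target as MSE does, it rewards the margin between the
            # correct node's activation and each incorrect node's activation,
            # squashed through a sigmoid.
            margins = outputs[correct] - np.delete(outputs, correct)
            return np.mean(alpha / (1.0 + np.exp(-beta * margins + zeta)))

    Training would then maximize this value (for example by gradient ascent) instead of minimizing a squared error.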

  • A simple procedure for pruning back-propagation trained neural networks

    Publication Year: 1990, Page(s): 239-242
    Cited by: Papers (131) | Patents (10)

    The sensitivity of the global error (cost) function to the inclusion/exclusion of each synapse in the artificial neural network is estimated. Introduced are shadow arrays which keep track of the incremental changes to the synaptic weights during a single pass of back-propagation learning. The synapses are then ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands negligible computational overhead.
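
    A minimal Python sketch of the shadow-array bookkeeping follows; the particular sensitivity estimator below is an assumption made for illustration, not necessarily the paper's formula.

        import numpy as np

        def training_step(w, grad, shadow, lr=0.1):
            # One gradient-descent step; the shadow array accumulates the
            # squared weight increments as a by-product of normal training.
            dw = -lr * grad
            shadow += dw ** 2
            return w + dw, shadow

        def sensitivities(shadow, w_final, w_initial, lr=0.1, eps=1e-12):
            # Approximate sensitivity of the global error to removing each
            # synapse (this particular estimator is an assumption).
            return shadow * np.abs(w_final / (lr * (w_final - w_initial) + eps))

        def prune_mask(S, keep_fraction=0.5):
            # Order synapses by sensitivity and discard the tail of the list.
            thresh = np.quantile(S, 1.0 - keep_fraction)
            return S >= thresh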

  • Graph partitioning using annealed neural networks

    Publication Year: 1990, Page(s): 192-203
    Cited by: Papers (66) | Patents (4)

    A new algorithm, mean field annealing (MFA), is applied to the graph-partitioning problem. The MFA algorithm combines characteristics of the simulated-annealing algorithm and the Hopfield neural network. MFA exhibits the rapid convergence of the neural network while preserving the solution quality afforded by simulated annealing (SA). The rate of convergence of MFA on graph bipartitioning problems is 10-100 times that of SA, with nearly equal quality of solutions. A new modification to mean field annealing is also presented which supports partitioning graphs into three or more bins, a problem which has previously shown resistance to solution by neural networks. The temperature behavior of MFA during graph partitioning is analyzed approximately and shown to possess a critical temperature at which most of the optimization occurs. This temperature is analogous to the gain of the neurons in a neural network and can be used to tune such networks for better performance. The value of the repulsion penalty needed to force MFA (or a neural network) to divide a graph into equal-sized pieces is also estimated.
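
    As a concrete illustration, here is a bipartitioning sketch in Python; the mean-field update, cooling schedule, and balance term are plausible choices, not necessarily those used in the paper.

        import numpy as np

        def mfa_bipartition(A, T0=2.0, T_min=0.05, cool=0.9, b=0.5, sweeps=20, seed=None):
            # Mean field annealing on a symmetric adjacency matrix A (sketch).
            # Each vertex carries a continuous "spin" s_i in (-1, 1); its sign
            # gives the partition. Annealing T trades between SA-like solution
            # quality and Hopfield-like speed of convergence.
            rng = np.random.default_rng(seed)
            s = 0.1 * (rng.random(A.shape[0]) - 0.5)    # small random initial spins
            T = T0
            while T > T_min:
                for _ in range(sweeps):
                    # attraction along edges; the b term is the repulsion
                    # penalty pushing the two sides toward equal size
                    field = A @ s - b * (s.sum() - s)
                    s = np.tanh(field / T)
                T *= cool                               # geometric cooling schedule
            return s > 0                                # boolean partition labels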

  • A theoretical investigation into the performance of the Hopfield model

    Publication Year: 1990, Page(s): 204-215
    Cited by: Papers (117)

    An analysis is made of the behavior of the Hopfield model as a content-addressable memory (CAM) and as a method of solving the traveling salesman problem (TSP). The analysis is based on the geometry of the subspace set up by the degenerate eigenvalues of the connection matrix. The dynamic equation is shown to be equivalent to a projection of the input vector onto this subspace. In the case of content-addressable memory, it is shown that spurious fixed points can occur at any corner of the hypercube that is on or near the subspace spanned by the memory vectors. Expressions are also derived that explain why the network frequently converges to invalid solutions when applied to the traveling salesman problem energy function. With these expressions, the network can be made robust and can reliably solve the traveling salesman problem with tour sizes of 50 cities or more.
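
    A small numerical illustration of that geometric picture, in Python; the Hebbian sum-of-outer-products weight matrix is the textbook construction, assumed here for concreteness.

        import numpy as np

        def hopfield_step(memories, x):
            # One synchronous update under Hebbian weights (sketch). For
            # nearly orthogonal memories, W @ x approximates the projection
            # of x onto span(memories); sign() then snaps the result back to
            # a hypercube corner, which is where spurious fixed points near
            # the memory subspace can appear.
            M = np.array(memories, dtype=float)   # rows are +/-1 memory vectors
            W = M.T @ M / M.shape[1]              # sum-of-outer-products weights
            np.fill_diagonal(W, 0.0)
            return np.sign(W @ x)

        mems = [np.array([1, 1, 1, -1, -1]), np.array([1, -1, 1, -1, 1])]
        probe = np.array([1, 1, 1, -1, 1])
        print(hopfield_step(mems, probe))         # one synchronous update of the probe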

  • Derivation of a class of training algorithms

    Publication Year: 1990, Page(s): 229-232
    Cited by: Papers (22)

    A novel derivation is presented of T. Kohonen's topographic mapping training algorithm (Self-Organization and Associative Memory, 1984), based upon an extension of the Linde-Buzo-Gray (LBG) algorithm for vector quantizer design. In this derivation, a vector quantizer is designed by minimizing an L2 reconstruction distortion measure that includes an additional contribution from code noise corrupting the output of the vector quantizer. The neighborhood updating scheme of Kohonen's topographic mapping training algorithm emerges as a special case of this code-noise model. This formulation of Kohonen's algorithm is a specific instance of the robust hidden layer principle, which stabilizes the internal representations chosen by a network against anticipated noise or distortion processes.
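
    One way to see the correspondence in code: the Python sketch below assumes a 1-D lattice of codes and a Gaussian code-noise kernel, both illustrative choices.

        import numpy as np

        def kohonen_code_noise_step(W, x, lr=0.1, sigma=1.0):
            # One training step of a topographic map, read as vector
            # quantization under code noise (sketch). W: (n_codes, dim)
            # codebook on a 1-D lattice; x: one input vector.
            winner = np.argmin(((W - x) ** 2).sum(axis=1))    # nearest-code rule
            idx = np.arange(W.shape[0])
            # The neighborhood function plays the role of the code-noise
            # model: the chance that the winning index is corrupted into
            # index j decays with lattice distance.
            h = np.exp(-0.5 * ((idx - winner) / sigma) ** 2)
            return W + lr * h[:, None] * (x - W)              # neighborhood update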

  • Parallel, self-organizing, hierarchical neural networks

    Publication Year: 1990, Page(s): 167-178
    Cited by: Papers (33) | Patents (10)

    A new neural-network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN) is presented. The new architecture involves a number of stages in which each stage can be a particular neural network (a stage neural network, or SNN). At the end of each stage, error detection is carried out, and a number of input vectors are rejected. Between two stages there is a nonlinear transformation of the input vectors rejected by the previous stage. The new architecture has many desirable properties, such as optimized system complexity (in the sense of a minimized, self-organizing number of stages), high classification accuracy, minimized learning and recall times, and truly parallel architectures in which all stages operate simultaneously without waiting for data from other stages during testing. The experiments performed indicated the superiority of the new architecture over multilayered networks with back-propagation training.
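
    A structural sketch of the reject-and-transform pipeline in Python; the class name and the (predict, accepts) stage interface are assumptions made for illustration.

        import numpy as np

        class PSHNNSketch:
            # Staged classification (sketch): each stage either accepts an
            # input and labels it, or rejects it; rejected inputs pass through
            # a nonlinearity and go to the next stage. At test time all stages
            # can run simultaneously on their own streams of inputs.
            def __init__(self, stages, transform=np.tanh):
                self.stages = stages        # list of (predict, accepts) callables
                self.transform = transform  # nonlinear map applied to rejects

            def classify(self, x):
                *early, (final_predict, _) = self.stages
                for predict, accepts in early:
                    if accepts(x):          # error detection: accept this label
                        return predict(x)
                    x = self.transform(x)   # rejected: transform, try next stage
                return final_predict(x)     # last stage decides what remains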

  • Trellis codes, receptive fields, and fault tolerant, self-repairing neural networks

    Publication Year: 1990, Page(s): 154-166
    Cited by: Papers (12) | Patents (1)

    Relationships are explored between locally interconnected neural networks that use receptive-field representations and trellis or convolutional codes. A fault-tolerant neural network is described. It is patterned after the trellis-graph description of convolutional codes and is able to tolerate errors in its inputs and failures of constituent neurons. This network incorporates learning, which adds failure tolerance: the network is able to modify its connection weights and internal representation so that spare neurons can replace neurons which fail. A brief review of trellis-coding concepts is included.
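
    For readers new to trellises, a generic minimal Viterbi-style survivor-path search in Python; this is standard decoding machinery rather than the paper's network, and the data layout is an assumption.

        def viterbi(preds, costs):
            # Minimal survivor-path search over a trellis (sketch).
            # preds[t][s]: allowed predecessor states of state s at step t;
            # costs[t][s]: local mismatch cost of state s at step t.
            best = dict(costs[0])                   # best path cost ending in s
            back = []                               # back-pointers per step
            for t in range(1, len(costs)):
                new, ptr = {}, {}
                for s, ps in preds[t].items():
                    p = min(ps, key=lambda q: best[q])
                    new[s] = best[p] + costs[t][s]
                    ptr[s] = p
                best, back = new, back + [ptr]
            s = min(best, key=best.get)             # cheapest terminal state
            path = [s]
            for ptr in reversed(back):              # trace survivors backwards
                s = ptr[s]
                path.append(s)
            return path[::-1]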

  • Learning of stable states in stochastic asymmetric networks

    Publication Year: 1990, Page(s): 233-238
    Cited by: Papers (4)

    Boltzmann-based models with asymmetric connections are investigated. Although they are initially unstable, these networks spontaneously self-stabilize as a result of learning. Moreover, pairs of weights symmetrize during learning; however, the symmetry is not enough to account for the observed stability. To characterize the system, it is useful to consider how its entropy is affected both by learning and by the entropy of the information stream. The stability of an asymmetric network is confirmed with an electronic model.

  • Neural networks for control systems

    Publication Year: 1990, Page(s): 242-244
    Cited by: Papers (52)

    A description is given of 11 papers from the April 1990 special issue on neural networks in control systems of IEEE Control Systems Magazine. The emphasis was on presenting as varied and current a picture as possible of the use of neural networks in control. The papers described cover: the design of associative memories using feedback neural networks; a method to use neural networks to control highly nonlinear systems; the modeling of nonlinear chemical systems using neural networks; the identification of dynamical systems; the comparison of conventional adaptive controllers and neural-network-based controllers; a method to provide adaptive control for nonlinear systems; neural networks and back-propagation; the back-propagation algorithm; the use of trained neural networks to regulate the pitch attitude of an underwater telerobot; the control of mobile robots; and the issues involved in integrating neural networks and knowledge-based systems.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks ranging from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
