
IEEE Transactions on Neural Networks

Issue 1 • Jan. 2006


Displaying results 1 - 25 of 35
  • Table of contents

    Publication Year: 2006, Page(s): c1 - c4
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2006, Page(s): c2
    Freely Available from IEEE
  • Effects of kernel function on Nu support vector machines in extreme cases

    Publication Year: 2006, Page(s): 1 - 9
    Cited by: Papers (9)

    How to choose a kernel function in support vector machines (SVMs) is an important but difficult problem. In this paper, we discuss the properties of the solution of the ν-SVM, a variant of the SVM, for normalized feature vectors in two extreme cases: all feature vectors are almost orthogonal, and all feature vectors are almost the same. In the former case, the solution of the ν-SVM is nearly the center of gravity of the given examples, while in the latter case the solution approximates that of the ν-SVM with the linear kernel. Although such extreme kernels are not employed in practice, these analyses are helpful for understanding the effects of a kernel function on the generalization performance.

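The orthogonal-case result can be illustrated with a toy sketch: when normalized feature vectors are mutually orthogonal, the kernel matrix is (near-)identity, each example receives roughly equal dual weight, and the decision function degenerates into a center-of-gravity rule in feature space. This is a hypothetical plain-Python illustration, not the paper's construction.

```python
# Equal-weight "center of gravity" decision rule that the nu-SVM solution
# approaches when all normalized feature vectors are mutually orthogonal.

def kernel(a, b):
    # linear kernel; for one-hot vectors the kernel matrix is the identity
    return sum(x * y for x, y in zip(a, b))

# four mutually orthogonal unit vectors (one-hot), two per class
pos = [[1, 0, 0, 0], [0, 1, 0, 0]]
neg = [[0, 0, 1, 0], [0, 0, 0, 1]]

def f(x):
    # f(x) = mean similarity to positive class minus mean similarity to negative class
    return (sum(kernel(x, p) for p in pos) / len(pos)
            - sum(kernel(x, n) for n in neg) / len(neg))

print(f([1, 0, 0, 0]) > 0, f([0, 0, 1, 0]) < 0)  # each training point sits on its own side
```
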
  • Recursive processing of cyclic graphs

    Publication Year: 2006, Page(s): 10 - 18
    Cited by: Papers (7)

    Recursive neural networks are a powerful tool for processing structured data. According to the recursive learning paradigm, the input information consists of directed positional acyclic graphs (DPAGs); recursive networks are fed following the partial order defined by the links of the graph. Unfortunately, the restriction to DPAGs is sometimes too strong, since the nature of some real-world problems is intrinsically cyclic. In this paper, a methodology is proposed which allows us to process any cyclic directed graph. The computational power of recursive networks is thereby definitively established, also clarifying the underlying limitations of the model.

  • Generalized RLS approach to the training of neural networks

    Publication Year: 2006, Page(s): 19 - 34
    Cited by: Papers (12)

    Recursive least squares (RLS) is an efficient approach to neural network training. However, the classical RLS algorithm has no explicit decay in its energy function, which leads to unsatisfactory generalization ability in the trained networks. In this paper, we propose a generalized RLS (GRLS) model that includes a general decay term in the energy function for the training of feedforward neural networks. In particular, four different weight decay functions are discussed: the quadratic weight decay, the constant weight decay, and the newly proposed multimodal and quartic weight decays. By using the GRLS approach, not only is the generalization ability of the trained networks significantly improved, but more unnecessary weights are pruned, yielding a compact network. Furthermore, the computational complexity of GRLS remains the same as that of the standard RLS algorithm. The advantages and tradeoffs of the different decay functions are analyzed and demonstrated with examples. Simulation results show that our approach meets the design goals: improving the generalization ability of the trained network while obtaining a compact network.

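For background, the classical RLS recursion that GRLS extends can be sketched in a few lines for a two-parameter linear model. This shows only the standard update (gain, a priori error, rank-1 covariance update); the paper's decay term is not reproduced here.

```python
# Classical recursive least squares fitting y = w0 + w1*x, pure Python.
# GRLS would add a weight-decay term to the energy function; this sketch omits it.

def rls_fit(samples):
    w = [0.0, 0.0]                        # weight estimate [w0, w1]
    P = [[1e6, 0.0], [0.0, 1e6]]          # inverse-correlation matrix, large init
    for x, y in samples:
        phi = [1.0, x]                    # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]        # gain vector
        err = y - (phi[0] * w[0] + phi[1] * w[1])     # a priori error
        w = [w[0] + k[0] * err, w[1] + k[1] * err]
        # P <- P - k * (phi^T P): rank-1 update, keeps P symmetric
        P = [[P[r][c] - k[r] * Pphi[c] for c in range(2)] for r in range(2)]
    return w

data = [(x, 1.0 + 2.0 * x) for x in range(20)]        # noiseless y = 1 + 2x
w = rls_fit(data)
print(round(w[0], 3), round(w[1], 3))
```

On noiseless data the recursion converges to the least-squares solution, here the true coefficients.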
  • Bayesian multioutput feedforward neural networks comparison: a conjugate prior approach

    Publication Year: 2006, Page(s): 35 - 47
    Cited by: Papers (2)

    A Bayesian method for the comparison and selection of multioutput feedforward neural network topology, based on predictive capability, is proposed. As a measure of prediction fitness, an expected utility criterion is considered, which is consistently estimated by a sample-reuse computation. As opposed to classic point-prediction-based cross-validation methods, this expected utility is defined from the logarithmic score of the neural model's predictive probability density. It is shown how the advocated choice of a conjugate probability distribution as prior for the parameters of a competing network allows a consistent approximation of the network's posterior predictive density. The performance of the proposed method is compared with that of usual selection procedures based on classic cross-validation and information-theoretic criteria, first on a simulated case study and then on a well-known food analysis dataset.

  • Efficient hyperkernel learning using second-order cone programming

    Publication Year: 2006, Page(s): 48 - 58
    Cited by: Papers (17)

    The kernel function plays a central role in kernel methods. Most existing methods can only adapt the kernel parameters or the kernel matrix based on empirical data. Recently, Ong et al. introduced the method of hyperkernels, which can be used to learn the kernel function directly in an inductive setting. However, the associated optimization problem is a semidefinite program (SDP), which is computationally very expensive even with recent advances in interior point methods. In this paper, we show that this learning problem can be equivalently reformulated as a second-order cone program (SOCP), which can be solved more efficiently than an SDP. Comparison is also made with the kernel matrix learning method proposed by Lanckriet et al. Experimental results on both classification and regression problems, with toy and real-world data sets, show that our SOCP formulation achieves a significant speedup over the original SDP formulation. Moreover, it yields better generalization than Lanckriet et al.'s method, at a speed that is comparable to, and sometimes even faster than, their quadratically constrained quadratic program (QCQP) formulation.

  • A sequential dynamic heteroassociative memory for multistep pattern recognition and one-to-many association

    Publication Year: 2006, Page(s): 59 - 68
    Cited by: Papers (13)

    Bidirectional associative memories (BAMs) have been widely used for auto- and heteroassociative learning. However, few research efforts have addressed the issue of multistep vector pattern recognition. We propose a model that can perform multistep pattern recognition without the need for a special learning algorithm, and with the capacity to learn more than two pattern series in the training set. The model can also learn pattern series of different lengths and, unlike previous models, the stimuli can be composed of gray-level images. The paper also shows that, by adding an extra autoassociative layer, the model can accomplish one-to-many association, a task that was previously exclusive to feedforward networks with context units and error backpropagation learning.

  • Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm

    Publication Year: 2006, Page(s): 69 - 80
    Cited by: Papers (88)

    In this paper, a hybrid Taguchi-genetic algorithm (HTGA) is applied to tune both the network structure and the parameters of a feedforward neural network. The HTGA approach combines the traditional genetic algorithm (TGA), which has a powerful global exploration capability, with the Taguchi method, which can exploit the optimum offspring. The Taguchi method is inserted between the crossover and mutation operations of a TGA; its systematic reasoning ability is incorporated into the crossover operations to select the better genes for crossover, thereby enhancing the genetic algorithm. The HTGA approach is therefore more robust, statistically sound, and quickly convergent. First, the authors evaluate the performance of the HTGA approach on several global numerical optimization problems. Then, the approach is applied to three examples: forecasting sunspot numbers, tuning an associative memory, and solving the XOR problem. The numbers of hidden nodes and links of the feedforward neural network are chosen by increasing them from small values until the learning performance is good enough. As a result, a partially connected feedforward neural network is obtained after tuning, implying that the implementation cost of the network can be reduced. These problems of tuning both network structure and parameters involve many parameters and numerous local optima, making them challenging benchmarks for evaluating any proposed GA-based approach. The computational experiments show that the HTGA approach obtains better results than the existing method recently reported in the literature.

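The TGA backbone described above (selection, crossover, mutation) can be sketched minimally; the Taguchi step that the HTGA inserts between crossover and mutation is deliberately omitted here, and the fitness function, population size, and rates are illustrative placeholders.

```python
# Bare-bones genetic algorithm maximizing the number of 1-bits in a bitstring.
# The HTGA would insert a Taguchi optimization step between crossover and
# mutation; this sketch shows only the traditional GA skeleton.

import random
random.seed(0)

def fitness(ind):
    return sum(ind)

def evolve(pop_size=20, n_bits=16, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation (the HTGA's Taguchi step would go just before this)
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```
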
  • Generalized Haar DWT and transformations between decision trees and neural networks

    Publication Year: 2006, Page(s): 81 - 93
    Cited by: Papers (1)

    The core contribution of this paper is a three-fold improvement of the Haar discrete wavelet transform (DWT). It is modified to efficiently transform a multiclass- (rather than numerical-) valued function over a multidimensional (rather than low-dimensional) domain, or to transform a multiclass-valued decision tree into another useful representation. We prove that this multidimensional, multiclass DWT uses dynamic programming to minimize (within its framework) the number of nontrivial wavelet coefficients needed to summarize a training set or decision tree. It is a spatially localized algorithm that takes linear time in the number of training samples, after a sort. Convergence of the DWT on benchmark training sets seems to degrade with rising dimension in this test of high-dimensional wavelets, which have been seen as difficult to implement. This multiclass, multidimensional DWT has tightly coupled applications ranging from learning "dyadic" decision trees directly from training data, to rebalancing or converting preexisting decision trees into fixed-depth Boolean or threshold neural networks (in effect parallelizing the evaluation of the trees), to learning rule/exception sets represented as a new form of tree called an "E-tree," which could greatly help interpretation and visualization of a dataset.

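For readers unfamiliar with the transform being generalized, the classical 1-D Haar DWT repeatedly replaces adjacent pairs with their average and half-difference; piecewise-constant signals then summarize into few nonzero detail coefficients, which is the sparsity the paper exploits. A minimal sketch:

```python
# Classical full 1-D Haar DWT (averages + details) for a length-2^k signal.
# The paper generalizes this to multiclass-valued, multidimensional functions.

def haar_dwt(signal):
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2*i] + out[2*i + 1]) / 2 for i in range(half)]   # coarse averages
        det = [(out[2*i] - out[2*i + 1]) / 2 for i in range(half)]   # detail coefficients
        out[:n] = avg + det
        n = half
    return out

coeffs = haar_dwt([4, 2, 5, 5])
print(coeffs)  # [4.0, -1.0, 1.0, 0.0] -- the flat pair (5, 5) yields a zero detail
```
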
  • A new synaptic plasticity rule for networks of spiking neurons

    Publication Year: 2006, Page(s): 94 - 105
    Cited by: Papers (8)

    In this paper, we describe a new synaptic plasticity activity rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations such as image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of the adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons, each implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test the performance of the new plasticity rule, we designed a model of a biologically inspired signal processing system and used it for object detection in eye images of diabetic retinopathy patients and lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not an image segmentation tool more efficient than nonbiological algorithms, but a physiologically correct neural network model that can be applied to a wide range of neurological experiments. We chose to validate the SAPR in a network of spiking neurons for image segmentation because the results are easy to assess visually. Importantly, the image segmentation is done in an entirely unsupervised way.

  • Stability analysis of Cohen-Grossberg neural networks

    Publication Year: 2006, Page(s): 106 - 117
    Cited by: Papers (27)

    Without assuming boundedness or differentiability of the activation functions, or any symmetry of the interconnections, we employ Lyapunov functions to establish sufficient conditions ensuring existence, uniqueness, global asymptotic stability, and even global exponential stability of equilibria for Cohen-Grossberg neural networks with and without delays. Our results are presented in terms of system parameters, can be easily verified, and are less restrictive than previously known criteria; they apply to a range of neural networks, including Hopfield neural networks, bidirectional associative memory neural networks, and cellular neural networks.

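For reference, the Cohen-Grossberg model analyzed in this class of results takes the following general form (notation assumed here: amplification functions \(a_i\), self-signal functions \(b_i\), interconnection weights \(t_{ij}\), activation functions \(s_j\)):

```latex
\dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\, b_i\bigl(x_i(t)\bigr)
  - \sum_{j=1}^{n} t_{ij}\, s_j\bigl(x_j(t)\bigr) \Bigr],
\qquad i = 1, \dots, n.
```

Hopfield-type networks are recovered as a special case when each \(a_i\) is a positive constant and each \(b_i\) is affine, which is why criteria for the Cohen-Grossberg model carry over to the network families listed in the abstract.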
  • A stable neural network-based observer with application to flexible-joint manipulators

    Publication Year: 2006, Page(s): 118 - 129
    Cited by: Papers (44)

    A stable neural network (NN)-based observer for general multivariable nonlinear systems is presented in this paper. Unlike most previous neural network observers, the proposed observer uses a nonlinear-in-parameters neural network (NLPNN). Therefore, it can be applied to systems with higher degrees of nonlinearity without any a priori knowledge of the system dynamics. The learning rule for the neural network is a novel approach based on the modified backpropagation (BP) algorithm, with an e-modification term added to guarantee robustness of the observer. No strictly positive real (SPR) or other strong assumption is imposed on the proposed approach. The stability of the recurrent neural network observer is shown by Lyapunov's direct method. Simulation results for a flexible-joint manipulator demonstrate the enhanced performance achieved by the proposed neural network observer.

  • Learning from neural control

    Publication Year: 2006, Page(s): 130 - 146
    Cited by: Papers (66)

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.

  • Neural network control of a class of nonlinear systems with actuator saturation

    Publication Year: 2006, Page(s): 147 - 156
    Cited by: Papers (52)

    A neural net (NN)-based actuator saturation compensation scheme for nonlinear systems in Brunovsky canonical form is presented. The scheme, which provides stability, command following, and disturbance rejection, is rigorously proved and verified on a general "pendulum type" system and a robot manipulator dynamical system. The online weight tuning law, the overall closed-loop system performance, and the boundedness of the NN weights are derived and guaranteed using a Lyapunov approach. The actuator saturation is assumed to be unknown, and the saturation compensator is inserted into a feedforward path. Simulation results indicate that the proposed scheme can effectively compensate for the saturation nonlinearity in the presence of system uncertainty.

  • Efficient and robust feature extraction by maximum margin criterion

    Publication Year: 2006, Page(s): 157 - 165
    Cited by: Papers (194)

    In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on the maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. By using other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.

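The geometric idea behind an MMC-style criterion can be sketched with hypothetical 2-D toy data: score a unit direction w by w^T (S_b - S_w) w, the between-class scatter minus the within-class scatter along w, and the discriminative axis scores highest. This is a simplified, unweighted sketch of the criterion, not the paper's full formulation.

```python
# Evaluate an MMC-style score J(w) = w^T (S_b - S_w) w on toy 2-D data
# where two classes separate along the first axis (illustrative data).

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(2)]

def outer_acc(S, d):                     # S += d d^T  (rank-1 accumulation)
    for r in range(2):
        for c in range(2):
            S[r][c] += d[r] * d[c]

classes = [
    [[-2.0, 0.1], [-1.8, -0.2], [-2.2, 0.0]],    # class 0
    [[ 2.0, 0.0], [ 1.9,  0.2], [ 2.1, -0.1]],   # class 1
]
m = mean([p for cl in classes for p in cl])      # global mean

Sb = [[0.0, 0.0], [0.0, 0.0]]                    # between-class scatter (unweighted sketch)
Sw = [[0.0, 0.0], [0.0, 0.0]]                    # within-class scatter
for cl in classes:
    mi = mean(cl)
    outer_acc(Sb, [mi[0] - m[0], mi[1] - m[1]])
    for p in cl:
        outer_acc(Sw, [p[0] - mi[0], p[1] - mi[1]])

def J(w):
    D = [[Sb[r][c] - Sw[r][c] for c in range(2)] for r in range(2)]
    return sum(w[r] * D[r][c] * w[c] for r in range(2) for c in range(2))

print(J([1.0, 0.0]) > J([0.0, 1.0]))   # the discriminative x-axis scores higher
```

Note that, unlike the Fisher ratio used by LDA, this difference form needs no inversion of S_w, which is the intuition behind MMC's immunity to the small sample size problem.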
  • Ensemble-based discriminant learning with boosting for face recognition

    Publication Year: 2006, Page(s): 166 - 178
    Cited by: Papers (63)

    In this paper, we propose a novel ensemble-based approach to boost the performance of traditional linear discriminant analysis (LDA)-based methods used in face recognition. The ensemble-based approach builds on the recently emerged technique known as "boosting". However, it is generally believed that boosting-like learning rules are not suited to a strong and stable learner such as LDA. To break this limitation, a novel weakness analysis theory is developed here. The theory attempts to boost a strong learner by increasing the diversity between the classifiers created by the learner, at the expense of decreasing their margins, so as to achieve the tradeoff suggested by recent boosting studies for a low generalization error. In addition, a novel distribution accounting for the pairwise class discriminant information is introduced for effective interaction between the booster and the LDA-based learner. The integration of these methodologies leads to the novel ensemble-based discriminant learning approach, capable of taking advantage of both the boosting and LDA techniques. Promising experimental results obtained on various difficult face recognition scenarios demonstrate the effectiveness of the proposed approach. We believe that this work is especially beneficial in extending the boosting framework to accommodate general (strong/weak) learners.

  • Unsupervised analysis of polyphonic music by sparse coding

    Publication Year: 2006, Page(s): 179 - 196
    Cited by: Papers (27) | Patents (3)

    We investigate a data-driven approach to the analysis and transcription of polyphonic music, using a probabilistic model which is able to find sparse linear decompositions of a sequence of short-term Fourier spectra. The resulting system represents each input spectrum as a weighted sum of a small number of "atomic" spectra chosen from a larger dictionary; this dictionary is, in turn, learned from the data in such a way as to represent the given training set in an (information theoretically) efficient way. When exposed to examples of polyphonic music, most of the dictionary elements take on the spectral characteristics of individual notes in the music, so that the sparse decomposition can be used to identify the notes in a polyphonic mixture. Our approach differs from other methods of polyphonic analysis based on spectral decomposition by combining all of the following: a) a formulation in terms of an explicitly given probabilistic model, in which the process estimating which notes are present corresponds naturally with the inference of latent variables in the model; b) a particularly simple generative model, motivated by very general considerations about efficient coding, that makes very few assumptions about the musical origins of the signals being processed; and c) the ability to learn a dictionary of atomic spectra (most of which converge to harmonic spectral profiles associated with specific notes) from polyphonic examples alone; no separate training on monophonic examples is required.

  • An analog silicon retina with multichip configuration

    Publication Year: 2006, Page(s): 197 - 210
    Cited by: Papers (7)

    The neuromorphic silicon retina is a novel analog very large scale integrated circuit that emulates the structure and function of the retinal neuronal circuit. In a previous study, we fabricated a neuromorphic silicon retina in which sample/hold circuits were embedded to generate fluctuation-suppressed outputs. The applications of this silicon retina, however, are limited by its low spatial resolution and computational variability. In this paper, we have fabricated a multichip silicon retina in which the functional network circuits are divided into two chips: the photoreceptor network chip (P chip) and the horizontal cell network chip (H chip). The output images of the P chip are transferred to the H chip as analog voltages through a line-parallel transfer bus. The sample/hold circuits embedded in the P and H chips compensate for the pattern noise generated on the circuits, including the analog communication pathway. Using the multichip silicon retina together with an off-chip differential amplifier, spatial filtering of the image with odd- and even-symmetric orientation-selective receptive fields was carried out in real time. The analog data transfer method in the present multichip silicon retina is useful for designing analog neuromorphic multichip systems that mimic the hierarchical structure of neuronal networks in the visual system.

  • A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity

    Publication Year: 2006, Page(s): 211 - 221
    Cited by: Papers (160) | Patents (2)

    We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event-based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip, and by the silicon synapses to receive spikes from the outside, is based on the "address-event representation" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses, and present experimental data showing the neurons' response properties and the synapses' characteristics in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.

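The leaky I&F neuron arrayed on the chip has a simple discrete-time software analogue: the membrane potential leaks toward rest, integrates input current, and emits a spike with a reset when it crosses threshold. A minimal sketch with illustrative (not chip-accurate) parameters:

```python
# Discrete-time leaky integrate-and-fire neuron: leak toward rest, integrate
# input, spike and reset on threshold crossing. Parameters are illustrative.

def lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing: emit spike, reset membrane
            spikes.append(t)
            v = v_reset
    return spikes

spikes = lif([1.5] * 200)          # constant suprathreshold drive
print(len(spikes))                 # regular firing at a rate set by tau and the drive
```
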
  • An analog VLSI chip emulating polarization vision of octopus retina

    Publication Year: 2006, Page(s): 222 - 232
    Cited by: Papers (23) | Patents (1)

    Biological systems provide a wealth of information that forms the basis for human-made artificial systems. In this work, the visual system of the octopus is investigated and its polarization sensitivity mimicked. While polarization vision in the actual octopus retina is based mainly on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4, mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8×5 array with two photodiodes per pixel, each typically consuming 10 μW, this circuitry mimics both the functionality of individual octopus retina cells, by computing the state of polarization, and the interconnection of these cells, through a bias-controllable resistive network.

  • Facial expression recognition using kernel canonical correlation analysis (KCCA)

    Publication Year: 2006, Page(s): 233 - 238
    Cited by: Papers (57)

    In this correspondence, we address the facial expression recognition problem using kernel canonical correlation analysis (KCCA). Following the method proposed by Lyons et al. and Zhang et al., we manually locate 34 landmark points in each facial image and then convert these geometric points into a labeled graph (LG) vector using the Gabor wavelet transformation method to represent the facial features. In addition, for each training facial image, the semantic ratings describing the basic expressions are combined into a six-dimensional semantic expression vector. The correlation between the LG vector and the semantic expression vector is learned by KCCA. Using this correlation, we estimate the associated semantic expression vector of a given test image and then perform expression classification according to this estimated vector. Moreover, we also propose an improved KCCA algorithm to tackle the singularity problem of the Gram matrix. Experimental results on the Japanese female facial expression database and Ekman's "Pictures of Facial Affect" database illustrate the effectiveness of the proposed method.

  • Study on modeling of multispectral emissivity and optimization algorithm

    Publication Year: 2006, Page(s): 238 - 242
    Cited by: Papers (2)

    A target's spectral emissivity varies considerably, and obtaining a target's continuous spectral emissivity remains a difficult, largely unsolved problem. In this letter, an activation-function-tunable neural network is established, and a multistep searching method that can be used to train the model is proposed. The proposed method can effectively calculate an object's continuous spectral emissivity from multispectral radiation information. It is a universal method, which can be used to realize on-line emissivity calibration.

  • Nonlinear adaptive control of interconnected systems using neural networks

    Publication Year: 2006, Page(s): 243 - 246
    Cited by: Papers (32)

    In this letter, we solve the problem of decentralized adaptive asymptotic tracking for a class of large-scale systems with significant nonlinearities and uncertainties. Neural networks (NNs) are used as part of the controller to cancel the effect of the unknown nonlinearity. Semiglobal asymptotic stability is obtained, and the tracking error converges to zero.

  • Recurrent neural network as a linear attractor for pattern association

    Publication Year: 2006, Page(s): 246 - 250
    Cited by: Papers (6)

    We propose a linear attractor network, based on the observation that similar patterns form a pipeline in the state space, which can be used for pattern association. To model the pipeline in the state space, we present a learning algorithm using a recurrent neural network. A least-squares estimation approach exploiting the interdependency between neurons defines the dynamics of the network. The region of convergence around the line of attraction is defined based on the statistical characteristics of the input patterns. The performance of the learning algorithm is evaluated in several experiments on benchmark problems, and the new technique is observed to be suitable for multiple-valued pattern association.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
