
IEEE Transactions on Neural Networks

Issue 6 • November 2007


  • Table of contents

    Publication Year: 2007, Page(s): C1 - C4
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2007, Page(s): C2
  • Global Convergence of GHA Learning Algorithm With Nonzero-Approaching Adaptive Learning Rates

    Publication Year: 2007, Page(s): 1557 - 1571
    Cited by: Papers (8)

    The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in the convergence of the algorithm in applications. Traditionally, the learning rates of GHA are required to converge to zero so that its convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring the learning rates to approach zero is impractical in applications because of computational roundoff limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the algorithm's evolution considerably but also guarantees global convergence of the GHA. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.

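As a point of reference for the abstract above, Sanger's GHA update with a constant (hence nonzero) learning rate can be sketched in a few lines of NumPy. This is a minimal illustration of the base algorithm only, not the paper's adaptive-rate scheme; the step size, toy data, and single-component setting are arbitrary choices.

```python
import numpy as np

def gha_step(W, x, eta):
    """One generalized Hebbian algorithm (Sanger) update.

    W   : (k, n) weight matrix; rows approximate the top-k principal directions
    x   : (n,) input sample (assumed zero-mean)
    eta : learning rate, kept at a positive constant here in the spirit of
          the paper's nonzero-approaching rates (illustrative choice)
    """
    y = W @ x                                   # (k,) component outputs
    # Sanger's rule: Hebbian term minus lower-triangular decorrelation term
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# toy demo: recover the dominant direction of anisotropic 2-D data
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
W = rng.normal(scale=0.1, size=(1, 2))
for x in data:
    W = gha_step(W, x, eta=0.01)
w = W[0] / np.linalg.norm(W[0])                 # should align with (±1, 0)
```

With a constant rate the iterate fluctuates around the principal direction rather than freezing, which is the regime a DDT-style analysis addresses.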
  • Semiparametric Regression Using Student t Processes

    Publication Year: 2007, Page(s): 1572 - 1588
    Cited by: Papers (1)

    In this paper, we propose a latent factor regression model, in which priors are assigned to both the latent regression vector and the error term by using reproducing kernels. The resulting regression function follows a stochastic process known as a Student t process. The model is attractive because its implementation is based on a tractable posterior predictive distribution and a simple expectation-maximization (EM) estimation algorithm. In addition, treating transductive inference as a missing data problem, we devise the EM algorithm to handle both parameter estimation and response prediction in a single paradigm. The model is also elaborated for multivariate-response regression problems, for which we present a generalization of multivariate models and some of its properties. Experimental results show our approaches to be effective.

  • On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization

    Publication Year: 2007, Page(s): 1589 - 1596
    Cited by: Papers (62) | Patents (3)

    Nonnegative matrix factorization (NMF) is useful for finding basis information in nonnegative data. Currently, multiplicative updates are a simple and popular way to compute the factorization. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, no proof has shown that the multiplicative updates converge to a stationary point of the NMF optimization problem. Stationarity is important because it is a necessary condition for a local minimum. This paper discusses the difficulty of proving the convergence. We propose slight modifications of the existing updates and prove their convergence. The techniques developed in this paper may be applied to prove convergence for other bound-constrained optimization problems.

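The multiplicative updates in question are the Lee-Seung rules for the Euclidean cost. The sketch below adds a small epsilon to the denominators to keep the updates well defined; this is in the spirit of, though not necessarily identical to, the modifications whose convergence the paper analyzes.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for Euclidean NMF: V ~= W @ H.

    The eps in the denominators is an illustrative safeguard against
    division by zero, not necessarily the paper's exact modification.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, size=(m, r))
    H = rng.uniform(0.1, 1.0, size=(r, n))
    for _ in range(n_iter):
        # elementwise multiplicative steps preserve nonnegativity
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy demo on a random nonnegative matrix
V = np.abs(np.random.default_rng(1).normal(size=(20, 15)))
W, H = nmf_multiplicative(V, r=5)
err0 = np.linalg.norm(V - W @ H)        # reconstruction error after fitting
```

Each update multiplies the current factor by a nonnegative ratio, so nonnegativity is maintained without any projection step.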
  • An Adaptable Connectionist Text-Retrieval System With Relevance Feedback

    Publication Year: 2007, Page(s): 1597 - 1613
    Cited by: Papers (1)

    This paper introduces a new connectionist network for certain domain-specific text-retrieval and search applications with expert end users. A new model reference adaptive system is proposed that involves three learning phases. Initial model-reference learning is first performed based on an ensemble of input-output pairs from an initial reference model. Model-reference following is needed in dynamic environments where documents are added, deleted, or updated. Relevance feedback learning from multiple expert users then optimally maps the original query using either a score-based or a click-through selection process. The learning can be implemented, in regression or classification modes, using a three-layer network. The first layer is an adaptable layer that maps from the query domain to the document space. The second and third layers perform document-to-term mapping, search/retrieval, and scoring tasks. The learning algorithms are thoroughly tested on a domain-specific text database that encompasses a wide range of Hewlett-Packard (HP) products, and for a large number of the most commonly used single- and multiterm queries.

  • Neural Network Approach to Background Modeling for Video Object Segmentation

    Publication Year: 2007, Page(s): 1614 - 1627
    Cited by: Papers (36) | Patents (11)

    This paper presents a novel background modeling and subtraction approach for video object segmentation. A neural network (NN) architecture is proposed to form an unsupervised Bayesian classifier for this application domain. The constructed classifier efficiently handles segmentation in natural-scene sequences with complex background motion and changes in illumination. The weights of the proposed NN serve as a model of the background and are temporally updated to reflect the observed statistics of the background. The segmentation performance of the proposed NN is qualitatively and quantitatively examined and compared to two extant probabilistic object segmentation algorithms, based on a previously published test pool containing diverse surveillance-related sequences. The proposed algorithm is parallelized on a subpixel level and designed to enable efficient hardware implementation.

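For contrast with the NN-based Bayesian classifier described above, a classical per-pixel running-Gaussian baseline (not the paper's method) illustrates what background modeling and subtraction involve; the update rate and threshold below are arbitrary assumptions.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Per-pixel running-Gaussian background model (a classical baseline,
    not the paper's NN classifier).

    mean, var : per-pixel background mean and variance
    frame     : current grayscale frame
    alpha     : temporal update rate (illustrative)
    k         : foreground threshold in standard deviations (illustrative)
    Returns (mean, var, foreground_mask).
    """
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    bg = ~fg
    diff = frame - mean
    # update the statistics only where the pixel looks like background
    mean = np.where(bg, mean + alpha * diff, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    return mean, var, fg

# toy demo: static noisy background with one bright row sweeping through
rng = np.random.default_rng(0)
mean = np.full((16, 16), 50.0)
var = np.full((16, 16), 4.0)
for t in range(50):
    frame = 50.0 + rng.normal(scale=1.0, size=(16, 16))
    frame[t % 16, :] += 40.0            # moving "object" row
    mean, var, fg = update_background(mean, var, frame)
```

Freezing the statistics under detected foreground keeps the object from being absorbed into the background model, a concern any background-maintenance scheme must address.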
  • The Bayesian ARTMAP

    Publication Year: 2007, Page(s): 1628 - 1644
    Cited by: Papers (13)

    In this paper, we modify the fuzzy ARTMAP (FA) neural network (NN) using a Bayesian framework in order to improve its classification accuracy while simultaneously reducing its category proliferation. The proposed algorithm, called Bayesian ARTMAP (BA), preserves the advantages of FA and also enhances its performance by the following: (1) representing a category using a multidimensional Gaussian distribution, (2) allowing a category to grow or shrink, (3) limiting the category hypervolume, (4) using Bayes' decision theory for learning and inference, and (5) employing the probabilistic association between every category and a class in order to predict the class. In addition, the BA estimates the class posterior probability and thereby enables the introduction of loss and classification according to the minimum expected loss. Based on these characteristics, and using synthetic and 20 real-world databases, we show that the BA outperforms the FA, whether trained for one epoch or until completion, with respect to classification accuracy, sensitivity to statistical overlap, learning curves, expected loss, and category proliferation.

  • The Hierarchical Fast Learning Artificial Neural Network (HieFLANN)—An Autonomous Platform for Hierarchical Neural Network Construction

    Publication Year: 2007, Page(s): 1645 - 1657
    Cited by: Papers (11)

    The hierarchical fast learning artificial neural network (HieFLANN) is a clustering NN that can be initialized using statistical properties of the data set. This makes it possible to construct the entire network autonomously, with no manual intervention, distinguishing it from many existing networks that, though hierarchically plausible, still require manual initialization. The system of hierarchical networks begins by reducing the high-dimensional feature space into smaller, more manageable ones. This process uses the K-iterations fast learning artificial neural network (KFLANN) to systematically cluster a square matrix of the Mahalanobis distances (MDs) between data set features into homogeneous feature subspaces (HFSs). The KFLANN is used for its heuristic network initialization capabilities on a given data set and requires no supervision. Through the recurring use of the KFLANN and a second stage involving canonical correlation analysis (CCA), the HieFLANN is developed. Experimental results on several standard benchmark data sets indicate that the autonomous determination of the HFSs provides a viable avenue for feasible partitioning of feature subspaces. When coupled with the network transformation process, the HieFLANN yields accuracies comparable with available methods. This provides a new platform by which data sets with high-dimensional feature spaces can be systematically resolved and trained autonomously, alleviating the effects of the curse of dimensionality.

  • Hierarchically Clustered Adaptive Quantization CMAC and Its Learning Convergence

    Publication Year: 2007, Page(s): 1658 - 1682
    Cited by: Papers (11)

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution over the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized, so efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly, but for existing nonuniformly quantized CMAC systems there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by a proof of its learning convergence. The performance of the proposed network is then benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated car maneuver control and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.

  • Density-Driven Generalized Regression Neural Networks (DD-GRNN) for Function Approximation

    Publication Year: 2007, Page(s): 1683 - 1696
    Cited by: Papers (8)

    This paper proposes a new nonparametric regression method based on the combination of generalized regression neural networks (GRNNs), density-dependent multiple kernel bandwidths, and regularization. The presented model is generic and substitutes the very large number of bandwidths with a much smaller number of trainable weights that control the regression model. It depends on sets of extracted data density features which reflect the density properties and distribution irregularities of the training data sets. We provide an efficient initialization scheme and a second-order algorithm to train the model, as well as an overfitting control mechanism based on Bayesian regularization. Numerical results show that the proposed network significantly reduces the computational demands of maintaining individual bandwidths while providing competitive function approximation accuracy in relation to existing methods.

  • Solving Generally Constrained Generalized Linear Variational Inequalities Using the General Projection Neural Networks

    Publication Year: 2007, Page(s): 1697 - 1708
    Cited by: Papers (9)

    The generalized linear variational inequality (GLVI) is an extension of the canonical linear variational inequality. In recent years, a recurrent neural network (NN) called the general projection neural network (GPNN) was developed for solving GLVIs with simple bound (often box-type or sphere-type) constraints. The aim of this paper is twofold. First, some further stability results for the GPNN are presented. Second, the GPNN is extended to solve GLVIs with general linear equality and inequality constraints, and a new design methodology for the GPNN is proposed. Furthermore, in view of the different types of constraints, approaches for reducing the number of neurons of the GPNN are discussed, resulting in two specific GPNNs. Some distinct properties of the resulting GPNNs are also explored based on their particular structures. Numerical simulation results are provided to validate the results.

  • Locally Weighted Online Approximation-Based Control for Nonaffine Systems

    Publication Year: 2007, Page(s): 1709 - 1724
    Cited by: Papers (5)

    This paper is concerned with tracking control problems for nonlinear systems that are not affine in the control signal and that contain unknown nonlinearities in the system dynamic equations. The paper develops a piecewise linear approximation to the unknown functions during system operation. New control and parameter adaptation algorithms are designed and analyzed using Lyapunov-like methods. The objectives are to achieve semiglobal stability of the state, accurate tracking of bounded reference signals contained within a known domain, and at least boundedness of the function approximator parameter estimates. Numerical simulations are included to illustrate the effectiveness of the learning algorithm.

  • Fixed-Final-Time-Constrained Optimal Control of Nonlinear Systems Using Neural Network HJB Approach

    Publication Year: 2007, Page(s): 1725 - 1737
    Cited by: Papers (23)

    In this paper, fixed-final-time-constrained optimal control laws using neural networks (NNs) to solve Hamilton-Jacobi-Bellman (HJB) equations for constrained nonlinear systems that are affine in the input are proposed. An NN is used to approximate the time-varying cost function using the method of least squares on a predefined region. The result is a nearly optimal, constrained NN feedback controller that has time-varying coefficients found by a priori offline tuning. Convergence results are shown. The results of this paper are demonstrated in two examples, including a nonholonomic system.

  • Reduced Pattern Training Based on Task Decomposition Using Pattern Distributor

    Publication Year: 2007, Page(s): 1738 - 1749
    Cited by: Papers (7)

    Task decomposition with a pattern distributor (PD) is a new task decomposition method for multilayered feedforward neural networks (NNs). A pattern distributor network that implements this method is proposed, along with a theoretical model to analyze its performance. A method named reduced pattern training (RPT) is also introduced, aiming to improve the performance of pattern distribution. Our analysis and experimental results show that RPT significantly improves the performance of the pattern distributor network, and that the distributor module's classification accuracy dominates the whole network's performance. Two combination methods, namely, crosstalk-based combination and genetic-algorithm (GA)-based combination, are presented to find a suitable grouping for the distributor module. Experimental results show that this new method can reduce training time and improve network generalization accuracy when compared to a conventional method such as constructive backpropagation or a task decomposition method such as output parallelism (OP).

  • Block-Based Neural Networks for Personalized ECG Signal Classification

    Publication Year: 2007, Page(s): 1750 - 1761
    Cited by: Papers (39)

    This paper presents evolvable block-based neural networks (BbNNs) for personalized electrocardiogram (ECG) heartbeat pattern classification. A BbNN consists of a 2-D array of modular component NNs with flexible structures and internal configurations that can be implemented using reconfigurable digital hardware such as field-programmable gate arrays (FPGAs). Signal flow between the blocks determines the internal configuration of a block as well as the overall structure of the BbNN. The network structure and weights are optimized using local gradient-based search and evolutionary operators, with the operator rates changing adaptively according to their effectiveness in the previous evolution period. Such an adaptive operator rate update scheme ensures higher fitness on average compared to predetermined fixed operator rates. The Hermite transform coefficients and the time interval between two neighboring R-peaks of the ECG signal are used as inputs to the BbNN. A BbNN optimized with the proposed evolutionary algorithm (EA) makes a personalized heartbeat pattern classifier that copes with changing operating environments caused by individual differences and the time-varying characteristics of ECG signals. Simulation results using the Massachusetts Institute of Technology/Beth Israel Hospital (MIT-BIH) arrhythmia database demonstrate high average detection accuracies for ventricular ectopic beats (98.1%) and supraventricular ectopic beats (96.6%) in heartbeat monitoring, a significant improvement over previously reported ECG classification results.

  • Compact Modeling of Data Using Independent Variable Group Analysis

    Publication Year: 2007, Page(s): 1762 - 1776
    Cited by: Papers (2)

    In this paper, we introduce a modeling approach called independent variable group analysis (IVGA) which can be used for finding an efficient structural representation for a given data set. The basic idea is to determine such a grouping for the variables of the data set that mutually dependent variables are grouped together whereas mutually independent or weakly dependent variables end up in separate groups. Computation of an IVGA model requires a combinatorial algorithm for grouping of the variables and a modeling algorithm for the groups. In order to be able to compare different groupings, a cost function which reflects the quality of a grouping is also required. Such a cost function can be derived, for example, using the variational Bayesian approach, which is employed in our study. This approach is also shown to be approximately equivalent to minimizing the mutual information between the groups. The modeling task is computationally demanding. We describe an efficient heuristic grouping algorithm for the variables and derive a computationally light nonlinear mixture model for modeling of the dependencies within the groups. Finally, we carry out a set of experiments which indicate that IVGA may turn out to be beneficial in many different applications.

  • Dynamics of Generalized PCA and MCA Learning Algorithms

    Publication Year: 2007, Page(s): 1777 - 1784
    Cited by: Papers (5)

    Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online, and it is of interest to develop generalized learning algorithms for such networks. Some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero; this is unrealistic in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behaviors of the proposed algorithms. The DDT method is more suitable for the convergence analysis since it does not impose the constraints required by the DCT method. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.

  • A Biologically Inspired Spiking Neural Network for Sound Source Lateralization

    Publication Year: 2007, Page(s): 1785 - 1799
    Cited by: Papers (4)

    In this paper, a binaural sound source lateralization spiking neural network (NN) is presented that is inspired by recent neurophysiological studies on the role of certain nuclei in the superior olivary complex (SOC) and the inferior colliculus (IC). The binaural sound source lateralization neural network (BiSoLaNN) is a spiking NN based on neural mechanisms, utilizing complex neural models, that attempts to simulate certain nuclei of the auditory system in detail. The BiSoLaNN utilizes both excitatory and inhibitory, ipsilateral and contralateral influences, arrayed in a single delay line originating on the contralateral side, to achieve sharp azimuthal localization. It is shown that the proposed model can be used both for understanding the mechanisms of NNs of the auditory system and for sound source lateralization tasks in technical applications, e.g., with the Darmstadt robotic head (DRH).

  • Quarterly Time-Series Forecasting With Neural Networks

    Publication Year: 2007, Page(s): 1800 - 1814
    Cited by: Papers (30)

    Forecasting of time series that have seasonal and other variations remains an important problem for forecasters. This paper presents a neural network (NN) approach to forecasting quarterly time series. With a large data set of 756 quarterly time series from the M3 forecasting competition, we conduct a comprehensive investigation of the effectiveness of several data preprocessing and modeling approaches. We consider two data preprocessing methods and 48 NN models with different possible combinations of lagged observations, seasonal dummy variables, trigonometric variables, and time index as inputs to the NN. Both parametric and nonparametric statistical analyses are performed to identify the best models under different circumstances and categorize similar models. Results indicate that simpler models, in general, outperform more complex models. In addition, data preprocessing especially with deseasonalization and detrending is very helpful in improving NN performance. Practical guidelines are also provided.

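A minimal sketch of the kind of input encoding the study compares: lagged observations plus quarterly dummy variables assembled into a design matrix, with ordinary least squares standing in for the NN. The toy series, lag count, and dummy coding here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def make_design(y, n_lags=4):
    """Build a design matrix of lagged observations plus three quarterly
    dummy variables and an intercept (one quarter serves as the baseline)."""
    T = len(y)
    rows, targets = [], []
    for t in range(n_lags, T):
        lags = y[t - n_lags:t][::-1]                  # y_{t-1} ... y_{t-4}
        dummies = [1.0 if t % 4 == q else 0.0 for q in (1, 2, 3)]
        rows.append(np.concatenate([lags, dummies, [1.0]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# toy quarterly series: linear trend plus a fixed seasonal pattern
t = np.arange(80)
y = 0.5 * t + np.tile([10.0, -5.0, 3.0, -8.0], 20)
X, z = make_design(y)
beta, *_ = np.linalg.lstsq(X, z, rcond=None)          # linear stand-in for the NN
pred = X @ beta
rmse = np.sqrt(np.mean((pred - z) ** 2))
```

For this deterministic series, y_t = y_{t-4} + 2 holds exactly, so a linear model over these inputs fits it perfectly; real series would of course leave residual error.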
  • Synchrony in Silicon: The Gamma Rhythm

    Publication Year: 2007, Page(s): 1815 - 1825
    Cited by: Papers (22)

    In this paper, we present a network of silicon interneurons that synchronize in the gamma frequency range (20-80 Hz). The gamma rhythm strongly influences neuronal spike timing within many brain regions, potentially playing a crucial role in computation. Yet it has largely been ignored in neuromorphic systems, which use mixed analog and digital circuits to model neurobiology in silicon. Our neurons synchronize by using shunting (conductance-based) inhibition with a synaptic rise time. The synaptic rise time promotes synchrony by delaying the effect of inhibition, providing an opportune period for interneurons to spike together. Shunting inhibition, through its voltage dependence, inhibits interneurons that spike out of phase more strongly (delaying their spikes further), pushing them into phase (in the next cycle). We characterize the interneuron, which consists of soma (cell body) and synapse circuits, fabricated in a 0.25-μm complementary metal-oxide-semiconductor (CMOS) process. Further, we show that synchronized interneurons (a population of 256) spike with a period that is proportional to the synaptic rise time. We use these interneurons to entrain model excitatory principal neurons and to implement a form of object binding.

  • A Fast Tracking Algorithm for Generalized LARS/LASSO

    Publication Year: 2007, Page(s): 1826 - 1830
    Cited by: Papers (2) | Patents (1)

    This letter gives an efficient algorithm for tracking the solution curve of sparse logistic regression with respect to the regularization parameter. The algorithm is based on approximating the logistic regression loss by a piecewise quadratic function, using Rosset and Zhu's path tracking algorithm on the approximate problem, and then applying a correction to get to the true path. Application of the algorithm to text classification and sparse kernel logistic regression shows that the algorithm is efficient.

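For orientation, the underlying L1-regularized problem can be solved at a single regularization value by plain coordinate descent on the squared loss, as sketched below. The letter's contribution is different: it tracks the entire solution path over the regularization parameter for the logistic loss via a piecewise-quadratic approximation. The data and regularization value here are arbitrary.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the lasso:
    min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
    A plain single-lambda solver, not the letter's path-tracking algorithm."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            # partial residual: remove coordinate j's current contribution
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # soft-thresholding update for coordinate j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# toy sparse-recovery demo
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -3.0, 1.5]
y = X @ w_true + 0.01 * rng.normal(size=100)
w_hat = lasso_cd(X, y, lam=5.0)
```

Rerunning such a solver from scratch for every regularization value is what path-tracking algorithms of the LARS/LASSO family avoid by exploiting the piecewise structure of the solution curve.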
  • NN-Based Adaptive Tracking Control of Uncertain Nonlinear Systems Disturbed by Unknown Covariance Noise

    Publication Year: 2007, Page(s): 1830 - 1835
    Cited by: Papers (16)

    A class of uncertain nonlinear systems that are additionally driven by unknown covariance noise is considered. Based on the backstepping technique, adaptive neural control schemes are developed to solve the output tracking control problem for such systems. As proven by stability analysis, the proposed controller guarantees that all the error variables are bounded with the desired probability in a compact set, while the tracking error is mean-square semiglobally uniformly ultimately bounded (M-SGUUB). The tracking performance and effectiveness of the proposed design are evaluated by simulation results.

  • Global μ-Stability of Delayed Neural Networks With Unbounded Time-Varying Delays

    Publication Year: 2007, Page(s): 1836 - 1840
    Cited by: Papers (12)

    In this letter, dynamical systems with unbounded time-varying delays are investigated. We address the following question: to what extent can the time-varying delays grow while the system remains stable? Moreover, a new concept of stability, global μ-stability, is proposed. Under mild conditions, we prove that the dynamical systems with unbounded time-varying delays are globally μ-stable.

  • Adaptive Synchronization Between Two Different Chaotic Neural Networks With Time Delay

    Publication Year: 2007, Page(s): 1841 - 1845
    Cited by: Papers (49)

    This letter presents an adaptive synchronization scheme between two different kinds of delayed chaotic neural networks (NNs) with partly unknown parameters. An adaptive controller is designed to guarantee the global asymptotic synchronization of state trajectories for the two different chaotic NNs with time delay. An illustrative example is given to demonstrate the effectiveness of the presented method.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing work that discloses significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.


This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
