IEEE Transactions on Neural Networks

Issue 7 • July 2009

  • Table of contents

    Page(s): C1
  • IEEE Transactions on Neural Networks publication information

    Page(s): C2
  • SoftDoubleMaxMinOver: Perceptron-Like Training of Support Vector Machines

    Page(s): 1061 - 1072

    The well-known MinOver algorithm is a slight modification of the perceptron algorithm and provides the maximum-margin classifier without a bias in linearly separable two-class classification problems. DoubleMinOver, an extension of MinOver that includes a bias, is introduced. An O(1/t) convergence is shown, where t is the number of learning steps. The computational effort per step increases only linearly with the number of patterns. In its formulation with kernels, selected training patterns have to be stored. A drawback of MinOver and DoubleMinOver is that this set of patterns does not consist of support vectors only. DoubleMaxMinOver, as an extension of DoubleMinOver, overcomes this drawback by selectively forgetting all nonsupport vectors after a finite number of training steps. It is shown how this iterative procedure, which remains very similar to the perceptron algorithm, can be extended to classification with soft margins and used for training least squares support vector machines (SVMs). On benchmarks, the SoftDoubleMaxMinOver algorithm achieves the same performance as standard SVM software.
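
    A minimal sketch of the basic MinOver update described above (no bias, no kernels, no soft margin, and none of the DoubleMinOver/DoubleMaxMinOver extensions), assuming linearly separable data with labels in {-1, +1}:

        import numpy as np

        def minover(X, y, n_steps=1000):
            # Basic MinOver: at every step, reinforce the pattern that currently
            # has the smallest margin.  X: (n_samples, n_features) array,
            # y: array of labels in {-1, +1}.
            w = np.zeros(X.shape[1])
            for _ in range(n_steps):
                margins = y * (X @ w)        # signed margin of every pattern
                i = np.argmin(margins)       # worst (smallest-margin) pattern
                w += y[i] * X[i]             # perceptron-like Hebbian update
            return w / np.linalg.norm(w)     # only the direction matters

    Only the single worst-margin pattern enters each update, which is why the computational effort per step grows only linearly with the number of patterns.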

  • Recurrent-Neural-Network-Based Boolean Factor Analysis and Its Application to Word Clustering

    Page(s): 1073 - 1086

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov et al., 2007). It is shown that this extended algorithm supports the even more complex model of signals that is assumed for textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for two purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classifying whether a signal contains a given factor. Since it is assumed that every word may contribute to several topics, the proposed method can be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words agree well, despite the fact that identical topics at the Russian and English conferences contain different sets of keywords.
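
    A minimal Hopfield-style sketch of the Hebbian learning and unlearning steps described above, with one neuron per word and binary document vectors; the paper's full Boolean factor analysis procedure and its Bayesian post-processing are not reproduced, and the k-winners-take-all recall step is a simplifying assumption:

        import numpy as np

        def hebbian_learn(docs):
            # docs: (n_docs, n_words) binary matrix; one neuron per word.
            W = docs.T @ docs                  # Hebbian (co-occurrence) weights
            np.fill_diagonal(W, 0)
            return W

        def recall_factor(W, cue, k, n_iter=20):
            # Iterate the network from a cue; keep the k most active neurons.
            x = cue.astype(float)
            for _ in range(n_iter):
                h = W @ x                       # net input to every neuron
                x = np.zeros_like(x)
                x[np.argsort(h)[-k:]] = 1.0     # k-winners-take-all step
            return x

        def unlearn(W, factor, rate=1.0):
            # Hebbian unlearning: remove a found factor from the weights
            # to facilitate the search for further factors.
            W = W - rate * np.outer(factor, factor)
            np.fill_diagonal(W, 0)
            return W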

  • A Hybrid Pareto Mixture for Conditional Asymmetric Fat-Tailed Distributions

    Page(s): 1087 - 1101

    In many cases, we observe some variables X that contain predictive information about a scalar variable of interest Y, with (X, Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X=x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X=x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X; we use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood than competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
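
    A simplified sketch of a Gaussian-body / generalized-Pareto-tail density in the spirit of the hybrid Pareto component above. It only enforces continuity of the density at a user-chosen junction point u and assumes a tail index xi > 0; the paper's exact hybrid Pareto parameterization and the neural network that maps X to the mixture parameters are not reproduced:

        import numpy as np
        from scipy.stats import norm

        def gaussian_gpd_density(y, mu, sigma, u, xi, beta):
            # Gaussian below the junction u, GPD-shaped tail above it,
            # scaled for continuity at u and renormalized (requires xi > 0).
            y = np.asarray(y, dtype=float)
            phi_u = norm.pdf(u, mu, sigma)
            Z = norm.cdf(u, mu, sigma) + beta * phi_u          # normalizer
            body = norm.pdf(y, mu, sigma)
            arg = np.maximum(1.0 + xi * (y - u) / beta, 1e-12)  # avoid nan below u
            tail = phi_u * arg ** (-1.0 / xi - 1.0)
            return np.where(y <= u, body, tail) / Z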

  • Stability and Synchronization of Discrete-Time Markovian Jumping Neural Networks With Mixed Mode-Dependent Time Delays

    Page(s): 1102 - 1116

    In this paper, we introduce a new class of discrete-time neural networks (DNNs) with Markovian jumping parameters as well as mode-dependent mixed time delays (both discrete and distributed time delays). Specifically, the parameters of the DNNs are subject to switching from one mode to another at different times according to a Markov chain, and the mixed time delays consist of both discrete and distributed delays that depend on the Markovian jumping mode. We first deal with the stability analysis problem of the addressed neural networks. A special inequality is developed to account for the mixed time delays in the discrete-time setting, and a novel Lyapunov-Krasovskii functional is put forward to reflect the mode-dependent time delays. Sufficient conditions are established in terms of linear matrix inequalities (LMIs) that guarantee stochastic stability. We then turn to the synchronization problem for an array of identical coupled Markovian jumping neural networks with mixed mode-dependent time delays. By utilizing Lyapunov stability theory and the Kronecker product, it is shown that the addressed synchronization problem is solvable if several LMIs are feasible. Hence, different from the commonly used matrix norm theories (such as the M-matrix method), a unified LMI approach is developed to solve both the stability analysis and synchronization problems of the class of neural networks under investigation, where the LMIs can be easily solved using the MATLAB LMI toolbox. Two numerical examples are presented to illustrate the usefulness and effectiveness of the main results.
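
    A minimal illustration of the LMI-feasibility style of analysis mentioned above, here for the plain discrete-time Lyapunov inequality of a hypothetical linear system x(k+1) = A x(k), using the cvxpy package; the paper's mode-dependent, delay-dependent LMIs are considerably more involved and are not reproduced:

        import numpy as np
        import cvxpy as cp

        A = np.array([[0.5, 0.1],
                      [0.0, 0.8]])              # hypothetical stable system
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),                    # P > 0
                       A.T @ P @ A - P << -eps * np.eye(n)]     # A'PA - P < 0
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        print("Lyapunov LMI feasible:", prob.status == cp.OPTIMAL)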

  • A Granular Reflex Fuzzy Min–Max Neural Network for Classification

    Page(s): 1117 - 1134

    Granular data classification and clustering are emerging and important issues in the field of pattern recognition. Conventionally, computing is thought of as the manipulation of numbers or symbols. However, human recognition capabilities are based on the ability to process nonnumeric clumps of information (information granules) in addition to individual numeric values. This paper proposes a granular neural network (GNN) called the granular reflex fuzzy min-max neural network (GrRFMN), which can learn and classify granular data. GrRFMN uses hyperbox fuzzy sets to represent granular data. Its architecture includes a reflex mechanism, inspired by the human brain, to handle class overlaps. The network can be trained online using granular or point data. The neuron activation functions in GrRFMN are designed to handle data of different granularity (size). This paper also addresses the issue of granulating the training data and learning from it. It is observed that such preprocessing of the data can improve the performance of a classifier. Experimental results on real data sets show that the proposed GrRFMN can classify granules of different granularity more accurately. Results are compared with the general fuzzy min-max neural network (GFMN) proposed by Gabrys and Bargiela and with some classical methods.
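
    A small sketch of the hyperbox fuzzy-set idea above, using a classical (Simpson-style) fuzzy min-max membership function that is full inside the box and decays outside it; the granular inputs, online training, and the reflex (compensatory) section of GrRFMN are not reproduced:

        import numpy as np

        def hyperbox_membership(x, v, w, gamma=4.0):
            # Membership of point x in the hyperbox with min point v and
            # max point w; gamma controls how quickly membership decays
            # outside the box (all inputs assumed scaled to [0, 1]).
            x, v, w = map(np.asarray, (x, v, w))
            above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
            below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
            return float(np.mean(above + below) / 2.0)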

  • Learning of Spatio–Temporal Codes in a Coupled Oscillator System

    Page(s): 1135 - 1147

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase-oscillator systems (called the teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states; that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system, which adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
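
    A minimal sketch of frequency adaptation between a "teaching" and a "learning" system of coupled phase oscillators. The adaptation rule below (the learner's natural frequencies drift in proportion to the sine of the phase lag behind the corresponding teacher oscillator) is a generic illustrative choice, not necessarily the exact rule of the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        N, K, eps, dt, steps = 5, 1.0, 0.05, 0.01, 20000
        omega_teach = rng.uniform(0.5, 1.5, N)   # teacher's natural frequencies
        omega_learn = rng.uniform(0.5, 1.5, N)   # learner's, to be adapted
        th_t = rng.uniform(0, 2 * np.pi, N)
        th_l = rng.uniform(0, 2 * np.pi, N)

        def kuramoto(theta, omega):
            # All-to-all Kuramoto coupling within one oscillator system.
            return omega + (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

        for _ in range(steps):
            th_t = th_t + dt * kuramoto(th_t, omega_teach)
            # learner: own dynamics plus weak forcing by the teacher's phases
            th_l = th_l + dt * (kuramoto(th_l, omega_learn) + eps * np.sin(th_t - th_l))
            # frequency adaptation toward whatever removes the phase lag
            omega_learn = omega_learn + dt * eps * np.sin(th_t - th_l)

        print(np.round(omega_teach, 3))
        print(np.round(omega_learn, 3))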

  • Adaptive Neural Control for a Class of Nonlinear Systems With Uncertain Hysteresis Inputs and Time-Varying State Delays

    Page(s): 1148 - 1164

    In this paper, adaptive variable structure neural control is investigated for a class of nonlinear systems subject to time-varying state delays and uncertain hysteresis inputs. The unknown time-varying delay uncertainties are compensated for using appropriate Lyapunov-Krasovskii functionals in the design, and the effect of the uncertain hysteresis, represented by the Prandtl-Ishlinskii (PI) model, is also mitigated by the proposed control. By utilizing an integral-type Lyapunov function, the closed-loop control system is proved to be semiglobally uniformly ultimately bounded (SGUUB). Extensive simulation results demonstrate the effectiveness of the proposed approach.
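
    A short sketch of the Prandtl-Ishlinskii (PI) hysteresis model mentioned above, built from discrete-time play (backlash) operators; the thresholds and weights below are hypothetical, and the adaptive variable structure neural controller that compensates for this hysteresis is not reproduced:

        import numpy as np

        def play_operator(v, r, w0=0.0):
            # Discrete-time play (backlash) operator with threshold r.
            v = np.asarray(v, dtype=float)
            w = np.empty_like(v)
            prev = w0
            for k, vk in enumerate(v):
                prev = max(vk - r, min(vk + r, prev))
                w[k] = prev
            return w

        def prandtl_ishlinskii(v, thresholds, weights):
            # PI hysteresis: weighted superposition of play operators.
            return sum(p * play_operator(v, r) for r, p in zip(thresholds, weights))

        t = np.linspace(0, 4 * np.pi, 400)
        u = np.sin(t)                                   # hypothetical input signal
        y = prandtl_ishlinskii(u, thresholds=[0.0, 0.1, 0.2, 0.3],
                               weights=[1.0, 0.5, 0.3, 0.2])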

  • Lag Synchronization of Unknown Chaotic Delayed Yang–Yang-Type Fuzzy Neural Networks With Noise Perturbation Based on Adaptive Control and Parameter Identification

    Page(s): 1165 - 1180

    This paper considers the lag synchronization (LS) issue of unknown coupled chaotic delayed Yang-Yang-type fuzzy neural networks (YYFCNN) with noise perturbation. Separate research work has been published on the stability of fuzzy neural networks and on the LS of unknown coupled chaotic neural networks, as well as its application in secure communication, but no studies have integrated the two. Motivated by the achievements of both fields, we explore the benefits of integrating fuzzy logic theories into the study of LS problems and apply the findings to secure communication. Based on adaptive feedback control techniques and suitable parameter identification, several sufficient conditions are developed to guarantee the LS of coupled chaotic delayed YYFCNN with or without noise perturbation. The problem studied in this paper is more general in many respects: various problems studied extensively in the literature, such as complete synchronization (CS), the effect of fuzzy logic, and noise perturbation, can be treated as special cases of the findings of this paper. An illustrative example and its simulation results show the feasibility and effectiveness of the proposed adaptive scheme. This research also demonstrates the effectiveness of applying the proposed adaptive feedback scheme to secure communication by comparing chaotic masking with fuzziness against some previous studies; the chaotic signal with fuzziness is more complex, which makes unmasking more difficult.

  • The Global Kernel k-Means Algorithm for Clustering in Feature Space

    Page(s): 1181 - 1194

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions while avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost without significantly affecting solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets, and the proposed approach compares favorably to kernel k-means with random restarts.
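
    A compact sketch of kernel k-means and of the incremental global search described above, operating on a precomputed kernel matrix K (for example, an RBF kernel); the two cost-reducing modifications and the weighted/graph-partitioning extensions are not reproduced:

        import numpy as np

        def kernel_kmeans(K, labels, n_clusters, n_iter=100):
            # Standard kernel k-means from a given initial assignment.
            n = K.shape[0]
            diag = np.diag(K)
            err = np.inf
            for _ in range(n_iter):
                dist = np.full((n, n_clusters), np.inf)
                for c in range(n_clusters):
                    idx = np.where(labels == c)[0]
                    if len(idx):
                        # squared distance to the cluster mean in feature space
                        dist[:, c] = (diag - 2.0 * K[:, idx].mean(axis=1)
                                      + K[np.ix_(idx, idx)].mean())
                new_labels = dist.argmin(axis=1)
                err = dist[np.arange(n), new_labels].sum()
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels, err

        def global_kernel_kmeans(K, n_clusters):
            # Add one cluster at a time: try every point as the seed of the
            # new cluster, run kernel k-means, and keep the best outcome.
            n = K.shape[0]
            labels = np.zeros(n, dtype=int)          # the 1-cluster solution
            for k in range(2, n_clusters + 1):
                best = None
                for i in range(n):
                    init = labels.copy()
                    init[i] = k - 1                  # seed the k-th cluster at point i
                    cand, e = kernel_kmeans(K, init, k)
                    if best is None or e < best[1]:
                        best = (cand, e)
                labels = best[0]
            return labels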

  • RKHS Bayes Discriminant: A Subspace Constrained Nonlinear Feature Projection for Signal Detection

    Page(s): 1195 - 1203

    Given knowledge of the class probability densities, a priori probabilities, and relative risk levels, the Bayes classifier provides the optimal minimum-risk decision rule. Specifically, focusing on the two-class (detection) scenario, under certain symmetry assumptions, matched filters provide optimal results for the detection problem. Noting that the Bayes classifier is in fact a nonlinear projection of the feature vector onto a single-dimensional statistic, in this paper we develop a smooth nonlinear projection filter constrained, as is the Bayes classifier, to the estimated span of the class conditional distributions. The nonlinear projection filter is designed in a reproducing kernel Hilbert space, leading to an analytical solution for both the filter and the optimal threshold. The proposed approach is tested on typical detection problems, such as neural spike detection and automatic target detection in synthetic aperture radar (SAR) imagery. Results are compared with linear and kernel discriminant analysis, as well as with classification algorithms such as support vector machines, AdaBoost, and LogitBoost.

  • Adaptive Neural Control for Strict-Feedback Nonlinear Systems Without Backstepping

    Page(s): 1204 - 1209

    In this brief, a new adaptive neurocontrol algorithm for a single-input-single-output (SISO) strict-feedback nonlinear system is proposed. Most previous adaptive neural control algorithms for strict-feedback nonlinear systems were based on the backstepping scheme, which makes the control law and stability analysis very complicated. The main contribution of the proposed method is to demonstrate that state-feedback control of a strict-feedback system can be viewed as output-feedback control of the system in normal form. As a result, the proposed control algorithm is considerably simpler than previous ones based on backstepping. By relying on the universal approximation property of neural networks (NNs), only one NN is employed to approximate the lumped uncertain system nonlinearity. The Lyapunov stability of the NN weights and the filtered tracking error is guaranteed in the semiglobal sense.

  • Adaptive Neural Control for a Class of Strict-Feedback Nonlinear Systems With State Time Delays

    Page(s): 1209 - 1215

    This brief proposes a simple control approach for a class of uncertain nonlinear systems with unknown time delays in strict-feedback form. Specifically, the dynamic surface control technique, which can solve the "explosion of complexity" problem in the backstepping design procedure, is extended to nonlinear systems with unknown time delays. The unknown time-delay effects are removed by using appropriate Lyapunov-Krasovskii functionals, and the uncertain nonlinear terms generated by this procedure, as well as model uncertainties, are approximated by the function approximation technique using neural networks. In addition, the bounds of external disturbances are estimated by the adaptive technique. From the Lyapunov stability theorem, we prove that all signals in the closed-loop system are semiglobally uniformly bounded. Finally, we present simulation results to validate the effectiveness of the proposed approach.

  • A Novel Geometric Approach to Binary Classification Based on Scaled Convex Hulls

    Page(s): 1215 - 1220

    Geometric methods are very intuitive and provide a theoretical foundation for many optimization problems in the fields of pattern recognition and machine learning. In this brief, the notion of the scaled convex hull (SCH) is defined and a set of theoretical results is developed to support it. These results allow existing nearest-point algorithms to be applied directly to solve both separable and nonseparable classification problems successfully and efficiently. The popular S-K algorithm is then presented for solving nonseparable problems within the SCH framework. Theoretical analysis and experiments show that the proposed method may achieve better performance than state-of-the-art methods in terms of the number of kernel evaluations and the execution time.

  • IEEE Computational Intelligence Society Information

    Page(s): C3
  • Blank page [back cover]

    Page(s): C4

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
