IEEE Transactions on Neural Networks

Issue 6 • November 2005

Displaying results 1-25 of 44
  • Table of contents

    Publication Year: 2005 , Page(s): c1 - c4
  • IEEE Transactions on Neural Networks publication information

    Publication Year: 2005 , Page(s): c2
  • A novel neural network for variational inequalities with linear and nonlinear constraints

    Publication Year: 2005 , Page(s): 1305 - 1317
    Cited by:  Papers (26)

    Variational inequalities provide a unified approach to many important optimization and equilibrium problems. Based on necessary and sufficient conditions for the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. The proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to converge in finite time under some mild conditions, by using a new energy function. Compared with existing neural networks, the new model can be applied to some nonmonotone problems, has no adjustable parameter, and has lower complexity, so its structure is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
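
    The paper's model is not reproduced in this listing, but the general flavor of neural networks for variational inequalities can be sketched with a classical projection dynamics: for VI(F, Omega), i.e., find x* in Omega with (x - x*)^T F(x*) >= 0 for all x in Omega, integrate dx/dt = lam * (P_Omega(x - F(x)) - x), where P_Omega projects onto the constraint set. A minimal Python sketch, assuming box constraints and an illustrative affine mapping F (neither taken from the paper):

        import numpy as np

        def project_box(x, lo, hi):
            # Projection onto the box constraint set Omega = [lo, hi]^n.
            return np.clip(x, lo, hi)

        def projection_network(F, x0, lo, hi, lam=1.0, dt=0.01, steps=5000):
            # Euler integration of dx/dt = lam*(P_Omega(x - F(x)) - x); an
            # equilibrium x = P_Omega(x - F(x)) solves VI(F, Omega).
            x = x0.astype(float).copy()
            for _ in range(steps):
                x += dt * lam * (project_box(x - F(x), lo, hi) - x)
            return x

        # Asymmetric affine mapping F(x) = Mx + q (illustrative values).
        M = np.array([[3.0, 1.0], [-1.0, 2.0]])
        q = np.array([-3.0, 1.0])
        x_star = projection_network(lambda x: M @ x + q,
                                    x0=np.zeros(2), lo=-5.0, hi=5.0)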

  • Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm

    Publication Year: 2005 , Page(s): 1318 - 1328
    Cited by:  Papers (28)

    The convergence of Oja's principal component analysis (PCA) learning algorithms is difficult to study directly. Traditionally, their convergence is analyzed indirectly via certain deterministic continuous time (DCT) systems. This method requires the learning rate to converge to zero, which is not a reasonable requirement in many practical applications. Recently, deterministic discrete time (DDT) systems have been proposed instead to indirectly interpret the dynamics of the learning algorithms. Unlike DCT systems, DDT systems allow learning rates to be constant (and nonzero). This paper provides some important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are: 1) A number of invariant sets are obtained, based on which we can show that any trajectory starting from a point in an invariant set will remain in the set forever, so nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously. It is proven that almost all trajectories of the system starting from points in an invariant set will converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, exponential convergence rates are obtained, providing useful guidelines for selecting learning rates that converge quickly. 3) Since the trajectories may diverge, the choice of initial vectors is an important issue; this paper suggests taking initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
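
    The DDT system studied here is simply Oja's rule iterated with a constant learning rate. A small numerical sketch (the learning rate, data, and step count are illustrative choices, with the initial vector taken inside the unit hypersphere as the paper suggests):

        import numpy as np

        def oja_ddt(C, w0, eta=0.02, steps=3000):
            # DDT form of Oja's PCA rule with constant learning rate eta:
            #   w(k+1) = w(k) + eta * (C w(k) - (w(k)' C w(k)) w(k)).
            w = w0.astype(float).copy()
            for _ in range(steps):
                Cw = C @ w
                w = w + eta * (Cw - (w @ Cw) * w)
            return w

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.3])
        C = X.T @ X / len(X)              # sample correlation matrix
        w0 = rng.standard_normal(3)
        w0 /= 2 * np.linalg.norm(w0)      # start inside the unit hypersphere
        w = oja_ddt(C, w0)   # approaches the principal unit eigenvector of C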

  • Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays

    Publication Year: 2005 , Page(s): 1329 - 1339
    Cited by:  Papers (54)

    This paper considers the problems of global exponential stability and exponential convergence rate for impulsive high-order Hopfield-type neural networks with time-varying delays. By using the method of Lyapunov functions, some sufficient conditions for ensuring global exponential stability of these networks are derived, and the exponential convergence rate is estimated. As an illustration, a numerical example is worked out using the results obtained.
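
    The paper's high-order model is not reproduced here; as a loose illustration of the class of systems involved (first-order terms only, with all matrices, the delay, and the impulse map chosen arbitrarily), a delayed Hopfield-type network with periodic impulses can be simulated as follows:

        import numpy as np

        def simulate_impulsive_hopfield(c, A, B, I, tau, jump, x0,
                                        dt=0.001, T=10.0, t_imp=1.0):
            # Euler steps of x' = -c*x + A g(x(t)) + B g(x(t - tau)) + I,
            # with an impulsive state jump x -> jump(x) every t_imp seconds.
            g = np.tanh                          # activation (assumption)
            n_delay = int(round(tau / dt))
            hist = [x0.copy()] * (n_delay + 1)   # constant initial history
            x = x0.copy()
            for k in range(int(round(T / dt))):
                x_del = hist[-(n_delay + 1)]     # state at time t - tau
                x = x + dt * (-c * x + A @ g(x) + B @ g(x_del) + I)
                if (k + 1) % int(round(t_imp / dt)) == 0:
                    x = jump(x)                  # impulsive perturbation
                hist.append(x.copy())
            return np.array(hist)

        A = np.array([[0.2, -0.1], [0.1, 0.2]])
        B = np.array([[0.1, 0.0], [0.0, 0.1]])
        traj = simulate_impulsive_hopfield(
            c=1.0, A=A, B=B, I=np.zeros(2), tau=0.5,
            jump=lambda x: 0.8 * x, x0=np.array([1.0, -1.0]))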

  • Existence and global exponential stability of almost periodic solution for cellular neural networks with variable coefficients and time-varying delays

    Publication Year: 2005 , Page(s): 1340 - 1351
    Cited by:  Papers (17)

    In this paper, we study cellular neural networks with almost periodic variable coefficients and time-varying delays. By using the existence theorem of almost periodic solutions for general functional differential equations, introducing several real parameters, and applying the Lyapunov functional method together with the Young inequality technique, we obtain sufficient conditions that ensure the existence, uniqueness, and global exponential stability of an almost periodic solution. The results extend and improve existing ones in the literature.

  • Extension neural network-type 2 and its applications

    Publication Year: 2005 , Page(s): 1352 - 1361
    Cited by:  Papers (9)

    A supervised learning pattern classifier, called the extension neural network (ENN), was described in a recent paper. In this sequel, its unsupervised clustering sibling, the extension neural network type 2 (ENN-2), is proposed. This new neural network uses an extension distance (ED) to measure the similarity between data and cluster centers. It requires neither an initial guess of the cluster-center coordinates nor an initial number of clusters; the clustering process is controlled by a distance parameter and by the novel extension distance. Like human memory systems, it maintains stability and plasticity at the same time, and it produces meaningful weights after learning. Moreover, the structure of the proposed ENN-2 is simpler, and its learning time shorter, than those of traditional neural networks. Experimental results from five different examples, including three benchmark data sets and two practical applications, verify the effectiveness and applicability of the proposed work.
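
    One published form of the extension distance measures how far a point falls outside a cluster's per-feature interval: ED = sum_j [(|x_j - z_j| - d_j/2) / (d_j/2) + 1], where z is the interval center and d_j the interval width, so a point inside all n intervals has ED <= n. A minimal online-clustering sketch in that spirit (the initial interval width and the exact role of the threshold lam are assumptions, not the paper's procedure):

        import numpy as np

        def extension_distance(x, lo, hi):
            # One published form of the extension distance (ED):
            #   ED = sum_j (|x_j - z_j| - d_j/2) / (d_j/2) + 1,
            # with z the interval center and d_j = hi_j - lo_j the width.
            z = (lo + hi) / 2.0
            half = np.maximum((hi - lo) / 2.0, 1e-12)
            return float(np.sum((np.abs(x - z) - half) / half + 1.0))

        def enn2_cluster(X, lam):
            # Minimal ED-based online clustering sketch: a sample joins the
            # nearest cluster if its ED is below lam, otherwise it seeds a
            # new cluster; intervals stretch to cover absorbed samples.
            clusters, labels = [], []
            for x in X:
                if clusters:
                    eds = [extension_distance(x, lo, hi) for lo, hi in clusters]
                    k = int(np.argmin(eds))
                if not clusters or eds[k] > lam:
                    clusters.append([x - 0.5, x + 0.5])  # initial width: assumption
                    k = len(clusters) - 1
                else:
                    lo, hi = clusters[k]
                    clusters[k] = [np.minimum(lo, x), np.maximum(hi, x)]
                labels.append(k)
            return clusters, np.array(labels)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
                       rng.normal(3.0, 0.3, (30, 2))])
        clusters, labels = enn2_cluster(X, lam=2.0)   # lam ~ feature count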

  • PRSOM: a new visualization method by hybridizing multidimensional scaling and self-organizing map

    Publication Year: 2005 , Page(s): 1362 - 1380
    Cited by:  Papers (36)

    The self-organizing map (SOM) is a nonlinear dimension-reduction approach that can be used for visualization, but it preserves only the topological structure of the input data in the projected output space. Interneuron distances are not preserved from input space to output space, so SOM visualizations can be degraded. The visualization-induced SOM (ViSOM) was proposed to overcome this problem; however, ViSOM is derived heuristically and has no associated cost function. In this paper, a probabilistic regularized SOM (PRSOM) is proposed to give a better visualization effect. It is associated with a cost function and gives a principled rule for weight updating. PRSOM incorporates the advantages of both multidimensional scaling (MDS) and SOM: like MDS, it makes the interneuron distances in input space resemble those in output space, which are predefined before training. Moreover, the soft assignment used by PRSOM, instead of the hard assignment used by ViSOM, can be further exploited to enhance the visualization. Experimental results demonstrate the effectiveness of the proposed PRSOM compared with other dimension-reduction methods.

  • Finite-element neural networks for solving differential equations

    Publication Year: 2005 , Page(s): 1381 - 1392
    Cited by:  Papers (6)

    The solution of partial differential equations (PDEs) arises in a wide variety of engineering problems. Most practical problems are solved with numerical techniques such as the finite-element or finite-difference method, whose drawbacks include the computational cost of modeling complex geometries. This paper proposes a finite-element neural network (FENN), obtained by embedding a finite-element model in a neural network architecture, that enables fast and accurate solution of the forward problem. Results of applying the FENN to several simple electromagnetic forward and inverse problems are presented. Initial results indicate that as a forward model the FENN is comparable to the conventional finite-element method (FEM). The FENN can also be used in an iterative approach to solve inverse problems associated with the PDE, and results showing its ability to solve the inverse problem given the measured signal are presented. The parallel nature of the FENN also makes it attractive for parallel implementation in hardware and software.
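
    The FENN architecture itself is not detailed in this listing; as a loose illustration of the underlying idea (the finite-element system solved by network-style iteration rather than direct factorization), the sketch below assembles a 1-D Poisson stiffness matrix and applies gradient descent to the quadratic energy, with the nodal values playing the role of trainable parameters:

        import numpy as np

        def poisson_stiffness_1d(n, h):
            # Stiffness matrix of 1-D linear finite elements for -u'' = f
            # on a uniform mesh with n interior nodes and spacing h.
            return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

        n, h = 49, 1.0 / 50
        K = poisson_stiffness_1d(n, h)
        f = h * np.ones(n)                     # load vector for f(x) = 1

        # Network-style iterative solve: gradient descent on the energy
        # E(u) = 0.5*u'Ku - f'u (K is symmetric positive definite); the
        # fixed "weights" K encode the finite-element model.
        u = np.zeros(n)
        lr = 1.0 / np.linalg.eigvalsh(K).max()
        for _ in range(20000):
            u -= lr * (K @ u - f)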

  • NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns

    Publication Year: 2005 , Page(s): 1393 - 1400
    Cited by:  Papers (26)

    This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. This performance is obtained with a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
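
    The general flavor of such a Hebbian/anti-Hebbian rule can be sketched as: reinforce the outer product of the input, and unlearn the outer product of the network's own echo after a few passes through a nonlinear transmission rule. The cubic transmission function and all parameter values below are illustrative assumptions rather than the paper's exact model:

        import numpy as np

        def transmit(W, x, delta=0.4, cycles=3):
            # Nonlinear transmission: a saturating cubic applied after
            # each pass through the weights (illustrative choice).
            for _ in range(cycles):
                a = W @ x
                x = np.clip((1 + delta) * a - delta * a ** 3, -1.0, 1.0)
            return x

        def train_ndram_like(patterns, eta=0.002, epochs=500):
            # Hebbian/anti-Hebbian rule: reinforce the input pattern x0
            # and unlearn the network's echo xt; updates vanish once the
            # echo reproduces the pattern.
            n = patterns.shape[1]
            W = np.zeros((n, n))
            for _ in range(epochs):
                for x0 in patterns:
                    xt = transmit(W, x0)
                    W += eta * (np.outer(x0, x0) - np.outer(xt, xt))
            return W

        patterns = np.array([[1, -1, 1, -1, 1, -1],
                             [1, 1, -1, -1, 1, 1]], dtype=float)
        W = train_ndram_like(patterns)
        noisy = patterns[0] + 0.3 * np.random.default_rng(1).standard_normal(6)
        recalled = transmit(W, noisy)          # noisy cue settles to pattern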

  • The time dimension for scene analysis

    Publication Year: 2005 , Page(s): 1401 - 1426
    Cited by:  Papers (38)  |  Patents (2)

    A fundamental issue in neural computation is the binding problem: how sensory elements in a scene organize into perceived objects, or percepts. Binding has been hotly debated in recent years in neuroscience and related communities, but much of the debate gives little attention to computational considerations. This review elucidates the computational issues that bear directly on binding. It starts with two problems Rosenblatt considered, more than 40 years ago, to be the most challenging to the development of perceptron theory, and argues that the main challenge is the figure-ground separation problem, which is intrinsically related to the binding problem. The theme of the review is that the time dimension is essential for systematically attacking Rosenblatt's challenge. The temporal correlation theory, as well as its special form, the oscillatory correlation theory, is discussed as an adequate representational theory for addressing binding. Recent advances in understanding oscillatory dynamics are reviewed; these advances have overcome key computational obstacles to the development of the oscillatory correlation theory. A variety of studies that address the scene analysis problem are surveyed, and their results have substantially advanced the capability of neural networks for figure-ground separation. A number of issues regarding oscillatory correlation are considered and clarified. Finally, the time dimension is argued to be necessary for versatile computing.

  • The self-trapping attractor neural network-part II: properties of a sparsely connected model storing multiple memories

    Publication Year: 2005 , Page(s): 1427 - 1439
    Cited by:  Papers (1)

    In a previous paper, the self-trapping network (STN) was introduced as more biologically realistic than attractor neural networks (ANNs) based on the Ising model. This paper extends the previous analysis of a one-dimensional (1-D) STN storing a single memory to a model that stores multiple memories and possesses generalized sparse connectivity. The energy, Lyapunov function, and partition function derived for the 1-D model are generalized to an attractor network with only near-neighbor synapses, coupled to a system that computes memory overlaps. Simulations reveal that 1) the STN dramatically reduces intra-ANN connectivity without severely affecting the size of basins of attraction, with fast self-trapping able to sustain attractors even in the absence of intra-ANN synapses; 2) the basins of attraction can be controlled by a single free parameter, providing natural attention-like effects; 3) the same parameter determines the memory capacity of the network, which is much less dependent than a standard ANN on the noise level of the system; 4) the STN serves as a useful memory for some correlated memory patterns on which the standard ANN totally fails; 5) the STN can store a large number of sparse patterns; and 6) a Monte Carlo procedure, a competitive neural network, and binary neurons with thresholds can be used to induce self-trapping.

  • Global exponential periodicity of a class of recurrent neural networks with oscillating parameters and time-varying delays

    Publication Year: 2005 , Page(s): 1440 - 1448
    Cited by:  Papers (11)

    In this paper, we present analytical results on the global exponential periodicity of a class of recurrent neural networks with oscillating parameters and time-varying delays. Sufficient conditions are derived for ascertaining the existence, uniqueness, and global exponential periodicity of the oscillatory solution of such networks by using the comparison principle and the mixed monotone operator method. The periodicity results extend or improve existing stability results for this class of recurrent neural networks with and without time delays.

  • Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain

    Publication Year: 2005 , Page(s): 1449 - 1463
    Cited by:  Papers (68)

    This paper introduces a general class of neural networks with arbitrary constant delays in the neuron interconnections, and neuron activations belonging to the set of discontinuous monotone increasing and (possibly) unbounded functions. The discontinuities in the activations are an ideal model of the situation where the gain of the neuron amplifiers is very high and tends to infinity, while the delay accounts for the finite switching speed of the neuron amplifiers, or the finite signal propagation speed. It is known that the delay in combination with high-gain nonlinearities is a particularly harmful source of potential instability. The goal of this paper is to single out a subclass of the considered discontinuous neural networks for which stability is instead insensitive to the presence of a delay. More precisely, conditions are given under which there is a unique equilibrium point of the neural network, which is globally exponentially stable for the states, with a known convergence rate. The conditions are easily testable and independent of the delay. Moreover, global convergence in finite time of the state and output is investigated. In doing so, new interesting dynamical phenomena are highlighted with respect to the case without delay, which make the study of convergence in finite time significantly more difficult. The obtained results extend previous work on global stability of delayed neural networks with Lipschitz continuous neuron activations, and neural networks with discontinuous neuron activations but without delays.

  • Designing asymmetric Hopfield-type associative memory with higher order Hamming stability

    Publication Year: 2005 , Page(s): 1464 - 1476
    Cited by:  Papers (9)

    The problem of optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms is considered. Most existing methods treat the design problem as either 1) finding optimal hyperplanes according to the normal distance from the prototype vectors to the hyperplane surface or 2) obtaining the weight matrix W = [w_ij] by solving a constrained optimization problem. In this paper, we show that since the state space of the HAM consists only of bipolar patterns, i.e., V = (v_1, v_2, ..., v_N)^T ∈ {-1, +1}^N, the basins of attraction around each prototype (training) vector should be expanded using the Hamming distance measure. For this reason, the design problem is considered from a different point of view: our idea is to systematically increase the size of the training set according to the desired basin of attraction around each prototype vector. We name this concept higher order Hamming stability and show that the conventional minimum-overlap algorithm can be modified to incorporate it. Experimental results show that both the recall capability and the number of spurious memories are improved by the proposed method. Moreover, it is well known that setting all self-connections w_ii to zero reduces the number of spurious memories in state space. From the experimental results, we find that the basin width around each prototype vector can be enlarged by allowing nonzero diagonal elements when learning the weight matrix W. If the magnitude of w_ii is small for all i, the condition w_ii = 0 for all i can be relaxed without seriously affecting the number of spurious memories. The proposed method can therefore be used to increase the basin width around each prototype vector at the cost of slightly increasing the number of spurious memories in the state space.
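
    The core training-set trick is easy to sketch: enlarge each prototype's training set with every pattern inside a prescribed Hamming ball, then ask a perceptron-type rule to map the whole ball back to the prototype. The plain perceptron update below is a generic stand-in for the paper's modified minimum-overlap algorithm, and self-connections are left unconstrained:

        import numpy as np
        from itertools import combinations

        def hamming_ball(v, radius):
            # All bipolar patterns within the given Hamming distance of v.
            out = [v.copy()]
            for r in range(1, radius + 1):
                for idx in combinations(range(len(v)), r):
                    u = v.copy()
                    u[list(idx)] *= -1
                    out.append(u)
            return out

        def train_ham(prototypes, radius=1, eta=0.1, epochs=200):
            # Perceptron-type rule asking every pattern in each prototype's
            # Hamming ball to map back to the prototype in one step.
            n = prototypes.shape[1]
            W = np.zeros((n, n))
            for _ in range(epochs):
                for v in prototypes:
                    for u in hamming_ball(v, radius):
                        err = v - np.sign(W @ u + 1e-12)
                        W += eta * np.outer(err, u)
            return W

        protos = np.array([[1, -1, 1, -1, 1], [-1, -1, 1, 1, 1]], dtype=float)
        W = train_ham(protos, radius=1)   # radius sets the desired basin width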

  • Design and analysis of a general recurrent neural network model for time-varying matrix inversion

    Publication Year: 2005 , Page(s): 1477 - 1490
    Cited by:  Papers (85)

    Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
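
    One common form of such dynamics, written for the error E(t) = A(t)X(t) - I, is A(t) dX/dt = -A'(t) X - gamma * F(A(t)X - I), with F an elementwise activation. The Euler discretization and the power-sigmoid parameters below are illustrative assumptions, not necessarily the paper's exact design:

        import numpy as np

        def power_sigmoid(E, p=3, xi=4.0):
            # Elementwise power-sigmoid activation (illustrative values):
            # odd power p inside [-1, 1], a scaled bipolar sigmoid outside.
            sig = (1 + np.exp(-xi)) / (1 - np.exp(-xi)) * \
                  (1 - np.exp(-xi * E)) / (1 + np.exp(-xi * E))
            return np.where(np.abs(E) <= 1.0, E ** p, sig)

        def znn_inverse(A_of_t, Adot_of_t, T=5.0, dt=1e-4, gamma=100.0):
            # Euler integration of the implicit dynamics
            #   A(t) Xdot = -Adot(t) X - gamma * F(A(t) X - I).
            n = A_of_t(0.0).shape[0]
            X, I = np.eye(n), np.eye(n)            # arbitrary start
            for k in range(int(round(T / dt))):
                t = k * dt
                A, Adot = A_of_t(t), Adot_of_t(t)
                rhs = -Adot @ X - gamma * power_sigmoid(A @ X - I)
                X = X + dt * np.linalg.solve(A, rhs)
            return X

        A = lambda t: np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
        Adot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
        X_T = znn_inverse(A, Adot)   # tracks the inverse of A(t) at t = T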

  • Output feedback control of a class of discrete MIMO nonlinear systems with triangular form inputs

    Publication Year: 2005 , Page(s): 1491 - 1503
    Cited by:  Papers (25)

    In this paper, adaptive neural network (NN) control is investigated for a class of discrete-time multi-input-multi-output (MIMO) nonlinear systems with triangular form inputs. Each subsystem of the MIMO system is in strict feedback form. First, through two phases of coordinate transformation, the MIMO system is transformed into an input-output representation with the triangular input structure unchanged. By using high-order neural networks (HONNs) as emulators of the desired controls, effective output feedback adaptive control is developed using backstepping. The closed-loop system is proved to be semiglobally uniformly ultimately bounded (SGUUB) by the Lyapunov method. The output tracking errors are guaranteed to converge into a compact set whose size is adjustable, and all other signals in the closed-loop system are proved to be bounded. Simulation results show the effectiveness of the proposed control scheme.

  • Speeding up the learning of robot kinematics through function decomposition

    Publication Year: 2005 , Page(s): 1504 - 1512
    Cited by:  Papers (10)

    The main drawback of using neural networks or other example-based learning procedures to approximate the inverse kinematics (IK) of robot arms is the high number of training samples (i.e., robot movements) required to attain acceptable precision. We propose here a trick, valid for most industrial robots, that greatly reduces the number of movements needed to learn or relearn the IK to a given accuracy: expressing the IK as a composition of learnable functions, each having half the dimensionality of the original mapping. Off-line and on-line training schemes to learn these component functions are also proposed. Experimental results obtained with nearest neighbors and a parameterized self-organizing map, with and without the decomposition, show that the time savings granted by the proposed scheme grow polynomially with the precision required.

  • Connectionist-based Dempster-Shafer evidential reasoning for data fusion

    Publication Year: 2005 , Page(s): 1513 - 1530
    Cited by:  Papers (15)

    Dempster-Shafer evidence theory (DSET) is a popular paradigm for dealing with uncertainty and imprecision. Its evidential reasoning framework is theoretically attractive, but outstanding issues hinder its use in real-life applications, two prominent ones being 1) the assignment of basic probabilities (masses) and 2) dependence among information sources. This paper deals with these issues by utilizing neural networks in the context of pattern classification. First, a multilayer perceptron with the mean squared error as cost function is implemented to calculate, for each information source, posterior probabilities for all classes. Second, an evidence-structure construction scheme is developed for transforming the estimated posterior probabilities into a set of masses along with the corresponding focal elements, from a Bayesian decision point of view. Third, a network realization of Dempster-Shafer evidential reasoning is designed and analyzed, and it is further extended to a DSET-based neural network, referred to as DSETNN, to manipulate the evidence structures. To tackle the issue of dependence between sources, DSETNN is tuned for optimal performance through a supervised learning process. To demonstrate the effectiveness of the proposed approach, we apply it to three benchmark pattern classification problems. Experiments reveal that DSETNN outperforms DSET and provides encouraging results in terms of classification accuracy and speed of learning convergence.
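
    The network construction is not reproduced here, but the combination step at the heart of DSET is Dempster's rule: two independent mass functions combine as m(A) proportional to the sum of m1(B)m2(C) over all B ∩ C = A, with the mass assigned to empty intersections renormalized away. A minimal sketch using frozensets as focal elements:

        from itertools import product

        def dempster_combine(m1, m2):
            # Dempster's rule: m(A) ∝ sum of m1(B)*m2(C) over B ∩ C = A,
            # discarding (and renormalizing away) the conflicting mass
            # assigned to B ∩ C = ∅.
            combined, conflict = {}, 0.0
            for (B, b), (C, c) in product(m1.items(), m2.items()):
                A = B & C
                if A:
                    combined[A] = combined.get(A, 0.0) + b * c
                else:
                    conflict += b * c
            if conflict >= 1.0:
                raise ValueError("totally conflicting evidence")
            return {A: v / (1.0 - conflict) for A, v in combined.items()}

        # Two sources over the frame {a, b, c}; focal elements are frozensets.
        m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.3, frozenset("abc"): 0.1}
        m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.4, frozenset("abc"): 0.1}
        m = dempster_combine(m1, m2)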

  • Neuron selection for RBF neural network classifier based on data structure preserving criterion

    Publication Year: 2005 , Page(s): 1531 - 1540
    Cited by:  Papers (22)

    The central problem in training a radial basis function neural network is the selection of hidden-layer neurons. In this paper, we propose to select hidden-layer neurons based on a data-structure-preserving criterion, where data structure denotes the relative location of samples in the high-dimensional space. By preserving the data structure of samples, including those close to the separation boundaries between different classes, the selected neuron subset retains the separation margin underlying the full set of hidden-layer neurons. As a direct result, the network obtained tends to generalize well.

  • SMO-based pruning methods for sparse least squares support vector machines

    Publication Year: 2005 , Page(s): 1541 - 1546
    Cited by:  Papers (26)

    Solutions of least squares support vector machines (LS-SVMs) are typically nonsparse. Sparseness is usually imposed by repeatedly omitting the data that introduce the smallest training errors and retraining on the remaining data; such iterative retraining requires more computation than training a single nonsparse LS-SVM. In this paper, we propose a new pruning algorithm for sparse LS-SVMs: the sequential minimal optimization (SMO) method is introduced into the pruning process, and, instead of determining the pruning points by errors, we omit the data points that introduce the smallest changes to the dual objective function. This new criterion is computationally efficient. The effectiveness of the proposed method in terms of computational cost and classification accuracy is demonstrated by numerical experiments.
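
    For reference, an LS-SVM classifier is trained by solving one linear system, and classical pruning repeatedly drops the point with the smallest |alpha| and retrains; the paper replaces that criterion with the minimal change to the dual objective and the full retraining with SMO updates. A sketch of the classical baseline (the kernel width, gamma, and kept fraction are arbitrary choices):

        import numpy as np

        def rbf_kernel(X, Z, sigma=1.0):
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_train(X, y, gamma=10.0, sigma=1.0):
            # Solve the LS-SVM linear system:
            #   [0   y^T        ] [b]   [0]
            #   [y   Omega+I/gam] [a] = [1],  Omega_ij = y_i y_j K(x_i, x_j)
            n = len(y)
            Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
            M = np.zeros((n + 1, n + 1))
            M[0, 1:], M[1:, 0] = y, y
            M[1:, 1:] = Omega + np.eye(n) / gamma
            sol = np.linalg.solve(M, np.concatenate(([0.0], np.ones(n))))
            return sol[0], sol[1:]             # bias b, multipliers alpha

        def prune(X, y, keep_frac=0.5, **kw):
            # Classical pruning loop: drop the smallest |alpha|, retrain.
            idx = np.arange(len(y))
            while len(idx) > keep_frac * len(y):
                b, alpha = lssvm_train(X[idx], y[idx], **kw)
                idx = np.delete(idx, np.argmin(np.abs(alpha)))
            return idx, lssvm_train(X[idx], y[idx], **kw)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((40, 2))
        y = np.sign(X[:, 0] + X[:, 1])
        idx, (b, alpha) = prune(X, y, keep_frac=0.5)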

  • Classifiability-based omnivariate decision trees

    Publication Year: 2005 , Page(s): 1547 - 1560
    Cited by:  Papers (13)

    Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets; new nodes are created to handle each resulting partition, and the process continues. A node is considered terminal if it satisfies some stopping criterion (for example, purity, i.e., all patterns at the node are from a single class). Decision trees may be univariate, linear multivariate, or nonlinear multivariate, depending on whether a single attribute, a linear function of all attributes, or a nonlinear function of all attributes is used for partitioning at each node. Though nonlinear multivariate decision trees are the most powerful, they are the most susceptible to overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection uses a novel classifiability measure that captures the possible sources of misclassification with relative ease and accurately reflects the complexity of the subproblem at each node. The proposed approach is fast and does not incur the high computational burden of typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy than statistical model selection algorithms.

  • Posterior probability support vector machines for unbalanced data

    Publication Year: 2005 , Page(s): 1561 - 1573
    Cited by:  Papers (34)

    This paper proposes a complete framework of posterior probability support vector machines (PPSVMs) for weighted training samples, using modified concepts of risks, linear separability, margin, and optimal hyperplane. Within this framework, a new optimization problem for unbalanced classification is formulated and a new concept of support vectors is established. Furthermore, a soft PPSVM with an interpretable parameter ν is obtained, similar to the ν-SVM developed by Schölkopf et al., and an empirical method for determining the posterior probability is proposed as a new approach to determine ν. The main advantage of a PPSVM classifier lies in the fact that it is closer to the Bayes-optimal classifier without knowledge of the distributions. To validate the proposed method, two synthetic classification examples are used to illustrate the logical correctness of PPSVMs and their relationship to regular SVMs and Bayesian methods. Several other classification experiments demonstrate that the performance of PPSVMs is better than that of regular SVMs in some cases. Compared with fuzzy support vector machines (FSVMs), the proposed PPSVM is a natural and analytical extension of regular SVMs based on statistical learning theory.

  • Perceptual adaptive insensitivity for support vector machine image coding

    Publication Year: 2005 , Page(s): 1574 - 1581
    Cited by:  Papers (6)

    Support vector machine (SVM) learning was recently proposed by Robinson and Kecman for image compression in the frequency domain, using a constant ε-insensitivity zone. However, according to the statistical properties of natural images and the properties of human perception, a constant insensitivity makes sense in the spatial domain but is not a good option in the frequency domain. In fact, their approach makes a fixed low-pass assumption, as the number of discrete cosine transform (DCT) coefficients used in training was limited. This paper extends the work of Robinson and Kecman by proposing adaptive-insensitivity SVMs for image coding with an appropriate distortion criterion based on a simple visual cortex model. Training the SVM with an accurate perception model avoids any a priori assumption and improves the rate-distortion performance of the original approach.

  • Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors

    Publication Year: 2005 , Page(s): 1582 - 1589
    Cited by:  Papers (2)

    Habituation is a form of nonassociative learning observed in a variety of species of animals. Arguably, it is the simplest form of learning. Nonetheless, the ability to habituate to certain stimuli implies plastic neural systems and adaptive behaviors. This paper describes how computational models of habituation can be applied to real robots. In particular, we discuss the problem of the oscillatory movements observed when a Khepera robot navigates through narrow hallways using a biologically inspired neurocontroller. Results show that habituation to the proximity of the walls can lead to smoother navigation. Habituation to sensory stimulation to the sides of the robot does not interfere with the robot's ability to turn at dead ends and to avoid obstacles outside the hallway. This paper shows that simple biological mechanisms of learning can be adapted to achieve better performance in real mobile robots.
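
    Robot-habituation work of this kind typically builds on a first-order efficacy model such as Stanley's, tau * dy/dt = alpha * (y0 - y) - S(t), where the efficacy y gates the raw sensor value. The parameter values and the gating step below are illustrative assumptions:

        import numpy as np

        def habituate(stimulus, y0=1.0, alpha=1.08, tau=3.33, dt=0.1):
            # Stanley's first-order habituation model:
            #   tau * dy/dt = alpha * (y0 - y) - S(t)
            # y is the synaptic efficacy gating the raw sensor value.
            y, out = y0, []
            for S in stimulus:
                y += dt / tau * (alpha * (y0 - y) - S)
                out.append(max(y, 0.0))
            return np.array(out)

        # Constant wall-proximity signal: the gated response decays
        # (habituates) and recovers once the stimulus is removed.
        S = np.concatenate([np.ones(300), np.zeros(200)])
        efficacy = habituate(S)
        gated = efficacy * S        # habituated proximity reading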


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes work disclosing significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
