IEEE Transactions on Neural Networks

Issue 3 • May 2001


Displaying Results 1-25 of 25
  • Comments on "Classification ability of single hidden layer feedforward neural networks"

    Publication Year: 2001, Page(s): 642-643
    PDF (32 KB) | HTML

    The original paper, by Huang, Chen, and Babri (ibid., vol. 11, p. 799-801, 2000), addresses a certain classification problem and concludes that classification can be achieved using a single hidden layer neural network. It is stated that the author previously presented conclusions along similar lines in a more general setting ("General structures for classification," IEEE Trans. Circuits Syst. I, vol. 41, p. 372-6, 1994).

  • Supervised and Unsupervised Pattern Recognition: Feature Extraction and Computational Intelligence [Book Review]

    Publication Year: 2001, Page(s): 644-647
    Cited by: Papers (1)
    PDF (32 KB)
    Freely Available from IEEE
  • Multi-Valued and Universal Binary Neurons: Theory, Learning, and Applications [Book Review]

    Publication Year: 2001, Page(s): 647
    PDF (15 KB)
    Freely Available from IEEE
  • Robust adaptive spread-spectrum receiver with neural net preprocessing in non-Gaussian noise

    Publication Year: 2001, Page(s): 546-558
    Cited by: Papers (11)
    PDF (300 KB) | HTML

    Multiuser communications channels based on the code division multiple access (CDMA) technique exhibit non-Gaussian statistics due to the presence of highly structured multiple access interference (MAI) and impulsive ambient noise. Linear adaptive interference suppression techniques are attractive for mitigating MAI under Gaussian noise. However, the Gaussian noise hypothesis has been found inadequate in many wireless channels characterized by impulsive disturbance. Linear finite impulse response (FIR) filters adapted with linear algorithms are limited by their structural formulation as a simple linear combiner with a hyperplanar decision boundary, which is extremely vulnerable to impulsive interference. This raises the issue of devising robust reception algorithms that account, at the design stage, for the non-Gaussian behavior of the interference. We propose a multiuser receiver that involves an adaptive nonlinear preprocessing front-end based on a multilayer perceptron neural network, which acts as a mechanism to reduce the influence of impulsive noise, followed by a postprocessing stage using linear adaptive filters for MAI suppression. Theoretical arguments supported by promising simulation results suggest that the proposed receiver, which combines the relative merits of both nonlinear and linear signal processing, presents an effective approach for joint suppression of MAI and non-Gaussian ambient noise.

  • A complete proof of global exponential convergence of a neural network for quadratic optimization with bound constraints

    Publication Year: 2001, Page(s): 636-639
    Cited by: Papers (2)
    PDF (172 KB) | HTML

    Sudharsanan and Sundareshan (1991) developed a neural-network model for bound-constrained quadratic minimization and proved the global exponential convergence of their proposed neural network. Global exponential convergence is a critical property of the synthesized neural network for solving the optimization problem successfully. However, Davis and Pattison (1992) presented a counterexample showing that the proof given by Sudharsanan and Sundareshan for the global exponential convergence of the neural network is not correct. Bouzerdoum and Pattison (ibid., vol. 4, no. 2, p. 293-303, 1993) then generalized the neural-network model given by Sudharsanan and Sundareshan and derived the global exponential convergence of the neural network under an appropriate condition. In this letter, we demonstrate through an example that the global exponential convergence condition given by Bouzerdoum and Pattison is not always satisfied by the quadratic minimization problem, and we show that the neural-network model under this condition is essentially restricted to contractive networks. Subsequently, a complete proof of the global exponential convergence of the neural-network models proposed by Sudharsanan and Sundareshan and by Bouzerdoum and Pattison is given for the general case, without resorting to the global exponential convergence condition given by Bouzerdoum and Pattison. An illustrative simulation example is also presented.

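The projection dynamics at issue in this letter can be illustrated numerically. Below is a minimal sketch, assuming a simple projected-gradient network of the same family; the example problem, step size, and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

def qp_network(Q, c, lo, hi, alpha=0.1, dt=0.05, steps=5000):
    """Euler-integrate the projection dynamics dx/dt = P(x - alpha*(Qx + c)) - x."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        grad = Q @ x + c                          # gradient of (1/2)x'Qx + c'x
        x = x + dt * (np.clip(x - alpha * grad, lo, hi) - x)
    return x

# minimize x1^2 - 2*x1 + x2^2 - 8*x2  subject to  0 <= x <= 2;
# the unconstrained minimizer (1, 4) is clipped to (1, 2) by the bounds
Q = np.diag([2.0, 2.0])
c = np.array([-2.0, -8.0])
x_star = qp_network(Q, c, lo=0.0, hi=2.0)
```

At the equilibrium, x is a fixed point of the projection map, which for this convex problem coincides with the constrained minimizer.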
  • A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture

    Publication Year: 2001, Page(s): 516-531
    Cited by: Papers (15)
    PDF (428 KB) | HTML

    We address the problem of adaptive regularization in image restoration by adopting a neural-network learning approach. Instead of being explicitly specified, the local regularization parameter values are regarded as network weights, which are then modified through the supply of appropriate training examples. The desired response of the network is a gray-level estimate of the current pixel obtained using a weighted order statistic (WOS) filter. However, instead of replacing the previous value with this estimate, the estimate is used to modify the network weights, or equivalently the regularization parameters, such that the restored gray-level value produced by the network is closer to this desired response. In this way, the single WOS estimation scheme allows appropriate parameter values to emerge under different noise conditions, rather than requiring their explicit selection on each occasion. In addition, we consider the separate regularization of edges and textures due to their different noise masking capabilities, which in turn requires discriminating between these two feature types. Because conventional local variance measures cannot distinguish these two high-variance features, we propose the new edge-texture characterization (ETC) measure, which performs this discrimination based on a scalar value only. This is then incorporated into a fuzzified form of the neural network, which determines the degree of membership of each high-variance pixel in two fuzzy sets, the EDGE and TEXTURE fuzzy sets, from the local ETC value, and then evaluates the appropriate regularization parameter by combining these two membership function values.

  • Thresholding neural network for adaptive noise reduction

    Publication Year: 2001, Page(s): 567-584
    Cited by: Papers (22)
    PDF (548 KB) | HTML

    In the paper, a type of thresholding neural network (TNN) is developed for adaptive noise reduction. New types of soft and hard thresholding functions are created to serve as the activation function of the TNN. Unlike the standard thresholding functions, the new thresholding functions are infinitely differentiable. By using the new thresholding functions, some gradient-based learning algorithms become possible or more effective. The optimal solution of the TNN in a mean square error (MSE) sense is discussed. It is proved that there is at most one optimal solution for the soft-thresholding TNN. General optimal performances of both soft and hard thresholding TNNs are analyzed and compared to the linear noise reduction method. Gradient-based adaptive learning algorithms are presented to seek the optimal solution for noise reduction. The algorithms include supervised and unsupervised batch learning as well as supervised and unsupervised stochastic learning. It is indicated that the TNN with the stochastic learning algorithms can be used as a novel nonlinear adaptive filter. It is proved that the stochastic learning algorithm is convergent in a certain statistical sense under ideal conditions. Numerical results show that the TNN is very effective in finding the optimal solutions of thresholding methods in an MSE sense and usually outperforms other noise reduction methods. In particular, it is shown that TNN-based nonlinear adaptive filtering outperforms conventional linear adaptive filtering in both optimal solution and learning performance.

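A smooth thresholding activation of the kind the abstract describes can be sketched as follows. The particular smoothed soft-threshold formula, the sparse test signal, and the learning rate are assumptions for illustration, not the paper's exact functions:

```python
import numpy as np

def eta(x, t, lam=0.01):
    """A soft-threshold that is differentiable everywhere, unlike the standard one."""
    return x + 0.5 * (np.sqrt((x - t)**2 + lam) - np.sqrt((x + t)**2 + lam))

def deta_dt(x, t, lam=0.01):
    """Analytic derivative of eta with respect to the threshold t."""
    return 0.5 * (-(x - t) / np.sqrt((x - t)**2 + lam)
                  - (x + t) / np.sqrt((x + t)**2 + lam))

rng = np.random.default_rng(0)
clean = rng.choice([0.0, 5.0], size=2000, p=[0.8, 0.2])   # sparse signal
noisy = clean + rng.normal(0.0, 0.5, size=2000)

t = 0.1
mse_start = np.mean((eta(noisy, t) - clean) ** 2)
for _ in range(200):             # supervised batch gradient descent on the MSE
    r = eta(noisy, t) - clean
    t -= 0.05 * np.mean(2.0 * r * deta_dt(noisy, t))
mse_end = np.mean((eta(noisy, t) - clean) ** 2)
```

Because eta is smooth in t, the threshold itself can be adapted by plain gradient descent, which is the point the abstract makes about the standard (non-differentiable) thresholding functions.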
  • Mapping Boolean functions with neural networks having binary weights and zero thresholds

    Publication Year: 2001, Page(s): 639-642
    Cited by: Papers (3)
    PDF (136 KB) | HTML

    In this paper, the ability of a binary neural network comprising only neurons with zero thresholds and binary weights to map given samples of a Boolean function is studied. A mathematical model describing a network with such restrictions is developed. It is shown that this model is quite amenable to algebraic manipulation. A key feature of the model is that it replaces the two input and output variables with a single “normalized” variable. The model is then used to provide a priori criteria, stated in terms of the new variable, that a given Boolean function must satisfy in order to be mapped by a network having one or two layers. These criteria provide necessary, and in the case of a one-layer network, sufficient conditions for samples of a Boolean function to be mapped by a binary neural network with zero thresholds. It is shown that the necessary conditions imposed by the two-layer network are, in some sense, minimal.

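The mapping question studied here can be illustrated by brute force for tiny cases. The bipolar coding and the example functions below are assumptions chosen for illustration, not taken from the paper:

```python
import itertools
import numpy as np

def maps(w, samples):
    """True if a single neuron y = sign(w . x), zero threshold, maps every sample."""
    return all(np.sign(np.dot(w, x)) == t for x, t in samples)

# 3-input majority in bipolar coding is mapped by the binary weights w = (1, 1, 1)
majority = [(np.array(x), 1 if sum(x) > 0 else -1)
            for x in itertools.product([-1, 1], repeat=3)]
assert maps(np.ones(3), majority)

# 2-input XOR in bipolar coding (t = -x1*x2) is mapped by NO binary-weight neuron
xor = [(np.array(x), -x[0] * x[1]) for x in itertools.product([-1, 1], repeat=2)]
assert not any(maps(np.array(w), xor)
               for w in itertools.product([-1, 1], repeat=2))
```

Enumerating the 2^n binary weight vectors like this is only feasible for small n, which is exactly why a priori algebraic criteria of the kind the paper derives are useful.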
  • Support vector machine multiuser receiver for DS-CDMA signals in multipath channels

    Publication Year: 2001, Page(s): 604-611
    Cited by: Papers (55)
    PDF (192 KB) | HTML

    The problem of constructing an adaptive multiuser detector (MUD) is considered for direct sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. The emerging learning technique, called support vector machines (SVM), is proposed as a method of obtaining a nonlinear MUD from a relatively small training data block. Computer simulation is used to study this SVM MUD, and the results show that it can closely match the performance of the optimal Bayesian one-shot detector. Comparisons with an adaptive radial basis function (RBF) MUD trained by an unsupervised clustering algorithm are discussed.

  • Blind separation of signals with mixed kurtosis signs using threshold activation functions

    Publication Year: 2001, Page(s): 618-624
    Cited by: Papers (8)
    PDF (172 KB) | HTML

    A parameterized activation function in the form of an adaptive threshold for a single-layer neural network, which separates a mixture of signals with any distribution (except Gaussian), is introduced. This activation function is particularly simple to implement, since it uses neither hyperbolic nor polynomial functions, unlike most other nonlinear functions used for blind separation. For some specific distributions, the stable region of the threshold parameter is derived, and optimal values for best separation performance are given. When the threshold parameter is made adaptive during the separation process, successful separation of signals whose distributions are unknown is demonstrated and compared against other known methods.

  • Distributed fuzzy learning using the MULTISOFT machine

    Publication Year: 2001, Page(s): 475-484
    Cited by: Papers (6)
    PDF (228 KB) | HTML

    The paper describes PARGEFREX, a distributed approach to genetic-neuro-fuzzy learning that has been implemented using the MULTISOFT machine, a low-cost farm of personal computers built at the University of Messina. The performance of the serial version is hugely enhanced with the simple parallelization scheme described in the paper. Once a learning dataset is fixed, there is a very high superlinear speedup in the average time needed to reach a prefixed learning error, i.e., if the number of personal computers increases by n times, the mean learning time falls to less than 1/n of its original value.

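The superlinear-speedup claim is a simple arithmetic statement: with n machines the mean learning time drops below 1/n of the serial time. A toy check with hypothetical timings (the numbers below are invented for illustration, not the paper's measurements):

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel mean learning time."""
    return t_serial / t_parallel

# hypothetical timings: 8 PCs cut a 1000 s mean learning time to 90 s,
# giving a speedup of about 11.1 > 8, i.e. superlinear
n, t_serial, t_parallel = 8, 1000.0, 90.0
assert speedup(t_serial, t_parallel) > n
```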
  • Feedforward Neural Network Methodology

    Publication Year: 2001, Page(s): 647-648
    PDF (24 KB)

    First Page of the Article

  • High-order MS CMAC neural network

    Publication Year: 2001, Page(s): 598-603
    Cited by: Papers (58)
    PDF (140 KB) | HTML

    A macro structure cerebellar model articulation controller (MS CMAC) was developed by connecting several 1-D CMACs in a tree structure, which decomposes a multidimensional problem into a set of 1-D subproblems, to reduce the computational complexity of multidimensional CMAC. Additionally, a trapezium scheme was proposed to help MS CMAC model nonlinear systems. However, this trapezium scheme cannot perform a truly smooth interpolation, and its working parameters are obtained through cross-validation. A quadratic-splines scheme is developed herein to replace the trapezium scheme in MS CMAC, yielding the high-order MS CMAC (HMS CMAC). The quadratic-splines scheme systematically transforms the stepwise weight contents of the CMACs in MS CMAC into smooth weight contents to produce smooth outputs. Test results affirm that the HMS CMAC has acceptable generalization in continuous function-mapping problems for nonoverlapping association in training instances. Nonoverlapping association in training instances not only significantly reduces the number of training instances needed, but also requires only one learning cycle in the learning stage.

  • Bipolar spectral associative memories

    Publication Year: 2001, Page(s): 463-474
    Cited by: Papers (4)
    PDF (384 KB) | HTML

    Nonlinear spectral associative memories are proposed as quantized frequency domain formulations of nonlinear, recurrent associative memories in which volatile network attractors are instantiated by attractor waves. In contrast to conventional associative memories, attractors encoded in the frequency domain by convolution may be viewed as volatile online inputs, rather than nonvolatile, off-line parameters. Spectral memories hold several advantages over conventional associative memories, including decoder/attractor separability and linear scalability, which make them especially well suited for digital communications. Bit patterns may be transmitted over a noisy channel in a spectral attractor and recovered at the receiver by recurrent, spectral decoding. Massive nonlocal connectivity is realized virtually, maintaining high symbol-to-bit ratios while scaling linearly with pattern dimension. For n-bit patterns, autoassociative memories achieve the highest noise immunity, whereas heteroassociative memories offer the added flexibility of achieving various code rates, or degrees of extrinsic redundancy. Due to linear scalability, high noise immunity and use of conventional building blocks, spectral associative memories hold much promise for achieving robust communication systems. Simulations are provided showing bit error rates for various degrees of decoding time, computational oversampling, and signal-to-noise ratio.

  • Efficient source adaptivity in independent component analysis

    Publication Year: 2001, Page(s): 559-566
    Cited by: Papers (27)
    PDF (152 KB) | HTML

    A basic element in most independent component analysis (ICA) algorithms is the choice of a model for the score functions of the unknown sources. While this is usually based on approximations, for large data sets it is possible to achieve “source adaptivity” by directly estimating from the data the “true” score functions of the sources. We describe an efficient scheme for achieving this by extending the fast density estimation method of Silverman (1982). We show with a real and a synthetic experiment that our method can provide more accurate solutions than state-of-the-art methods when optimization is carried out in the vicinity of the global minimum of the contrast function.

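Direct score-function estimation of this kind can be sketched with a plain Gaussian kernel density estimate (a stand-in for the fast density estimation method actually used; the bandwidth and test points are illustrative choices):

```python
import numpy as np

def kde_score(x_eval, data, h=0.3):
    """Estimate the score psi(x) = -p'(x)/p(x) from samples via a Gaussian kernel."""
    d = (x_eval[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * d**2)            # unnormalized Gaussian kernel values
    p = k.sum(axis=1)                  # density estimate (up to a constant)
    dp = (-d / h * k).sum(axis=1)      # derivative of the density estimate
    return -dp / p

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 20000)
pts = np.array([-1.0, 0.0, 1.0])
scores = kde_score(pts, data)          # close to the true Gaussian score psi(x) = x
```

For a unit Gaussian source the true score is psi(x) = x, so the estimate at the three test points should be close to (-1, 0, 1), slightly shrunk by the kernel bandwidth.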
  • Multilayer perceptron-based DFE with lattice structure

    Publication Year: 2001, Page(s): 532-545
    Cited by: Papers (19)
    PDF (292 KB) | HTML

    Severely distorting channels limit the use of linear equalizers, and the use of nonlinear equalizers then becomes justifiable. Neural-network-based equalizers, especially multilayer perceptron (MLP)-based equalizers, are a computationally efficient alternative to currently used nonlinear filter realizations, e.g., the Volterra type. The drawback of MLP-based equalizers is, however, their slow rate of convergence, which limits their use in practical systems. In this work, the effect of whitening the input data in a multilayer perceptron-based decision feedback equalizer (DFE) is evaluated. It is shown from computer simulations that whitening the received data using adaptive lattice channel equalization algorithms improves the convergence rate and bit error rate performance of the multilayer perceptron-based DFE. The adaptive lattice algorithm is a modification of the one developed by Ling and Proakis (1985). Consistent performance is observed in both time-invariant and time-varying channels. Finally, it is found in this work that, for time-invariant channels, the MLP DFE outperforms the least mean squares (LMS)-based DFE, whereas for time-varying channels comparable performance is obtained for the two configurations.

  • Selecting inputs for modeling using normalized higher order statistics and independent component analysis

    Publication Year: 2001, Page(s): 612-617
    Cited by: Papers (12)
    PDF (92 KB) | HTML

    The problem of input variable selection is well known in the task of modeling real-world data. In this paper, we propose a novel model-free algorithm for input variable selection using independent component analysis and higher order cross statistics. Experimental results are given which indicate that the method is capable of giving reliable performance and that it outperforms other approaches when the inputs are dependent.

  • Distributed-information neural control: the case of dynamic routing in traffic networks

    Publication Year: 2001, Page(s): 485-502
    Cited by: Papers (27)
    PDF (456 KB) | HTML

    Large-scale traffic networks can be modeled as graphs in which a set of nodes are connected through a set of links that cannot be loaded above their traffic capacities. Traffic flows may vary over time. Then the nodes may be requested to modify the traffic flows to be sent to their neighboring nodes. In this case, a dynamic routing problem arises. The decision makers are realistically assumed 1) to generate their routing decisions on the basis of local information and possibly of some data received from other nodes, typically the neighboring ones, and 2) to cooperate on the accomplishment of a common goal, that is, the minimization of the total traffic cost. Therefore, they can be regarded as the cooperating members of informationally distributed organizations, which, in control engineering and economics, are called team organizations. Team optimal control problems cannot be solved analytically unless special assumptions on the team model are verified. In general, this is not the case with traffic networks. An approximate solution method is then proposed, in which each decision maker is assigned a fixed-structure routing function where some parameters have to be optimized. Among the various possible fixed-structure functions, feedforward neural networks have been chosen for their powerful approximation capabilities. The routing functions can also be computed (or adapted) locally at each node. Concerning traffic networks, we focus attention on store-and-forward packet switching networks, which exhibit the essential peculiarities and difficulties of other traffic networks. Simulations performed on complex communication networks point out the effectiveness of the proposed method.

  • Theoretical limitations of a Hopfield network for crossbar switching

    Publication Year: 2001, Page(s): 456-462
    PDF (140 KB) | HTML

    It has been reported through simulations that Hopfield networks for crossbar switching almost always achieve the maximum throughput. It has therefore appeared that Hopfield networks, with their capacity for high-speed computation by parallel processing, could possibly be used for crossbar switching. However, it has not been determined whether they can always achieve the maximum throughput. In the paper, the capabilities and limitations of a Hopfield network for crossbar switching are considered. The Hopfield network considered in the paper is generated from the most familiar and seemingly the most powerful neural representation of crossbar switching. Based on a theoretical analysis of the network dynamics, we show what switching control the Hopfield network can and cannot produce. Consequently, we are able to show that a Hopfield network cannot always achieve the maximum throughput.

  • A neural network-based approximation method for discrete-time nonlinear servomechanism problem

    Publication Year: 2001, Page(s): 591-597
    Cited by: Papers (13)
    PDF (188 KB) | HTML

    A feedback controller that solves the discrete-time nonlinear servomechanism problem relies on the solution of a set of nonlinear functional equations known as the discrete regulator equations. The exact solution of the discrete regulator equations is usually unavailable due to the nonlinearity of the system. The paper proposes to approximately solve the discrete regulator equations using a feedforward neural network. This approach leads to an effective way to practically solve the discrete nonlinear servomechanism problem. The approach has been illustrated using the well-known inverted pendulum on a cart system. The simulation shows that the control law designed by the proposed approach performs much better than the conventional linear control law.

  • Ischemia detection with a self-organizing map supplemented by supervised learning

    Publication Year: 2001, Page(s): 503-515
    Cited by: Papers (20)
    PDF (220 KB)

    The problem of maximizing the performance of ischemia-episode detection is a difficult pattern classification problem. The motivation for developing the supervising network self-organizing map (sNet-SOM) model is to exploit this fact to design computationally effective solutions, both for the particular ischemia detection problem and for other applications that share similar characteristics. Specifically, the sNet-SOM utilizes unsupervised learning for the “simple” regions and supervised learning for the “difficult” ones in a two-stage learning process. The unsupervised learning approach extends and adapts the self-organizing map (SOM) algorithm of Kohonen. The basic SOM is modified with a dynamic expansion process controlled by an entropy-based criterion that allows the adaptive formation of the proper SOM structure. This extension proceeds until the total number of training patterns that are mapped to neurons with high entropy is reduced to a size manageable by a capable supervised model. The second learning phase has the objective of constructing better decision boundaries at the ambiguous regions. In this phase, a special supervised network is trained for the computationally reduced task of performing the classification at the ambiguous regions only. The utilization of sNet-SOM with supervised learning based on radial basis functions and support vector machines has resulted in improved accuracy of ischemia detection, especially in the latter case. The highly disciplined design of the generalization performance of the support vector machine allows the proper model to be designed for the number of patterns transferred to the supervised expert.

  • Neural and Adaptive Systems: Fundamentals Through Simulations

    Publication Year: 2001, Page(s): 648-649
    PDF (16 KB)

    First Page of the Article

  • Eigenpaxels and a neural-network approach to image classification

    Publication Year: 2001, Page(s): 625-635
    Cited by: Papers (5) | Patents (3)
    PDF (568 KB) | HTML

    An expansion encoding approach to image classification is presented. Localized principal components or “eigenpaxels” are used as a set of basis functions to represent images. That is, principal-component analysis is applied locally rather than to the entire image. The “eigenpaxels” are statistically determined using a database of the images of interest. Classification based on visual similarity is achieved through the use of a single-layer error-correcting neural network. Expansion encoding and the technique of subsampling are key elements in the processing stages of the eigenpaxel algorithm. Tested on a database of frontal face images of 40 individuals, the algorithm exhibits performance equivalent to other comparable but more cumbersome methods. In addition, the technique is shown to be robust to various types of image noise.

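Local PCA of the kind described can be sketched in a few lines. The patch size, image dimensions, and random stand-in data below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))          # stand-in for a small face database

# collect non-overlapping 8x8 "paxels" (local patches) from every image
patches = []
for img in images:
    for r in range(0, 32, 8):
        for c in range(0, 32, 8):
            patches.append(img[r:r+8, c:c+8].ravel())
P = np.array(patches, dtype=float)
P -= P.mean(axis=0)                        # center the patch ensemble before PCA

# "eigenpaxels" = principal components of the patch ensemble, via SVD
U, S, Vt = np.linalg.svd(P, full_matrices=False)
eigenpaxels = Vt[:10]                      # top-10 local basis functions
codes = P @ eigenpaxels.T                  # expansion-encoded patch features
```

The resulting codes (one 10-vector per patch) would then feed a classifier, such as the single-layer error-correcting network the abstract mentions.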
  • On the structure of strict sense Bayesian cost functions and its applications

    Publication Year: 2001, Page(s): 445-455
    Cited by: Papers (7)
    PDF (332 KB)

    In the context of classification problems, the paper analyzes the general structure of strict sense Bayesian (SSB) cost functions: those having a unique minimum when the soft decisions are equal to the posterior class probabilities. We show that any SSB cost is essentially the sum of a generalized measure of entropy, which does not depend on the targets, and an error component. Symmetric cost functions are analyzed in detail. Our results provide further insight into the behavior of this family of objective functions and are the starting point for the exploration of novel algorithms. Two applications are proposed. First, the use of asymmetric SSB cost functions for posterior probability estimation in non-maximum a posteriori (MAP) decision problems. Second, a novel entropy minimization principle for hybrid learning: use labeled data to minimize the cost function, and unlabeled data to minimize the corresponding entropy measure.

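The stated decomposition can be checked concretely for the most familiar SSB cost, the cross-entropy: its expected value splits into a target-independent entropy term plus a KL "error" term, so it is uniquely minimized when the soft decision equals the posterior. The probabilities below are arbitrary examples:

```python
import numpy as np

def cross_entropy(p, y):
    """Expected log-loss when the true posterior is p and the soft decision is y."""
    return -np.sum(p * np.log(y))

def entropy(p):
    return -np.sum(p * np.log(p))

def kl(p, y):
    return np.sum(p * np.log(p / y))

p = np.array([0.7, 0.2, 0.1])       # posterior class probabilities
y = np.array([0.5, 0.3, 0.2])       # some other soft decision

# decomposition: cost = target-independent entropy + error (KL) component,
# hence the cost is uniquely minimized at y = p (the strict-sense property)
assert np.isclose(cross_entropy(p, y), entropy(p) + kl(p, y))
```

Since kl(p, y) >= 0 with equality only at y = p, the cost at any y != p strictly exceeds the entropy floor, which is exactly the strict-sense minimum property.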
  • Design of two-dimensional recursive filters by using neural networks

    Publication Year: 2001, Page(s): 585-590
    Cited by: Papers (16)
    PDF (152 KB) | HTML

    A new design method for two-dimensional (2-D) recursive digital filters is investigated. The design of the 2-D filter is reduced to a constrained minimization problem the solution of which is achieved by the convergence of an appropriate neural network. The method is tested on a numerical example and compared with previously published methods when applied to the same example. Advantages of the proposed method over the existing ones are discussed as well.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing papers that disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
