
Neural Networks, IEEE Transactions on

Issue 6 • Date Nov. 1997


Displaying Results 1 - 25 of 33
  • Comments on "A self-organizing network for hyperellipsoidal clustering (HEC)" [with reply]

    Publication Year: 1997 , Page(s): 1561 - 1563
    Cited by:  Papers (3)

    In the above paper by Mao and Jain (ibid., vol. 7, 1996), the Mahalanobis distance is used instead of the Euclidean distance as the distance measure in order to achieve hyperellipsoidal clustering. We prove that the clustering cost function is constant under this condition, so hyperellipsoidal clustering cannot be realized. We also explain why the clustering algorithm developed in the above paper can nevertheless produce some good hyperellipsoidal clustering results. In reply, Mao and Jain state that Wang and Xia failed to point out that their HEC clustering algorithm uses a regularized Mahalanobis distance rather than the standard Mahalanobis distance; it is this regularized distance which plays the key role in realizing hyperellipsoidal clusters. In conclusion, the comments by Wang and Xia, together with this response, provide new insights into the behavior of the HEC clustering algorithm and further confirm that it is a useful tool for understanding the structure of multidimensional data.

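
The distinction at issue above can be made concrete. Below is a minimal sketch of a regularized Mahalanobis distance, assuming a simple convex blend between the cluster covariance and the identity; the blend form and the parameter `lam` are illustrative, not the HEC paper's exact regularization:

```python
import numpy as np

def regularized_mahalanobis(x, mean, cov, lam=0.5):
    """Squared regularized Mahalanobis distance (illustrative form):
    blend the cluster covariance with the identity so the metric
    interpolates between Mahalanobis (lam=0) and Euclidean (lam=1)."""
    d = len(mean)
    reg_cov = (1.0 - lam) * cov + lam * np.eye(d)
    diff = np.asarray(x) - np.asarray(mean)
    return float(diff @ np.linalg.inv(reg_cov) @ diff)

x = [2.0, 0.0]
mean = [0.0, 0.0]
cov = np.array([[4.0, 0.0], [0.0, 1.0]])
print(regularized_mahalanobis(x, mean, cov, lam=1.0))  # 4.0 (Euclidean limit)
print(regularized_mahalanobis(x, mean, cov, lam=0.0))  # 1.0 (pure Mahalanobis)
```

With `lam=0` the pure Mahalanobis metric discounts the high-variance axis, which is what makes the clustering cost constant in the comment's argument; any `lam > 0` breaks that degeneracy.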
  • Author's reply

    Publication Year: 1997 , Page(s): 1563

  • Corrections to "Adaptive Critic Designs"

    Publication Year: 1997 , Page(s): 1563
    Cited by:  Papers (1)

  • Power prediction in mobile communication systems using an optimal neural-network structure

    Publication Year: 1997 , Page(s): 1446 - 1455
    Cited by:  Papers (18)

    Presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is determining the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with a 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. The neural predictor is therefore well suited to power control applications where “delayless” noise attenuation and efficient reduction of fast fading are required.

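
The Adaline front end of the cascade described above is a linear one-step predictor trainable with the LMS rule. A minimal sketch under stated assumptions: the MLP stage and the PMDL order selection are omitted, and the filter order, step size, and epoch count are illustrative choices, not the paper's values:

```python
import numpy as np

def lms_predictor(signal, order=4, mu=0.05, epochs=10):
    """One-step-ahead Adaline predictor trained with the LMS rule."""
    w = np.zeros(order)
    for _ in range(epochs):
        for t in range(order, len(signal)):
            x = signal[t - order:t]          # last `order` samples
            e = signal[t] - w @ x            # prediction error
            w += mu * e * x                  # LMS weight update
    return w

# Predict a noiseless sinusoid; after training, errors should be small.
t = np.arange(200)
s = np.sin(0.3 * t)
w = lms_predictor(s)
pred = np.array([w @ s[k - 4:k] for k in range(4, len(s))])
print(np.mean(np.abs(pred - s[4:])))
```

A real power predictor would feed noisy fading samples in and rely on the MLP stage for the residual nonlinearity; this sketch only fixes the linear building block.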
  • Fast training of multilayer perceptrons

    Publication Year: 1997 , Page(s): 1314 - 1320
    Cited by:  Papers (30)

    Training a multilayer perceptron with an error backpropagation algorithm is slow and uncertain. This paper describes a new approach which is much faster and more reliable than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an inverse transformation to linearize the nonlinear output activation functions, direct matrix solution methods to train the weights of the output layer, and gradient descent, the delta rule, and other proposed techniques to train the weights of the hidden layers. The approach has been implemented and tested on many problems. Experimental results, including training times and recognition accuracy, are given. In general, the approach achieves accuracy as good as or better than that of perceptrons trained using error backpropagation, while the training process is much faster and also avoids local minima and paralysis.

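
The direct-solution step described above, linearizing the output activation and then solving for the output weights, can be sketched as follows. This is a hedged illustration of the general idea, not the paper's exact procedure; the hidden layer here is hand-picked for a toy XOR task, and the clipping constant `eps` is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_output_layer(H, y, eps=1e-6):
    """Train a sigmoid output unit directly: invert the activation on the
    targets (the logit), then solve a linear least-squares problem for the
    output weights instead of iterating with gradient descent."""
    y = np.clip(y, eps, 1.0 - eps)             # keep the logit finite
    z = np.log(y / (1.0 - y))                  # linearized targets
    Hb = np.hstack([H, np.ones((len(H), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Hb, z, rcond=None)
    return w

# Hidden activations for XOR with a hand-picked hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
H = sigmoid(X @ np.array([[20., -20.], [20., -20.]]) + np.array([-10., 30.]))
y = np.array([0., 1., 1., 0.])
w = fit_output_layer(H, y)
pred = sigmoid(np.hstack([H, np.ones((4, 1))]) @ w)
print(np.round(pred))  # [0. 1. 1. 0.]
```

One linear solve replaces many backpropagation epochs for the output layer, which is where the claimed speedup comes from; the hidden layers still need an iterative rule.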
  • Responses to transients in living and simulated neurons

    Publication Year: 1997 , Page(s): 1379 - 1385
    Cited by:  Papers (6)

    This paper is concerned with synaptic coding when the inputs to a neuron change over time. Experiments were performed on a living and a simulated embodiment of a prototypical inhibitory synapse. These were used to test a simple model composed of a fixed delay preceding a nonlinear encoder. Based on these results, we present a qualitative model for phenomena previously observed in the living preparation, including hysteresis and the dependence of discharge regularity on the rate of change of the presynaptic spike rate. As change is the rule rather than the exception in nature, understanding neurons' responses to nonstationarity is essential for understanding their function.

  • Learning convergence of CMAC technique

    Publication Year: 1997 , Page(s): 1281 - 1292
    Cited by:  Papers (50)

    CMAC is a useful learning technique that was developed two decades ago but still lacks an adequate theoretical foundation. Most past studies have focused on development of algorithms, improvement of the CMAC structure, and applications. Given a learning problem, very little about CMAC learning behavior, such as the convergence characteristics, the effects of hash mapping, the effects of memory size, and the error bound, can be analyzed or predicted. In this paper, we describe the CMAC technique with a mathematical formulation and use the formulation to study its convergence properties. Both information retrieval and learning rules are described by algebraic equations in matrix form. Convergence characteristics and learning behaviors for the CMAC with and without hash mapping are investigated using these equations and the eigenvalues of some derived matrices. The formulation and results provide a foundation for further investigation of this technique.

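
The retrieval and learning rules analyzed above have a simple operational form: several overlapping tilings quantize the input, each active cell contributes one weight to the output, and learning spreads the error over the active cells in LMS style. A minimal 1-D sketch, with tiling counts, offsets, and the learning rate as illustrative assumptions:

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC sketch: overlapping offset tilings, additive
    retrieval, and an LMS-style update spread over the active cells."""
    def __init__(self, n_tilings=8, n_cells=16, lo=0.0, hi=1.0, beta=0.5):
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.width = lo, (hi - lo) / n_cells
        self.w = np.zeros((n_tilings, n_cells + 1))  # +1 for offset overflow
        self.beta = beta

    def _active(self, x):
        # Each tiling is shifted by a fraction of one cell width.
        for t in range(self.n_tilings):
            off = t / self.n_tilings * self.width
            yield t, int((x - self.lo + off) / self.width)

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._active(x):
            self.w[t, c] += self.beta * err / self.n_tilings

net = CMAC1D()
xs = np.linspace(0.0, 0.99, 50)
for _ in range(50):
    for x in xs:
        net.train(x, np.sin(2 * np.pi * x))
err = max(abs(net.predict(x) - np.sin(2 * np.pi * x)) for x in xs)
print(err)
```

The matrix form studied in the paper collects exactly these active-cell indicator patterns as rows; the eigenvalues of the resulting matrices govern whether iterations like the loop above converge.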
  • A knowledge-base generating hierarchical fuzzy-neural controller

    Publication Year: 1997 , Page(s): 1531 - 1541
    Cited by:  Papers (7)

    We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base takes the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.

  • Reduction of breast biopsies with a modified self-organizing map

    Publication Year: 1997 , Page(s): 1386 - 1396
    Cited by:  Papers (18)  |  Patents (14)

    A modified self-organizing map with nonlinear weight adjustments has been applied to reduce the number of breast biopsies necessary for breast cancer diagnosis. Tissue features representing texture information were extracted from digital sonographic images of benign and malignant breast tumors. The resulting hyperspace of data points was then used in a modified self-organizing map that objectively segments population distributions of lesions and accurately establishes benign and malignant regions. These methods were applied to a group of 102 problematic breast cases with sonographic images, including 34 with malignant lesions; all lesions were substantiated by excisional biopsy. The system can isolate clusters of purely benign lesions from other clusters containing both benign and malignant lesions. The hybrid neural network defined a region in which about 60% of the benign lesions were located, exclusive of any malignant lesions. The experimental results also suggest that the modified self-organizing map provides more accurate population distribution maps than conventional Kohonen maps.

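
For reference, the conventional Kohonen map that the modified network above is compared against can be sketched as follows. The paper's nonlinear weight adjustment is not reproduced here; the learning-rate and neighborhood schedules are illustrative assumptions:

```python
import numpy as np

def train_som(data, n_units=10, epochs=100, seed=0):
    """Sketch of a 1-D Kohonen self-organizing map: find the best-matching
    unit (BMU), then pull it and its chain neighbors toward the sample with
    a shrinking neighborhood and learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for e in range(epochs):
        lr = 0.5 * (1.0 - e / epochs)                       # decaying rate
        sigma = max(1.0, n_units / 2 * (1.0 - e / epochs))  # shrinking width
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))     # neighborhood
            w += lr * h[:, None] * (x - w)
    return w

# Map 2-D points from two blobs; units should settle near the data.
rng = np.random.default_rng(1)
blobs = np.vstack([rng.normal(0.2, 0.05, (30, 2)),
                   rng.normal(0.8, 0.05, (30, 2))])
w = train_som(blobs)
print(w.min(), w.max())
```

In the diagnostic setting above, each unit's final weight vector is a prototype in feature space, and cluster membership of the units is what lets purely benign regions be separated out.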
  • Self-organizing algorithms for generalized eigen-decomposition

    Publication Year: 1997 , Page(s): 1518 - 1530
    Cited by:  Papers (23)  |  Patents (12)

    We discuss a new approach to self-organization that leads to novel adaptive algorithms for generalized eigen-decomposition and its variants for a single-layer linear feedforward neural network. First, we derive two novel iterative algorithms for linear discriminant analysis (LDA) and generalized eigen-decomposition by utilizing a constrained least-mean-squared classification error cost function and the framework of a two-layer linear heteroassociative network performing a one-of-m classification. By using the concept of deflation, we are able to find sequential versions of these algorithms that extract the LDA components and generalized eigenvectors in decreasing order of significance. Next, two new adaptive algorithms are described that compute the principal generalized eigenvectors of two matrices (as well as the LDA components) from two sequences of random matrices. We give a rigorous convergence analysis of our adaptive algorithms using stochastic approximation theory, and prove that our algorithms converge with probability one.

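
To fix ideas, the underlying batch problem, A v = λ B v, can be solved directly via a Cholesky reduction. The paper above derives adaptive, sample-by-sample algorithms instead; this plain batch solver is only a reference point, and the matrices are illustrative:

```python
import numpy as np

def generalized_eig(A, B):
    """Solve A v = lambda B v for symmetric A and symmetric positive
    definite B by reducing to an ordinary symmetric eigenproblem via the
    Cholesky factor of B, returned in decreasing order of significance."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T                 # symmetric reduced problem
    vals, U = np.linalg.eigh(C)           # ascending eigenvalues
    V = Linv.T @ U                        # back-transform eigenvectors
    return vals[::-1], V[:, ::-1]         # descending order

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 2.0]])
vals, V = generalized_eig(A, B)
# Each column satisfies A v = lambda B v.
print(np.allclose(A @ V, (B @ V) * vals))  # True
```

In the LDA reading, A and B play the roles of the between-class and within-class scatter matrices; the adaptive algorithms in the paper estimate the same eigenvectors from streaming samples without ever forming A and B.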
  • Nonlinear backpropagation: doing backpropagation without derivatives of the activation function

    Publication Year: 1997 , Page(s): 1321 - 1327
    Cited by:  Papers (3)

    The conventional linear backpropagation algorithm is replaced by a nonlinear version, which avoids the need to calculate the derivative of the activation function. This may be exploited in hardware realizations of neural processors. In this paper we derive the nonlinear backpropagation algorithms in the framework of recurrent backpropagation and present some numerical simulations of feedforward networks on the NetTalk problem. A discussion of implementation in analog very large scale integration (VLSI) electronics concludes the paper.

  • A partial order for the M-of-N rule-extraction algorithm

    Publication Year: 1997 , Page(s): 1542 - 1544
    Cited by:  Papers (7)

    We present a method to unify the rules obtained by the M-of-N rule-extraction technique. The rules extracted from a perceptron by the M-of-N algorithm are in correspondence with sets of minimal Boolean vectors with respect to the classical partial order defined on vectors. Our method relies on a simple characterization of another partial order defined on Boolean vectors. We show that there also exists a correspondence between sets of minimal Boolean vectors with respect to this order and M-of-N rules equivalent to a perceptron. The gain is that fewer rules are generated with the second order. Independently, we prove that deciding whether a perceptron is symmetric with respect to two variables is NP-complete.

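
An M-of-N rule and its perceptron reading can be stated in a few lines. This is a generic illustration of the rule form itself, not of the extraction algorithm or the partial orders discussed above:

```python
def m_of_n(m, conditions):
    """An M-of-N rule fires when at least `m` of its Boolean conditions
    hold -- exactly the thresholded form a perceptron with equal positive
    weights computes, which is why perceptron rule extraction yields
    rules of this shape."""
    return sum(bool(c) for c in conditions) >= m

def perceptron_fires(weights, bias, x):
    """The equivalent perceptron view: weighted sum against a threshold."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias >= 0

# A "2-of-3" rule over three features, both ways.
print(m_of_n(2, [True, True, False]))              # True
print(m_of_n(2, [True, False, False]))             # False
print(perceptron_fires([1, 1, 1], -2, [1, 1, 0]))  # True (unit weights, threshold 2)
```

The unification problem in the paper is about which sets of such rules are equivalent to one perceptron; the second partial order shrinks the rule set needed for equivalence.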
  • Development and application of an integrated neural system for an HDCL

    Publication Year: 1997 , Page(s): 1328 - 1337
    Cited by:  Papers (3)  |  Patents (1)

    This study presents the development and industrial application of an integrated neural system for coating weight control on a modern hot dip coating line (HDCL) in a steel mill. The neural system consists of two multilayered feedforward neural networks and a neural adaptive controller, which perform real-time coating weight prediction, feedforward control (FFC), and adaptive feedback control (FBC), respectively. The production line analysis, neural system architecture, learning, associative memories, generalization, and real-time applications are addressed in this paper. This integrated neural system has been successfully implemented and applied to an HDCL at Burns Harbor Division, Bethlehem Steel Co., Chesterton, IN. The industrial application results have shown significant improvements: reduced coating weight transitional footage, reduced variation of the error between the target and actual coating weight, and reduced consumption of coating material. Some practical aspects of applying a neural system to industrial control are discussed as concluding remarks.

  • Volterra models and three-layer perceptrons

    Publication Year: 1997 , Page(s): 1421 - 1433
    Cited by:  Papers (19)

    This paper proposes the use of a class of feedforward artificial neural networks with polynomial activation functions (distinct for each hidden unit) for practical modeling of high-order Volterra systems. Discrete-time Volterra models (DVMs) are often used in the study of nonlinear physical and physiological systems using stimulus-response data. However, their practical use has been hindered by computational limitations that confine them to low-order nonlinearities (i.e., only estimation of low-order kernels is practically feasible). Since three-layer perceptrons (TLPs) can be used to represent input-output nonlinear mappings of arbitrary order, this paper explores the basic relations between DVMs and TLPs with tapped-delay inputs in the context of nonlinear system modeling. A variant of the TLP with polynomial activation functions, termed “separable Volterra networks” (SVNs), is found particularly useful in deriving explicit relations with DVMs and in obtaining practicable models of highly nonlinear systems from stimulus-response data. The conditions under which the two approaches yield equivalent representations of the input-output relation are explored, and the feasibility of DVM estimation via equivalent SVN training using backpropagation is demonstrated by computer-simulated examples and compared with results from the Laguerre expansion technique (LET). The use of SVN models allows practicable modeling of high-order nonlinear systems, thus removing the main practical limitation of the DVM approach.

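
The polynomial-activation idea above is easy to state in code: each hidden unit applies its own polynomial to a linear projection of the tapped-delay input, and a degree-2 unit expands exactly into a rank-one second-order Volterra kernel. The parameterization below (a weight matrix `W` and a coefficient table `coeffs`) is an illustrative sketch, not the paper's notation:

```python
import numpy as np

def svn_forward(x_window, W, coeffs):
    """Forward pass of a separable-Volterra-network-style model: each
    hidden unit applies its own polynomial activation (coefficients in
    `coeffs`, low order first) to a linear combination (rows of `W`) of
    the tapped-delay input, and the output sums the units."""
    s = W @ x_window                                     # one projection per unit
    powers = np.vander(s, coeffs.shape[1], increasing=True)
    return float(np.sum(powers * coeffs))

# A single unit with activation p(s) = s**2 and weights w realizes the
# second-order Volterra kernel k2[i, j] = w[i] * w[j].
w = np.array([0.5, -1.0, 2.0])
W = w[None, :]
coeffs = np.array([[0.0, 0.0, 1.0]])                     # p(s) = s^2
x = np.array([1.0, 2.0, 3.0])
lhs = svn_forward(x, W, coeffs)
rhs = x @ np.outer(w, w) @ x                             # sum_ij k2[i,j] x[i] x[j]
print(np.isclose(lhs, rhs))  # True
```

Summing several such units gives kernels of higher rank and higher order, which is the sense in which SVN training estimates Volterra kernels without ever enumerating their coefficients.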
  • Hardware implementation of CMAC neural network with reduced storage requirement

    Publication Year: 1997 , Page(s): 1545 - 1556
    Cited by:  Papers (13)

    The cerebellar model articulation controller (CMAC) neural network has the advantages of fast convergence and low computational complexity. However, it suffers from a low storage space utilization rate in its weight memory. In this paper, we propose a direct weight address mapping approach, which can reduce the required weight memory size with a utilization rate near 100%. Based on this address mapping approach, we developed a pipeline architecture to efficiently perform the addressing operations. The proposed direct weight address mapping approach also speeds up the computation of weight addresses. In addition, a CMAC hardware prototype used for color calibration has been implemented to confirm the proposed approach and architecture.

  • Knowledge-based fuzzy MLP for classification and rule generation

    Publication Year: 1997 , Page(s): 1338 - 1350
    Cited by:  Papers (20)

    A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of class a priori probabilities. This encoding also incorporates hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training; node growing and link pruning are also resorted to. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules, corresponding to a pattern not belonging to a class, can also be obtained; these are useful for inferencing in ambiguous cases. Results on real-life and synthetic data demonstrate that the speed of learning and the classification performance of the proposed scheme are better than those obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.

  • Recurrent neural nets as dynamical Boolean systems with application to associative memory

    Publication Year: 1997 , Page(s): 1268 - 1280
    Cited by:  Papers (4)

    Discrete-time/discrete-state recurrent neural networks are analyzed from a dynamical Boolean systems point of view in order to devise new analytic and design methods for the class of both single- and multilayer recurrent artificial neural networks. With the proposed dynamical Boolean systems analysis, we are able to formulate necessary and sufficient conditions for network stability which are more general than the well-known but restrictive conditions for the class of single-layer networks: (1) a symmetric weight matrix with (2) a positive diagonal and (3) asynchronous update. In terms of design, we use the dynamical Boolean systems analysis to construct a high-performance associative memory. With this Boolean memory, we can guarantee that all fundamental memories are stored, and also guarantee the size of the basin of attraction for each fundamental memory.

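
The restrictive single-layer conditions mentioned above are exactly the classic Hopfield setting: symmetric Hebbian weights, zero (here, non-negative) diagonal, asynchronous updates. A short sketch makes it concrete; the stored patterns and the probe are illustrative:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product storage with zero diagonal -- the classic
    symmetric-weight setting whose stability the paper generalizes."""
    P = np.array(patterns, float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous recall: update one +/-1 neuron at a time."""
    s = np.array(state, float)
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

patterns = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
W = hebbian_weights(patterns)
probe = [1, 1, 1, -1, -1, 1]   # stored pattern with one bit flipped
print(recall(W, probe))
```

The Boolean-systems analysis in the paper drops the symmetry requirement, which is what allows multilayer recurrent designs with guaranteed basins of attraction.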
  • Complete memory structures for approximating nonlinear discrete-time mappings

    Publication Year: 1997 , Page(s): 1397 - 1409
    Cited by:  Papers (4)

    This paper introduces a general structure that is capable of approximating the input-output maps of nonlinear discrete-time systems. The structure comprises two stages, a dynamical stage followed by a memoryless nonlinear stage. A theorem is presented which gives a simple necessary and sufficient condition for a large set of structures of this form to be capable of modeling a wide class of nonlinear discrete-time systems. In particular, we introduce the concept of a “complete memory”. A structure with a complete-memory dynamical stage and a sufficiently powerful memoryless stage is shown to be capable of approximating, to arbitrary accuracy, a wide class of continuous, causal, time-invariant, approximately-finite-memory mappings between discrete-time signal spaces. Furthermore, we show that any bounded-input bounded-output, time-invariant, causal memory structure has such an approximation capability if and only if it is a complete memory. Several examples of linear and nonlinear complete memories are presented. The proposed complete memory structure provides a template for designing a wide variety of artificial neural networks for nonlinear spatiotemporal processing.

  • On the distribution of performance from multiple neural-network trials

    Publication Year: 1997 , Page(s): 1507 - 1517
    Cited by:  Papers (12)

    The performance of neural-network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases the distribution of the individual results does not approximate a Gaussian distribution: it may not be symmetric, and it may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task, we find that the distribution of performance is skewed toward better performance for smoother target functions and toward worse performance for more complex target functions. We propose new guidelines for reporting performance which provide more information about the actual distribution.

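
One way to act on the reporting concern above is to summarize multi-trial results with order statistics rather than mean and standard deviation. The particular summary fields below are an illustrative choice, not the paper's proposed guidelines:

```python
import numpy as np

def summarize_trials(scores):
    """Distribution-aware summary of multi-trial results: median,
    interquartile range, extremes, and a crude skew indicator
    (sign of mean minus median)."""
    scores = np.asarray(scores, float)
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return {"median": med, "iqr": q3 - q1,
            "min": scores.min(), "max": scores.max(),
            "skew_sign": float(np.sign(scores.mean() - med))}

# A skewed set of trial errors: the mean alone would be misleading.
errors = [0.10, 0.11, 0.09, 0.12, 0.10, 0.55]   # one bad restart
s = summarize_trials(errors)
print(s["median"], s["skew_sign"])
```

Here the single bad restart drags the mean well above the median (`skew_sign` is positive), the kind of asymmetry that a mean-and-standard-deviation report hides.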
  • Stable online evolutionary learning of NN-MLP

    Publication Year: 1997 , Page(s): 1371 - 1378
    Cited by:  Papers (7)

    To design the nearest-neighbor-based multilayer perceptron (NN-MLP) efficiently, the author has proposed a nongenetic evolutionary algorithm called the R4-rule. For off-line learning, the R4-rule can produce the smallest or nearly smallest networks with high generalization ability by iteratively performing four basic operations: recognition, remembrance, reduction, and review. This algorithm, however, cannot be applied directly to online learning because of its inherent instability, which is caused by over-reduction and over-review. To stabilize the R4-rule, this paper proposes improvements to reduction and review. The improved reduction is more robust for online learning because the fitness of each hidden neuron is defined by its overall behavior over many learning cycles. The new review is more efficient because hidden neurons are adjusted in a more careful way. The performance of the improved R4-rule for online learning is shown by experimental results.

  • Structure optimization of neural networks with the A*-algorithm

    Publication Year: 1997 , Page(s): 1434 - 1445
    Cited by:  Papers (12)

    A method for constructing optimal structures for feedforward neural networks is introduced. By constructing a graph of network structures and assigning an evaluation value to each of them, a heuristic search algorithm can be run on this graph. The application of the A*-algorithm ensures, in theory, both the optimality of the solution and the optimality of the search. For several examples, a comparison between the new strategy and the well-known cascade-correlation procedure is carried out with respect to the performance of the resulting structures.

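
The A*-algorithm itself is standard; the paper's contribution is applying it to a graph of network structures with a suitable evaluation value. A generic sketch on a toy grid, where the grid, the unit step costs, and the Manhattan heuristic stand in for the structure graph and its evaluation:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A* search: with an admissible heuristic `h`, the first
    expansion of `goal` carries the optimal cost."""
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                       # stale queue entry
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

# 4-connected 5x5 grid with unit step costs, Manhattan-distance heuristic.
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 5 and 0 <= q[1] < 5:
            yield q, 1

goal = (4, 3)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, neighbors, h))  # 7
```

In the structure-search setting, a node would be a candidate network, `neighbors` would add units or connections, and the evaluation value would combine training cost and performance; admissibility of that value is what the optimality guarantee rests on.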
  • On the problem of spurious patterns in neural associative memory models

    Publication Year: 1997 , Page(s): 1483 - 1491
    Cited by:  Papers (9)

    The problem of spurious patterns in neural associative memory models is discussed. Some suggestions from the literature for solving this problem are reviewed and their inadequacies are pointed out. A solution based on the notion of neural self-interaction with a suitably chosen magnitude is presented for the Hebbian learning rule. For an optimal learning rule based on linear programming, asymmetric dilution of synaptic connections is presented as another solution to the problem of spurious patterns. With varying percentages of asymmetric dilution, it is demonstrated numerically that this optimal learning rule leads to near-total suppression of spurious patterns. For practical use of neural associative memory networks, a combination of the two solutions with the optimal learning rule is recommended as the best option.

  • k-winners-take-all neural net with Θ(1) time complexity

    Publication Year: 1997 , Page(s): 1557 - 1561
    Cited by:  Papers (1)

    In this article we present a k-winners-take-all (k-WTA) neural net that is established based on the concept of the constant-time sorting machine of Hsu and Wang. It fits specific applications such as real-time processing, since its Θ(1) time complexity is independent of the problem size. The proposed k-WTA neural net produces the solution in constant time, while the Hopfield network requires a relatively long transient to converge to the solution from some initial states.

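
On a sequential machine, the k-WTA operation above reduces to a selection problem. A sketch using linear-time selection; this does not reproduce the network's constant parallel time, and the example activation vector is illustrative:

```python
import numpy as np

def k_winners_take_all(x, k):
    """k-WTA as a selection step: mark the k largest activations as
    winners (output 1) and suppress the rest (output 0). np.argpartition
    performs the selection in time linear in the input size."""
    winners = np.argpartition(x, -k)[-k:]
    out = np.zeros_like(x)
    out[winners] = 1
    return out

x = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
print(k_winners_take_all(x, 2))  # winners at the two largest, 0.9 and 0.7
```

The Θ(1) claim in the paper is about parallel hardware: all comparisons happen simultaneously in the network, so the settling time does not grow with the number of competing units.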
  • Structure of the high-order Boltzmann machine from independence maps

    Publication Year: 1997 , Page(s): 1351 - 1358
    Cited by:  Papers (1)

    In this paper we consider the determination of the structure of the high-order Boltzmann machine (HOBM), a stochastic recurrent network for approximating probability distributions. We obtain the structure of the HOBM, the hypergraph of connections, from conditional independences of the probability distribution to be modeled. We assume that an expert provides these conditional independences, and from them we build independence maps, Markov and Bayesian networks, which represent conditional independences through undirected graphs and directed acyclic graphs, respectively. From these independence maps we construct the HOBM hypergraph. The central aim of this paper is to obtain a minimal hypergraph. Given that different orderings of the variables in general provide different Bayesian networks, we define their intersection hypergraph. We prove that the intersection hypergraph of all N! Bayesian networks of the distribution is contained in the hypergraph of the Markov network and is simpler, and we give a procedure to determine a subset of the Bayesian networks that verifies this property. We also prove that the Markov network graph establishes a minimum connectivity for the hypergraphs derived from the Bayesian networks.

  • Neural-network-based robust fault diagnosis in robotic systems

    Publication Year: 1997 , Page(s): 1410 - 1420
    Cited by:  Papers (36)

    Fault diagnosis plays an important role in the operation of modern robotic systems. A number of researchers have proposed fault diagnosis architectures for robotic manipulators using the model-based analytical redundancy approach. One of the key issues in the design of such fault diagnosis schemes is the effect of modeling uncertainties on their performance. This paper investigates the problem of fault diagnosis in rigid-link robotic manipulators with modeling uncertainties. A learning architecture with sigmoidal neural networks is used to monitor the robotic system for any off-nominal behavior due to faults. The robustness and stability properties of the fault diagnosis scheme are rigorously established. Simulation examples are presented to illustrate the ability of the neural-network-based robust fault diagnosis scheme to detect and accommodate faults in a two-link robotic manipulator.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.

Full Aims & Scope