IEEE Transactions on Neural Networks

Issue 5 • Sept. 2006

  • Table of contents

    Page(s): c1 - c4
    Freely Available from IEEE
  • IEEE Transactions on Neural Networks publication information

    Page(s): c2
    Freely Available from IEEE
  • Feature Selection Using a Piecewise Linear Network

    Page(s): 1101 - 1115

    We present an efficient feature selection algorithm for the general regression problem, which utilizes a piecewise linear orthonormal least squares (OLS) procedure. The algorithm 1) determines an appropriate piecewise linear network (PLN) model for the given data set, 2) applies the OLS procedure to the PLN model, and 3) searches for useful feature subsets using a floating search algorithm. The floating search prevents the "nesting effect." The proposed algorithm is computationally very efficient because only one data pass is required. Several examples are given to demonstrate the effectiveness of the proposed algorithm.
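
    As a concrete illustration of the floating search step, here is a minimal Python sketch of sequential floating forward selection; the scoring function score is a hypothetical stand-in for the paper's piecewise linear network/OLS fitness criterion.

        # Minimal sketch of sequential floating forward selection (SFFS).
        # score(subset) -> float is assumed given; in the paper it would
        # come from the piecewise linear network/OLS procedure.
        def sffs(features, score, k):
            best = {}                 # best (subset, score) seen per size
            cur = []
            while len(cur) < k:
                cand = [f for f in features if f not in cur]
                if not cand:
                    break
                cur = cur + [max(cand, key=lambda f: score(cur + [f]))]
                if score(cur) > best.get(len(cur), ([], float("-inf")))[1]:
                    best[len(cur)] = (cur, score(cur))
                # Floating backward step: drop features while the reduced
                # subset beats the best one of that size seen so far; this
                # is what prevents the "nesting effect".
                while len(cur) > 2:
                    red = max(([g for g in cur if g != f] for f in cur),
                              key=score)
                    if score(red) > best.get(len(red), ([], float("-inf")))[1]:
                        best[len(red)] = (red, score(red))
                        cur = red
                    else:
                        break
            return best.get(k, (cur, None))[0]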

  • On Adaptive Learning Rate That Guarantees Convergence in Feedforward Networks

    Page(s): 1116 - 1125

    This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving a global minimum for these kinds of algorithms have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
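
    The exact LF I/LF II updates are given in the paper; the sketch below only illustrates the general idea of replacing a fixed BP rate with a rate recomputed each step so that the squared-error Lyapunov candidate V(w) = 0.5*||e(w)||^2 decreases. The function names and the normalization are illustrative assumptions, not the authors' rule.

        import numpy as np

        # Illustrative stand-in for a Lyapunov-based adaptive rate: the
        # step size is renormalized from the current error and gradient.
        def adaptive_lr_step(w, error_fn, grad_fn, mu=0.5, eps=1e-12):
            e = error_fn(w)                      # residual vector e(w)
            g = grad_fn(w)                       # gradient of V at w
            eta = mu * (e @ e) / (g @ g + eps)   # adaptive learning rate
            return w - eta * g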

  • Generalized Core Vector Machines

    Page(s): 1126 - 1140

    Kernel methods, such as the support vector machine (SVM), are often formulated as quadratic programming (QP) problems. However, given m training patterns, a naive implementation of the QP solver takes O(m^3) training time and at least O(m^2) space. Hence, scaling up these QPs is a major stumbling block in applying kernel methods to very large data sets, and a replacement of the naive method for finding the QP solutions is highly desirable. Recently, by using approximation algorithms for the minimum enclosing ball (MEB) problem, we proposed the core vector machine (CVM) algorithm, which is much faster and can handle much larger data sets than existing SVM implementations. However, the CVM can only be used with certain kernel functions and kernel methods. For example, the very popular support vector regression (SVR) cannot be used with the CVM. In this paper, we introduce the center-constrained MEB problem and subsequently extend the CVM algorithm. The generalized CVM algorithm can now be used with any linear/nonlinear kernel and can also be applied to kernel methods such as SVR and the ranking SVM. Moreover, like the original CVM, its asymptotic time complexity is again linear in m and its space complexity is independent of m. Experiments show that the generalized CVM has comparable performance with state-of-the-art SVM and SVR implementations, but is faster and produces fewer support vectors on very large data sets.
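
    For background, the MEB approximation that the CVM builds on fits in a few lines; the sketch below is the classical Badoiu-Clarkson core-set iteration in input space, whereas the CVM applies the same idea in kernel feature space.

        import numpy as np

        # (1 + eps)-approximate minimum enclosing ball via the
        # Badoiu-Clarkson iteration; about 1/eps**2 steps suffice.
        def meb_approx(X, eps=0.1):
            c = X[0].astype(float).copy()
            for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
                far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
                c += (far - c) / (t + 1)    # shift center toward far point
            return c, np.linalg.norm(X - c, axis=1).max()  # center, radius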

  • Multiperiodicity of Discrete-Time Delayed Neural Networks Evoked by Periodic External Inputs

    Page(s): 1141 - 1151

    In this paper, the multiperiodicity of a general class of discrete-time delayed neural networks (DTDNNs) is formulated and studied. Several sufficient conditions are obtained to ensure that an n-neuron DTDNN can have 2^n periodic orbits and that these periodic orbits are locally attractive. In addition, we give conditions for a periodic orbit to be locally or globally attractive when the periodic orbit is located in a designated region. As two typical representatives, the Hopfield neural network and the cellular neural network are examined in detail. These conditions improve and extend the existing stability results in the literature. Simulation results are also discussed in three illustrative examples.

  • Global Exponential Stability of Multitime Scale Competitive Neural Networks With Nonsmooth Functions

    Page(s): 1152 - 1164

    In this paper, we study the global exponential stability of a multitime scale competitive neural network model with nonsmooth functions, which models a laterally inhibited neural network with unsupervised Hebbian learning. The network has two types of state variables: one corresponds to the fast neural activity and the other to the slow unsupervised modification of connection weights. Based on nonsmooth analysis techniques, we prove the existence and uniqueness of the equilibrium of the system and establish some new theoretical conditions ensuring global exponential stability of the unique equilibrium of the neural network. Numerical simulations are conducted to illustrate the effectiveness of the derived conditions in characterizing stability regions of the neural network.

  • Associative Memory Design Using Support Vector Machines

    Page(s): 1165 - 1174

    The relation between support vector machines (SVMs) and recurrent associative memories is investigated. The design of associative memories based on the generalized brain-state-in-a-box (GBSB) neural model is formulated as a set of independent classification tasks, which can be efficiently solved by standard software packages for SVM learning. Some properties of the networks designed in this way are highlighted, such as the fact that, surprisingly, they follow a generalized Hebb's law. The performance of the SVM approach is compared to that of existing methods with nonsymmetric connections through several design examples.
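
    A minimal sketch of the design idea, assuming bipolar patterns and a plain sign-activation recurrent net (the GBSB model itself uses a saturating activation): each neuron's incoming weights come from an independent linear SVM.

        import numpy as np
        from sklearn.svm import LinearSVC

        # Each row of W solves an independent classification task: map
        # every stored pattern to the desired sign of neuron i's state.
        P = np.array([[ 1, -1,  1, -1],
                      [-1,  1,  1, -1],
                      [ 1,  1, -1,  1]])     # stored bipolar patterns
        n = P.shape[1]
        W, b = np.zeros((n, n)), np.zeros(n)
        for i in range(n):
            clf = LinearSVC(C=10.0).fit(P, P[:, i])
            W[i], b[i] = clf.coef_[0], clf.intercept_[0]

        x = np.sign(W @ np.array([1, -1, 1, -1]) + b)  # one recall step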

  • Wavelet Adaptive Backstepping Control for a Class of Nonlinear Systems

    Page(s): 1175 - 1183

    This paper proposes a wavelet adaptive backstepping control (WABC) system for a class of second-order nonlinear systems. The WABC comprises a neural backstepping controller and a robust controller. The neural backstepping controller, containing a wavelet neural network (WNN) identifier, is the principal controller, and the robust controller is designed to achieve L2 tracking performance with a desired attenuation level. Since the WNN uses wavelet functions, its learning capability is superior to that of conventional neural networks for system identification. Moreover, the adaptation laws of the control system are derived in the sense of a Lyapunov function and Barbalat's lemma, so the system can be guaranteed to be asymptotically stable. The proposed WABC is applied to two nonlinear systems, a chaotic system and a wing-rock motion system, to illustrate its effectiveness. Simulation results verify that the proposed WABC can achieve favorable tracking performance by incorporating WNN identification, adaptive backstepping control, and L2 robust control techniques.

  • Analysis of Artificial Neural Networks for Pattern-Based Adaptive Control

    Page(s): 1184 - 1193

    Adaptive pattern-based control strategies adapt their parameters from an analysis of response patterns exhibited by the system. This work presents an analysis of a class of artificial neural network (ANN) pattern-based adaptive controllers. It provides conditions under which the adaptive algorithm will converge, and it characterizes the closed-loop stability properties. In addition, a method for monitoring the adaptation is proposed. Several simulation examples illustrate our findings.

  • Accuracy/Diversity and Ensemble MLP Classifier Design

    Page(s): 1194 - 1211

    The difficulties of tuning the parameters of multilayer perceptron (MLP) classifiers are well known. In this paper, a measure is described that is capable of predicting the number of classifier training epochs required for achieving optimal performance in an ensemble of MLP classifiers. The measure is computed between pairs of patterns on the training data and is based on a spectral representation of a Boolean function. This representation characterizes the mapping from classifier decisions to target label and allows accuracy and diversity to be incorporated within a single measure. Results on many benchmark problems, including the Olivetti Research Laboratory (ORL) face database, demonstrate that the measure is well correlated with base-classifier test error and may be used to predict the optimal number of training epochs. While the correlation with ensemble test error is not quite as strong, it is shown in this paper that the measure may be used to predict the number of epochs for optimal ensemble performance. Although the technique is only applicable to two-class problems, it is extended here to multiclass problems through output coding. For the output-coding technique, a random code matrix is shown to give better performance than a one-per-class code, even when the base classifier is well tuned.

  • Study of a Fast Discriminative Training Algorithm for Pattern Recognition

    Page(s): 1212 - 1221

    Discriminative training refers to an approach to pattern recognition based on the direct minimization of a cost function commensurate with the performance of the recognition system. This is in contrast to the procedure of probability distribution estimation conventionally required in Bayes' formulation of the statistical pattern recognition problem. Currently, most discriminative training algorithms for nonlinear classifier designs are based on gradient-descent (GD) methods for cost minimization. These algorithms are easy to derive and effective in practice, but are slow in training speed and have difficulty selecting the learning rates. To address the problem, we present our study of a fast discriminative training algorithm. The algorithm initializes the parameters by the expectation-maximization (EM) algorithm and then uses a set of closed-form formulas derived in this paper to further optimize a proposed objective of minimizing the error rate. Experiments in speech applications show that the algorithm provides better recognition accuracy in fewer iterations than the EM algorithm and a neural network trained by hundreds of GD iterations. Although some convergence properties need further research, the proposed objective and derived formulas can benefit further study of the problem.

  • Training Reformulated Radial Basis Function Neural Networks Capable of Identifying Uncertainty in Data Classification

    Page(s): 1222 - 1234

    This paper introduces a learning algorithm that can be used for training reformulated radial basis function neural networks (RBFNNs) capable of identifying uncertainty in data classification. This learning algorithm trains a special class of reformulated RBFNNs, known as cosine RBFNNs, by updating selected adjustable parameters to minimize the class-conditional variances at the outputs of their radial basis functions (RBFs). The experiments verify that quantum neural networks (QNNs) and cosine RBFNNs trained by the proposed learning algorithm are capable of identifying uncertainty in data classification, a property that is not shared by cosine RBFNNs trained by the original learning algorithm or by conventional feed-forward neural networks (FFNNs). Finally, this study leads to a simple classification strategy that can be used to improve the classification accuracy of QNNs and cosine RBFNNs by rejecting ambiguous feature vectors based on their responses.

  • Prune-Able Fuzzy ART Neural Architecture for Robot Map Learning and Navigation in Dynamic Environments

    Page(s): 1235 - 1249

    Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose, the new prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture, the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated with a shortest-path planning approach implemented in real time over the underlying global world model. Experimental results obtained with a Nomad 200 robot demonstrate the feasibility and effectiveness of the proposed methods.
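
    For reference, a minimal sketch of the standard fuzzy ART dynamics that FARTNA builds on (category choice, vigilance test, fast learning); the pruning mechanisms that make categories removable are the paper's contribution and are not shown.

        import numpy as np

        # One fuzzy ART step: rank categories by choice value, accept the
        # first that passes the vigilance test and learn by fuzzy AND;
        # otherwise commit a new category. I is assumed complement-coded.
        def fuzzy_art_step(I, W, rho=0.75, alpha=0.001, beta=1.0):
            order = sorted(range(len(W)), key=lambda j:
                           -np.minimum(I, W[j]).sum() / (alpha + W[j].sum()))
            for j in order:
                if np.minimum(I, W[j]).sum() / I.sum() >= rho:  # vigilance
                    W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
                    return j
            W.append(I.copy())
            return len(W) - 1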

  • A Hopfield Neural Network for Image Change Detection

    Page(s): 1250 - 1264

    This paper outlines an optimization relaxation approach based on the analog Hopfield neural network (HNN) for solving the image change detection problem between two images. A difference image is obtained by subtracting the two images pixel by pixel. The network topology is built so that each pixel in the difference image is a node in the network. Each node is characterized by its state, which determines whether a pixel has changed. An energy function is derived so that the network converges to stable states. The analog Hopfield model allows each node to take on analog state values. Unlike most widely used approaches, where binary labels (changed/unchanged) are assigned to each pixel, the analog property provides the strength of the change. The main contribution of this paper is the customization of the analog Hopfield neural network to derive an automatic image change detection approach. When a pixel is being processed, some existing image change detection procedures consider only interpixel relations in its neighborhood. The main drawback of such approaches is that the pixel is labeled as changed or unchanged according to the information supplied by its neighbors, while its own information is ignored. The Hopfield model overcomes this drawback and, for each pixel, allows a tradeoff between the influence of its neighborhood and its own criterion. This is mapped into the energy function to be minimized. The performance of the proposed method is illustrated by comparative analysis against some existing image change detection methods.
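
    A toy sketch of the relaxation just described, with an assumed update rule rather than the paper's energy function: each pixel's analog state balances its own evidence in the difference image against its four-neighborhood.

        import numpy as np

        # Hypothetical analog relaxation (not the paper's exact energy):
        # states in [-1, 1] encode change strength; lam trades neighbor
        # consensus against the pixel's own difference-image evidence.
        def change_map(diff, steps=50, lam=0.5, dt=0.2):
            v = diff.copy()
            for _ in range(steps):
                nb = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                      np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
                v = np.tanh(v + dt * (lam * nb + (1 - lam) * diff - v))
            return v          # signed per-pixel strength of change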

  • Efficient Variant of Algorithm FastICA for Independent Component Analysis Attaining the Cramér-Rao Lower Bound

    Page(s): 1265 - 1277

    FastICA is one of the most popular algorithms for independent component analysis (ICA), demixing a set of statistically independent sources that have been mixed linearly. A key question is how accurate the method is for finite data samples. We propose an improved version of the FastICA algorithm which is asymptotically efficient, i.e., its accuracy, given by the residual error variance, attains the Cramér-Rao lower bound (CRB). The error is thus as small as possible. This result is rigorously proven under the assumption that the probability distribution of the independent signal components belongs to the class of generalized Gaussian (GG) distributions with parameter α, denoted GG(α), for α > 2. We name the algorithm efficient FastICA (EFICA). The computational complexity of a MATLAB implementation of the algorithm is shown to be only slightly (about three times) higher than that of the standard symmetric FastICA. Simulations corroborate these claims and show superior performance of the algorithm compared with the JADE algorithm of Cardoso and Souloumiac and the nonparametric ICA of Boscolo in separating sources with distribution GG(α) for arbitrary α, as well as sources with bimodal distributions, and good performance in separating linearly mixed speech signals.
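
    For context, the standard one-unit FastICA fixed-point iteration on whitened data looks as follows; EFICA's refinements, which yield the CRB-attaining estimate, are in the paper and not reproduced here.

        import numpy as np

        # One-unit FastICA with the tanh nonlinearity; Z holds whitened
        # data, one sample per row. Returns one demixing vector.
        def fastica_one_unit(Z, iters=200, tol=1e-8):
            w = np.random.randn(Z.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                y = Z @ w
                g, dg = np.tanh(y), 1.0 - np.tanh(y) ** 2
                w_new = (Z * g[:, None]).mean(0) - dg.mean() * w
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < tol:   # converged
                    return w_new
                w = w_new
            return w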

  • A Neural Network Approach to Dynamic Task Assignment of Multirobots

    Page(s): 1278 - 1287

    In this paper, a neural network approach to task assignment, based on a self-organizing map (SOM), is proposed for a multirobot system in dynamic environments subject to uncertainties. It is capable of dynamically controlling a group of mobile robots to achieve multiple tasks at different locations, so that the desired number of robots will arrive at every target location from arbitrary initial locations. In the proposed approach, the robot motion planning is integrated with the task assignment, thus the robots start to move once the overall task is given. The robot navigation can be dynamically adjusted to guarantee that each target location has the desired number of robots, even under uncertainties such as when some robots break down. The proposed approach is capable of dealing with changing environments. The effectiveness and efficiency of the proposed approach are demonstrated by simulation studies.
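
    A schematic sketch of the SOM mechanism described above, under the simplifying assumption that robots play the role of neurons and target locations the role of inputs; the paper's workload balancing and navigation details are omitted.

        import numpy as np

        # One SOM-style assignment step: the robot nearest the presented
        # target wins and moves toward it; index neighbors follow weakly.
        def som_step(robots, target, lr=0.3, sigma=1.0):
            win = np.argmin(np.linalg.norm(robots - target, axis=1))
            h = np.exp(-((np.arange(len(robots)) - win) ** 2)
                       / (2 * sigma ** 2))
            return robots + lr * h[:, None] * (target - robots)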

  • Accurate and Fast Off and Online Fuzzy ARTMAP-Based Image Classification With Application to Genetic Abnormality Diagnosis

    Page(s): 1288 - 1300

    We propose and investigate the fuzzy ARTMAP neural network for off- and online classification of fluorescence in situ hybridization image signals, enabling clinical diagnosis of numerical genetic abnormalities. We evaluate the classification task (detecting several abnormalities separately or simultaneously), classifier paradigm (monolithic or hierarchical), ordering strategy for the training patterns (averaging or voting), training mode (for one epoch, with validation, or until completion), and model sensitivity to parameters. We find the fuzzy ARTMAP accurate in accomplishing both tasks, requiring only very few training epochs. Also, selecting a training ordering by voting is more precise than averaging over orderings. If trained for only one epoch, the fuzzy ARTMAP provides fast, yet stable and accurate learning, as well as insensitivity to model complexity. Early stopping of training using a validation set reduces the fuzzy ARTMAP complexity, as for other machine learning models, but cannot improve accuracy beyond that achieved when training is completed. Compared to other machine learning models, the fuzzy ARTMAP does not lose but rather gains accuracy when overtrained, although its number of categories increases. Learned incrementally, the fuzzy ARTMAP reaches its ultimate accuracy very fast, obtaining most of its data representation capability and accuracy using only a few examples. Finally, the fuzzy ARTMAP accuracy for this domain is comparable with those of the multilayer perceptron and support vector machine and superior to those of the naive Bayesian and linear classifiers.

  • Stock Trading Using RSPOP: A Novel Rough Set-Based Neuro-Fuzzy Approach

    Page(s): 1301 - 1315

    This paper investigates a method of forecasting the stock price difference on artificially generated price series data using neuro-fuzzy systems and neural networks. As trading profits are more important to an investor than statistical performance, this paper proposes a novel rough set-based neuro-fuzzy stock trading decision model called stock trading using the rough set-based pseudo outer-product (RSPOP), which synergizes the price difference forecast method with a forecast-bottleneck-free trading decision model. The proposed stock trading with forecast model uses the pseudo outer-product based fuzzy neural network using the compositional rule of inference [POPFNN-CRI(S)], with fuzzy rules identified using the RSPOP algorithm, as the underlying predictor model, and simple moving average trading rules in the stock trading decision model. Experimental results using the proposed stock trading with RSPOP forecast model on real-world stock market data are presented. Trading profits in terms of portfolio end values are benchmarked against stock trading with the dynamic evolving neural-fuzzy inference system (DENFIS) forecast model, stock trading without a forecast model, and stock trading with an ideal forecast model. Experimental results show that the proposed model identifies rules with greater interpretability and yields significantly higher profits than the stock trading with DENFIS forecast model and the stock trading without forecast model.
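
    The simple moving average trading rules mentioned above admit a compact sketch; the crossover form below is one common variant, assumed here purely for illustration.

        import numpy as np

        # Moving-average crossover: long while the fast average of the
        # (forecast) price series sits above the slow one, flat otherwise.
        def ma_signals(prices, fast=5, slow=20):
            p = np.asarray(prices, dtype=float)
            ma = lambda w: np.convolve(p, np.ones(w) / w, mode="valid")
            f = ma(fast)[slow - fast:]         # align series on the right
            return (f > ma(slow)).astype(int)  # 1 = hold long, 0 = flat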

  • A Convolutional Neural Network Approach for Objective Video Quality Assessment

    Page(s): 1316 - 1327

    This paper describes an application of neural networks in the field of objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way to use neural networks in such a framework in the context of a reduced-reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced-reference and no-reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, relies on the use of a convolutional neural network (CNN) that allows continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on a set of typical TV videos.

  • Feed-Forward Support Vector Machine Without Multipliers

    Page(s): 1328 - 1331

    In this letter, we propose a coordinate rotation digital computer (CORDIC)-like algorithm for computing the feed-forward phase of a support vector machine (SVM) in fixed-point arithmetic, using only shift and add operations and avoiding resource-consuming multiplications. This result is obtained thanks to a hardware-friendly kernel, which greatly simplifies the SVM feed-forward phase computation and, at the same time, maintains good classification performance with respect to the conventional Gaussian kernel.
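
    A hypothetical fixed-point sketch of the multiplier-free idea: with a base-2 exponential kernel K(x, z) = 2^(-gamma * ||x - z||_1), integer gamma, and coefficients quantized to powers of two, every kernel term reduces to a shift. The names and quantization scheme below are illustrative assumptions, not the letter's exact algorithm.

        F = 16   # fractional bits of the fixed-point accumulator

        # alpha_log2[i] encodes alpha_i = 2**alpha_log2[i]; gamma is an
        # integer, so y_i * alpha_i * 2**(-gamma * d) is a single shift.
        def svm_decision(x, sv, y, alpha_log2, gamma, b_fx):
            acc = b_fx                       # bias, already in Q.F format
            for xi, yi, ai in zip(sv, y, alpha_log2):
                d = sum(abs(u - v) for u, v in zip(x, xi))  # L1: adds only
                shift = F + ai - gamma * d
                acc += yi * (1 << shift) if shift >= 0 else 0
            return acc                       # classify by the sign of acc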

  • Analysis and Simulation of a Mixed-Mode Neuron Architecture for Sensor Conditioning

    Page(s): 1332 - 1335

    The design, analysis, and system simulation of an adaptive processor based on a current-mode mixed analog-digital circuit are presented. The processor consists of a mixed four-quadrant multiplier and a current conveyor that performs the nonlinearity. Schematics, circuit parameters, and a high-level model are shown. The results achieved when applying this processor model to conditioning several sensor types are discussed.

  • Monitoring the Formation of Kernel-Based Topographic Maps in a Hybrid SOM-kMER Model

    Page(s): 1336 - 1341

    A new lattice disentangling monitoring algorithm for a hybrid self-organizing map-kernel-based maximum entropy learning rule (SOM-kMER) model is proposed. It aims to overcome topological defects owing to a rapid decrease of the neighborhood range over the finite running time in topographic map formation. The empirical results demonstrate that the proposed approach is able to accelerate the formation of a topographic map and, at the same time, to simplify the monitoring procedure.

  • Improvements of Complex-Valued Hopfield Associative Memory by Using Generalized Projection Rules

    Page(s): 1341 - 1347

    In this letter, new design methods for complex-valued multistate Hopfield associative memories (CVHAMs) are presented. We show that the well-known projection rule proposed by Personnaz can be generalized to the complex domain such that the weight matrix of the CVHAM can be designed using a simple and effective method. The stability of the proposed CVHAM is analyzed using an energy function approach, which shows that in synchronous update mode the proposed model is guaranteed to converge to a fixed point from any given initial state. Moreover, the projection geometry of the generalized projection rule (GPR) is discussed. In order to enhance the recall capability, a strategy for eliminating spurious memories is also reported. The validity and performance of the proposed methods are investigated by computer simulation.
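
    A minimal NumPy sketch of a projection-rule design in the complex domain (a simplified reading, not the letter's full GPR): with the stored multistate patterns as columns of X, W = X pinv(X) makes every stored pattern a fixed point of the linear part.

        import numpy as np

        K, n = 8, 16                       # states per neuron, neurons
        def csign(u):                      # quantize phase to K states
            k = np.round(np.angle(u) / (2 * np.pi / K)) % K
            return np.exp(1j * 2 * np.pi * k / K)

        X = np.exp(1j * 2 * np.pi * np.random.randint(0, K, (n, 3)) / K)
        W = X @ np.linalg.pinv(X)          # generalized projection weights
        assert np.allclose(W @ X, X)       # stored patterns are fixed points

        x = csign(X[:, 0] + 0.3 * np.random.randn(n))   # noisy probe
        for _ in range(10):
            x = csign(W @ x)               # synchronous multistate recall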

  • 2007 International Joint Conference on Neural Networks (IJCNN 2007)

    Page(s): 1348
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, which disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
