IEEE Transactions on Neural Networks

Issue 3 • May 2000

  • Introduction to the special issue on neural networks for data mining and knowledge discovery

    Page(s): 545 - 549
  • General fuzzy min-max neural network for clustering and classification

    Page(s): 769 - 783

    This paper describes a general fuzzy min-max (GFMM) neural network, a generalization and extension of the fuzzy min-max clustering and classification algorithms of Simpson (1992, 1993). The GFMM method combines supervised and unsupervised learning in a single training algorithm. This fusion of clustering and classification yields an algorithm that can be used for pure clustering, pure classification, or hybrid clustering-classification. It can find decision boundaries between classes while clustering patterns that cannot be said to belong to any of the existing classes. As in the original algorithms, hyperbox fuzzy sets are used to represent clusters and classes. Learning is usually completed in a few passes and consists of placing and adjusting hyperboxes in the pattern space in an expansion-contraction process. The classification results can be crisp or fuzzy, and new data can be included without retraining. While retaining all the interesting features of the original algorithms, a number of modifications have been made to accommodate fuzzy input patterns in the form of lower and upper bounds, to combine supervised and unsupervised learning, and to improve the effectiveness of operations. A detailed account of the GFMM neural network, a comparison with Simpson's fuzzy min-max neural networks, a set of examples, and an application to leakage detection and identification in water distribution systems are given.
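
    As a rough illustration of the hyperbox representation, the sketch below (Python, with illustrative corner points and sensitivity gamma) computes a fuzzy membership degree for a point relative to one hyperbox; the paper's actual membership function, which also accepts interval-valued inputs, differs in detail.

        import numpy as np

        def hyperbox_membership(x, v, w, gamma=1.0):
            # Degree to which x belongs to the hyperbox with min point v and
            # max point w: 1 inside the box, decaying linearly (slope gamma)
            # with distance to the box, aggregated by a min over dimensions.
            below = np.maximum(0.0, v - x)   # shortfall under the min point
            above = np.maximum(0.0, x - w)   # excess over the max point
            return float(np.clip(1.0 - gamma * (below + above), 0.0, 1.0).min())

        box_min, box_max = np.array([0.2, 0.2]), np.array([0.8, 0.8])
        print(hyperbox_membership(np.array([0.5, 0.5]), box_min, box_max))  # 1.0
        print(hyperbox_membership(np.array([0.9, 0.5]), box_min, box_max))  # 0.9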

  • Pattern recognition via synchronization in phase-locked loop neural networks

    Page(s): 734 - 738

    We propose a novel architecture for an oscillatory neural network that consists of phase-locked loop (PLL) circuits. It stores and retrieves complex oscillatory patterns as synchronized states with appropriate phase relations between neurons.
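
    As a loose stand-in for the circuit model, the sketch below uses an abstract phase-oscillator network with Hebbian-style couplings to show how a binary phase pattern can be stored and retrieved as a synchronized state; the dynamics, pattern, and parameters are illustrative assumptions, not the paper's PLL equations.

        import numpy as np

        rng = np.random.default_rng(0)
        xi = np.pi * rng.integers(0, 2, size=8)          # stored pattern: phases 0 or pi
        J = np.cos(xi[:, None] - xi[None, :]) / len(xi)  # Hebbian-style phase couplings

        # Euler integration of d(theta_i)/dt = sum_j J_ij sin(theta_j - theta_i),
        # a gradient flow whose minima carry the stored phase relations.
        theta = xi + 0.4 * rng.standard_normal(len(xi))  # noisy version of the pattern
        for _ in range(3000):
            theta += 0.01 * (J * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

        # Relative phases recover the stored pattern up to a global shift.
        print(np.round((theta - theta[0]) % (2 * np.pi), 2))
        print((xi - xi[0]) % (2 * np.pi))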

  • Underwater target classification using wavelet packets and neural networks

    Page(s): 784 - 794

    In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of backscattered signals from six different objects, two mine-like targets and four nontargets, at several aspect angles. Simulation results on ten different noisy realizations at a signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated from these results demonstrates excellent classification performance. The generalization ability of the trained network was demonstrated by computing the error and classification-rate statistics on a large data set. A multiaspect fusion scheme was also adopted to further improve classification performance.
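
    The LPC stage of such a feature extractor can be sketched as below; this is a generic autocorrelation-method implementation run on a synthetic echo, with the wavelet-packet subband decomposition and the neural classifier omitted.

        import numpy as np

        def lpc_coefficients(x, order):
            # Autocorrelation (Yule-Walker) method: fit a[k] so that
            # x[n] is approximated by sum_k a[k] * x[n-1-k].
            x = np.asarray(x, dtype=float)
            r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            return np.linalg.solve(R, r[1:])

        # Toy stand-in for a backscattered return: decaying sinusoid plus noise.
        rng = np.random.default_rng(1)
        n = np.arange(512)
        echo = np.exp(-n / 150.0) * np.sin(0.3 * n) + 0.05 * rng.standard_normal(512)
        print(np.round(lpc_coefficients(echo, order=4), 3))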

  • General statistical inference for discrete and mixed spaces by an approximate application of the maximum entropy principle

    Page(s): 558 - 573

    We propose a method for learning a general statistical inference engine operating on discrete and mixed discrete/continuous feature spaces. Such a model allows inference on any of the discrete features given values for the remaining features. Applications include medical diagnosis with multiple possible diseases, fault diagnosis, information retrieval, and imputation in databases. Bayesian networks (BNs) are versatile tools that possess this inference capability, but BNs require explicit specification of conditional independencies, which may be difficult to assess given limited data. Alternatively, Cheeseman (1983) proposed finding the maximum entropy (ME) joint probability mass function (pmf) consistent with arbitrary lower order probability constraints. This approach is in principle powerful and does not require explicit expression of conditional independence; however, until now its huge learning complexity has severely limited its use. Here we propose an approximate ME method that also encodes arbitrary low-order constraints while retaining quite tractable learning. Our method restricts the support of the joint pmf (during learning) to a subset of the feature space. Results on the University of California, Irvine repository reveal performance gains over several BN approaches and over multilayer perceptrons.

  • Classification ability of single hidden layer feedforward neural networks

    Page(s): 799 - 801

    Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and that a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single-hidden-layer feedforward neural networks (SLFNs) with any continuous, bounded, nonconstant activation function, or with any arbitrary bounded (continuous or not) activation function that has unequal limits at infinities (not just perceptrons), can form disjoint decision regions with arbitrary shapes in multidimensional cases. SLFNs with some unbounded activation functions can also form disjoint decision regions with arbitrary shapes.

  • Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry

    Page(s): 690 - 696

    We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers that includes information about their usage, billing, credit, application, and complaint history. Our experiments show that, under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validates our simulation experiments.
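
    The decision logic this implies can be sketched in a few lines; the weights, feature values, and economic parameters below are purely hypothetical, and the scoring model is reduced to a single logit unit.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def incentive_worthwhile(p_churn, incentive_cost, subscriber_value, retention_lift):
            # Offer the incentive when the expected value retained by averting
            # churn exceeds the cost of the offer.
            return p_churn * retention_lift * subscriber_value > incentive_cost

        w, bias = np.array([0.8, -0.5, 1.2]), -0.2   # hypothetical learned logit weights
        x = np.array([0.3, 1.1, 0.7])                # one subscriber's usage/billing features
        p = sigmoid(w @ x + bias)
        print(p, incentive_worthwhile(p, incentive_cost=20.0,
                                      subscriber_value=400.0, retention_lift=0.3))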

  • Self organization of a massive document collection

    Page(s): 574 - 585

    Describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the self-organizing map (SOM) algorithm. Statistical representations of the documents' vocabularies are used as their feature vectors. The main goal of this work has been to scale up the SOM algorithm to deal with large amounts of high-dimensional data. In a practical experiment, we mapped 6,840,568 patent abstracts onto a 1,002,240-node SOM. As the feature vectors, we used 500-dimensional vectors obtained as random projections of weighted word histograms.
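
    The random-projection step can be sketched as below; the dimensions, per-word seeding scheme, and normalization are illustrative assumptions, and the SOM training itself is omitted. The key point is that the cost scales with document length rather than vocabulary size.

        import numpy as np

        def project_histogram(histogram, dim=500, seed=0):
            # Each vocabulary word gets a fixed pseudo-random direction; a
            # document's projection is the weighted sum over its nonzero words.
            out = np.zeros(dim)
            for idx in np.flatnonzero(histogram):
                word_dir = np.random.default_rng(seed + int(idx)).standard_normal(dim)
                out += histogram[idx] * word_dir
            return out / np.sqrt(dim)

        rng = np.random.default_rng(1)
        hist = np.zeros(50_000)                                # vocabulary-sized histogram
        hist[rng.integers(0, 50_000, 200)] = rng.random(200)   # one sparse document
        print(project_histogram(hist).shape)                   # (500,)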

  • On the convergence of validity interval analysis

    Page(s): 802 - 807

    Validity interval analysis (VIA) is a generic tool for analyzing the input-output behavior of feedforward neural networks. VIA is a rule extraction technique that relies on a rule refinement algorithm. The rules are of the form Ri → Ro, i.e., "if the input of the neural network is in region Ri, then its output is in region Ro," where the regions are axis-parallel hypercubes. VIA conjectures rules, then refines them and checks them for inconsistency. This process can be computationally expensive, and the rule refinement phase becomes critical; hence the importance of knowing the complexity of these rule refinement algorithms. In this paper, we show that the rule refinement part of VIA always converges in one run for single-weight-layer networks and has an exponential average rate of convergence for multilayer networks. We also discuss some variations of the standard VIA formulae.
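
    The forward half of such interval reasoning can be sketched with plain interval arithmetic; the sketch below bounds the outputs of a single linear layer over an input box (a monotone activation would then be applied elementwise to both bounds) and is a generic illustration, not VIA's full refine-and-check procedure.

        import numpy as np

        def propagate_box(lo, hi, W, b):
            # Bounds on W @ x + b over the input box [lo, hi]: positive weights
            # pull from the matching bound, negative weights from the opposite one.
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

        W = np.array([[1.0, -2.0], [0.5, 0.5]])
        b = np.array([0.0, -0.1])
        out_lo, out_hi = propagate_box(np.zeros(2), np.ones(2), W, b)
        print(out_lo, out_hi)   # a region Ro guaranteed to contain the layer's outputs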

  • Logic operations based on single neuron rational model

    Page(s): 739 - 747

    This paper focuses on phase analysis to explore single-neuron local arithmetic and logic operations on input conductances. Based on the analysis of the rational-function model of local spatial summation, with equivalent circuits for steady-state membrane potentials, prototypes of logic operations are constructed. A mapping from a partition of the input conductance space into functionally distinct phases is described, and multiple-mode models for logic operations are established. The transitions from output voltage to input conductance in logic operations are also discussed for connections between neurons in different layers. Our theoretical studies and software simulations indicate that single-neuron local rational logic is programmable and that the selection of these functional phases can be effectively instructed by presynaptic activities. This programmability makes the single neuron more flexible in processing input information.
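
    The rational-function character of local spatial summation can be seen in a one-compartment steady-state sketch like the one below; the conductances and reversal potentials are illustrative, and the paper's partition of the conductance space into functional phases is not reproduced here.

        def steady_state_potential(g_exc, g_shunt, E_exc=70.0, g_leak=1.0):
            # Steady-state membrane potential (mV relative to rest) as a rational
            # function of the input conductances. A shunting conductance reversing
            # at rest divides the excitatory effect rather than subtracting from it.
            return (g_exc * E_exc) / (g_leak + g_exc + g_shunt)

        print(steady_state_potential(g_exc=1.0, g_shunt=0.0))   # 35.0 mV
        print(steady_state_potential(g_exc=1.0, g_shunt=4.0))   # ~11.7 mV: same input, shunted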

  • Fuzzy auto-associative neural networks for principal component extraction of noisy data

    Page(s): 808 - 810

    In this paper, we propose a fuzzy auto-associative neural network for principal component extraction. The objective function is based on reconstructing the inputs from the corresponding outputs of the auto-associative neural network. Unlike the traditional approaches, the proposed criterion is a fuzzy mean squared error. We prove that the proposed objective function is an appropriate fuzzy formulation of the auto-associative neural network for principal component extraction. Simulations are given to show the performance of the proposed neural network in comparison with the existing method.

  • Neuro-fuzzy rule generation: survey in soft computing framework

    Page(s): 748 - 768

    The present article is a novel attempt at providing an exhaustive survey of neuro-fuzzy rule generation algorithms. Rule generation from artificial neural networks has been gaining popularity because of its capability of providing the user some insight into the symbolic knowledge embedded within the network. Fuzzy sets help present this information in a more human-comprehensible, natural form and can handle uncertainties at various levels. The neuro-fuzzy approach, symbiotically combining the merits of connectionist and fuzzy approaches, constitutes a key component of soft computing at this stage. To date, there has been no detailed and integrated categorization of the various neuro-fuzzy models used for rule generation; we propose to bring these together under a unified soft computing framework. Moreover, we include both rule extraction and rule refinement in the broader perspective of rule generation. Rules learned and generated for fuzzy reasoning and fuzzy control are also considered from this wider viewpoint. Models are grouped on the basis of their level of neuro-fuzzy synthesis. The use of other soft computing tools, such as genetic algorithms and rough sets, is emphasized. Rule generation from fuzzy knowledge-based networks, which initially encode some crude domain knowledge, is found to result in more refined rules. Finally, a real-life application to medical diagnosis is provided.

  • Granular neural networks for numerical-linguistic data fusion and knowledge discovery

    Page(s): 658 - 667

    We present a neural-networks-based knowledge discovery and data mining (KDDM) methodology based on granular computing, neural computing, fuzzy computing, linguistic computing, and pattern recognition. The major issues include 1) how to make neural networks process both numerical and linguistic data in a database; 2) how to convert fuzzy linguistic data into related numerical features; 3) how to use neural networks for numerical-linguistic data fusion; 4) how to use neural networks to discover granular knowledge from numerical-linguistic databases; and 5) how to use discovered granular knowledge to predict missing data. To address these issues, a granular neural network (GNN) is designed to deal with numerical-linguistic data fusion and granular knowledge discovery in numerical-linguistic databases. From a data granulation point of view, the GNN can process granular data in a database. From a data fusion point of view, the GNN makes decisions based on different kinds of granular data. From a KDDM point of view, the GNN is able to learn internal granular relations between numerical-linguistic inputs and outputs, and to predict new relations in a database. The GNN is also capable of greatly compressing low-level granular data into high-level granular knowledge, with some compression error and a given data compression rate. To perform KDDM in huge databases, parallel and distributed GNNs will be investigated in the future.

  • Extension of dynamic link matching by introducing local linear maps

    Page(s): 817 - 822

    It is well known that dynamic link matching (DLM) is a flexible pattern matching model tolerant of deformation and nonlinear transformation. However, previous models cannot treat severely deformed data patterns in which local features do not have counterparts in the template pattern. We extend DLM by introducing local linear maps (LLMs). Our model has a reference vector and an LLM for each lattice point of a data pattern. The reference vector maps the lattice point into the template pattern, and the LLM carries the information about how the local neighborhood is mapped. Our model transforms local features in a data pattern by the LLMs and then matches them with their counterparts in the template pattern; it is therefore adaptable to larger transformations. For simplicity, we restrict the LLMs to rotations. Neighboring LLMs are diffusively coupled with each other. The model is numerically demonstrated to be very flexible in dealing with deformation and rotation compared with previous models. The framework can easily be extended to models with more general LLMs (expansion, contraction, and so on).

  • Modified cascade-correlation learning for classification

    Page(s): 795 - 798

    The main advantages of cascade-correlation learning are the abilities to learn quickly and to determine the network size. However, recent studies have shown that in many problems the generalization performance of a cascade-correlation-trained network may not be optimal, and that reaching a certain performance level may require a larger network than with other training methods. Recent advances in statistical learning theory emphasize the importance of a learning method being able to learn optimal hyperplanes, which has led to advanced learning methods with substantial performance improvements. Based on these advances, we introduce modifications to standard cascade-correlation learning that take the optimal-hyperplane constraints into account. Experimental results demonstrate that modified cascade-correlation obtains considerable performance gains over standard cascade-correlation learning, including better generalization, smaller network size, and faster learning.
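
    For reference, the standard candidate-unit objective that these modifications start from can be sketched as follows; all data here are random placeholders, and the paper's margin-oriented modification is not shown.

        import numpy as np

        def candidate_score(candidate_out, residual_errors):
            # Classic cascade-correlation criterion: summed absolute covariance
            # between a candidate hidden unit's output and the network's
            # residual errors; the best-scoring candidate is installed.
            v = candidate_out - candidate_out.mean()
            e = residual_errors - residual_errors.mean(axis=0)
            return np.abs(v @ e).sum()

        rng = np.random.default_rng(0)
        v = np.tanh(rng.standard_normal(100))    # candidate outputs over 100 patterns
        E = rng.standard_normal((100, 3))        # residual errors at 3 output units
        print(candidate_score(v, E))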

  • SEPARATE: a machine learning method based on semi-global partitions

    Page(s): 710 - 720

    Presents a machine learning method for solving classification and approximation problems. The method uses the divide-and-conquer algorithm design technique (taken from tree-based machine learning models), with the aim of achieving design ease and good results on the training examples, and it allows semi-global actions on its computational elements (a feature taken from neural networks), with the aim of attaining good generalization and good behavior in the presence of noise in the training examples. Finally, results obtained by solving several problems with a particular implementation of SEPARATE are presented together with their analysis.

  • Interactive visualization and analysis of hierarchical neural projections for data mining

    Page(s): 615 - 624

    Dimensionality-reducing mappings, often also denoted multidimensional scaling, are the basis for multivariate data projection and visual analysis in data mining. Topology- and distance-preserving mapping techniques, e.g., Kohonen's self-organizing feature map (SOM) or Sammon's nonlinear mapping (NLM), are available to produce multivariate data projections for the subsequent interactive visual analysis. For large databases, however, NLM computation becomes intractable; moreover, if additional data points or data sets are to be included in the projection, a complete recomputation of the mapping is required. In general, a neural network could learn the mapping and serve to project arbitrary additional data, but the computational costs would also be high and convergence is not easily achieved. In this work, a convenient hierarchical neural projection approach is introduced, in which an unsupervised neural network, e.g., a SOM, first quantizes the database, followed by fast NLM mapping of the quantized data. In the second stage of the hierarchy, the NLM is enhanced with a recall algorithm. The training and application of a second neural network, which learns the mapping by function approximation, is quantitatively compared with this new approach. Efficient interactive visualization and analysis techniques exploiting the hierarchical neural projection for data mining are presented.

  • Tropical cyclone identification and tracking system using integrated neural oscillatory elastic graph matching and hybrid RBF network track mining techniques

    Page(s): 680 - 689

    We present an automatic, integrated neural-network-based tropical cyclone (TC) identification and track mining system. The proposed system consists of two main modules: 1) a TC pattern identification system using a neural oscillatory elastic graph matching model; and 2) a TC track mining system using a hybrid radial basis function (RBF) network with a time-difference and structural learning algorithm. For system evaluation, 120 TC cases that appeared between 1985 and 1998, provided by the National Oceanic and Atmospheric Administration, are used. Compared with the bureau numerical TC prediction model used by Guam and the enhanced model proposed by Jeng et al. (1991), the proposed hybrid RBF attains improvements of over 30% and 18%, respectively, in forecast error.

  • Dynamic self-organizing maps with controlled growth for knowledge discovery

    Page(s): 601 - 614

    The growing self-organizing map (GSOM) algorithm is presented in detail, and the effect of a spread factor, which can be used to measure and control the spread of the GSOM, is investigated. The spread factor is independent of the dimensionality of the data and as such can be used as a controlling measure for generating maps of different dimensionality, which can then be compared and analyzed with better accuracy. The spread factor is also presented as a method of achieving hierarchical clustering of a data set with the GSOM: the data analyst can identify significant and interesting clusters at a higher level of the hierarchy and continue with finer clustering of the interesting clusters only. Therefore, only a small map, which can be generated even for a very large data set, is created at the beginning with a low spread factor; further analysis is then conducted on selected, smaller sections of the data. This method thus facilitates the analysis of even very large data sets.
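
    The spread factor's effect is easiest to see through the GSOM growth threshold; the formula GT = -D ln(SF) is the one commonly quoted for the GSOM, and the dimensionality and spread-factor values below are illustrative.

        import numpy as np

        def growth_threshold(dim, spread_factor):
            # A node whose accumulated quantization error exceeds GT spawns new
            # neighbors. The data dimensionality D enters only through this
            # formula, which is what makes SF comparable across data sets:
            # low SF -> high threshold -> small, coarse map; SF near 1 -> fine map.
            assert 0.0 < spread_factor < 1.0
            return -dim * np.log(spread_factor)

        for sf in (0.1, 0.5, 0.9):
            print(sf, round(growth_threshold(dim=10, spread_factor=sf), 2))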

  • Data strip mining for the virtual design of pharmaceuticals with neural networks

    Page(s): 668 - 679

    A novel neural-network-based technique, called "data strip mining," extracts predictive models from data sets that have a large number of potential inputs and comparatively few data points. The methodology uses neural network sensitivity analysis to determine which predictors are most significant for the problem: all but one input to a trained neural network are held constant while that input is varied over its entire range to determine its effect on the output. Eliminating variables through sensitivity analysis and predicting performance through model cross-validation allow the analyst to reduce the number of inputs and improve the model's predictive ability at the same time. This paper demonstrates the technique's effectiveness on a pair of problems from combinatorial chemistry, each with over 400 potential inputs. For these data sets, model selection by neural sensitivity analysis outperformed other variable selection methods, including forward selection and a genetic algorithm.
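
    The sensitivity sweep itself is simple to sketch; the stand-in "trained" network and the swing-based summary statistic below are assumptions for illustration.

        import numpy as np

        def sensitivities(model, X, n_steps=25):
            # Hold all inputs at their means; sweep one input at a time over its
            # observed range and record the swing in the model's output.
            base = X.mean(axis=0)
            out = np.zeros(X.shape[1])
            for i in range(X.shape[1]):
                probe = np.tile(base, (n_steps, 1))
                probe[:, i] = np.linspace(X[:, i].min(), X[:, i].max(), n_steps)
                y = model(probe)
                out[i] = y.max() - y.min()
            return out

        rng = np.random.default_rng(0)
        W1, w2 = rng.standard_normal((5, 3)), rng.standard_normal(5)
        model = lambda X: np.tanh(X @ W1.T) @ w2    # hypothetical trained network
        X = rng.random((200, 3))
        print(np.round(sensitivities(model, X), 3))  # larger swing => more influential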

  • Taking on the curse of dimensionality in joint distributions using neural networks

    Page(s): 550 - 557

    The curse of dimensionality is severe when modeling high-dimensional discrete data: the number of possible combinations of the variables explodes exponentially. We propose an architecture for modeling high-dimensional data whose resources (parameters and computations) grow at most as the square of the number of variables, using a multilayer neural network to represent the joint distribution of the variables as a product of conditional distributions. The neural network can be interpreted as a graphical model without hidden random variables, but one in which the conditional distributions are tied through the hidden units. The connectivity of the neural network can be pruned by using dependency tests between the variables, significantly reducing the number of parameters. Experiments on modeling the distribution of several discrete data sets show statistically significant improvements over other methods, such as naive Bayes and comparable Bayesian networks, and show that significant improvements can be obtained by pruning the network.
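
    The product-of-conditionals idea can be sketched in its simplest fully visible form; the single-logistic conditionals and the lower-triangular mask below are illustrative simplifications (the paper ties the conditionals through shared hidden units), but the normalization-by-construction property carries over.

        import itertools
        import numpy as np

        def joint_log_prob(x, W, b):
            # Chain rule: log p(x) = sum_i log p(x_i | x_1..x_{i-1}), with each
            # conditional here a single logistic unit over the preceding bits.
            logp = 0.0
            for i in range(len(x)):
                p_i = 1.0 / (1.0 + np.exp(-(W[i, :i] @ x[:i] + b[i])))
                logp += np.log(p_i if x[i] else 1.0 - p_i)
            return logp

        rng = np.random.default_rng(0)
        n = 4
        W = np.tril(rng.standard_normal((n, n)), k=-1)   # x_i depends only on x_<i
        b = rng.standard_normal(n)
        total = sum(np.exp(joint_log_prob(np.array(cfg), W, b))
                    for cfg in itertools.product([0, 1], repeat=n))
        print(round(total, 10))   # 1.0: a valid joint distribution by construction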

  • A constructive algorithm to solve “convex recursive deletion” (CoRD) classification problems via two-layer perceptron networks

    Page(s): 811 - 816

    A sufficient condition for a region to be classifiable by a two-layer feedforward neural net (a two-layer perceptron) using threshold activation functions is that it be a convex polytope, or a convex polytope intersected with the complement of a convex polytope in its interior, or that intersected with the complement of a convex polytope in its interior, and so on recursively. These have been called convex recursive deletion (CoRD) regions. We give a simple algorithm for finding the weights and thresholds in both layers of a feedforward net that implements such a region. The results of this work help in understanding the relationship between the decision region of a perceptron and its corresponding geometry in input space. Our construction extends in a simple way to the case in which the decision region is a disjoint union of CoRD regions (requiring three layers), so this work also helps in understanding how many neurons are needed in the second layer of a general three-layer network. In the event that the decision region of a network is known and is a union of CoRD regions, our results enable the weights and thresholds of the implementing network to be calculated directly and rapidly, without the need for thousands of backpropagation iterations.
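
    A minimal sketch of the convex base case, assuming threshold units: the first layer encodes the polytope's faces and the output unit counts them. The paper's full algorithm, which adjusts the second-layer weights and threshold to handle the recursive deletions, is not reproduced here.

        import numpy as np

        def polytope_net(A, c, x):
            # Two-layer threshold net recognizing {x : A x <= c}: one hidden
            # threshold unit per face, and an output unit that fires only when
            # every face is satisfied.
            step = lambda z: (z >= 0).astype(float)
            hidden = step(c - A @ x)             # 1 iff x is on the inner side of face i
            return step(hidden.sum() - len(c))   # 1 iff all faces agree

        # The unit square as an intersection of four half-spaces.
        A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
        c = np.array([1., 0., 1., 0.])
        print(polytope_net(A, c, np.array([0.5, 0.5])))   # 1.0: inside
        print(polytope_net(A, c, np.array([1.5, 0.5])))   # 0.0: outside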

  • New results on recurrent network training: unifying the algorithms and accelerating convergence

    Page(s): 697 - 709

    How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function and can generally be grouped into five major groups. In this study, we present a derivation that unifies these approaches, demonstrating that they are simply five different ways of solving a particular matrix equation. The second goal of this paper is to develop a new algorithm based on the insights gained from this unified formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems, and it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is the ability to update the weights in an online fashion; we have also developed an online version of the proposed algorithm, based on updating the error gradient approximation in a recursive manner.

  • The application of certainty factors to neural computing for rule discovery

    Page(s): 647 - 657

    Discovery of domain principles has been a major long-term goal for scientists. The paper presents a system called DOMRUL for learning such principles in the form of rules. A distinctive feature of the system is its integration of the certainty factor (CF) model with a neural network; the two elements complement each other. The CF model offers the neural network better semantics and a generalization advantage, and the neural network overcomes limitations of certainty factors such as inaccuracies and overcounting of evidence. A major contribution of the paper is to show mathematically the quantizability of the CFNet, which previously had been demonstrated only empirically. The rule discovery system can be applied to any domain, without restriction on either the rule number or the rule size. In a hypothetical domain, DOMRUL discovered complex domain rules at considerably higher accuracy than the commonly used rule-learning program C4.5, in both normal and noisy conditions. Scalability in a large domain is also shown. On a real data set concerning promoter prediction in molecular biology, DOMRUL learned rules with more complete semantics than C4.5.
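
    For readers unfamiliar with the CF model, the standard evidence-combination function from the MYCIN tradition is sketched below; the paper's CFNet embeds CF semantics in network units, which this snippet does not attempt to do.

        def combine_cf(cf1, cf2):
            # Combine two certainty factors (each in [-1, 1]) for the same
            # hypothesis: confirming evidence reinforces, conflicting evidence
            # partially cancels.
            if cf1 >= 0 and cf2 >= 0:
                return cf1 + cf2 * (1 - cf1)
            if cf1 <= 0 and cf2 <= 0:
                return cf1 + cf2 * (1 + cf1)
            return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

        print(combine_cf(0.6, 0.5))    # 0.8: two confirming rules
        print(combine_cf(0.6, -0.4))   # ~0.33: conflicting evidence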

  • Probabilistic principal component subspaces: a hierarchical finite mixture model for data visualization

    Page(s): 625 - 636

    Visual exploration has proven to be a powerful tool for multivariate data mining and knowledge discovery. Most visualization algorithms aim to find a projection from the data space down to a visually perceivable rendering space. To reveal all of the interesting aspects of multimodal data sets living in a high-dimensional space, a hierarchical visualization algorithm is introduced that allows the complete data set to be visualized at the top level, with clusters and subclusters of data points visualized at deeper levels. The method involves the hierarchical use of standard finite normal mixtures and probabilistic principal component projections, whose parameters are estimated using expectation-maximization and principal component neural networks under information-theoretic criteria. We demonstrate the principle of the approach on several multimodal numerical data sets and then apply the method to visual explanation in computer-aided diagnosis for breast cancer detection from digital mammograms.


Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks, publishing significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
