
2000 IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks

Date: 11-13 May 2000


Displaying Results 1 - 25 of 31
  • 2000 IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks

    Publication Year: 2000 , Page(s): 0_3 - 0_8
    PDF (183 KB)
    Freely Available from IEEE
  • Author index

    Publication Year: 2000 , Page(s): 250
    PDF (38 KB)
    Freely Available from IEEE
  • Hierarchical genetic algorithm based neural network design

    Publication Year: 2000 , Page(s): 168 - 175
    Cited by:  Papers (3)
    PDF (540 KB)

    In this paper, we propose a novel genetic-algorithm-based design procedure for multi-layer feedforward neural networks. A hierarchical genetic algorithm is used to evolve both the neural network topology and its parameters. Compared with traditional genetic-algorithm-based neural network designs, the proposed hierarchical approach addresses several deficiencies highlighted in the literature. A multi-objective function is used to optimize both the performance and the topology of the evolved neural network. The approach is verified on two benchmark problems, and the proposed algorithm proves to be competitive with, or even superior to, the traditional back-propagation network in Mackey-Glass chaotic time series prediction. (See the illustrative sketch below.)

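    A minimal sketch of the kind of hierarchical encoding described above, assuming a single hidden layer: high-level control genes switch hidden units on or off (topology) while low-level parameter genes carry the corresponding weights, and a size-penalised multi-objective fitness trades error against active units. The class name, penalty weight, and encoding details are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    class HierarchicalChromosome:
        """Control genes gate hidden units; parameter genes hold the weights."""
        def __init__(self, n_in, n_hidden_max, n_out, rng):
            self.active = rng.integers(0, 2, n_hidden_max)           # control genes (topology)
            self.w_in = rng.normal(0.0, 1.0, (n_hidden_max, n_in))   # parameter genes
            self.w_out = rng.normal(0.0, 1.0, (n_out, n_hidden_max))

        def forward(self, x):
            hidden = np.tanh(self.w_in @ x) * self.active            # switched-off units contribute nothing
            return self.w_out @ hidden

    def fitness(chrom, X, Y, size_penalty=0.01):
        # multi-objective trade-off: prediction error vs. number of active hidden units
        err = np.mean([np.sum((chrom.forward(x) - y) ** 2) for x, y in zip(X, Y)])
        return -(err + size_penalty * chrom.active.sum())
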
  • Human magnetocardiogram (MCG) modeling using evolutionary artificial neural networks

    Publication Year: 2000 , Page(s): 110 - 120
    PDF (484 KB)

    In the present work, magnetocardiogram (MCG) recordings of normal subjects were analyzed using a hybrid training algorithm. The algorithm combines genetic algorithms with a training method based on the localized Extended Kalman Filter (EKF) in order to evolve the structure of, and train, multi-layer perceptron (MLP) networks. Our goal is to examine the predictability of the MCG signal over a short prediction horizon.

  • Extracting comprehensible rules from neural networks via genetic algorithms

    Publication Year: 2000 , Page(s): 130 - 139
    Cited by:  Papers (5)
    PDF (848 KB)

    A common problem in KDD (Knowledge Discovery in Databases) is the presence of noise in the data being mined. Neural networks are robust and tolerate noise well, which makes them suitable for mining very noisy data. However, they have the well-known disadvantage of not producing high-level rules that can support human decision making. In this work we present a method for extracting accurate, comprehensible rules from neural networks. The proposed method uses a genetic algorithm to find a good neural network topology. This topology is then passed to a rule extraction algorithm, and the quality of the extracted rules is fed back to the genetic algorithm. The proposed system is evaluated on three public-domain data sets, and the results show that the approach is valid. (See the sketch of the feedback loop below.)

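    The loop described above (the GA proposes a topology, rules are extracted from the trained network, and the rule quality is returned as fitness) can be outlined as follows. This is a hedged skeleton: random_topology, mutate and evaluate are caller-supplied functions, where evaluate is expected to train the network, run rule extraction and return the rule-quality score; none of these names come from the paper.

    import random

    def genetic_topology_search(random_topology, mutate, evaluate, pop_size=20, generations=50):
        # evaluate(topology) = train network -> extract rules -> score rule quality
        population = [random_topology() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=evaluate, reverse=True)        # rule quality drives selection
            parents = population[: pop_size // 2]
            children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=evaluate)
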
  • An adaptive scheme for real function optimization acting as a selection operator

    Publication Year: 2000 , Page(s): 140 - 149
    PDF (640 KB)

    We propose an adaptive scheme for real function optimization whose dynamics is driven by selection. The method is parametric and relies explicitly on a Gaussian density, viewed as an infinite search population. We define two gradient flows acting on the density parameters, in the spirit of neural network learning rules, which maximize, with respect to the density, either the expectation of the function or its logarithm. The first leads to reinforcement learning and the second to selection learning. Both can be understood as the effect of three operators acting on the density: translation, scaling, and rotation. We then approximate these systems with discrete-time dynamical systems by means of three different methods: Monte Carlo integration, selection among a finite population, and reinforcement learning. This work synthesizes previously independent approaches and aims to show that evolution strategies and reinforcement learning are strongly related. (See the selection-based sketch below.)

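    A minimal sketch of the "selection among a finite population" approximation mentioned above: a Gaussian search density is repeatedly sampled, and its mean (translation) and per-coordinate scale (scaling) are re-estimated from the selected samples. The rotation operator and the paper's exact gradient-flow updates are omitted; the elite fraction and the update rule here are illustrative assumptions.

    import numpy as np

    def gaussian_selection_search(f, dim, iters=100, pop=50, elite_frac=0.2, seed=0):
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.ones(dim)
        n_elite = max(1, int(elite_frac * pop))
        for _ in range(iters):
            samples = mu + sigma * rng.standard_normal((pop, dim))   # finite population drawn from the density
            order = np.argsort([f(x) for x in samples])
            elite = samples[order[-n_elite:]]                        # selection
            mu = elite.mean(axis=0)                                  # translation operator
            sigma = elite.std(axis=0) + 1e-8                         # scaling operator
        return mu

    # example: maximize f(x) = -||x||^2, whose optimum is the origin
    best = gaussian_selection_search(lambda x: -float(np.dot(x, x)), dim=5)
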
  • Convergence analysis of a segmentation algorithm for the evolutionary training of neural networks

    Publication Year: 2000 , Page(s): 70 - 81
    PDF (876 KB)

    In contrast to standard genetic algorithms with generational reproduction, we adopt the viewpoint of the reactor algorithm (Dittrich and Banzhaf, 1998), which is similar to steady-state genetic algorithms but without ranking. This permits an analysis similar to Eigen's (1971) molecular evolution model. From this viewpoint, we consider combining segments from different populations into one genotype at every time step, which can be regarded as many-parent combinations with fixed crossover points and is comparable to cooperative evolution (Potter and De Jong, 2000). We present a fixed-point analysis and phase portraits of the competitive dynamics, with the result that only the first-order (single-parent) replicators exhibit global optimisation. A segmentation algorithm is developed that theoretically ensures convergence to the global optimum while keeping the cooperative or reactor aspect for better exploration of the search space. The algorithm creates separate population islands for those cases of competition that cannot otherwise be solved correctly by the population dynamics. The population blends have different segmentation boundaries, which are generated by combining well-converged components into new segments. This yields first-order replicators that have the appropriate dynamical properties to compete with new solutions.

  • Cooperative co-evolutionary algorithm: how to evaluate a module?

    Publication Year: 2000 , Page(s): 150 - 157
    Cited by:  Papers (5)
    PDF (564 KB)

    When we talk about co-evolution, we often mean competitive co-evolution (CompCE); examples include the co-evolution of training data and neural networks, the co-evolution of game players, and so on. Recently, several researchers have studied another kind of co-evolution: cooperative co-evolution (CoopCE). While CompCE tries to obtain more competitive individuals through evolution, the goal of CoopCE is to find individuals from which better systems can be constructed. The basic idea of CoopCE is divide-and-conquer: divide a large system into many modules, evolve the modules separately, and then combine them again to form the whole system. Depending on how the problem is divided, different cooperative co-evolutionary algorithms (CoopCEAs) have been proposed in the literature, and results obtained so far strongly support their usefulness. To study CoopCEAs systematically, we proposed a society model, which is a common framework for most existing CoopCEAs. The model shows that many open problems related to CoopCEAs remain, and solving them is necessary to make CoopCEAs generally useful. In this paper, we focus on evaluation of the modules, which is one of the key points in using CoopCEAs. Concretely, we apply the model to evolutionary learning of RBF neural networks and show the effectiveness of different evaluation methods through experiments. (See the module-evaluation sketch below.)

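    One common answer to the question in the title (how to evaluate a module) is to assemble the module with representative members of every other species and credit it with the fitness of the combined system. The sketch below shows that collaboration-based evaluation; the assemble and system_fitness callables, and the best-representative choice, are illustrative assumptions rather than the specific evaluation methods compared in the paper.

    def evaluate_module(module, species_index, representatives, assemble, system_fitness):
        # representatives: one chosen member (e.g. the current best) per species
        team = list(representatives)
        team[species_index] = module              # swap in the module under evaluation
        return system_fitness(assemble(team))     # whole-system score is credited to the module
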
  • Optimization for problem classes: neural networks that learn to learn

    Publication Year: 2000 , Page(s): 98 - 109
    Cited by:  Papers (3)
    PDF (1156 KB)

    The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure, in order to reduce model complexity and minimize model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks: the ability of the network to efficiently solve related problems, denoted as a class of problems. In a more theoretical framework, the aim is to develop neural networks for adaptability: networks that learn (during evolution) to learn (during operation). Evolutionary algorithms have turned out to be a robust method for the optimization of neural networks. Because this process is time-consuming, it is also desirable from an efficiency perspective to design structures that are applicable to many related problems. In this paper, two different approaches to this problem are studied, called the ensemble method and the generation method. We show empirically that an averaged Lamarckian inheritance appears to be the most efficient way to optimize networks for problem classes, both for artificial regression problems and for real-world system state diagnosis problems.

  • Computation: evolutionary, neural, molecular

    Publication Year: 2000 , Page(s): 1 - 9
    Cited by:  Papers (2)
    PDF (900 KB)

    A confluence of factors emanating from computer science, biology, and technology has brought self-organizing approaches back to the fore. Neural networks in particular provide high-evolvability platforms for variation-selection search strategies. The neuron doctrine and the fundamental nature of computing come into question: is a neuron an atom of the brain, or is it itself a complex information-processing system whose interior molecular dynamics can be elicited and exploited through the evolution process? We argue the latter point of view, illustrating how high-evolvability dynamics can be achieved with artificial neuromolecular computer designs and how such designs might in due course be implemented using molecular computing devices. A tabletop enzyme-driven prototype recently implemented in our laboratory is briefly described; it can be thought of as a sort of artificial neuron in which the context sensitivity of enzyme recognition is used to transform injected signal patterns into output activity.

  • Non-standard norms in genetically trained neural networks

    Publication Year: 2000 , Page(s): 43 - 51
    PDF (704 KB)

    We discuss alternative norms for training neural networks (NNs), focusing on the so-called multilayer perceptrons (MLPs). To do this we rely on a genetic algorithm called the eclectic GA (EGA); by using the EGA we avoid the drawbacks of backpropagation, the standard training algorithm for this kind of NN. We define four measures of distance: the mean exponential error (MEE), the mean absolute error (MAE), the maximum square error (MSE), and the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP NN on two kinds of problems: classification and forecasting. We discuss the results of applying an EGA to train the NNs and show that the alternative norms yield better results than the traditional RMS norm. We also discuss the resilience of genetically trained NNs to a change of the transfer function in the output layer. (The four measures are written out below.)

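    The four distance measures named above, written out under one plausible reading. The mean exponential error is taken here as the mean of exp(|error|), which is an assumption (the abstract does not give its formula); the other three follow directly from their names, noting that the abstract's "MSE" is the maximum, not the mean, square error.

    import numpy as np

    def mee(y_true, y_pred):     # mean exponential error (assumed form)
        return np.mean(np.exp(np.abs(y_true - y_pred)))

    def mae(y_true, y_pred):     # mean absolute error
        return np.mean(np.abs(y_true - y_pred))

    def max_se(y_true, y_pred):  # maximum square error (the abstract's "MSE")
        return np.max((y_true - y_pred) ** 2)

    def sae(y_true, y_pred):     # supremum (maximum) absolute error
        return np.max(np.abs(y_true - y_pred))
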
  • Neural network structures and isomorphisms: random walk characteristics of the search space

    Publication Year: 2000 , Page(s): 82 - 90
    Cited by:  Papers (2)
    PDF (732 KB)

    We deal with a quite general topic in evolutionary structure optimization, namely redundancy in the encoding due to isomorphic structures. This problem is well known in topology optimization of neural networks (NNs). In the context of structure optimization of NNs we observe phenomena of rare and frequent structures similar to those known from molecular biology. The degree to which isomorphic structures, i.e., classes of equivalent NN topologies, enlarge the search space depends on the restrictions placed on the allowed structures and on the representation of the search space. For restricted network topologies, such as NNs with a maximum number of layers, some properties can be analyzed analytically. For more general structures we estimate the characteristics of the search space using data from random walks. For restricted NN topologies, the search process is affected by isomorphic structures; in the absence of restrictions, however, the search space becomes so large that the bias induced by isomorphisms can be neglected.

  • Population optimization algorithm based on ICA

    Publication Year: 2000 , Page(s): 33 - 36
    Cited by:  Papers (1)
    PDF (252 KB)

    We propose a new population optimization algorithm called the univariate marginal distribution algorithm with independent component analysis (UMDA/ICA). Our main idea is to incorporate ICA into the UMDA algorithm in order to handle the interrelations among variables. We demonstrate that UMDA/ICA performs better than UMDA on a test function with highly correlated variables. (See the sketch below.)

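    A hedged sketch of the UMDA/ICA idea: rotate the selected parents into an approximately independent basis with ICA, fit univariate marginals there exactly as plain UMDA would, sample offspring from those marginals, and map them back. The Gaussian marginal model and the use of scikit-learn's FastICA are illustrative choices; the paper's exact estimation scheme is not given in the abstract.

    import numpy as np
    from sklearn.decomposition import FastICA

    def umda_ica_offspring(parents, n_offspring, seed=0):
        rng = np.random.default_rng(seed)
        ica = FastICA(n_components=parents.shape[1], random_state=seed)
        sources = ica.fit_transform(parents)                   # parents in (approximately) independent coordinates
        mu, sigma = sources.mean(axis=0), sources.std(axis=0) + 1e-8
        new_sources = mu + sigma * rng.standard_normal((n_offspring, parents.shape[1]))
        return ica.inverse_transform(new_sources)              # offspring back in the original variables
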
  • Dynamic modelling and time-series prediction by incremental growth of lateral delay neural networks

    Publication Year: 2000 , Page(s): 216 - 223
    PDF (612 KB)

    The difficult problems of predicting chaotic time series and modelling chaotic systems are approached using an innovative neural network design. By combining evolutionary techniques with other methods, good results can be obtained swiftly via incremental network growing. The network architecture and training algorithm make the creation of dynamic models efficient and hassle-free. The resulting networks accurately reproduce the outputs of the chaotic systems being modelled and preserve the complex attractor structures of these systems.

  • A new metric for evaluating genetic optimization of neural networks

    Publication Year: 2000 , Page(s): 52 - 58
    PDF (648 KB)

    In recent years researchers have used genetic algorithm techniques to evolve neural network topologies. Although these researchers have had the same end result in mind (namely, the evolution of topologies that are better able to solve a particular problem), the approaches they used varied greatly. Random selection of a genome coding scheme can easily result in sub-optimal genetic performance, since the efficiency of different evolutionary operations depends on how they affect the schemata being processed in the population. In addition, the computational complexity involved in creating and evaluating neural networks usually does not allow genetic experiments to be repeated under different genome codings. I present an evaluation method that uses schema theory to aid the design of genetic codings for NN topology optimization. Furthermore, this methodology can help determine the optimal balance between different evolutionary operators, depending on the characteristics of the coding scheme. The methodology is tested on two GA-NN hybrid systems: one for natural language processing and another for robot navigation.

  • Combining incrementally evolved neural networks based on cellular automata for complex adaptive behaviors

    Publication Year: 2000 , Page(s): 121 - 129
    PDF (856 KB)

    There has been extensive work on constructing an optimal controller for a mobile robot by evolutionary approaches such as genetic algorithms, genetic programming, and so on. However, evolutionary approaches have difficulty obtaining controllers for complex and general behaviors. In order to overcome this shortcoming, we propose an incremental evolution method for neural networks based on cellular automata (CA) and a method for combining several evolved modules using a rule-based approach. The incremental evolution method evolves the neural network by starting with a simpler environment that requires only simple behavior and gradually making it more complex and general. The multi-module integration method produces complex and general behaviors by combining several modules, each evolved or programmed to perform a simple behavior. Experimental results show the potential of the incremental evolution and multi-module integration methods as techniques for making the evolved neural network perform complex and general behaviors.

  • Artificial neural development for pulsed neural network design: a simulation experiment on animat's cognitive map genesis

    Publication Year: 2000 , Page(s): 188 - 198
    Cited by:  Patents (5)
    PDF (948 KB)

    We propose an artificial neural development method that generates a three-dimensional multi-regional pulsed neural network arranged in three layers: the nerve area layer, the nerve sub-area layer, and the cell layer. In this method, the neural development process consists of an initial genome-controlled spatiotemporal generation of the neural network structure, followed by spiking-activity-dependent regulation of that structure. Genomes are designed with a steady-state genetic algorithm applied to genomes that are partially designed by hand. Simulation experiments are conducted to generate the pulsed neural networks of an animat that moves in an environment. We evolve and develop the animat's cognitive map as a multi-regional place recognition circuit centered on the place cell area. Through these experiments, we confirm that our method is useful for designing a multi-regional pulsed neural network for an animat that shows biologically realistic features.

  • Evolution of recurrent cascade correlation networks with distributed collaborative species

    Publication Year: 2000 , Page(s): 240 - 249
    PDF (700 KB)

    The extensive research and experimental work on using evolutionary artificial neural networks (EANNs) has achieved many successes, yet it has also revealed some limitations. Aiming to boost EANN speed and improve its performance, the approach of cooperative co-evolution is introduced: instead of one evolutionary algorithm that attempts to solve the whole problem, species representing simpler subtasks are evolved as separate instances of an evolutionary algorithm. The goal of this research is to investigate the performance of a distributed version of the collaborative co-evolutionary species approach when discrete time steps are introduced into the problem, by applying the approach to the evolution of recurrent cascade correlation networks. A research tool is designed and implemented to simulate the evolution and the running of the recurrent neural network. Results are presented in which the Distributed Cooperative Coevolutionary Genetic Algorithm (DCCGA) produced higher-quality solutions in fewer evolutionary cycles than the standard genetic algorithm (GA). The performance of the two algorithms is analyzed and compared on two tasks: learning to recognize Morse code characters and learning a finite state grammar from examples.

  • Inductive genetic programming of polynomial learning networks

    Publication Year: 2000 , Page(s): 158 - 167
    Cited by:  Papers (4)
    PDF (756 KB)

    Learning networks have been empirically proven suitable for function approximation and regression. Our concern is finding well-performing polynomial learning networks by inductive genetic programming (iGP). The proposed iGP system evolves tree-structured networks with simple transfer polynomials in the hidden units. It discovers the relevant network topology for the task and rapidly computes the network weights by a least-squares method. Evolutionary search guidance is implemented through a specially developed fitness function that controls overfitting to the examples. This study reports that iGP with the novel fitness function has been successfully applied to benchmark time-series prediction and data mining tasks. (See the sketch below.)

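    A hedged sketch of one building block implied above: a hidden unit that is a simple transfer polynomial of two inputs, with coefficients obtained by least squares rather than gradient descent, and a fitness that penalises tree size to limit overfitting. The quadratic basis and the penalty term are assumptions for illustration; the paper's fitness function is not reproduced in the abstract.

    import numpy as np

    def fit_polynomial_node(x1, x2, target):
        # basis: 1, x1, x2, x1*x2, x1^2, x2^2 (a common GMDH-style transfer polynomial)
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        return coeffs, A @ coeffs

    def penalised_fitness(target, output, n_nodes, alpha=0.01):
        # accuracy traded against network size to discourage overfitting
        return -(np.mean((target - output) ** 2) + alpha * n_nodes)
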
  • Evolving neural networks using attribute grammars

    Publication Year: 2000 , Page(s): 37 - 42
    Cited by:  Papers (1)
    PDF (460 KB)

    The evolutionary optimization of neural networks involves two main design issues: how the neural network is represented genetically, and how that representation is manipulated through genetic operations. We have developed a genetic representation that uses an attribute grammar to encode both topological and architectural information about a neural network. We have defined genetic operators that are applied to the parse trees formed by the grammar. These operators provide the ability to introduce selection strategies that vary during the course of evolution.

  • Neuro-evolution and natural deduction

    Publication Year: 2000 , Page(s): 64 - 69
    PDF (456 KB)

    Natural deduction is essentially a sequential decision task, similar to many game-playing tasks, and such a task is well suited to benefit from the techniques of neuro-evolution. Symbiotic Adaptive Neuro-Evolution (SANE) (Moriarty and Miikkulainen, 1996) has proven successful at evolving networks for such tasks. This paper shows that SANE can be used to evolve a natural deduction system on a neural network. In particular, it shows that incremental evolution through progressively more challenging problems results in more effective networks than direct evolution, and that an effective network can be evolved faster if the network is allowed to "brainstorm", i.e. suggest any move regardless of its applicability, even though the highest-ranked valid move is always applied. In this way, evolution produces neural networks with human-like reasoning behavior.

  • On the use of biologically-inspired adaptive mutations to evolve artificial neural network structures

    Publication Year: 2000 , Page(s): 24 - 32
    Cited by:  Papers (1)
    PDF (560 KB)

    Evolutionary algorithms have been used to successfully evolve artificial neural network structures. Normally the evolutionary algorithm has several different mutation operators available to randomly change the number and location of neurons or connections, and the scope of any mutation is typically limited by a user-selected parameter. Nature, however, controls the total number of neurons and synaptic connections in more predictable ways, which suggests that the methods typically used by evolutionary algorithms may be inefficient. This paper describes a simple evolutionary algorithm that adaptively mutates the network structure, where the adaptation emulates neuron and synaptic growth in the rhesus monkey. Our preliminary results indicate it is possible to evolve relatively sparsely connected networks that exhibit quite reasonable performance.

  • The Multi-Tiered Tournament Selection for evolutionary neural network synthesis

    Publication Year: 2000 , Page(s): 207 - 215
    Cited by:  Papers (1)
    PDF (504 KB)

    The paper introduces Multi-Tiered Tournament Selection. Traditional tournament selection algorithms are appropriate for single-objective optimization problems but are too limited for the multi-objective task of evolving complete recognition systems, which need to be accurate as well as small in order to generalize well. Multi-Tiered Tournament Selection is shown to improve the search for smaller neural network recognition systems. (See the sketch below.)

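    A hedged sketch of what a tiered tournament could look like for the two objectives named above: the first tier compares recognition accuracy, and only near-ties fall through to a second tier that prefers the smaller network. The tolerance-based tie rule and the two-tier ordering are assumptions; the paper's exact tier definitions are not given in the abstract.

    import random

    def tiered_tournament(population, accuracy, size, k=2, tol=0.01):
        contestants = random.sample(population, k)
        best = contestants[0]
        for c in contestants[1:]:
            if accuracy(c) > accuracy(best) + tol:        # tier 1: accuracy decides clear cases
                best = c
            elif abs(accuracy(c) - accuracy(best)) <= tol and size(c) < size(best):
                best = c                                  # tier 2: smaller network wins near-ties
        return best
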
  • Evolving neural trees for time series prediction using Bayesian evolutionary algorithms

    Publication Year: 2000 , Page(s): 17 - 23
    Cited by:  Papers (3)
    PDF (512 KB)

    Bayesian evolutionary algorithms (BEAs) are a probabilistic model of evolutionary computation. Instead of simply generating new populations as in conventional evolutionary algorithms, BEAs attempt to explicitly estimate the posterior distribution of the individuals from their prior probability and likelihood, and then sample offspring from that distribution. We apply Bayesian evolutionary algorithms to evolving neural trees, i.e. tree-structured neural networks. Explicit formulae for specifying the distributions on the model space are provided in the context of neural trees. The effectiveness and robustness of the method are demonstrated on a time-series prediction problem. We also study the effect of the population size and of the amount of information exchanged by subtree crossover and subtree mutation. Experimental results show that small-step, mutation-oriented variations are most effective when the population size is small, while large-step recombinative variations are more effective for large population sizes. (See the sketch of one BEA generation below.)

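    A hedged sketch of a single BEA generation as described above: each individual receives an unnormalised posterior score (prior times likelihood), the next population is sampled from that distribution, and variation is applied. The size-based prior and the Gaussian-style error likelihood are illustrative stand-ins for the paper's explicit neural-tree formulae.

    import numpy as np

    def bea_generation(population, errors, sizes, vary, rng, prior_decay=0.1):
        prior = np.exp(-prior_decay * np.asarray(sizes, dtype=float))    # smaller trees preferred a priori
        likelihood = np.exp(-np.asarray(errors, dtype=float))            # lower error -> higher likelihood
        posterior = prior * likelihood
        posterior /= posterior.sum()
        idx = rng.choice(len(population), size=len(population), p=posterior)
        return [vary(population[i]) for i in idx]                        # offspring sampled from the posterior
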
  • Evolution and design of distributed learning rules

    Publication Year: 2000 , Page(s): 59 - 63
    PDF (404 KB)

    The paper describes the application of neural networks as learning rules for the training of neural networks. The learning rule is part of the neural network architecture; as a result, the learning rule is non-local and globally distributed within the network. The learning rules are evolved using an evolution strategy, and the survival of a learning rule is based on its performance in training neural networks on a set of tasks. Training algorithms are evolved for single-layer artificial neural networks. Experimental results show that a learning rule of this type is very capable of generating an efficient training algorithm. (See the sketch below.)

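    A simplified, hedged sketch of the idea above for a single-layer network: the weight update itself is computed by a small parameterised rule, and it is the rule's parameters (theta) that the evolution strategy optimises rather than the weights directly. The particular signals fed to the rule (pre- and post-synaptic activity, output error, current weight) are assumptions for illustration; the paper's rule is distributed within the network architecture itself.

    import numpy as np

    def learned_update(w, pre, post, error, theta, lr=0.1):
        # features visible to the learning rule for every weight w[i, j]
        feats = np.stack([np.outer(post, pre),    # Hebbian-like term
                          np.outer(error, pre),   # delta-rule-like term
                          w], axis=-1)            # current weight value
        return w + lr * (feats @ theta)           # theta (length 3) is what evolution tunes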