Combinations of Evolutionary Computation and Neural Networks, 2000 IEEE Symposium on

Date 11-13 May 2000

Displaying Results 1 - 25 of 31
  • 2000 IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks

    Page(s): 0_3 - 0_8
    PDF (183 KB)
    Freely Available from IEEE
  • Author index

    Page(s): 250
    PDF (38 KB)
    Freely Available from IEEE
  • Artificial neural development for pulsed neural network design: a simulation experiment on an animat's cognitive map genesis

    Page(s): 188 - 198
    PDF (948 KB)

    We propose an artificial neural development method that generates a three-dimensional, multi-regional pulsed neural network arranged in three layers: a nerve area layer, a nerve sub-area layer, and a cell layer. In this method, the neural development process consists of an initial genome-controlled spatiotemporal generation of the network structure followed by spiking-activity-dependent regulation of it. To design the genomes, a steady-state genetic algorithm is applied to genomes that are partially designed by hand. Simulation experiments are conducted to generate pulsed neural networks for an animat that moves in an environment. We evolve and develop the animat's cognitive map as a multi-regional place recognition circuit centered on the place cell area. Through these experiments, we confirm that our method is useful for designing a multi-regional pulsed neural network for an animat that shows biologically realistic features.
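The genome design above relies on a steady-state genetic algorithm. As a rough illustration of what "steady-state" means here (this is not the authors' code; the OneMax fitness and all parameters are illustrative placeholders), each iteration produces a single child and replaces only the worst member of the population, rather than rebuilding a whole generation:

```python
import random

def steady_state_ga(fitness, genome_len, pop_size=20, steps=500, p_mut=0.05, seed=0):
    """Minimal steady-state GA sketch: one child per step, replace-worst."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(steps):
        # Binary tournament selection of two parents
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(2)]
        cut = rng.randrange(1, genome_len)               # one-point crossover
        child = parents[0][:cut] + parents[1][cut:]
        child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) >= fitness(pop[worst]):
            pop[worst] = child                           # replace worst in place
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits (OneMax)
best = steady_state_ga(sum, genome_len=30)
```

The abstract's mention of genomes "partially designed by hand" would correspond to seeding this initial population with hand-crafted individuals instead of random bitstrings.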
  • Computation: evolutionary, neural, molecular

    Page(s): 1 - 9
    PDF (900 KB)

    A confluence of factors emanating from computer science, biology, and technology has brought self-organizing approaches back to the fore. Neural networks in particular provide highly evolvable platforms for variation-selection search strategies. The neuron doctrine and the fundamental nature of computing come into question: is a neuron an atom of the brain, or is it itself a complex information processing system whose interior molecular dynamics can be elicited and exploited through evolution? We argue the latter point of view, illustrating how high-evolvability dynamics can be achieved with artificial neuromolecular computer designs and how such designs might in due course be implemented using molecular computing devices. A tabletop enzyme-driven prototype recently implemented in our laboratory is briefly described; it can be thought of as a sort of artificial neuron in which the context sensitivity of enzyme recognition is used to transform injected signal patterns into output activity.
  • Exploring different coding schemes for the evolution of an artificial insect eye

    Page(s): 10 - 16
    PDF (660 KB)

    The existing literature proposes various (neuronal) architectures for object avoidance, one of the most fundamental tasks of autonomous mobile robots. Due to hardware limitations, existing research resorts to prespecified sensor systems that remain fixed during all experiments, with modifications made only to the controllers' software components. Only recent research (Lichtensteiger and Eggenberger, 1999) has tried the opposite, i.e., prespecifying a simple neural network and evolving the sensor distribution directly in hardware. Even though first experiments succeeded in evolving some solutions by means of evolutionary algorithms, they also indicated that systematic comparisons between different evolutionary algorithms and coding schemes are required in order to optimize the evolutionary process. Since these comparisons cannot be done on the robot due to experimentation time, this paper reports the results of a set of extensive simulations.
  • Non-standard norms in genetically trained neural networks

    Page(s): 43 - 51
    PDF (704 KB)

    We discuss alternative norms for training neural networks (NNs), focusing on so-called multilayer perceptrons (MLPs). To achieve this we rely on a genetic algorithm called the eclectic GA (EGA). By using the EGA we avoid the drawbacks of the standard training algorithm for this sort of NN: the backpropagation algorithm. We define four measures of distance: the mean exponential error (MEE), the mean absolute error (MAE), the maximum square error (MSE), and the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP NN on two kinds of problems: classification and forecasting. We discuss the results of applying an EGA to train the NNs and show that alternative norms yield better results than the traditional RMS norm. We also discuss the resilience of genetically trained NNs to changes of the transfer function in the output layer.
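For reference, the four distance measures named in this abstract can be written down directly over a vector of residuals (target minus output). The abstract does not give the exact MEE formula, so the exponential-of-absolute-error form below is an assumed reading; note that the abstract's MSE is the *maximum* square error, not the usual mean:

```python
import math

def mee(errors):  # mean exponential error (assumed form: mean of exp(|e|))
    return sum(math.exp(abs(e)) for e in errors) / len(errors)

def mae(errors):  # mean absolute error
    return sum(abs(e) for e in errors) / len(errors)

def max_se(errors):  # maximum square error (max, not mean, per the abstract)
    return max(e * e for e in errors)

def sae(errors):  # maximum (supremum) absolute error
    return max(abs(e) for e in errors)

def rms(errors):  # traditional RMS norm, for comparison
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

Because MEE, max_se, and sae weight large residuals much more heavily than MAE or RMS does, a GA minimizing them will favor networks with few large outliers, which is the kind of trade-off the paper compares.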
  • On the use of biologically-inspired adaptive mutations to evolve artificial neural network structures

    Page(s): 24 - 32
    PDF (560 KB)

    Evolutionary algorithms have been used to successfully evolve artificial neural network structures. Normally the evolutionary algorithm has several different mutation operators available to randomly change the number and location of neurons or connections. The scope of any mutation is typically limited by a user-selected parameter. Nature, however, controls the total number of neurons and synaptic connections in more predictable ways, which suggests the methods typically used by evolutionary algorithms may be inefficient. This paper describes a simple evolutionary algorithm that adaptively mutates the network structure, where the adaptation emulates neuron and synaptic growth in the rhesus monkey. Our preliminary results indicate it is possible to evolve relatively sparsely connected networks that exhibit quite reasonable performance.
  • Evolving neural trees for time series prediction using Bayesian evolutionary algorithms

    Page(s): 17 - 23
    PDF (512 KB)

    Bayesian evolutionary algorithms (BEAs) are a probabilistic model for evolutionary computation. Instead of simply generating new populations as in conventional evolutionary algorithms, BEAs attempt to explicitly estimate the posterior distribution of the individuals from their prior probability and likelihood, and then sample offspring from that distribution. We apply Bayesian evolutionary algorithms to evolving neural trees, i.e., tree-structured neural networks. Explicit formulae for specifying the distributions on the model space are provided in the context of neural trees. The effectiveness and robustness of the method are demonstrated on a time series prediction problem. We also study the effect of the population size and the amount of information exchanged by subtree crossover and subtree mutation. Experimental results show that small-step mutation-oriented variations are most effective when the population size is small, while large-step recombinative variations are more effective for large population sizes.
  • Optimization for problem classes: neural networks that learn to learn

    Page(s): 98 - 109
    PDF (1156 KB)

    The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure in order to reduce the model complexity and to minimize the model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks: the ability of the network to efficiently solve related problems, denoted as a class of problems. In a more theoretical framework, the aim is to develop neural networks for adaptability: networks that learn (during evolution) to learn (during operation). Evolutionary algorithms have turned out to be a robust method for the optimization of neural networks. As this process is time consuming, it is also desirable from an efficiency perspective to design structures that are applicable to many related problems. In this paper, two different approaches to this problem are studied, called the ensemble method and the generation method. We empirically show that averaged Lamarckian inheritance seems to be the most efficient way to optimize networks for problem classes, both for artificial regression problems and for real-world system state diagnosis problems.
  • Cooperative co-evolutionary algorithm: how to evaluate a module?

    Page(s): 150 - 157
    PDF (564 KB)

    When we talk about co-evolution, we often mean competitive co-evolution (CompCE). Examples include co-evolution of training data and neural networks, co-evolution of game players, and so on. Recently, several researchers have studied another kind of co-evolution: cooperative co-evolution (CoopCE). While CompCE tries to obtain more competitive individuals through evolution, the goal of CoopCE is to find individuals from which better systems can be constructed. The basic idea of CoopCE is divide-and-conquer: divide a large system into many modules, evolve the modules separately, and then combine them again to form the whole system. Depending on how the division and combination are done, different cooperative co-evolutionary algorithms (CoopCEAs) have been proposed in the literature. Results obtained so far strongly support the usefulness of CoopCEAs. To study CoopCEAs systematically, we proposed a society model, which is a common framework for most existing CoopCEAs. From this model, we can see that there are still many open problems related to CoopCEAs. To make CoopCEAs generally useful, it is necessary to study and solve these problems. In this paper, we focus the discussion on evaluation of the modules, which is one of the key points in using CoopCEAs. Concretely, we apply the model to evolutionary learning of RBF neural networks, and show the effectiveness of different evaluation methods through experiments.
  • Extracting comprehensible rules from neural networks via genetic algorithms

    Page(s): 130 - 139
    PDF (848 KB)

    A common problem in KDD (Knowledge Discovery in Databases) is the presence of noise in the data being mined. Neural networks are robust and have a good tolerance to noise, which makes them suitable for mining very noisy data. However, they have the well-known disadvantage of not discovering any high-level rule that can be used as a support for human decision making. In this work we present a method for extracting accurate, comprehensible rules from neural networks. The proposed method uses a genetic algorithm to find a good neural network topology. This topology is then passed to a rule extraction algorithm, and the quality of the extracted rules is fed back to the genetic algorithm. The proposed system is evaluated on three public-domain data sets, and the results show that the approach is valid.
  • Convergence analysis of a segmentation algorithm for the evolutionary training of neural networks

    Page(s): 70 - 81
    PDF (876 KB)

    In contrast to standard genetic algorithms with generational reproduction, we adopt the viewpoint of the reactor algorithm (Dittrich and Banzhaf, 1998), which is similar to steady-state genetic algorithms but without ranking. This permits an analysis similar to Eigen's (1971) molecular evolution model. From this viewpoint, we consider combining segments from different populations into one genotype at every time step, which can be regarded as many-parent combination with fixed crossover points and is comparable to cooperative evolution (Potter and De Jong, 2000). We present fixed-point analysis and phase portraits of the competitive dynamics, with the result that only the first-order (single-parent) replicators exhibit global optimisation. A segmentation algorithm is developed that theoretically ensures convergence to the global optimum while keeping the cooperative or reactor aspect for better exploration of the search space. The algorithm creates separate population islands for those cases of competition that otherwise cannot be solved correctly by the population dynamics. The population blends have different segmentation boundaries, which are generated by combining well-converged components into new segments. This yields first-order replicators that have the appropriate dynamical properties to compete with new solutions.
  • The Multi-Tiered Tournament Selection for evolutionary neural network synthesis

    Page(s): 207 - 215
    PDF (504 KB)

    The paper introduces Multi-Tiered Tournament Selection. Traditional tournament selection algorithms are appropriate for single-objective optimization problems but are too limited for the multi-objective task of evolving complete recognition systems. Recognition systems need to be accurate as well as small to improve generalization performance. Multi-Tiered Tournament Selection is shown to improve the search for smaller neural network recognition systems.
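The abstract does not spell out the tier mechanics, but a tiered tournament can be sketched as lexicographic comparison: the primary objective (accuracy) decides the winner, and the secondary objective (network size) breaks near-ties. The `tol` parameter and the tuple-key trick below are illustrative assumptions, not the authors' formulation:

```python
import random

def tiered_tournament(population, k, accuracy, size, tol=1e-6, rng=random):
    """Pick k random contestants; compare on accuracy first, size second.

    Accuracies within `tol` of each other are treated as tied, so the
    smaller network wins among equally accurate candidates (assumed rule).
    """
    contestants = rng.sample(population, k)
    def key(ind):
        # Quantize accuracy so near-equal values compare as equal,
        # then prefer smaller networks (hence the negated size).
        return (round(accuracy(ind) / tol) * tol, -size(ind))
    return max(contestants, key=key)
```

Used inside a GA loop, this keeps the familiar tournament structure while steering the search toward small, accurate networks instead of accuracy alone.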
  • Population optimization algorithm based on ICA

    Page(s): 33 - 36
    PDF (252 KB)

    We propose a new population optimization algorithm called the univariate marginal distribution algorithm with independent component analysis (UMDA/ICA). Our main idea is to incorporate ICA into the UMDA algorithm in order to tackle the interrelations among variables. We demonstrate that UMDA/ICA performs better than UMDA on a test function with highly correlated variables.
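To see why an ICA step helps, it is useful to recall what plain UMDA does: it fits each variable's marginal distribution independently and so ignores correlations between variables. A minimal continuous-UMDA sketch follows (Gaussian marginals; all parameters illustrative; the paper's ICA preprocessing is deliberately omitted):

```python
import random
import statistics

def umda(objective, dim, pop_size=50, elite=10, gens=60, seed=1):
    """Plain UMDA sketch: sample, select elites, refit independent marginals."""
    rng = random.Random(seed)
    mu, sigma = [0.0] * dim, [2.0] * dim
    for _ in range(gens):
        pop = [[rng.gauss(mu[i], sigma[i]) for i in range(dim)]
               for _ in range(pop_size)]
        pop.sort(key=objective)          # minimization: best first
        best = pop[:elite]
        for i in range(dim):             # refit each marginal independently
            col = [x[i] for x in best]
            mu[i] = statistics.fmean(col)
            sigma[i] = max(statistics.pstdev(col), 1e-3)  # floor avoids collapse
    return mu

# Example: minimize the sphere function; the fitted mean approaches 0
sol = umda(lambda x: sum(v * v for v in x), dim=3)
```

On the sphere function the independence assumption is harmless; on a function with strongly correlated variables it is exactly the weakness the paper's ICA transform is meant to address, by rotating the variables into approximately independent components before fitting.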
  • Inductive genetic programming of polynomial learning networks

    Page(s): 158 - 167
    PDF (756 KB)

    Learning networks have been empirically proven suitable for function approximation and regression. Our concern is finding well-performing polynomial learning networks by inductive Genetic Programming (iGP). The proposed iGP system evolves tree-structured networks of simple transfer polynomials in the hidden units. It discovers the relevant network topology for the task, and rapidly computes the network weights by a least-squares method. We implement evolutionary search guidance by an especially developed fitness function for controlling overfitting to the examples. This study reports that iGP with the novel fitness function has been successfully applied to benchmark time-series prediction and data mining tasks.
  • Neuro-evolution and natural deduction

    Page(s): 64 - 69
    PDF (456 KB)

    Natural deduction is essentially a sequential decision task, similar to many game-playing tasks. Such a task is well suited to benefit from the techniques of neuro-evolution. Symbiotic Adaptive Neuro-Evolution (SANE) (Moriarty and Miikkulainen, 1996) has proven successful at evolving networks for such tasks. This paper shows that SANE can be used to evolve a natural deduction system on a neural network. In particular, it shows that incremental evolution through progressively more challenging problems results in more effective networks than does direct evolution, and that an effective network can be evolved faster if the network is allowed to “brainstorm”, i.e., suggest any move regardless of its applicability, even though the highest-ranked valid move is always applied. In this way, evolution results in neural networks with human-like reasoning behavior.
  • Coevolutionary design of a control system for nonlinear uncertain plants

    Page(s): 176 - 187
    PDF (880 KB)

    The paper proposes and analyzes an alternative for autonomously developing control systems for nonlinear, uncertain plants. The proposed alternative uses a neural-like controller with a special architecture together with a specific development algorithm. The development algorithm estimates the parameters and the structure of the controller during an off-line training stage. The controller obtained at the end of off-line training is then used on-line, with no further training required. The development algorithm is implemented as a multi-agent system in which the agents cooperate with one another. For each agent, the development algorithm implements a local evolution and a temporal evolution. For the partition where the agent works, the local evolution estimates, through a coevolutionary algorithm, the best “segment” of the control function for the worst possible plant. We present two different implementations of the development algorithm. The first, based on global cooperation between agents, exchanges controller structures between agents. The second, based on local cooperation between agents, exchanges individual wavelets between agents.
  • Using a clustering genetic algorithm for rule extraction from artificial neural networks

    Page(s): 199 - 206
    PDF (728 KB)

    The main challenge in using supervised neural networks for data mining applications is extracting explicit knowledge from these models. To that end, a study on knowledge acquisition from supervised neural networks employed for classification problems is presented. The methodology is based on clustering the hidden units' activation values. A clustering genetic algorithm for rule extraction from neural networks is developed. A simple encoding scheme that yields constant-length chromosomes is used, allowing the application of the standard genetic operators. A consistent algorithm to avoid some of the drawbacks of this kind of representation is also developed. In addition, a very simple heuristic is applied to generate the initial population. The individual fitness is determined based on the Euclidean distances among the objects, as well as on the number of objects belonging to each cluster. The developed algorithm is experimentally evaluated on two data mining benchmarks: the Iris Plants Database and the Pima Indians Diabetes Database. The results are compared with those obtained by the Modified RX Algorithm (E.R. Hruschka and N.F.F. Ebecken, 1999), which is also an algorithm for rule extraction from neural networks.
  • Hierarchical genetic algorithm based neural network design

    Page(s): 168 - 175
    PDF (540 KB)

    In this paper, we propose a novel genetic-algorithm-based design procedure for multi-layer feedforward neural networks. A hierarchical genetic algorithm is used to evolve both the neural network topology and its parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the proposed hierarchical approach addresses several deficiencies highlighted in the literature. A multi-objective function is used to optimize the performance and topology of the evolved neural network. The approach is verified on two benchmark problems, and the proposed algorithm proves competitive with, or even superior to, the traditional back-propagation network on Mackey-Glass chaotic time series prediction.
  • An adaptive scheme for real function optimization acting as a selection operator

    Page(s): 140 - 149
    PDF (640 KB)

    We propose an adaptive scheme for real function optimization whose dynamics is driven by selection. The method is parametric and relies explicitly on the Gaussian density, seen as an infinite search population. We define two gradient flows acting on the density parameters, in the spirit of neural network learning rules, which maximize either the expectation of the function relative to the density or the expectation of its logarithm. The first leads to reinforcement learning and the second leads to selection learning. Both can be understood as the effect of three operators acting on the density: translation, scaling, and rotation. We then approximate these systems with discrete-time dynamical systems by means of three different methods: Monte Carlo integration, selection among a finite population, and reinforcement learning. This work synthesizes previously independent approaches and intends to show that evolution strategies and reinforcement learning are strongly related.
  • Evolving neural networks using attribute grammars

    Page(s): 37 - 42
    PDF (460 KB)

    The evolutionary optimization of neural networks involves two main design issues: how the neural network is represented genetically, and how that representation is manipulated through genetic operations. We have developed a genetic representation that uses an attribute grammar to encode both topological and architectural information about a neural network. We have defined genetic operators that are applied to the parse trees formed by the grammar. These operators provide the ability to introduce selection strategies that vary during the course of evolution.
  • Evolution and design of distributed learning rules

    Page(s): 59 - 63
    PDF (404 KB)

    The paper describes the application of neural networks as learning rules for the training of neural networks. The learning rule is part of the neural network architecture. As a result, the learning rule is non-local and globally distributed within the network. The learning rules are evolved using an evolution strategy. The survival of a learning rule is based on its performance in training neural networks on a set of tasks. Training algorithms will be evolved for single-layer artificial neural networks. Experimental results show that a learning rule of this type is very capable of generating an efficient training algorithm.
  • Evolution of recurrent cascade correlation networks with distributed collaborative species

    Page(s): 240 - 249
    PDF (700 KB)

    The vast body of research and experimental work on using EANNs to evolve neural networks has achieved many successes, yet it has also revealed some limitations. Aiming at boosting EANN speed and improving its performance, the approach of cooperative co-evolution is introduced. Instead of one evolutionary algorithm that attempts to solve the whole problem, species representing simpler subtasks are evolved as separate instances of an evolutionary algorithm. The goal of this research is to investigate the performance of a distributed version of the collaborative coevolutionary species approach when discrete time steps are introduced into the problem, by applying the approach to the evolution of recurrent cascade correlation networks. A research tool is designed and implemented to simulate the evolution and the running of the recurrent neural network. Results are presented in which the Distributed Cooperative Coevolutionary Genetic Algorithm (DCCGA) produced higher-quality solutions in fewer evolutionary cycles than the standard genetic algorithm (GA). The performance of the two algorithms is analyzed and compared on two tasks: learning to recognize characters of the Morse code and learning a finite state grammar from examples.
  • Case studies in applying fitness distributions in evolutionary algorithms. II. Comparing the improvements from crossover and Gaussian mutation on simple neural networks

    Page(s): 91 - 97
    PDF (436 KB)

    Previous efforts in applying fitness distributions of Gaussian mutation for optimizing simple neural networks on the XOR problem are extended by conducting a similar analysis for three types of crossover operators. One-point, two-point, and uniform crossover are applied to the best-evolved neural networks at each generation in an evolutionary trial. The maximum expected improvement under Gaussian mutation with a single fixed standard deviation is then compared to that which can be obtained using crossover. The results indicate that the benefits of each type of crossover vary as a function of the generation number. Furthermore, these fitness profiles are notably similar (i.e., there is little functional difference between the various crossover operators). This does not support a building-block hypothesis for explaining the gains that can be made via recombination. The results indicate cases where mutation alone can outperform recombination and vice versa.
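The three crossover operators compared in this abstract, together with fixed-sigma Gaussian mutation, can be sketched on flat weight vectors as follows (a generic textbook rendering of the operators, not the authors' experimental code):

```python
import random

def one_point(a, b, rng):
    """Child takes a prefix from parent a and the suffix from parent b."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def two_point(a, b, rng):
    """Child swaps in a middle segment from parent b between two cut points."""
    i, j = sorted(rng.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def uniform(a, b, rng):
    """Each gene is drawn from either parent with probability 0.5."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def gaussian_mutation(a, sigma, rng):
    """Perturb every gene with zero-mean Gaussian noise of fixed sigma."""
    return [x + rng.gauss(0.0, sigma) for x in a]
```

The paper's comparison amounts to measuring, generation by generation, how much fitness improvement each of these four operators can be expected to produce when applied to the current best networks.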
  • Specifying intrinsically adaptive architectures

    Page(s): 224 - 231
    PDF (516 KB)

    The paper describes a method for specifying (and evolving) intrinsically adaptive neural architectures. These architectures have back-propagation-style gradient descent behavior built into them at a cellular level. The significance of this is that we can now use back-propagation to train evolved feedforward networks of any structure (provided that individual nodes are differentiable). Networks evolved in this way can potentially adapt to their environment in situ. This is in contrast to more conventional techniques such as using a genetic algorithm or simulated annealing to train the network. The method can be seamlessly integrated with any method for evolving neural network architectures. The performance of the method is investigated on the simple synthetic benchmarks of the parity and intertwined spiral problems.