
SECTION I

INTRODUCTION

EVOLUTIONARY algorithms, such as genetic algorithms (GAs) [1], evolutionary programming (EP) [2], evolution strategies (ESs) [3], and their variants, have been used widely in function optimization (e.g., [4], [5], [6]), multiobjective optimization (e.g., [7], [8], [9]), discrete optimization problems (e.g., [10], [11]), learning decision rules (e.g., [12]), optimizing object recognition models (e.g., [13]), evolving fuzzy rules (e.g., [9], [14]), and evolving neural networks (e.g., [15], [16]). These algorithms are stochastic search methods that work with a population of approximate solutions (i.e., individuals) instead of just a single solution. The population undergoes an evolutionary process through the application of operators borrowed from natural genetics, such as crossover, mutation, and selection. Crossover and mutation are variation operators applied to existing solutions to produce new solutions (known as offspring). Selection is used to probabilistically promote better solutions to the next generation and eliminate those that are less fit.

The basic difference between EP [2] (or ESs [3], [17]) and GAs [1] is the variation operator used for producing offspring. EP and ES use only the mutation operator to produce offspring, while GAs use both crossover and mutation. Since mutation is the main operator in EP [2], a number of innovative mutation operators have been proposed, such as Cauchy mutation [6], a combination of Cauchy and Gaussian mutation [18], and Lévy mutation [5]. The aim of these mutations is to introduce large variations when producing offspring so that a population can globally explore wider regions of a search space. This means that the improvement of EP has been sought by increasing its exploration capability. However, both exploration and exploitation are necessary, depending on whether the evolutionary process has become trapped in a local optimum or has found a promising region of the search space.

This paper describes a new recurring two-stage EP (RTEP) based on mutation. The new algorithm attempts to maintain a proper balance between global exploration and local exploitation through its two recurring stages. It uses objective-oriented mutation operators and selection strategies to achieve the exploration and exploitation objectives. RTEP differs from most EP-based [2] approaches (e.g., [5], [6], [18]) and memetic algorithms [19] in several aspects. First, RTEP emphasizes repeated and alternated objective-oriented operations for achieving the global exploration and local exploitation goals. The essence of this approach is that when an evolutionary process is trapped in a deep local optimum or finds a very promising region, the repeated execution of objective-oriented operations helps to handle the situation effectively. Since exploration and exploitation operations are executed alternately in RTEP, there is no need to make a “perfect switch” from one operation to the other. The conflicting goals of exploration and exploitation are expected to be distributed automatically across the generations of the recurring operations. This approach is different from the one used in EP, ES, and memetic algorithms. EP- and ES-based approaches do not execute exploration and exploitation operations separately. Rather, they use a single-stage execution model with self-adaptation rules (e.g., [5], [6], [18], [20]). Even recently introduced and more sophisticated ES schemes, such as the covariance matrix adaptation evolution strategy (CMA-ES) [21], follow the single-stage execution model. They usually focus on facilitating more successful mutations by adapting the mutation step size effectively, more or less ignoring the distinct and often conflicting goals of exploration and exploitation that arise again and again throughout the evolutionary search. Memetic algorithms, on the other hand, though they take different measures for the conflicting explorative and exploitative goals, usually execute the exploration and exploitation stages only once, one after the other, rather than repeatedly (e.g., [22], [23]). Most benchmark functions have numerous local optima, so executing the exploration and exploitation stages only once, rather than repeatedly as RTEP does, carries the inherent danger of being trapped in local optima and failing to reach the neighborhood of the global optimum.

Second, RTEP uses only the mutation operator for both exploration and exploitation. Although many algorithms use GAs [1] for exploration and local search methods or a specially designed operator for exploitation (see the review paper [24] and the references therein), RTEP is, to our knowledge, the first algorithm that uses only mutation for both exploration and exploitation. Both objectives can be achieved using mutation with a large and a small step size, respectively. Several mutation-only algorithms are used later in this paper for comparison with RTEP. However, they either do not separate exploitation from exploration (e.g., classical EP (CEP) [2], improved fast EP (IFEP) [6], adaptive EP with Lévy mutation (ALEP) [5]) or use some specialized operator other than mutation for exploitation [e.g., crossover hill climbing (XHC) used by the real-coded memetic algorithm (RCMA) [25], the neighborhood search used by differential evolution (DE) with neighborhood search (NSDE) [26], [27], and the adaptive local search used by local search RCMA (LSRCMA) [28]]. Details on each of these algorithms appear at the relevant points in this paper.

Third, RTEP uses the distance between similar and dissimilar individuals to drive exploitative and explorative mutation, respectively. Some existing works, such as DE [29], also employ the distance between individuals for mutation. However, RTEP is still significantly different from DE because of the recurring nature of its explorations and exploitations. DE neither considers exploitation and exploration separately nor follows any recurring behavior. RTEP has been compared with NSDE [26], [27], a recently introduced, more sophisticated variant of DE. Results on recent benchmark functions show that RTEP often performs better than NSDE, which indicates the effectiveness of its recurring explorations and exploitations over the traditional single-stage execution model of NSDE.

Fourth, RTEP has been theoretically analyzed and empirically tested on a suite of 48 benchmark test functions, consisting of 23 classical test functions [6], [30] and 25 test functions recently introduced at CEC2005 [31]. Few evolutionary approaches have been tested on a similar range of benchmark problems with different characteristics. Results show that RTEP often produces better solutions than most other approaches.

The rest of this paper is organized as follows. Section II describes RTEP in detail and gives the motivations behind the various design choices. The proposed approach is theoretically analyzed in Section III. Section IV presents the experimental results of RTEP, along with discussion and comparison with other works. Finally, Section V concludes with a summary of this paper and a few suggestions on future research directions.

SECTION II

RTEP

A recurring two-stage evolutionary approach is adopted for RTEP in an attempt to balance the conflicting goals of evolution, i.e., exploration and exploitation. This approach employs one stage for global exploration and another for local exploitation. The two stages execute in a recurring fashion, alternating with each other throughout the run. The exploration stage employs Gaussian mutation with a large standard deviation, which is set from the distance between two dissimilar individuals along the component being mutated. The same mutation with a small standard deviation, set from the distance between two similar individuals along the selected component, is used for exploitation. The major steps of RTEP can be described as follows (a compact code sketch follows the list):

  1. Generate an initial population consisting of $\mu$ individuals. Each individual $\overrightarrow{x_{i}}$, $i = 1, 2, \ldots, \mu$, is represented as a real-valued vector with $n$ components $$\overrightarrow{x_{i}} = \left\{x_{i}(1), x_{i}(2), \ldots, x_{i}(n)\right\}.\eqno{(1)}$$
  2. Initialize the parameters $K_{1}$ and $K_{2}$ within a certain range. They define how many generations the exploration and exploitation stages, respectively, continue before alternating to the other.
  3. Calculate the fitness value of each individual $\overrightarrow{x_{i}}$, $i = 1, 2, \ldots, \mu$, based on the objective function. If the best fitness value of the population is acceptable, then STOP. Otherwise, CONTINUE.
  4. Repeat steps 5)–8) for $K_{1}$ generations. This is a single pass of the exploration stage.
  5. For each individual $\overrightarrow{x_{i}}$, $i = 1, 2, \ldots, \mu$, find the set of $M$ individuals that have the maximum Euclidean distance from $\overrightarrow{x_{i}}$ in the $n$-dimensional search space. This is the set of “strangers” for $\overrightarrow{x_{i}}$. Pick a stranger uniformly at random from this set.
  6. Create $\mu$ offspring by applying mutation on each individual $\overrightarrow{x_{i}}$, $i = 1, 2, \ldots, \mu$, of the population. Each individual $\overrightarrow{x_{i}}$ creates a single offspring $\overrightarrow{x_{i}}^{\prime}$ by the following procedure: $$k = 1 + r_{i} \bmod n\eqno{(2)}$$ $$\hbox{for } t = 1 \hbox{ to } k \hbox{ do: } \quad x_{i}^{\prime}(r_{t}) = x_{i}(r_{t}) + \sigma_{i}(r_{t})N_{t}(0, 1).\eqno{(3)}$$ Here, (2) picks a uniform random value for $k$ from $\{1, 2, \ldots, n\}$. This $k$ denotes how many of the $n$ attributes of $\overrightarrow{x_{i}}$ are mutated by the subsequent for loop in (3). Each iteration of the for loop picks a random attribute of $\overrightarrow{x_{i}}$ uniformly from its $n$ attributes and applies Gaussian mutation on it with mean 0 and standard deviation (SD) set from the distance between $\overrightarrow{x_{i}}$ and the selected stranger along this attribute. To clarify the notation: $r_{i}$ is a uniform random integer, $r_{t}$ is the index of the random component (i.e., gene) mutated in each iteration of the for loop, $N_{t}(0, 1)$ is a normally distributed random number with mean 0 and SD 1 generated anew in each iteration, and $\sigma_{i}(r_{t})$ is the SD for mutating the $r_{t}$th component of $\overrightarrow{x_{i}}$. It is set to the distance between $\overrightarrow{x_{i}}$ and its selected stranger along the $r_{t}$th component.
  7. Compute the fitness value of each offspring $\overrightarrow{x_{i}}^{\prime}$, $i = 1, 2, \ldots, \mu$. Select $\mu$ individuals from the parents and offspring for the next generation. If the fitness value of $\overrightarrow{x_{i}}^{\prime}$ is at least as good as that of its parent $\overrightarrow{x_{i}}$, discard $\overrightarrow{x_{i}}$ and select $\overrightarrow{x_{i}}^{\prime}$ for the next generation. Otherwise, discard $\overrightarrow{x_{i}}^{\prime}$.
  8. If the best fitness value of the population is acceptable or the maximum number of generations has been reached, then STOP. Otherwise, CONTINUE.
  9. Repeat steps 10)–13) for $K_{2}$ generations. This is a single pass of the exploitation stage.
  10. For each individual $\overrightarrow{x_{i}}$, $i = 1, 2, \ldots, \mu$, find the set of $M$ individuals that have the minimum Euclidean distance from $\overrightarrow{x_{i}}$ in the $n$-dimensional search space. This is the set of “neighbors” for $\overrightarrow{x_{i}}$. Pick a neighbor uniformly at random from this set.
  11. Create $\mu$ offspring in the same way as described in step 6), but with the neighbor used instead of the stranger. The SD $\sigma_{i}(r_{t})$ of mutation is set from the distance between the current individual $\overrightarrow{x_{i}}$ and its randomly picked neighbor, rather than a stranger, along the $r_{t}$th component.
  12. Compute the fitness value of each offspring $\overrightarrow{x_{i}}^{\prime}$, $i = 1, 2, \ldots, \mu$. Select $\mu$ individuals from the parents and offspring for the next generation. If the fitness value of $\overrightarrow{x_{i}}^{\prime}$ is better than that of its parent $\overrightarrow{x_{i}}$, discard $\overrightarrow{x_{i}}$ and select $\overrightarrow{x_{i}}^{\prime}$ for the next generation. Otherwise, discard $\overrightarrow{x_{i}}^{\prime}$.
  13. If the best fitness value of the population is acceptable or the maximum number of generations has been reached, then STOP. Otherwise, go back to step 4) to start another pass of the exploration and exploitation stages.
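To make the listed procedure concrete, the following minimal Python sketch implements RTEP for function minimization. All names are ours, not from the paper, and details the paper leaves open (e.g., handling of variable bounds by clipping, and updating individuals in place within a generation) are simplifying assumptions.

import numpy as np

def rtep(f, bounds, n, mu=50, M=10, K1=2, K2=4, max_fes=150_000, seed=0):
    # Minimal sketch of RTEP minimizing f over [low, high]^n.
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(mu, n))      # step 1: initial population
    fit = np.array([f(x) for x in pop])             # step 3: evaluate
    fes = mu

    def one_generation(explore):
        nonlocal fes
        # pairwise Euclidean distances between all individuals
        d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=2)
        np.fill_diagonal(d, -np.inf if explore else np.inf)  # exclude self
        for i in range(mu):
            # steps 5/10: the M farthest ("strangers") or nearest ("neighbors")
            order = np.argsort(d[i])
            partners = order[-M:] if explore else order[:M]
            j = rng.choice(partners)                # pick one uniformly at random
            # steps 6/11: mutate k randomly chosen components, eqs. (2)-(3)
            child = pop[i].copy()
            k = 1 + rng.integers(n)
            for _ in range(k):
                r = rng.integers(n)
                sigma = abs(pop[i][r] - pop[j][r])  # component-wise distance as SD
                child[r] += sigma * rng.standard_normal()
            child = np.clip(child, low, high)       # bound handling: our assumption
            cf = f(child)
            fes += 1
            # steps 7/12: accept ties while exploring, strict improvement otherwise
            if (explore and cf <= fit[i]) or (not explore and cf < fit[i]):
                pop[i], fit[i] = child, cf

    while fes < max_fes:                            # recurring two-stage loop
        for _ in range(K1):                         # step 4: exploration stage
            one_generation(explore=True)
        for _ in range(K2):                         # step 9: exploitation stage
            one_generation(explore=False)
    return pop[np.argmin(fit)], float(fit.min())

For example, rtep(lambda x: float(np.sum(x**2)), (-100.0, 100.0), n=30) minimizes the 30-dimensional sphere function with the paper's RTEP(2, 4) setting.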

It is clear that the essence of RTEP is its three components, i.e., exploration stage, exploitation stage, and recurring approach, which are described in the following sections.

A. Exploration Stage

This stage facilitates the exploration of the wider regions of a search space so that the chance of finding a good near-optimum solution is increased. Since exploration is a nonlocal operation, a mutation operation that can produce distant offspring and increase the population diversity is suitable for this stage. In RTEP, the Euclidean distance between the genotype of $\overrightarrow{x_{i}}$ and that of one of its strangers along a selected component is used as the SD to mutate the component. To pick a stranger, RTEP determines the $M$ individuals across the population whose genotypes have the largest Euclidean distance from the genotype of $\overrightarrow{x_{i}}$ and then picks one of them uniformly at random. Since this individual and $\overrightarrow{x_{i}}$ are relatively far apart in the search space, the large Euclidean distance between them can serve as the SD of the explorative mutation.

The scenario presented in Fig. 1 exemplifies how mutation involving the distance between dissimilar individuals may facilitate exploration. The oval boundaries in the figure represent two groups of individuals that are far apart in the search space. When RTEP mutates an individual $\overrightarrow{x_{i}}$ of either group using the Euclidean distance between the genotype of $\overrightarrow{x_{i}}$ and that of an individual from the other group, some offspring (marked as solid stars) may be produced in between the two groups. At first, these offspring might seem to offer little fitness advantage, but a small amount of “hill climbing,” i.e., a few exploitation operations, would carry them to the narrow global optimum. This becomes possible by combining information from dissimilar individuals for exploration.

Figure 1
Fig. 1. An example of exploration in fitness landscape. The stars and solid stars represent parents and offspring, respectively.

B. Exploitation Stage

The evolutionary approach may discover some promising regions by executing exploration operations several times. It is then necessary to realize the potential of the discovered regions before further explorations. To achieve this objective, RTEP executes several exploitation operations after the completion of some exploration operations. The aim of the exploitation stage is to reach the peaks of the different explored regions so that the optimum solution, if it exists in close proximity, can easily be found.

To select a neighbor for an individual $\overrightarrow{x_{i}}$, the $M$ other individuals whose genotypes have the minimum Euclidean distance from the genotype of $\overrightarrow{x_{i}}$ are found, and one of them is picked uniformly at random as the neighbor of $\overrightarrow{x_{i}}$. The mutation used for exploitation is the same as that used for exploration. The only difference is that the Euclidean distance between the genotype of $\overrightarrow{x_{i}}$ and that of one of its neighbors along the selected components, rather than the distance to a stranger, is used as the SD for mutation. Since the genotype of the selected individual is expected to be similar to that of $\overrightarrow{x_{i}}$, the resulting SD is small, which is suitable for exploitation.

C. Recurring Approach

It is well known that pure EAs are not suitable for fine-tuning a search in complex search spaces and that hybridizing EAs with other methods can greatly improve search efficiency [32], [33]. A number of approaches have been proposed in the literature that use GAs [1] for exploration and local search methods for exploitation. According to [24], the following four issues must be addressed when exploration and exploitation operations are executed separately and then combined in one algorithm. First, when and where should a local search method be applied within the evolutionary cycle? Second, which individuals in the population should be improved by the local search, and how should they be chosen? Third, how much computational effort should be allocated to each local search? Fourth, how can genetic operators be best integrated with the local search in order to achieve a synergistic effect? To address these questions, a number of heuristics and parameters may need to be employed in any classical EA. However, this requires a user to have rich prior knowledge, which often does not exist for complex real-world problems. Hence, a scheme that does not employ many heuristics and user-specified parameters is clearly preferable. The repeated and alternating execution of exploration and exploitation operations on all individuals of the population is a simple answer to the first three questions, and it is the one adopted in RTEP. The solution quality and convergence characteristics of RTEP on a wide range of benchmark functions (Section IV) show the effectiveness of this intuition. Since RTEP uses only mutation for both exploration and exploitation, the problem of integrating different methods or operators does not arise.

As mentioned previously, RTEP uses the genotype distance between dissimilar and similar individuals along the selected components as the SD when mutating the corresponding components, in order to realize the exploration and exploitation objectives. This makes the exploration and exploitation operations self-adaptive. The degree of exploration is high at the beginning of the evolutionary process and gradually decreases as the process progresses. A similar pattern holds for exploitative mutations, which start with a medium step size and become very fine-grained in late generations. This behavior follows from using the distance as the SD of mutation, because distances among individuals are usually large in early generations and gradually drop, reflecting the maturity of the search process.

Although RTEP employs Gaussian mutation and makes no use of a crossover/recombination operator, this is not a rigid design requirement. In fact, RTEP provides a very generic framework, and any mutation and/or recombination scheme could be incorporated within it. Many genetic operators whose strengths have already been demonstrated exist, e.g., SBX crossover and polynomial mutation [34]. Any such operator might be incorporated into RTEP; the only requirement is to design both “explorative” and “exploitative” variants of that operator. How the explorative and exploitative variants are defined is left open to researchers, and doing so would not be difficult in most situations. Moreover, RTEP involves only two user-specified parameters, $K_{1}$ and $K_{2}$, which is not a heavy burden, given that any classical EA needs two or three such parameters.

SECTION III

ANALYSIS OF RTEP

The aim of this section is to analyze RTEP based on search bias and exploration/exploitation operations. When a search operator (e.g., crossover or mutation) is applied to the individuals, some offspring are more likely to be generated than others. This tendency is called search bias and has a great impact on the performance of EAs. The search bias includes search step size and search direction. Since RTEP uses only mutation for both explorations and exploitations, the following analysis is carried out only for mutation. The SD used in (3) determines the search step size of mutation. The probability density function of the normal distribution used in mutation is $$f_{(0, \sigma^{2})}(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^{2}}{2\sigma^{2}}}\eqno{(4)}$$ where the mean is 0 and $\sigma$ is the SD. RTEP uses the distance between two individuals along a selected component as the $\sigma$ in (4) to mutate the corresponding component. However, how does $\sigma$ affect the mutation step size? To find the analytical relationship between $\sigma$ and step size, we can generalize the analysis method for the mean search step size in [6]. The expected value of the mutation step size for producing offspring is computed as follows: $$E_{(0, \sigma^{2})}(\vert x\vert) = \int_{-\infty}^{+\infty} \vert x\vert \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^{2}}{2\sigma^{2}}}\,dx = 2\int_{0}^{+\infty} x\,\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^{2}}{2\sigma^{2}}}\,dx = \sigma\sqrt{\frac{2}{\pi}}.\eqno{(5)}$$

Figure 2
Fig. 2. Gaussian distributions with mean zero and standard deviations 1 to 5.

Equation (5) tells us that the mean step size is directly proportional to $\sigma$. To visualize the effect of $\sigma$, the Gaussian probability density function used for introducing mutation steps is plotted for different values of $\sigma$ on the same scale (Fig. 2). For a large $\sigma$, the density has a low central maximum with a long, heavy tail, which is more likely to introduce large variations (i.e., longer jumps) when producing offspring and thus facilitates global exploration. Conversely, the high central maximum and short tail obtained for a small $\sigma$ are suitable for producing small steps around the mean, which is necessary for local exploitation. Thus, an appropriate value of $\sigma$ is necessary to facilitate proper exploration and exploitation. However, a suitable value for $\sigma$ is problem dependent. Even for a single problem, a separate $\sigma$ is required for each component of the individuals at different stages of the evolutionary process. A large $\sigma$ is beneficial when the distance between the neighborhood of the optimal point and the current search point is larger than $\sigma$ [6]. As the global optimum is unknown, adapting $\sigma$ during the course of evolution becomes necessary. The self-adaptation scheme of $\sigma$ described in [2], [35] is partial and does not always work well [36].
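Equation (5) is straightforward to verify numerically. The short check below (ours, for illustration) draws Gaussian mutation steps and confirms that the mean absolute step size matches $\sigma\sqrt{2/\pi} \approx 0.798\,\sigma$:

import numpy as np

# Monte Carlo check of eq. (5): for x ~ N(0, sigma^2), E(|x|) = sigma*sqrt(2/pi).
rng = np.random.default_rng(1)
for sigma in (0.5, 1.0, 5.0):
    steps = np.abs(sigma * rng.standard_normal(1_000_000))
    print(sigma, steps.mean(), sigma * np.sqrt(2.0 / np.pi))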

In addition to the search step size, the search direction is also important. If an optimal step size is found but the right search direction is unknown, the evolutionary process is likely to face difficulty. To address this issue, RTEP incorporates directional information in mutation by using the differences between corresponding components of two similar or dissimilar individuals. More particularly, RTEP randomly picks $k$ (out of $n$) components of the current individual $\overrightarrow{x_{i}}$, and each component is mutated using the distance along it between $\overrightarrow{x_{i}}$ and a similar individual (for exploitation) or a dissimilar individual (for exploration). This is illustrated in step 6) of the RTEP pseudocode. Using multiple components provides direction information along each component, and the offspring is likely to be produced in between the parents. Such an approach has several advantages. First, it provides an effective search direction by which mutation may produce offspring that are not dominated by individuals in the current population, which increases the efficacy of exploration or exploitation. Second, it makes mutation operations self-adaptive without using any explicit adaptation scheme. The individuals in a population are usually widely spread over the entire search space at the beginning of an evolutionary process. As the evolutionary process progresses, the population converges toward the optimal solution, and the distances among individuals tend to decrease. This means the mutation SD tends to be larger during early generations and smaller at the end of the evolutionary process. Hence, the incorporation of such self-adaptive directional information in mutation is inherently suitable for the search process. Third, such an approach relieves users of the burden of specifying initial standard deviations for mutation.

Now, we analyze the effect of repeated alternation of explorations and exploitations. Let $\sigma_{1}$ and $\sigma_{2}$ be the standard deviations used by mutation for exploration and exploitation, respectively, with $\sigma_{1} \geq \sigma_{2}$. Denote $x^{0}$ as a parent individual and $x^{g}$ as its offspring obtained after $g$ successive generations. The expected values of the total variation, i.e., $x^{g} - x^{0}$, introduced by successive explorative mutations for $K_{1}$ generations, by successive exploitative mutations for $K_{2}$ generations, and by explorations followed by exploitations are given by the following equations: $$E(d_{1}) = \sum_{i = 1}^{g}\sigma_{1}(i)\sqrt{\frac{2}{\pi}} = gE(\sigma_{1})\sqrt{\frac{2}{\pi}} = cgE(\sigma_{1})\eqno{(6)}$$ $$E(d_{2}) = \sum_{i = 1}^{g}\sigma_{2}(i)\sqrt{\frac{2}{\pi}} = gE(\sigma_{2})\sqrt{\frac{2}{\pi}} = cgE(\sigma_{2})\eqno{(7)}$$ $$E(d_{3}) = \sum_{i = 1}^{g/2}\sigma_{1}(i)\sqrt{\frac{2}{\pi}} + \sum_{i = 1}^{g/2}\sigma_{2}(i)\sqrt{\frac{2}{\pi}} = \frac{g}{2}\left(E(\sigma_{1}) + E(\sigma_{2})\right)\sqrt{\frac{2}{\pi}} = cg\,\frac{E(\sigma_{1}) + E(\sigma_{2})}{2}\eqno{(8)}$$ where $E(d_{1})$, $E(d_{2})$, and $E(d_{3})$ are the expected values of the total variation introduced by successive explorations, successive exploitations, and explorations followed by exploitations, respectively, and $E(\sigma_{1})$ and $E(\sigma_{2})$ are the mean step lengths of explorative and exploitative mutations. Since $E(\sigma_{1})$ is larger than $E(\sigma_{2})$, $E(d_{1})$ is larger than $E(d_{3})$, while $E(d_{2})$ is smaller than $E(d_{3})$. Hence, the repeated execution of exploration (or exploitation) operations introduces more (or less) variation than exploration followed by exploitation. It may be argued that a single-stage EA with expected step size $E(\sigma_{3}) = (E(\sigma_{1}) + E(\sigma_{2}))/2$ would achieve a similar effect. However, finding an appropriate value for $\sigma_{3}$ by evaluating the population is not easy, while suitable $\sigma_{1}$ and $\sigma_{2}$ are straightforward to determine using the distances to strangers and neighbors. The search space of most benchmark multimodal functions possesses alternating peaks and valleys, so periodically alternating between explorations and exploitations is intuitively appealing for advancing the search by alternating downhill and uphill moves across the search space. Experimental results in the next section (Table IV) demonstrate that executing the explorative and exploitative stages sequentially (i.e., each only once, one after the other) fails to yield sufficiently good results in comparison with the proposed recurring approach. Since the depths of peaks and valleys are not known in advance, repeating the explorative and exploitative stages for some length (i.e., $K_{1}$ and $K_{2}$) seems a good choice for handling local optima and promising regions of the search space.
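A quick numeric illustration of (6)–(8), under assumed values of $g$, $E(\sigma_{1})$, and $E(\sigma_{2})$ (ours, for illustration only), shows the ordering $E(d_{2}) < E(d_{3}) < E(d_{1})$ discussed above:

import numpy as np

# Expected total variation after g generations, eqs. (6)-(8); the step
# sizes E_s1 > E_s2 are assumed values for illustration only.
c = np.sqrt(2.0 / np.pi)
g, E_s1, E_s2 = 100, 2.0, 0.2
E_d1 = c * g * E_s1                  # eq. (6): exploration only
E_d2 = c * g * E_s2                  # eq. (7): exploitation only
E_d3 = c * g * (E_s1 + E_s2) / 2.0   # eq. (8): alternating halves
print(E_d2, "<", E_d3, "<", E_d1)    # approx. 15.96 < 87.77 < 159.58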

SECTION IV

EXPERIMENTAL STUDIES

This section presents the experimental results of RTEP, which help to achieve a better understanding of how repeatedly alternating mutation-based exploration and exploitation operations may influence and improve the performance of EAs. To examine this influence, RTEP is applied to classical benchmark functions [6], [30], as well as to the recent benchmark functions introduced at CEC2005 [31]. These functions have been the subject of many studies in EAs.

A. RTEP on Classical Benchmark Functions

A set of 23 classical benchmark test functions [6], [30] is used in the experiments. Based on their properties, the functions can be divided into three groups, namely, functions with no local minima, with many local minima, and with a few local minima. The analytical form of these functions is given in Table I, where $D$ denotes the dimensionality of the problem, $S$ is the range of the variables, and $f_{\min}$ is the function value at the global optimum.

Table 1
TABLE I ANALYTICAL FORM OF THE BENCHMARK FUNCTIONS [6], [30]. $D$ IS THE DIMENSION OF THE FUNCTION, $f_{\min}$ IS THE MINIMUM VALUE OF THE FUNCTION, AND THE SEARCH SPACE IS $S \subseteq R^{D}$

The first seven functions, $f_{1}-f_{7}$, are unimodal. Functions $f_{8}-f_{13}$ are high-dimensional multimodal functions with many local minima; the number of local minima of these functions increases exponentially with the dimensionality. Functions $f_{14}-f_{23}$ are low-dimensional multimodal functions that have only a few local minima. They are relatively easy compared with the high-dimensional functions. A more detailed description of each function can be found in [6], [30].
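For illustration, two representative members of this suite are shown below in code, assuming the standard definitions given in [6], [30]: the unimodal sphere function and the highly multimodal generalized Rastrigin function (commonly listed as $f_{1}$ and $f_{9}$, respectively, in this suite).

import numpy as np

def sphere(x):      # unimodal; global minimum f = 0 at x = 0
    return float(np.sum(x**2))

def rastrigin(x):   # multimodal; local-minima count grows exponentially with D
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))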

Three different pairs of values are used to investigate the effect of $K_{1}$ and $K_{2}$: (1, 1), (2, 4), and (4, 8). The population size $\mu$ is set to 50, and the neighborhood size $M$ to 10. The number of function evaluations (FEs) is set to 150 000 for functions $f_{1}-f_{13}$ and to 10 000 for $f_{14}-f_{23}$. These values are chosen to allow a fair comparison with previous works.

Table 2
TABLE II PERFORMANCE OF RTEP FOR DIFFERENT VALUES OF $K_{1}$, $K_{2}$. ALL RESULTS HAVE BEEN AVERAGED OVER 50 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE. A “+” IN THE $t$-TEST BETWEEN ALGORITHMS $X$ VERSUS $Y$ INDICATES THAT $X$ IS SIGNIFICANTLY BETTER THAN $Y$ WITH 95% CERTAINTY, WHILE A “≈” MEANS THAT THE DIFFERENCE IS NOT STATISTICALLY SIGNIFICANT
Figure 3
Fig. 3. Convergence characteristics of RTEP with different values of $K_{1}$, $K_{2}$ on unimodal functions $f_{2}$, $f_{5}$, multimodal functions $f_{9}$, $f_{11}$, $f_{13}$, and low-dimensional function $f_{23}$. The vertical axis is the function value, and the horizontal axis is the number of FEs. All results have been averaged over 50 runs.

1) Experimental Results

Table II shows the mean error of RTEP on the 23 classical test functions over 50 independent runs. The numbers inside the parentheses next to RTEP in each line indicate the values of $K_{1}$ and $K_{2}$ used in the experiments. For each function, the best (i.e., lowest) mean error among the different approaches is shown in boldface type. Fig. 3 shows the convergence characteristics of RTEP for several functions in terms of the mean error value over 50 independent runs. The following observations can be made from these results:

  1. First, RTEP with different values of $K_{1}$ and $K_{2}$ reached the proximity of the global minimum (i.e., mean error $= 0$) for all the functions. This underscores the value of recurring exploration and exploitation operations for improving the performance of mutation-based EAs. The mutation of RTEP, involving directional information and simple selection strategies, provides effective exploration and exploitation, which is evident from the convergence characteristics shown in Fig. 3. RTEP with different values of $K_{1}$ and $K_{2}$ achieved nearly log-linear convergence and consistently came sufficiently close to the global minima for all the functions.
  2. Second, both RTEP(2, 4) and RTEP(4, 8) performed better than RTEP(1, 1) on 15 functions, while RTEP(1, 1) outperformed RTEP(2, 4) and RTEP(4, 8) on two and one functions, respectively (Table II). This indicates the necessity of executing exploration and exploitation operations at some length. The $t$-test shows that both RTEP(2, 4) and RTEP(4, 8) performed better than RTEP(1, 1) on all six high-dimensional multimodal functions. Since unimodal and low-dimensional functions are easier to optimize, RTEP(2, 4) or RTEP(4, 8) and RTEP(1, 1) showed similar performance on many of them, as shown by the $t$-test.
  3. Third, the convergence characteristics of RTEP with different values of $K_{1}$ and $K_{2}$ are quite similar over the entire evolutionary process (Fig. 3) for the unimodal function $f_{5}$ and the low-dimensional function $f_{23}$. However, for the high-dimensional multimodal functions, RTEP(2, 4) and RTEP(4, 8) produced better solution quality and maintained higher convergence rates than RTEP(1, 1), as is apparent from the graphs of $f_{9}$, $f_{11}$, and $f_{13}$. Moreover, some oscillations are observed at the later stage of the evolutionary process, especially for RTEP(1, 1) on the function $f_{9}$. This may be due to a nonoptimal setting of $K_{1}$ and $K_{2}$.

In order to gain a deeper understanding of the performance difference between RTEP(1, 1) and RTEP(2, 4) or RTEP(4, 8), we examine the rate of successful mutations during the course of evolution, measured as the percentage of individuals in the population for which mutation produces a better offspring. Fig. 4 shows the percentage of successful mutations as the evolutionary process progresses. Only the graphs for RTEP(1, 1) and RTEP(2, 4) are included; RTEP(4, 8) exhibited outcomes similar to those of RTEP(2, 4), but its curves are omitted to keep Fig. 4 legible. The plots clearly indicate that RTEP(2, 4) facilitates more successful mutations than RTEP(1, 1).

Figure 4
Fig. 4. Percentage of successful mutations with RTEP(1, 1) and RTEP(2, 4) applied to functions (left) $f_{8}$ and (right) $f_{12}$. All results have been averaged over 50 runs.

2) RTEP With Different Parameters

The aim of this section is to investigate the effect of different parameters and ideas used in RTEP. We performed a number of new experiments to examine their effects empirically. The setup of these experiments is exactly the same as those described previously.

  1. Effect of $K_{1}$ and $K_{2}$: $K_{1}$ and $K_{2}$ define the lengths of the explorative and exploitative stages. Their effect has been investigated in the previous section with a limited range of values. To examine their effect further, RTEP with a wider range of values of $K_{1}$ and $K_{2}$ is applied to ten classical test functions. Table III shows the average results. A small or moderate stage length in RTEP has performed better than larger stage lengths. This is reasonable because when the stage length is set large (e.g., $K_{1} = K_{2} = 100$), the recurring nature of RTEP is reduced and the algorithm operates in a nearly sequential fashion. In fact, what is crucial in RTEP is not the length of the stages but their recurring nature. To investigate this, the explorative and exploitative stages of RTEP are executed purely sequentially; this scheme is referred to as the sequential two-stage EA (STEA). STEA executes an explorative stage for the first half of the total generations assigned to a given problem and an exploitative stage for the second half (the scheduling sketch after this list contrasts the two schemes). Table IV shows the average results of STEA over 50 independent runs. Comparing Table IV with Tables II and III indicates that RTEP performs much better than its sequential counterpart, STEA. The difference is mostly several orders of magnitude and obviously statistically significant. This demonstrates the necessity of interleaving the two stages regularly instead of trying to make a perfect switch from one stage to the other.
    Table 3
    TABLE III PERFORMANCE OF RTEP WITH DIFFERENT VALUES OF $K_{1}$ AND $K_{2}$. ALL RESULTS ARE AVERAGED OVER 50 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE
    Table 4
    TABLE IV PERFORMANCE OF STEA ON TEN CLASSICAL BENCHMARK FUNCTIONS. ALL RESULTS HAVE BEEN AVERAGED OVER 50 INDEPENDENT RUNS
  2. Effect of Neighbor and Stranger: RTEP uses the distance to neighbors and strangers as the SD for mutation. It therefore tracks neighbors and strangers for each individual throughout the evolution. To examine whether the use of neighbors and strangers has contributed to effective explorations and exploitations, we introduce another variant of RTEP, named naive-RTEP. Naive-RTEP uses neither strangers nor neighbors. Instead, it selects a random individual from the rest of the population, and the distance of this individual from the current individual along a selected component is used as the SD of the Gaussian mutation for that component. The average results of naive-RTEP are presented and compared with those of RTEP(2, 4) in Table V. It is apparent that naive-RTEP performed much worse than RTEP on all ten functions; the performance gap is several orders of magnitude for most of them. This indicates the necessity of using a proper SD and directional information for effective exploration and exploitation during evolution. To understand the effect of neighbors and strangers further, we have measured the percentage of successful mutations throughout the evolution and plotted it in Fig. 5. From the plots of RTEP(2, 4) and naive-RTEP, we find that the fraction of successful mutations is significantly higher for RTEP(2, 4).
    Table 5
    TABLE V PERFORMANCE OF NAIVE-RTEP ON TEN CLASSICAL BENCHMARK FUNCTIONS. ALL RESULTS HAVE BEEN AVERAGED OVER 50 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED WITH BOLDFACE FONTS
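The contrast between RTEP's recurring schedule and the sequential STEA baseline of Table IV can be stated compactly in code. The helper below is hypothetical (name and signature are ours) and simply maps a generation index to a stage under either scheme:

def stage(gen, K1, K2, total_gens, sequential=False):
    # STEA: one long explorative pass, then one long exploitative pass.
    if sequential:
        return "explore" if gen < total_gens // 2 else "exploit"
    # RTEP: K1 explorative generations, then K2 exploitative ones, repeated.
    return "explore" if gen % (K1 + K2) < K1 else "exploit"

With K1 = 2 and K2 = 4, RTEP cycles through the pattern explore, explore, exploit, exploit, exploit, exploit for the whole run, whereas STEA explores only during the first half of the run and never returns to it.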

3) Comparison With Other Work

Many approaches exist in the literature against which one could compare the present work. However, it is infeasible and unnecessary to conduct an exhaustive comparison with all algorithms. The aim of our experimental comparison is to understand the strengths and weaknesses of RTEP. Since the proposed algorithm uses only mutation as a variation operator and executes exploration/exploitation operations separately, the recurring multistage EA (RMEA) [37], IFEP [6], ALEP [5], and RCMA with XHC [25] are primarily considered for comparison. RMEA is earlier work of ours founded on a philosophy similar to that of RTEP, but a number of key differences make RTEP perform significantly better than RMEA. IFEP and ALEP are both based on mutation only, while RCMA employs mutation and crossover for exploration and exploitation, respectively. Later in this section, RTEP is compared with the comprehensive learning particle swarm optimizer (CLPSO), which neither uses mutation nor considers exploration and exploitation separately. In a subsequent section, RTEP is also compared with NSDE and LSRCMA, both of which employ a specialized operator for exploitation (the neighborhood search of NSDE and the adaptive local search of LSRCMA) in addition to the mutation operator for exploration.

Figure 5
Fig. 5. Percentage of successful mutations in a population when naive-RTEP and RTEP(2, 4) are applied to functions (left) $f_{10}$ and (right) $f_{21}$. All results have been averaged over 50 runs.
Table 6
TABLE VI COMPARISON BETWEEN RTEP(2, 4), RMEA [37], IFEP [6], ALEP [5], AND CEP [2] ON 12 CLASSICAL BENCHMARK FUNCTIONS. ALL RESULTS HAVE BEEN AVERAGED OVER 50 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE. “+” INDICATES THAT RTEP(2, 4) IS SIGNIFICANTLY BETTER THAN THE COMPARED ALGORITHM WITH 95% CERTAINTY, WHILE “≈” MEANS THAT THE DIFFERENCE IS NOT STATISTICALLY SIGNIFICANT

RMEA, similar to RTEP, employs similar individuals for exploitations and dissimilar individuals for explorations. However, it uses fitness values to measure the similarity or dissimilarity between individuals. This is based on the assumption that fitness similarity (or dissimilarity) generally reflects a relative similarity (or dissimilarity) between the genotypes. However, such an assumption might not hold. The search space of high-dimensional multimodal functions is usually extremely large, and it is commonly observed that genetically diverse individuals have quite similar fitness values. The converse may be less common for continuous functions, but two genetically very similar individuals might still have quite different fitness values. Thus, by selecting neighbors or nonneighbors merely by fitness values, RMEA may inadvertently pick inappropriate individuals that fail to induce the intended exploitations or explorations. RTEP, on the other hand, employs genotype distance to appropriately select neighbors and nonneighbors. This selection operation is consistent with its variation operation (i.e., mutation), which also employs genotype distances to pick an appropriate mutation step size, either exploitative or explorative.

Second, during each generation, RMEA mutates every individual $n$ times, each time randomly picking one of its $n$ gene values. This can cause one gene to be mutated several times, which is likely to weaken the behavioral link between the parent and the offspring. In contrast, RTEP mutates a random number $k$ of genes of every individual. A small value of $k$ is more likely to preserve a close behavioral link between the parent and the offspring (which is better for exploitation), while moderate or large values of $k$ facilitate better exploration of the search space. Thus, the mutation operation of RTEP possesses both exploitative and explorative features, whereas the mutation of RMEA biases it toward exploration. The significant performance difference between RTEP and RMEA, as illustrated in Table VI, indicates the greater effectiveness of the proposed RTEP scheme.

Like RTEP, both IFEP [6] and ALEP [5] use only mutation for producing offspring. IFEP mixes (rather than switches between) Cauchy and Gaussian mutations in one algorithm: it generates two candidate offspring from each parent, one by Cauchy mutation and one by Gaussian mutation, and the better candidate is chosen as the offspring. ALEP, on the other hand, generates four candidate offspring from each parent by Lévy mutation with four different distributions. It has been shown that IFEP [6] and ALEP [5] perform better than their nonadaptive versions and CEP [2]. RCMA with XHC [25] executes exploration and exploitation operations separately and combines them in one algorithm. This algorithm uses position-based crossover (PBX) [25] and breeder genetic algorithm (BGA) mutation [38] for exploration. It employs a negative assortative mating strategy for selecting the two parents of a crossover in order to introduce population diversity. RCMA with the specialized crossover operator XHC [25] has been shown to perform better than all its other variants. We executed RTEP for the same number of FEs as RMEA, IFEP, ALEP, and RCMA. Following [20], CEP [2] has been implemented here with the same population size and FEs.

Table VI presents the performance of RTEP(2, 4), RMEA, IFEP, ALEP, and CEP on 12 classical functions averaged over 50 independent runs. The performance of RTEP(2, 4) is clearly better than that of the other algorithms. Since RTEP(2, 4) almost always outperformed IFEP and CEP by several orders of magnitude, which is obviously statistically significant, the $t$-test is omitted for both of them and conducted only for RMEA and ALEP. Results show that RTEP(2, 4) outperforms RMEA on 11 out of the 12 functions. The only function on which RMEA performed better is $f_{3}$, but the difference is not statistically significant ($t$-test with 95% certainty). Moreover, RTEP is significantly better than IFEP on six out of six functions, outperformed ALEP on 10 out of 11 functions, and outperformed CEP on all 12 functions. Both RTEP and RCMA were applied to five functions for 100 000 FEs with dimensionality 25, as suggested in [25]. Table VII shows that RCMA with XHC outperformed RTEP(2, 4) on only one unimodal function, $f_{1}$, while RTEP outperformed RCMA on the rest of the functions, i.e., the unimodal functions $f_{3}$ and $f_{5}$ and the multimodal functions $f_{9}$ and $f_{11}$. Although we could not perform a $t$-test due to the lack of detailed published results, it is apparent that the performance of RTEP(2, 4) is significantly better than that of RCMA with XHC on $f_{3}$, $f_{5}$, $f_{9}$, and $f_{11}$.

Table 7
TABLE VII COMPARISON BETWEEN RTEP(2, 4) AND RCMA WITH XHC [25] ON FIVE CLASSICAL BENCHMARK FUNCTIONS WITH DIMENSION $= 25$. ALL RESULTS HAVE BEEN AVERAGED OVER 50 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE

The previous comparison suggests that RTEP(2, 4) is better than its counterparts that use only mutation or that execute exploration and exploitation operations separately. It is also interesting to compare RTEP(2, 4) with an approach that neither uses mutation nor considers the exploration and exploitation operations explicitly. One such approach is the recently introduced CLPSO [30], [39], a variant of PSO [40]. CLPSO uses a novel learning strategy in which the historical best information of all other particles is used to update a particle's velocity and move the search process forward. It has demonstrated better performance than other variants of PSO on a wide range of complex functions. Since the number of FEs used by CLPSO [30] was 200 000, RTEP(2, 4) was rerun with the same number of FEs. Table VIII presents the results for RTEP(2, 4) and CLPSO [30] on six functions over 30 independent runs. RTEP(2, 4) performed better than CLPSO on two unimodal and three multimodal functions, while CLPSO outperformed RTEP(2, 4) on only one multimodal function. The performance differences are clearly significant and in favor of RTEP.

Table 8
TABLE VIII COMPARISON BETWEEN RTEP(2, 4) AND CLPSO [30] ON SIX CLASSICAL BENCHMARK FUNCTIONS. ALL RESULTS HAVE BEEN AVERAGED OVER 30 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE

B. Discussion

This section briefly explains why the performance of RTEP is better than that of the other approaches on most of the classical test functions. First, RTEP emphasizes performing global exploration and local exploitation not only separately but also adaptively. The utilization of the distance between dissimilar or similar individuals in the mutation of RTEP clearly reflects this emphasis. Although RMEA attempts to perform recurring explorations and exploitations, its inappropriate scheme for selecting strangers and neighbors is likely to fail to bring about the desired explorations and exploitations. IFEP [6], ALEP [5], and CEP [2] do not separate exploration and exploitation operations; rather, IFEP and ALEP primarily emphasize producing good offspring. The emphasis on good solutions alone may reduce the population diversity, resulting in poor overall performance. RCMA with XHC [25] performs exploration and exploitation separately. Exploration is carried out by PBX [25] and BGA mutation [38], while exploitation is conducted by XHC [25], a specialized crossover operator combined with subsequent hill climbing. The problem with using different operators lies in ensuring their synergistic effect [24]. CLPSO [30] is a learning approach that does not employ exploration and exploitation operations separately. Although it utilizes the best information of all particles to update the velocity of any one particle, it may still become trapped in local optima due to the inherent limitations of its learning scheme.

Second, mutation in RTEP does not produce offspring blindly but rather utilizes the information of other individuals to produce objective-oriented offspring. The mutation produces offspring in such a way that each offspring facilitates either the exploration of wider, unvisited regions of the search space or the exploitation of a local neighborhood. In contrast, the mutation in IFEP [6], ALEP [5], and CEP [2] does not use the information of other individuals and produces offspring blindly. The consequence of blind mutation is that the offspring produced may be dominated by individuals in the current population. RCMA with XHC [25] also uses blind mutation and crossover for exploration. Although RMEA employs population information to set an appropriate mutation step size, it often destroys the behavioral link between the parent and the offspring, which makes the algorithm more explorative than exploitative, resulting in far worse performance than RTEP.

Third, RTEP uses two different simple selection strategies to select offspring for the next generation. During exploration, our algorithm admits an offspring to the next generation if the offspring has a fitness value at least as good as that of its parent; during exploitation, only strictly better offspring are admitted. These selection strategies match the exploration/exploitation objectives of the evolutionary process. A tournament-based selection scheme is used in IFEP [6], ALEP [5], and CEP [2]: for each parent or offspring, $q$ opponents are chosen uniformly at random from all the parents and offspring for pairwise fitness comparison. However, the value of $q$ affects the population diversity. A large value of $q$ corresponds to high selection pressure, so the probability of the fittest individual being selected multiple times becomes high, resulting in loss of population diversity. RCMA with XHC [25] admits only better offspring during both exploration and exploitation, which is more likely to reduce population diversity and lead to premature convergence.

C. RTEP on CEC2005's Benchmark Functions

The proposed RTEP has also been applied to a new set of benchmark functions introduced in the special session on real-parameter optimization at CEC2005 [31]. The new set includes 25 functions of differing complexity. These functions are challenging in the sense that many of them are shifted, rotated, expanded, or combined variants of the classical benchmark functions. Functions $f_{1}-f_{5}$ of the new set are unimodal, while the remaining 20 functions are multimodal. A detailed description of these functions can be found in [31]. The dimension of all 25 functions is set to 30, and the number of FEs is set to 3.0e+05. These settings allow a fair comparison with previous work.

Table 9
TABLE IX COMPARISON BETWEEN RTEP, LSRCMA [28], AND NSDE [26], [27] ON THE FIVE UNIMODAL AND NINE MULTIMODAL FUNCTIONS INTRODUCED AT CEC2005 [31]. ALL RESULTS HAVE BEEN AVERAGED OVER 25 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED IN BOLDFACE. FUNCTION CHARACTERISTICS ARE EXPRESSED BY S: SHIFTED, R: ROTATED, N: NONSEPARABLE. “+” AND “−” INDICATE THAT RTEP(2, 4) IS SIGNIFICANTLY BETTER AND WORSE THAN THE COMPARED ALGORITHM, RESPECTIVELY. “≈” MEANS THAT THE DIFFERENCE IS NOT STATISTICALLY SIGNIFICANT

The mean error values of 25 independent runs for RTEP, RCMA with adaptive local search (LSRCMA) [28], and DE with neighborhood search (NSDE) [26], [27] are presented in Tables IX and X. The results indicate that RTEP achieves performance comparable to, and often better than, the other two algorithms. For the five unimodal functions, RTEP is significantly better than LSRCMA on three functions and is outperformed by LSRCMA on only one. NSDE is significantly better than RTEP on two functions, while it is outperformed by RTEP on three.

Table 10
TABLE X COMPARISON BETWEEN RTEP, LSRCMA [28], AND NSDE [26], [27] ON THE 11 COMPOSITION FUNCTIONS INTRODUCED AT CEC2005 [31]. ALL RESULTS HAVE BEEN AVERAGED OVER 25 INDEPENDENT RUNS. THE BEST RESULTS ARE MARKED WITH BOLDFACE FONTS. “+” AND “−” INDICATE THAT RTEP(2, 4) IS SIGNIFICANTLY BETTER AND WORSE THAN THE COMPARED ALGORITHM, RESPECTIVELY. “≈” MEANS THAT THE DIFFERENCE IS NOT STATISTICALLY SIGNIFICANT

The superiority of RTEP is clearly seen in the results for the basic and expanded multimodal functions $f_{6}-f_{14}$ (Table IX). Our algorithm performs significantly better than the other two algorithms on six of the nine multimodal functions. It is outperformed by them on only two functions and performs similarly on one.

The hybrid composition functions $f_{15}-f_{25}$ are constructed by combining several benchmark functions, with a randomly located global optimum and several randomly located deep local optima. The results presented for these functions in Table X indicate that the performance of all three algorithms is somewhat compromised in comparison with the previous functions. However, RTEP still performs significantly better than LSRCMA and NSDE on six and five composition functions, respectively, while it is outperformed by either algorithm on five composition functions. On balance, RTEP thus outperforms LSRCMA and NSDE on more composition functions than the converse.

There may be two reasons why RTEP is less effective on the hybrid composition functions. First, the fixed strategy used in RTEP, which executes exploration and exploitation operations repeatedly for a fixed number of generations before alternating to the other operation, may not work well for composition functions. An adaptive approach that can dynamically change this number during the course of evolution may be more appropriate than the fixed strategy, particularly for complex search spaces. Second, the directional information used in RTEP is taken from only two neighboring or distant individuals. It could be better if information from more individuals, or the characteristics of the search space around the current points, were used in the mutations. The incorporation of these ideas could be a topic for future study.

SECTION V

CONCLUSION

Mutation-based evolutionary systems were introduced to the scientific community nearly four decades ago. However, most mutation-based algorithms use a single-stage execution model to tackle the conflicting goals of evolution, i.e., exploration and exploitation. These algorithms mostly rely on increasing the exploration capability of the mutation operation, although both exploration and exploitation are necessary during evolution. Improving the capability of one operation at the expense of the other is a crucial decision because the scenario at different stages of evolution is unknown. This paper introduces the RTEP scheme, based on mutation, to reconcile the conflicting goals of evolution in finding a good near-optimum solution for complex problems.

RTEP adopts the repeated, alternated execution of exploration and exploitation operations during evolution, using Gaussian mutation in its two recurring stages to produce offspring. Global exploration and local exploitation are encouraged through mutation involving directional information and appropriate selection strategies. The distance between two dissimilar or similar individuals is used in RTEP as the SD of the Gaussian mutation to explore the search space globally or locally. Extensive experiments have been carried out to evaluate how well RTEP performs in comparison with other evolutionary and non-evolutionary algorithms. In many cases, RTEP outperformed the others by several orders of magnitude.

RTEP involves two user-specified parameters, $K_{1}$ and $K_{2}$, the durations of the exploration and exploitation stages, respectively. It is demonstrated (Tables II and III) that any small value works sufficiently well for $K_{1}$ and $K_{2}$ on all the benchmark functions, whereas results start to deteriorate as $K_{1}$ and $K_{2}$ grow (Tables III and IV), because the recurring nature of RTEP diminishes with longer stages. The key ingredient for good results is therefore short stage lengths with frequent alternation; even $(K_{1}, K_{2}) = (1, 1)$ provides satisfactory results (Table II). Since real-world problems vary widely, it would be inappropriate for us to suggest one optimal choice of $(K_{1}, K_{2})$. Instead, we suggest choosing small values for $K_{1}$ and $K_{2}$ when the user lacks sufficient prior knowledge about the problem characteristics.
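As a usage illustration of the sketch above, small and frequently alternating stage lengths such as $(K_{1}, K_{2}) = (1, 1)$ or $(2, 4)$ might be tried first; the call below reuses the illustrative `rtep_sketch` and `sphere` names from the earlier sketch.

```python
# Small stage lengths with frequent alternation, per the guidance above.
x_best, f_best = rtep_sketch(sphere, dim=10, k1=1, k2=1,
                             max_gens=2000, seed=42)
print(f"best fitness found: {f_best:.3e}")
```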

This paper suggests several future research directions. First, a scheme could be devised to make the lengths of the exploration and exploitation stages, i.e., $K_{1}$ and $K_{2}$, self-adaptive. Second, since the RTEP framework is generic, it could be extended to many other existing algorithms: every EA maintains a population of potential solutions, so it can readily introduce the participation of dissimilar and similar individuals in its search operations and define "explorative" and "exploitative" versions of its variation operators. Third, RTEP has so far been applied mainly to continuous parameter optimization; it would be interesting to study how well it performs on other problems. Because RTEP showed an excellent capacity to locate global optima, another interesting idea is to hybridize RTEP with existing algorithms. Moreover, RTEP could be applied to a problem partially solved by another algorithm, where the global minimum is still unknown, to analyze whether RTEP improves the rate of convergence and the final solution quality.
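As one hypothetical realization of the first direction above, a stage length could be adjusted from the recent success rate of its own mutations. The rule and all constants below are our assumptions, not a method from this paper.

```python
def adapt_stage_length(k, successes, trials, target_rate=0.2,
                       k_min=1, k_max=8):
    """Hypothetical self-adaptation rule: lengthen a stage whose recent
    mutations succeed more often than a target rate, shorten it otherwise.
    The target rate and bounds are illustrative assumptions."""
    rate = successes / max(trials, 1)
    if rate > target_rate and k < k_max:
        return k + 1
    if rate < target_rate and k > k_min:
        return k - 1
    return k

# Example: after a stage in which 12 of 40 mutations improved their
# parents (rate 0.3 > 0.2), the stage length grows from 4 to 5.
print(adapt_stage_length(4, successes=12, trials=40))  # -> 5
```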

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions, which helped to improve several aspects of this paper greatly.

Footnotes

This paper was recommended by Associate Editor J. Wang.

M. S. Alam and M. M. Islam are with the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh.

X. Yao is with the Nature Inspired Computation and Applications Laboratory, Department of Computer Science, University of Science and Technology of China, Hefei 230027, China and also with the Center of Excellence for Research in Computational Intelligence and Applications, School of Computer Science, The University of Birmingham, B15 2TT Birmingham, U.K.

K. Murase is with the Department of Human and Artificial Intelligence Systems, University of Fukui, Fukui 910-8507, Japan.


Authors

Mohammad Shafiul Alam

Mohammad Shafiul Alam received the B.Sc. and M.Sc. degrees in computer science and engineering from the Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2004 and 2007, respectively, where he is currently working toward the Doctoral degree.

He is also with the Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka. His research interests include both continuous and discrete optimization with evolutionary algorithms and swarm intelligence.

Md. Monirul Islam

Md. Monirul Islam received the B.E. degree from the Khulna University of Engineering and Technology (KUET), Khulna, Bangladesh, in 1989, the M.E. degree from the Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh, in 1996, and the Ph.D. degree from the University of Fukui, Fukui, Japan, in 2002.

He was a Lecturer and Assistant Professor with KUET from 1989 to 2002. He moved to BUET as an Assistant Professor of computer science and engineering (CSE) in 2003, where he is currently a Professor and the Head of the Department of CSE. He has also worked as a Visiting Associate Professor, supported by the Japan Society for the Promotion of Science, at the University of Fukui. He has more than 80 refereed publications. His major research interests include evolutionary robotics, evolutionary computation, neural networks, machine learning, pattern recognition, and data mining.

Dr. Islam won the First Prize in The Best Paper Award Competition of the Joint Third International Conference on Soft Computing and Intelligent Systems and the Seventh International Symposium on Advanced Intelligent Systems.

Xin Yao

Xin Yao (M'91–SM'96–F'03) received the B.S. degree in computer science from the University of Science and Technology of China (USTC), Hefei, China, in 1982, the M.S. degree in computer science from the North China Institute of Computing Technology, Beijing, China, in 1985, and the Ph.D. degree in computer science from USTC in 1990.

He is currently a Chair (Full Professor) of Computer Science and the Director of the Center of Excellence for Research in Computational Intelligence and Applications, University of Birmingham, Birmingham, U.K. He has been an invited keynote speaker at more than 50 international conferences and has more than 300 refereed research publications in evolutionary computation and neural network ensembles. His research interests include evolutionary computation, neural network ensembles, global optimization, data mining, computational time complexity of evolutionary algorithms, and real-world applications. In addition to basic research, he works closely with many industrial partners on various real-world problems.

Dr. Yao is a Distinguished Lecturer of the IEEE Computational Intelligence Society, a Distinguished Visiting Professor at USTC, and a Visiting Professor at Nanjing University, Xidian University, and Northeast Normal University. He was an Editor-in-Chief of the IEEE Transactions on Evolutionary Computation from 2003 to 2008 and is an Associate Editor or an Editorial Board Member of ten other international journals. He is the Editor of the World Scientific Book Series on Advances in Natural Computation and a Guest Editor of several journal special issues.

Kazuyuki Murase

Kazuyuki Murase received the M.E. degree in electrical engineering from Nagoya University, Nagoya, Japan, in 1978 and the Ph.D. degree in biomedical engineering from Iowa State University, Ames, in 1983.

He has been a Professor with the Department of Human and Artificial Intelligence Systems, Graduate School of Engineering, University of Fukui, Fukui, Japan, since 1999. He joined the Department of Information Science, Toyohashi University of Technology, Toyohashi, Japan, as a Research Associate in 1984, moved to the Department of Information Science, Fukui University, as an Associate Professor in 1988, and became a Professor there in 1992.

Dr. Murase is a member of The Institute of Electronics, Information and Communication Engineers, The Japanese Society for Medical and Biological Engineering, The Japan Neuroscience Society, The International Neural Network Society, and The Society for Neuroscience. He serves on the Board of Directors of the Japan Neural Network Society and as a Councilor of the Physiological Society of Japan and of the Japanese Association for the Study of Pain.
