IEEE Transactions on Evolutionary Computation - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #4235, 18 April 2019
Volume 23, Issue 2

Table of contents (p. C1)

IEEE Transactions on Evolutionary Computation publication information (p. C2)

IGD Indicator-Based Evolutionary Algorithm for Many-Objective Optimization Problems (pp. 173-187)

Distributed Cooperative Co-Evolution With Adaptive Computing Resource Allocation for Large Scale Optimization (pp. 188-202)

Offline Data-Driven Evolutionary Optimization Using Selective Surrogate Ensembles (pp. 203-216)

On Scalable Multiobjective Test Problems With Hardly Dominated Boundaries (pp. 217-231)

Low-Dimensional Euclidean Embedding for Visualization of Search Spaces in Combinatorial Optimization (pp. 232-246)
Abstract (excerpt): ... $\mathbb{R}^{k}$ (with $k = 2$ or 3 in practice) while aiming to preserve the spatial relationships of the original space. The LDEE method uses t-distributed stochastic neighbor embedding (t-SNE) to transform solutions from the original search space to the Euclidean one. The paper shows mathematically that the assumptions underlying t-SNE are valid for permutation spaces with the Mallows distribution; the same holds for other metric spaces provided that the distribution of points is normal with respect to the adopted metric. The embedding obtained with t-SNE is further refined to ensure visual separation of individual solutions. The resulting visualization can be used to analyze the behavior of the population in a population-based metaheuristic, the working of the genetic operators, and so on.
Examples of visualizations obtained with this method for the four peaks, firefighter, knapsack, quadratic assignment, and traveling salesman problems are presented in the paper.

Adaptive Sorting-Based Evolutionary Algorithm for Many-Objective Optimization (pp. 247-257)

Evolutionary Bilevel Optimization Based on Covariance Matrix Adaptation (pp. 258-272)
Abstract (excerpt): ... a priori knowledge of the lower-level problem from the upper-level optimizer, which significantly reduces the number of function evaluations. We also propose a refinement-based elite-preservation mechanism to trace the elite and avoid inaccurate solutions. Comparisons with five state-of-the-art algorithms on 22 benchmark problems and two real-world applications test the performance of the proposed approach. The experimental results show that the approach keeps a good tradeoff between solution quality and computational efficiency.

New Sampling Strategies When Searching for Robust Solutions (pp. 273-287)
Abstract (excerpt): ... robust solutions that perform well over the possible future scenarios. This paper focuses on input uncertainty, such as in manufacturing, where the actual manufactured product may differ from the specified design but should still function well. Estimating a solution's expected fitness in such a case is challenging, especially if the fitness function is expensive to evaluate and its analytic form is unknown. One option is to average over a number of scenarios, but this is computationally expensive. The archive sample approximation method reduces the required number of fitness evaluations by reusing previous evaluations stored in an archive. The main challenge in applying this method lies in determining the locations of the additional samples drawn in each generation to enrich the information in the archive and reduce the estimation error.
In this paper, we use the Wasserstein distance metric to approximate the possible benefit of a potential sample location on the estimation error, and we propose new sampling strategies based on this metric. Contrary to previous studies, we consider a sample's contribution for the entire population rather than inspecting each individual separately. This also allows us to dynamically adjust the number of samples collected in each generation. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches.

Decomposition-Based Evolutionary Multiobjective Optimization to Self-Paced Learning (pp. 288-302)

Two-Archive Evolutionary Algorithm for Constrained Multiobjective Optimization (pp. 303-315)

Robust Multiobjective Optimization via Evolutionary Algorithms (pp. 316-330)

A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization (pp. 331-345)

$I_{\mathrm{SDE}}^{+}$ - An Indicator for Multi- and Many-Objective Optimization (pp. 346-352)
Abstract (excerpt): ... ($I_{\mathrm{SDE}}^{+}$) is a combination of the sum of objectives and shift-based density estimation, benefiting from their ability to promote convergence and diversity, respectively.
An evolutionary multiobjective optimization framework based on the proposed indicator is shown to perform comparably to, or better than, the state of the art on a variety of scalable benchmark problems.

Large Scale Black-Box Optimization by Limited-Memory Matrix Adaptation (pp. 353-358)
Abstract (excerpt): ... $\mathcal{O}(n^{2})$ time and storage complexity to $\mathcal{O}(mn)$ with $m \ll n$, such as $m \in \mathcal{O}(1)$ or $m \in \mathcal{O}(\log(n))$, we present the limited-memory MA-ES for efficient zeroth-order large-scale optimization. The algorithm demonstrates state-of-the-art performance on a set of established large-scale benchmarks.

Introducing IEEE Collabratec (p. 359)

Together, we are advancing technology (p. 360)

IEEE Transactions on Evolutionary Computation Society Information (p. C3)

IEEE Transactions on Evolutionary Computation information for authors (p. C4)
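The LDEE abstract above notes that t-SNE's assumptions hold for permutation spaces with the Mallows distribution. The Mallows model is conventionally defined over the Kendall tau distance, so a pairwise Kendall-tau matrix is the natural input for a precomputed-metric t-SNE run. A minimal sketch of that distance matrix step, assuming Kendall tau as the metric (the helper names are illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of discordant pairs between permutations p and q
    (both given as sequences of the integers 0..n-1)."""
    # Position of each item in q, so relative orders can be compared.
    pos_in_q = {item: i for i, item in enumerate(q)}
    discordant = 0
    for i, j in combinations(range(len(p)), 2):
        # p places p[i] before p[j]; the pair is discordant if q disagrees.
        if pos_in_q[p[i]] > pos_in_q[p[j]]:
            discordant += 1
    return discordant

def pairwise_distance_matrix(perms):
    """Symmetric matrix of Kendall-tau distances, usable as the
    'precomputed' metric input of a t-SNE implementation."""
    n = len(perms)
    d = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            d[a][b] = d[b][a] = kendall_tau_distance(perms[a], perms[b])
    return d

# Identity vs. full reversal of 4 items: all 6 pairs are discordant.
print(kendall_tau_distance((0, 1, 2, 3), (3, 2, 1, 0)))  # -> 6
```

Such a matrix could then be fed to a t-SNE implementation that accepts precomputed distances (e.g. scikit-learn's `TSNE(metric="precomputed")`); the paper's extra refinement step for visual separation is not reproduced here.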
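For the sampling-strategies abstract above, it may help to recall that on the real line the Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of the sorted samples. This toy sketch shows only the metric itself; how the paper maps it to the expected reduction in estimation error for a candidate sample location is not reproduced here:

```python
def wasserstein_1d(xs, ys):
    """Wasserstein-1 distance between two equal-size empirical
    distributions on the real line: the optimal transport plan simply
    matches the i-th smallest x with the i-th smallest y."""
    assert len(xs) == len(ys), "equal sample sizes assumed in this sketch"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting every sample by a constant c moves the distribution by exactly c.
print(wasserstein_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # -> 0.5
```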
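The $I_{\mathrm{SDE}}^{+}$ entry combines a sum-of-objectives term with shift-based density estimation (SDE). Under the standard SDE definition from the literature, when estimating the crowding of an individual p, every other individual q is "shifted": in any objective where q is better than p (smaller, for minimization), q's value is replaced by p's, so only q's worse objectives contribute. A minimal sketch under that definition (function names are ours, not the paper's):

```python
import math

def shifted_distance(p, q):
    """Euclidean distance from p to the shifted image of q: objectives
    where q beats p (minimization) are moved up to p's value."""
    return math.sqrt(sum((max(qj, pj) - pj) ** 2 for pj, qj in zip(p, q)))

def sde_density(p, population):
    """Density of p: distance to its nearest shifted neighbor.
    Smaller values mean p sits in a more crowded region."""
    return min(shifted_distance(p, q) for q in population if q is not p)

# q is better in objective 1 (shifted away), worse by 2 in objective 2.
print(shifted_distance((1.0, 1.0), (0.0, 3.0)))  # -> 2.0
```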
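The limited-memory MA-ES abstract reduces $\mathcal{O}(n^{2})$ storage to $\mathcal{O}(mn)$ by keeping only m direction vectors instead of a full transformation matrix. The memory trick can be illustrated by sampling through a product of m rank-one transformations applied to a standard normal vector; the single learning rate `c` and the update form below are a simplified illustration, not the published per-vector rates of LM-MA-ES:

```python
import random

def sample_candidate(mean, sigma, directions, c=0.1):
    """Draw one candidate whose search distribution is shaped by the m
    stored direction vectors: d <- (1 - c) * d + c * v * (v . d),
    applied once per vector v. State is m*n numbers, never an n*n matrix."""
    d = [random.gauss(0.0, 1.0) for _ in mean]
    for v in directions:  # m rank-one transformations, O(mn) total work
        dot = sum(vi * di for vi, di in zip(v, d))
        d = [(1.0 - c) * di + c * dot * vi for vi, di in zip(v, d)]
    return [mi + sigma * di for mi, di in zip(mean, d)]

# n = 1000 with m = 2 directions: only 2 * 1000 numbers of shape state.
n, m = 1000, 2
dirs = [[1.0 if i == k else 0.0 for i in range(n)] for k in range(m)]
x = sample_candidate([0.0] * n, 0.5, dirs)
print(len(x))  # -> 1000
```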