I. Introduction
In many real-world applications, it is common to face optimization problems with several (often conflicting) objectives [1], [2]. Such problems are termed multiobjective optimization problems (MOPs), and the goal is to find the Pareto-optimal set (PS), which consists of the best possible tradeoffs among the objectives. The image of the PS in objective space is termed the Pareto-optimal front (PF). Over the past 20 years, a number of nature-inspired heuristic algorithms, e.g., multiobjective evolutionary algorithms (MOEAs) [3], [4] and multiobjective particle swarm optimizers (MOPSOs) [5], [6], have been proposed to tackle various kinds of MOPs.

Early MOEAs, such as NSGA-II [3] and SPEA2 [7], usually adopted two criteria for population selection: Pareto dominance is first used to guide the search, and a density estimator is then employed to diversify the set of obtained solutions. These operations are very effective for MOPs with two or three objectives. However, when solving many-objective optimization problems (MaOPs, i.e., MOPs with more than three objectives), the performance of such MOEAs deteriorates severely [8], mainly due to the loss of selection pressure toward the true PF [9]–[11] and the weakened search capabilities of their evolutionary operators [12]. As the number of objectives increases, most of the generated solutions become mutually nondominated.
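The loss of selection pressure can be illustrated numerically: for uniformly random objective vectors, the fraction of points that are mutually nondominated grows rapidly with the number of objectives, so Pareto dominance alone can no longer discriminate among candidates. The following minimal Python sketch (the population size and objective counts are illustrative choices, not taken from the cited works) demonstrates this effect under minimization:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated_fraction(points):
    """Fraction of points not dominated by any other point in the set."""
    nd = [p for p in points
          if not any(dominates(q, p) for q in points if q is not p)]
    return len(nd) / len(points)

random.seed(0)
n = 200  # illustrative population size
for m in (2, 3, 5, 10):  # number of objectives
    pts = [tuple(random.random() for _ in range(m)) for _ in range(n)]
    print(f"{m} objectives: nondominated fraction = "
          f"{nondominated_fraction(pts):.2f}")
```

Running this, the nondominated fraction is small for two objectives but approaches one as the dimensionality grows, which is precisely the regime in which dominance-based selection loses its discriminative power.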