
An Efficient Recursive Differential Grouping for Large-Scale Continuous Problems



Abstract:

Cooperative co-evolution (CC) is an efficient and practical evolutionary framework for solving large-scale optimization problems. The performance of CC is affected by the variable decomposition: an accurate decomposition helps CC solve an optimization problem more effectively, but variable grouping methods usually spend considerable computational resources to obtain it. To reduce the computational cost of the decomposition, we propose an efficient recursive differential grouping (ERDG) method in this article. By exploiting the historical information gathered while examining the interrelationships between the variables of an optimization problem, ERDG is able to skip some of these examinations and therefore requires much less computation than other recursive differential grouping methods. Our experimental results and analysis suggest that ERDG is a competitive method for decomposing large-scale continuous problems and improves the performance of CC on large-scale optimization problems.
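As background for the abstract, the sketch below shows the finite-difference interaction test that differential-grouping methods are generally built on: two variables are taken to interact if the change in the objective caused by perturbing one of them depends on the value of the other. This is only a minimal Python illustration of that general idea; the `interact` helper, the sample points, the threshold, and the toy objective are assumptions for illustration, and ERDG's actual recursive test and its reuse of historical information are described in the paper itself.

```python
import numpy as np

def interact(f, lb, ub, i, j, eps=1e-3):
    """Pairwise differential-grouping style test (illustrative sketch only):
    compare the change in f caused by perturbing x_i at two different
    settings of x_j; unequal changes indicate an interaction."""
    x1 = np.array(lb, dtype=float)           # base point at the lower bounds
    x2 = x1.copy(); x2[i] = ub[i]            # perturb x_i
    d1 = f(x2) - f(x1)                       # effect of x_i with x_j at lb[j]

    x3 = x1.copy(); x3[j] = (lb[j] + ub[j]) / 2.0
    x4 = x3.copy(); x4[i] = ub[i]
    d2 = f(x4) - f(x3)                       # effect of x_i with x_j moved

    return abs(d1 - d2) > eps                # unequal changes -> interaction


# Toy usage: x0 and x1 interact through the product term, x2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
lb, ub = [0, 0, 0], [1, 1, 1]
print(interact(f, lb, ub, 0, 1))  # True
print(interact(f, lb, ub, 0, 2))  # False
```

Recursive differential grouping methods apply a test of this kind between a variable and a whole set of variables rather than to every pair, which is where much of their efficiency comes from.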
Published in: IEEE Transactions on Evolutionary Computation (Volume: 25, Issue: 1, February 2021)
Page(s): 159 - 171
Date of Publication: 15 July 2020


I. Introduction

Large-scale optimization problems involve at least thousands of decision variables [1], [2]. It is challenging for evolutionary algorithms (EAs) [3] to solve such large-scale optimization problems [4]–[6]. Cooperative co-evolution (CC) [7] adopts the divide-and-conquer strategy [8]–[10] to solve optimization problems: it divides the variables into several subcomponents and optimizes the subcomponents separately. This divide-and-conquer strategy can reduce the difficulty of solving large-scale optimization problems [11]–[15].
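The sketch below gives a rough picture of this workflow: the variables are split into groups, and each group is optimized in turn while the rest of the solution is held fixed in a shared context vector. It is a minimal, self-contained Python sketch under simplifying assumptions (a hypothetical fixed grouping, a toy sphere objective, and plain random search standing in for the subcomponent optimizer), not the decomposition or optimizers evaluated in the paper.

```python
import numpy as np

def cc_optimize(f, dim, groups, n_cycles=50, pop=20, sigma=0.3, rng=None):
    """Minimal cooperative co-evolution loop (illustrative sketch only):
    each group of variables is optimized in turn with a simple random
    search while the remaining variables are frozen in a shared context
    vector. A real CC framework plugs in a full EA per subcomponent."""
    rng = rng or np.random.default_rng(0)
    context = rng.uniform(-1.0, 1.0, dim)    # current best full solution
    best = f(context)
    for _ in range(n_cycles):
        for g in groups:                     # optimize one subcomponent at a time
            for _ in range(pop):
                cand = context.copy()
                cand[g] += rng.normal(0.0, sigma, len(g))
                val = f(cand)
                if val < best:               # keep improvements in the context
                    best, context = val, cand
    return context, best


# Toy usage on a separable sphere function with two subcomponents
# (hypothetical grouping; in practice a decomposition method such as
# ERDG would supply the groups).
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = cc_optimize(sphere, dim=6, groups=[[0, 1, 2], [3, 4, 5]])
print(f_best)
```

How well such a loop performs depends heavily on whether interacting variables end up in the same group, which is why accurate variable decomposition matters for CC.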

References
[1] P. Benner, “Solving large-scale control problems,” IEEE Control Syst. Mag., vol. 24, no. 1, pp. 44–59, Feb. 2004.
[2] S. Shan and G. G. Wang, “Survey of modeling and optimization strategies to solve high-dimensional design problems with computationally-expensive black-box functions,” Struct. Multidiscipl. Optim., vol. 41, no. 2, pp. 219–241, Mar. 2010.
[3] Y. Liu, X. Yao, Q. Zhao, and T. Higuchi, “Scaling up fast evolutionary programming with cooperative coevolution,” in Proc. IEEE Congr. Evol. Comput., 2001, pp. 1101–1108.
[4] M. N. Omidvar, X. Li, and K. Tang, “Designing benchmark problems for large-scale continuous optimization,” Inf. Sci., vol. 316, pp. 419–436, Sep. 2015.
[5] T. Weise, R. Chiong, and K. Tang, “Evolutionary optimization: Pitfalls and booby traps,” J. Comput. Sci. Technol., vol. 27, no. 5, pp. 907–936, Sep. 2012.
[6] A. LaTorre, S. Muelas, and J.-M. Peña, “A comprehensive comparison of large scale global optimizers,” Inf. Sci., vol. 316, pp. 517–549, Sep. 2015.
[7] M. A. Potter and K. A. De Jong, “A cooperative coevolutionary approach to function optimization,” in Parallel Problem Solving From Nature. Heidelberg, Germany: Springer, 1994, pp. 249–257.
[8] P. Yang, K. Tang, and X. Yao, “A parallel divide-and-conquer-based evolutionary algorithm for large-scale optimization,” IEEE Access, vol. 7, pp. 163105–163118, 2019.
[9] J. Blanchard, C. Beauthier, and T. Carletti, “A surrogate-assisted cooperative co-evolutionary algorithm using recursive differential grouping as decomposition strategy,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2019, pp. 689–696.
[10] X. Peng, K. Liu, and Y. Jin, “A dynamic optimization approach to the design of cooperative co-evolutionary algorithms,” Knowl. Based Syst., vol. 109, pp. 174–186, Oct. 2016.
[11] W. Chen and K. Tang, “Impact of problem decomposition on cooperative coevolution,” in Proc. IEEE Congr. Evol. Comput., Jun. 2013, pp. 733–740.
[12] H. Liu, Y. Wang, X. Liu, and S. Guan, “Empirical study of effect of grouping strategies for large scale optimization,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2016, pp. 3433–3439.
[13] X. Peng and Y. Wu, “Large-scale cooperative co-evolution using Niching-based multi-modal optimization and adaptive fast clustering,” Swarm Evol. Comput., vol. 35, pp. 65–77, Aug. 2017.
[14] M. Yang, A. Zhou, C. Li, J. Guan, and X. Yan, “CCFR2: A more efficient cooperative co-evolutionary framework for large-scale global optimization,” Inf. Sci., vol. 512, pp. 64–79, Feb. 2020.
[15] D. Yazdani, M. N. Omidvar, J. Branke, T. T. Nguyen, and X. Yao, “Scaling up dynamic optimization problems: A divide-and-conquer approach,” IEEE Trans. Evol. Comput., vol. 24, no. 1, pp. 1–15, Feb. 2020.
[16] Y. Wu, X. Peng, and D. Xu, “Identifying variables interaction for black-box continuous optimization with mutual information of multiple local optima,” in Proc. IEEE Symp. Series Comput. Intell. (SSCI), 2019, pp. 2683–2689.
[17] M. N. Omidvar, X. Li, Y. Mei, and X. Yao, “Cooperative co-evolution with differential grouping for large scale optimization,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 378–393, Jun. 2014.
[18] M. N. Omidvar, M. Yang, Y. Mei, X. Li, and X. Yao, “DG2: A faster and more accurate differential grouping for large-scale black-box optimization,” IEEE Trans. Evol. Comput., vol. 21, no. 6, pp. 929–942, Dec. 2017.
[19] Y. Sun, M. Kirley, and S. K. Halgamuge, “A recursive decomposition method for large scale continuous optimization,” IEEE Trans. Evol. Comput., vol. 22, no. 5, pp. 647–661, Oct. 2018.
[20] Y. Sun, X. Li, A. Ernst, and M. N. Omidvar, “Decomposition for large-scale optimization problems with overlapping components,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2019, pp. 326–333.
[21] Y. Sun, M. N. Omidvar, M. Kirley, and X. Li, “Adaptive threshold parameter estimation with recursive differential grouping for problem decomposition,” in Proc. ACM Genet. Evol. Comput. Conf. (GECCO), 2018, pp. 889–896.
[22] X. Hu, F. He, W. Chen, and J. Zhang, “Cooperation coevolution with fast interdependency identification for large scale optimization,” Inf. Sci., vol. 381, pp. 142–160, Mar. 2017.
[23] Y. Mei, M. Omidvar, X. Li, and X. Yao, “A competitive divide-and-conquer algorithm for unconstrained large-scale black-box optimization,” ACM Trans. Math. Softw., vol. 42, no. 2, pp. 1–24, 2016.
[24] K. Tang, X. Li, P. N. Suganthan, Z. Yang, and T. Weise. (2010). Benchmark Functions for the CEC’2010 Special Session and Competition on Large-Scale Global Optimization. [Online]. Available: http://goanna.cs.rmit.edu.au/xiaodong/publications/lsgo-cec10.pdf
[25] X. Li, K. Tang, M. N. Omidvar, Z. Yang, and K. Qin. (2013). Benchmark Functions for the CEC’2013 Special Session and Competition on Large Scale Global Optimization. [Online]. Available: http://goanna.cs.rmit.edu.au/xiaodong/cec13-lsgo/competition/cec2013-lsgo-benchmark-tech-report.pdf
[26] M. Yang, “Efficient resource allocation in cooperative co-evolution for large-scale global optimization,” IEEE Trans. Evol. Comput., vol. 21, no. 4, pp. 493–505, Aug. 2017.
[27] N. Hansen. (2005). The CMA Evolution Strategy: A Tutorial. Accessed: 2016. [Online]. Available: https://hal.inria.fr/hal-01297037
[28] X. Peng, Y. Jin, and H. Wang, “Multimodal optimization enhanced cooperative coevolution for large-scale optimization,” IEEE Trans. Cybern., vol. 49, no. 9, pp. 3507–3520, Sep. 2019.
[29] R. Cheng and Y. Jin, “A competitive swarm optimizer for large scale optimization,” IEEE Trans. Cybern., vol. 45, no. 2, pp. 191–204, Feb. 2015.
[30] D. Molina, A. LaTorre, and F. Herrera, “SHADE with iterative local search for large-scale global optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC), Jul. 2018, pp. 1–8.
