Welcome to CEC'2021 Competition on Evolutionary Transfer Multiobjective Optimization
Source: Liu Songbai, City University of Hong Kong
2021-04-27

Supported by CEC2021 Special Session on Evolutionary Transfer Learning and Transfer Optimization

IEEE Congress on Evolutionary Computation (CEC) 2021

28 June – 1 July 2021, Kraków, Poland

Overview and Aim:

Evolutionary algorithms (EAs), characterized by a population-based iterative search engine, have been recognized as an effective tool for solving many complex multiobjective optimization problems (MOPs) [1]-[2], such as community detection, cybersecurity, feature selection in classification, and the compression of deep neural networks. Despite this great success, present EA-based solvers generally start their search from a completely random population, which means that the search starts from scratch when given a new MOP, regardless of how similar it is to MOPs that have already been addressed [3]. However, many real-world MOPs are closely related, so useful experience or knowledge can often be extracted from well-solved related MOPs to effectively guide the search on the new MOP [4]. In this regard, existing EAs have yet to fully exploit the useful knowledge that may exist in similar MOPs. Thus, a significantly under-explored area of evolutionary multiobjective optimization is the study of EA-based methodologies that can evolve along with the MOPs they solve; that is, there is an apparent lack of automated knowledge transfer and reuse across different MOPs with certain similarities [5]. Humans, by contrast, have an inherent ability to transfer knowledge across tasks/problems: what we acquire as knowledge while learning about one problem, we utilize to solve related problems, and the more related the problems, the easier it is to transfer or cross-utilize that knowledge. In general, "experience is the best teacher" for evolutionary search [6].

Inspired by transfer learning, which reuses past experience to solve relevant problems, transfer learning-based methods are now widely used in evolutionary optimization [7], and evolutionary transfer multiobjective optimization (ETMO) has become a new frontier in evolutionary computation research [8]. Generally, when applying transfer learning methods to solve MOPs, three key points should be considered [9]: 1) transferability, i.e., the ability to avoid negative transfer; 2) the transfer components, i.e., identifying which knowledge from the related MOPs is transferable and useful; and 3) the transfer method, i.e., how to reuse the learned knowledge effectively to help optimize the new MOP (see the sketch after this paragraph). Accordingly, ETMO is defined as a paradigm that integrates evolutionary algorithms with knowledge learning and transfer across related domains [10], aiming to improve optimization performance on a variety of problems, such as multi-/many-objective optimization problems, dynamic optimization problems, multi-task optimization problems, large-scale optimization problems, real-world complex optimization problems, etc. The aim of this competition is therefore to investigate new ETMO methods that use knowledge transfer to improve performance in solving different types of MOPs.
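To make these three points concrete, the following minimal sketch (in Java, matching the language of the benchmark code) illustrates one simple transfer method: seeding part of a target task's initial population with non-dominated solutions taken from an already-solved source task. The class `SeedingTransfer` and its parameters are hypothetical illustrations, not part of the competition code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/** Minimal sketch of knowledge transfer via population seeding (hypothetical API). */
public class SeedingTransfer {

    /**
     * Builds an initial population for the target task: a fraction of it is
     * copied from non-dominated solutions of an already-solved source task,
     * and the rest is sampled uniformly at random. All decision variables
     * are assumed to be normalized to [0, 1] in both tasks (a common unified
     * search space).
     */
    public static List<double[]> seedInitialPopulation(
            List<double[]> sourceNonDominated, // knowledge from the solved source MOP
            int populationSize,
            int numVariables,                  // dimensionality of the target task
            double transferRatio,              // e.g. 0.1 => 10% transferred solutions
            Random rng) {

        List<double[]> population = new ArrayList<>(populationSize);

        // 1) Transfer: copy a few randomly chosen source solutions, truncating
        //    or padding each decision vector to the target dimensionality.
        int numTransferred = Math.min((int) (transferRatio * populationSize),
                                      sourceNonDominated.size());
        List<double[]> shuffled = new ArrayList<>(sourceNonDominated);
        Collections.shuffle(shuffled, rng);
        for (int i = 0; i < numTransferred; i++) {
            double[] src = shuffled.get(i);
            double[] x = new double[numVariables];
            for (int j = 0; j < numVariables; j++) {
                x[j] = j < src.length ? src[j] : rng.nextDouble();
            }
            population.add(x);
        }

        // 2) Exploration: fill the rest of the population at random, which also
        //    limits the damage if the transfer turns out to be negative.
        while (population.size() < populationSize) {
            double[] x = new double[numVariables];
            for (int j = 0; j < numVariables; j++) {
                x[j] = rng.nextDouble();
            }
            population.add(x);
        }
        return population;
    }
}
```

Keeping most of the population random hedges against negative transfer (point 1), selecting non-dominated source solutions is one answer to point 2, and population seeding is only one of many possible answers to point 3.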

Test Benchmark Problems:

To promote research on evolutionary transfer multiobjective optimization (ETMO), benchmark problems are of great importance to ETMO algorithm analysis: they help designers and practitioners to better understand the merits and demerits of ETMO algorithms. However, although ETMO can cover many areas, few suitable benchmark problems exist. Thus, a new test suite (called ETMOF) has been designed for this competition, covering diverse types and properties in multi-task settings, such as various formulation models, various PS geometries and PF shapes, large-scale decision variables, dynamically changing environments, etc. Specifically, the proposed test suite contains 40 benchmark problems, which can be classified into the following five types:

(1) Evolutionary Transfer Multiobjective Optimization Problems: ETMOF1 to ETMOF8

(2) Evolutionary Transfer Many-objective Optimization Problems: ETMOF9 to ETMOF16

(3) Evolutionary Transfer Large-scale Multiobjective Optimization Problems: ETMOF17 to ETMOF24

(4) Evolutionary Transfer Many-task Optimization Problems: ETMOF25 to ETMOF32

(5) Evolutionary Transfer Dynamic Multiobjective Optimization Problems: ETMOF33 to ETMOF40

The detailed definitions of these 40 benchmark problems can be found in Benchmark Problems for CEC2021 Competition on Evolutionary Transfer Multiobjective Optimization, and all benchmark functions have been implemented in Java based on the code provided by [11]; the implementation can be downloaded here.

Competition Protocol:

Experimental Settings

Population size: 100 (or a similar number) for a single MOP task.

Stopping criterion: a maximum of 100,000 function evaluations for a single task of the static MOPs (ETMOF1 to ETMOF32), and a maximum of 100 × (31 × τt) function evaluations for a single task of the dynamic MOPs (ETMOF33 to ETMOF40), where τt is the frequency of change, i.e., the number of generations between two consecutive environmental changes; the environment changes 30 times in each run of a single dynamic task, giving 31 segments of τt generations each. For example, with a population size of 100 and τt = 10, the budget is 100 × 310 = 31,000 function evaluations.

Number of runs: each algorithm is required to be executed for 21 independent runs, and a new random seed should be adopted in each run. It is prohibited to execute more than 21 runs and then deliberately pick the 21 best results.
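As a minimal sketch of how these settings might be wired together, assuming a hypothetical `runOnce` placeholder for your own solver:

```java
import java.util.Random;

/** Sketch of the competition protocol: 21 independent runs, each with a fresh seed. */
public class CompetitionProtocol {

    static final int POPULATION_SIZE = 100;     // per single MOP task
    static final int MAX_EVALUATIONS = 100_000; // per static task (ETMOF1 to ETMOF32)
    static final int NUM_RUNS = 21;

    public static void main(String[] args) {
        double[] igdValues = new double[NUM_RUNS];
        for (int run = 0; run < NUM_RUNS; run++) {
            // A new random seed for every run; re-running and cherry-picking is prohibited.
            long seed = System.nanoTime() + run;
            igdValues[run] = runOnce(new Random(seed), POPULATION_SIZE, MAX_EVALUATIONS);
        }
        // Report the best, worst, mean, median, and std. of igdValues (see Table 1).
    }

    /** Hypothetical placeholder for one complete run returning the final IGD. */
    static double runOnce(Random rng, int populationSize, int maxEvaluations) {
        return rng.nextDouble(); // replace with your ETMO solver
    }
}
```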

Performance Metrics

The inverted generational distance (IGD) [12] is used to evaluate the performance of an optimizer on each task of the static MOPs (ETMOF1 to ETMOF32); a smaller IGD value indicates better performance of the corresponding optimizer.
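For reference, the standard IGD definition can be implemented as below. This is a generic sketch, not the competition's official scoring code, and it assumes that the reference front and the obtained objective vectors have the same dimensionality.

```java
/** Inverted Generational Distance: the mean Euclidean distance from each
 *  reference point on the true Pareto front to its nearest obtained solution. */
public class IGD {
    public static double igd(double[][] referenceFront, double[][] obtained) {
        double sum = 0.0;
        for (double[] ref : referenceFront) {
            double nearest = Double.POSITIVE_INFINITY;
            for (double[] sol : obtained) {
                double d = 0.0;
                for (int k = 0; k < ref.length; k++) {
                    double diff = ref[k] - sol[k];
                    d += diff * diff;
                }
                nearest = Math.min(nearest, Math.sqrt(d));
            }
            sum += nearest; // distance from this reference point to the obtained set
        }
        return sum / referenceFront.length;
    }
}
```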

Moreover, the mean IGD (MIGD) [13] is used to evaluate the performance of an optimizer on each task of the dynamic MOPs (ETMOF33 to ETMOF40); it averages the IGD values measured at each time step (i.e., after each environmental change) over a run.

Finally, the mean standard score (MSS) [11] of the obtained IGD or MIGD values is used to rank the ETMO-based optimizers. MSS serves as a comprehensive criterion, and a smaller MSS value indicates better overall performance of an ETMO optimizer on a benchmark function (see the sketch below).
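A sketch of the MSS computation, assuming the z-score definition from [11]: for each task, the IGD (or MIGD) values of all compared algorithms are standardized, and an algorithm's MSS is the average of its standardized scores over all tasks of the benchmark function.

```java
/** Mean Standard Score sketch, assuming the z-score definition from [11].
 *  igd[a][t] holds the (mean) IGD of algorithm a on task t of one benchmark. */
public class MSS {
    public static double[] meanStandardScores(double[][] igd) {
        int numAlgorithms = igd.length;
        int numTasks = igd[0].length;
        double[] mss = new double[numAlgorithms];
        for (int t = 0; t < numTasks; t++) {
            // Mean and standard deviation of the IGD values on task t.
            double mean = 0.0;
            for (int a = 0; a < numAlgorithms; a++) mean += igd[a][t];
            mean /= numAlgorithms;
            double var = 0.0;
            for (int a = 0; a < numAlgorithms; a++) {
                double d = igd[a][t] - mean;
                var += d * d;
            }
            double std = Math.sqrt(var / numAlgorithms); // assumes not all values tie
            // Accumulate each algorithm's z-score on this task.
            for (int a = 0; a < numAlgorithms; a++) {
                mss[a] += (igd[a][t] - mean) / std;
            }
        }
        for (int a = 0; a < numAlgorithms; a++) mss[a] /= numTasks;
        return mss; // smaller is better, matching IGD's orientation
    }
}
```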

Submission Guidelines:

Please format your competition results in tables like Table 1. Other presentation formats are also acceptable, but please make sure that the submitted results are easy to read and that all the statistics shown in Table 1 are clearly recorded, including the best, worst, mean, median, and standard deviation of the IGD/MIGD values for each task.

As summarized above, the 40 benchmark problems proposed in this report fall into five categories, so it is recommended to report the results in five such tables, one per category, following the sample layout of Table 1.

According to the no-free-lunch theorem, no single optimizer can excel on all kinds of problems. Therefore, in this competition it is not mandatory to solve all five types of benchmark problems with the same strategy; you may design different strategies for different kinds of problems, with the aim of showing how knowledge can be effectively transferred to promote optimization across different problem types, which may also reveal some of the conditions that lead to negative transfer. Finally, we will produce a comprehensive ranking of all contestants' algorithms.

For the new ETMO algorithm that you design, please submit the corresponding source code, which should allow the results you submit to be reproduced. In addition, it would be helpful if you also submit a short document introducing the main components and parameter settings of your algorithm.

If you have any queries, please contact me (sbliu2-c@my.cityu.edu.hk). We will follow the extended deadlines determined by CEC 2021; if you require extra time, you may also contact me by email. If you have suggestions for improving the technical report, or if you find a potential bug in the code, please let me know by email so that we can keep you informed about bug fixes and/or deadline extensions.

Table 1: IGD/MIGD Results Obtained by Your Algorithm on ETMOF1 to ETMOF8

Problem    Task No.    IGD/MIGD
                       Best    Worst    Mean    Median    Std.
ETMOF1     T1
           ...
           Tk
...
ETMOF8     T1
           ...
           Tk


Important Dates:

For participants planning to submit a paper to the 2021 IEEE Congress on Evolutionary Computation:

Paper submission: 31 Jan 2021

Notification to authors: 22 Mar 2021

Final submission: 7 Apr 2021

Note: You are encouraged to submit your paper to the Special Session on Evolutionary Transfer Learning and Transfer Optimization.

For competition-only participants: Results submission deadline: 10 May 2021

Note: Please send your results directly to Mr. Songbai Liu (sbliu2-c@my.cityu.edu.hk)

Other related Links:

1. Webpage for Evolutionary Transfer Optimization (ETO)

2. Webpage for Evolutionary Multitask Optimization (EMO)


Competition Organizers: 

Kay Chen Tan (Fellow, IEEE)

Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR

E-mail: kctan@polyu.edu.hk 

Songbai Liu

Department of Computer Science, City University of Hong Kong, Hong Kong SAR

E-mail: sbliu2-c@my.cityu.edu.hk

Qiuzhen Lin

College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China

E-mail: qiuzhlin@szu.edu.cn

Qing Li

Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR

E-mail: csqli@comp.polyu.edu.hk


References:

[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, 2002.

[2] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, 2007.

[3] A. Gupta, Y.-S. Ong, L. Feng, and K. C. Tan, "Multiobjective multifactorial optimization in evolutionary multitasking," IEEE Trans. Cybern., vol. 47, no. 7, pp. 1652–1665, 2017.

[4] M. Gong, Z. Tang, H. Li, and J. Zhang, "Evolutionary multitasking with dynamic resource allocating strategy," IEEE Trans. Evol. Comput., vol. 23, no. 5, pp. 858–869, 2019.

[5] J. Lin, H.-L. Liu, K. C. Tan, and F. Gu, "An effective knowledge transfer approach for multiobjective multitasking optimization," IEEE Trans. Cybern., in press, 2020.

[6] A. Gupta, Y.-S. Ong, and L. Feng, "Insights on transfer optimization: Because experience is the best teacher," IEEE Trans. Emerg. Topics Comput. Intell., vol. 2, no. 1, pp. 51–64, 2018.

[7] M. Jiang, Z. Huang, L. Qiu, W. Huang, and G. G. Yen, "Transfer learning-based dynamic multiobjective optimization algorithms," IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 501–514, 2018.

[8] K. C. Tan, L. Feng, and M. Jiang, "Evolutionary transfer optimization – A new frontier in evolutionary computation research," IEEE Comput. Intell. Mag., in press, 2020.

[9] J. Zhang, W. Zhou, X. Chen, W. Yao, and L. Cao, "Multisource selective transfer framework in multiobjective optimization problems," IEEE Trans. Evol. Comput., vol. 24, no. 3, pp. 424–438, 2020.

[10] K. C. Tan, "Evolutionary transfer optimization," report presented at the IEEE World Congress on Computational Intelligence (WCCI), 2020.

[11] Y. Yuan et al., "Evolutionary multitasking for multiobjective continuous optimization: Benchmark problems, performance metrics and baseline results," Technical Report, arXiv:1706.02766v1, 2017.

[12] P. A. N. Bosman and D. Thierens, "The balance between proximity and diversity in multiobjective evolutionary algorithms," IEEE Trans. Evol. Comput., vol. 7, no. 2, pp. 174–188, 2003.

[13] S. Jiang and S. Yang, "Evolutionary dynamic multi-objective optimization: Benchmark and algorithm comparisons," IEEE Trans. Cybern., vol. 47, no. 1, pp. 198–210, 2017.
