Welcome to CEC2023 Competition on Dynamic Constrained Multiobjective Optimization
Source: Guoyu Chen, China University of Mining and Technology (Xuzhou)

2023-07-04

IEEE Congress on Evolutionary Computation (CEC) 2023

July 1-5, 2023, Chicago, USA 

Results:

Congratulations to the winners, and thank you for your active participation! We also thank SOYOTEC Technologies Corp. for their technical support.

The results of all participants have been verified by the organizers. Each code will be uploaded to this website once its authors authorize the release. Since some of the articles describing these algorithms have not yet been published, the codes are temporarily withheld to protect the authors' unpublished work; they will be released once the corresponding articles are accepted for publication. If you are interested in these algorithms, you can contact the authors directly (Guangyuan Sui: 1173864595@qq.com, Gejie Rang: 21gjrang@stu.edu.cn, Yulong Ye: liangzwh@gmail.com).

Overview and Aim:

In the past decade, dynamic constrained multiobjective optimization has attracted increasing research interest [1][2]. Such problems are widespread in real-world applications, such as scheduling optimization and resource allocation, which involve time-varying objectives and constraints [3]-[5]. In particular, the corresponding dynamic constrained multiobjective optimization problems (DCMOPs) exhibit more complex characteristics and pose greater difficulties than dynamic multiobjective or constrained multiobjective optimization problems [6]-[9]. In solving this kind of problem, traditional multiobjective evolutionary algorithms mainly face three difficulties. First, environmental changes can take the form of various dynamics, posing different levels of difficulty to algorithms; no single change-response strategy can deal with all kinds of dynamics. Second, different types of constraints may appear under dynamic environments, which makes it challenging for any static optimizer to achieve good versatility in handling various constraints. Finally, the response time available after an environmental change is generally tight. Given the above analysis, there is a significant need for new mechanisms for solving DCMOPs. In particular, a set of diverse and unbiased test problems is in great demand to systematically study dynamic constrained multiobjective evolutionary algorithms (DCMOEAs) in the field [10][11].

Test Problems:

To promote research on dynamic constrained multiobjective optimization (DCMO), 10 benchmark functions have been developed, covering diverse characteristics that represent different real-world scenarios, for example, continuity-disconnection, time-dependent PF/PS geometries, dynamic infeasible regions, and small feasible regions. The detailed definitions of these 10 test problems can be found in Benchmark Problems for CEC2023 Competition on Dynamic Constrained Multiobjective Optimization.

Based on this test suite with various characteristics, researchers can better understand the strengths and weaknesses of DCMOEAs, stimulating research on dynamic constrained multiobjective optimization [12][13]. All the benchmark functions have been implemented in MATLAB based on the code provided by [14], and can be downloaded from the following website.

https://github.com/gychen94/DCMO

Competition Protocol:

1) General settings:

Population size: 100.

Number of variables: 10.

Frequency of change (τt): 10 (fast-changing environments), 30 (slow-changing environments).

Severity of change (nt): 5 (severely changing environments), 10 (moderately changing environments).

Number of changes: 60.

Stopping criterion: a maximum of 100 × (60τt + 60) fitness evaluations, where 6,000 fitness evaluations (60 generations of 100 individuals) are given before the first environmental change occurs. For example, τt = 10 yields 66,000 evaluations in total and τt = 30 yields 186,000; a minimal run sketch under these settings follows the list.

Number of independent runs: 20.
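
For orientation, here is a minimal sketch of how one instance might be run under these settings with PlatEMO 3.x [14]. The problem class name DCF1 and the algorithm name MyDCMOEA are hypothetical placeholders; the actual class names, and how τt and nt are passed to a problem, should be checked against the GitHub repository above.

    % Minimal sketch (not an official script): one run under the competition
    % settings, assuming PlatEMO 3.x [14] and the DCF benchmark classes from
    % the repository are on the MATLAB path. DCF1 and MyDCMOEA are assumed names.
    tauT  = 10;                            % frequency of change (fast environments)
    maxFE = 100 * (60 * tauT + 60);        % 66,000 fitness evaluations for tauT = 10
    platemo('algorithm', @MyDCMOEA, ...    % your DCMOEA (hypothetical name)
            'problem',   @DCF1, ...        % benchmark instance (name assumed)
            'N',         100, ...          % population size
            'D',         10, ...           % number of decision variables
            'maxFE',     maxFE);           % stopping criterion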

2) Performance metrics:

The MIGD (the IGD value averaged over all time steps) [15] is used to evaluate the performance of an optimizer on each DCMOP. A smaller MIGD value indicates better performance of the corresponding optimizer.
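
For reference, the following is a minimal MATLAB sketch of MIGD [15]. The input layout is an assumption: POP{t} holds the objective vectors obtained at time t (one row per solution) and REF{t} holds points sampled from the true Pareto front at time t.

    function migd = computeMIGD(POP, REF)
    % MIGD: the mean of the IGD values over all T environments [15].
    % POP{t}: N-by-M objective vectors found at time t (assumed layout).
    % REF{t}: K-by-M points sampled from the true PF at time t (assumed layout).
    T   = numel(REF);
    igd = zeros(T, 1);
    for t = 1:T
        R = REF{t};  P = POP{t};
        dmin = zeros(size(R, 1), 1);
        for k = 1:size(R, 1)
            d = P - R(k, :);                    % implicit expansion (R2016b+)
            dmin(k) = sqrt(min(sum(d.^2, 2)));  % distance to nearest solution
        end
        igd(t) = mean(dmin);                    % IGD in environment t
    end
    migd = mean(igd);                           % average over all environments
    end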

Moreover, the MHV (the hypervolume averaged over all time steps) [16] is used to measure the comprehensive performance of an optimizer on DCMOPs. A larger MHV value indicates better performance of the corresponding optimizer.
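
Likewise, a minimal sketch of MHV for the bi-objective (minimization) case is given below, averaging the hypervolume over all time steps [16]. It sweeps each front sorted by the first objective; the reference point refP is an assumed input, since its choice is fixed by the competition protocol, and dominated points simply contribute nothing.

    function mhv = computeMHV(POP, refP)
    % MHV: the mean hypervolume over all T environments [16], bi-objective case.
    % POP{t}: objective vectors at time t; refP: 1-by-2 reference point (assumed input).
    T  = numel(POP);
    hv = zeros(T, 1);
    for t = 1:T
        P = sortrows(POP{t}, 1);       % sort by f1 ascending
        prevF2 = refP(2);
        for i = 1:size(P, 1)
            w = refP(1) - P(i, 1);     % width of the slab dominated only by point i
            h = prevF2 - P(i, 2);      % height of that slab
            if w > 0 && h > 0          % dominated/out-of-box points add nothing
                hv(t) = hv(t) + w * h;
                prevF2 = P(i, 2);
            end
        end
    end
    mhv = mean(hv);                    % average over all environments
    end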

Submission Guidelines:

To submit your results, please format them as in the following table. In particular, please make sure that the submitted results are highly readable and that all the types of results shown in the table are clearly recorded, including the mean and standard deviation of the MIGD/MHV values for each test instance.

All participants should also submit the corresponding source code, which should allow the reproduction of the submitted results. In addition, it would be helpful if you could submit a document that briefly describes the algorithm and the corresponding parameter settings.

If you have any queries, please contact us (chenguoyumail@163.com). We will follow the deadlines determined by CEC 2023; if you require extra time, you can also contact us by email. If you have suggestions for improving the technical report, or if you find any potential bug in the code, please do not hesitate to tell us by email, so that we can inform you about bug fixes and/or deadline extensions.

Table: MIGD and MHV results obtained by your algorithm on the DCF test suite

Problem   (τt, nt)    MIGD (mean(std.))       MHV (mean(std.))
DCF1      (10, 5)     1.1111E-2(1.1111E-3)    1.1111E-2(1.1111E-3)
          (10, 10)
          (30, 5)
          (30, 10)
...
DCF10     (10, 5)
          (10, 10)
          (30, 5)
          (30, 10)

Ranking calculation rules:

Based on the rankings of the competing algorithms on the above two indicators, a final Score is calculated to evaluate the performance of each algorithm.

The Score is composed of two ranking syntheses (SR), where each rank is obtained from the mean and standard deviation values computed on the given test functions under the two performance indicators. Once the SR values of the different algorithms are obtained, the final scores are calculated from them.

Award:

IEEE CEC 2023 conference certificates and prize money will be awarded to the winners of this competition (1st to 3rd place).

Important Dates:

For participants planning to submit a paper to the 2023 IEEE Congress on Evolutionary Computation:

Paper submission: 27 Jan 2023 (extended from 13 Jan 2023)

Paper reviews: 17 Mar 2023 (extended from 3 Mar 2023)

Paper re-submission: 7 Apr 2023 (extended from 24 Mar 2023)

Paper final notifications: 14 Apr 2023 (extended from 31 Mar 2023)

Note: You are encouraged to submit your paper to the conference at: https://2023.ieee-cec.org/

For participants in the competition only:

Results submission deadline: 30 Apr 2023 (extended from 31 Mar 2023)

Note: Please send your results directly to Mr. Guoyu Chen (chenguoyumail@163.com).

Competition Organizers:

Yinan Guo

School of Mechanical Electronic and Information Engineering, China University of Mining and Technology (Beijing), Beijing, China

E-mail: nanfly@126.com

Guoyu Chen

School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China

E-mail: chenguoyumail@163.com

Caitong Yue

School of Electrical Engineering, Zhengzhou University, Zhengzhou, China

E-mail: zzuyuecaitong@163.com

Jing Liang

School of Electrical Engineering, Zhengzhou University, Zhengzhou, China

E-mail: liangjing@zzu.edu.cn

Yong Wang

School of Automation, Central South University, Changsha, China 

E-mail: ywang@csu.edu.cn

Shengxiang Yang

School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, U.K.

E-mail: syang@dmu.ac.uk

References:

  1. S. Biswas, S. Das, P. N. Suganthan, and C. A. C. Coello, “Evolutionary multiobjective optimization in dynamic environments: A set of novel benchmark functions,” in Proc. 2014 IEEE Congr. Evol. Comput., 2014, pp. 3192–3199.
  2. G. Chen, Y. Guo, M. Huang, D. Gong, and Z. Yu, “A domain adaptation learning strategy for dynamic multiobjective optimization,” Inf. Sci., vol. 606, no. 4, pp. 328–349, 2022.
  3. R. Azzouz, S. Bechikh, and L. Ben Said, “Dynamic Multi-objective Optimization Using Evolutionary Algorithms: A Survey,” in Adaptation, Learning, and Optimization, 2017, pp. 31–70.
  4. S. Jiang and S. Yang, “Evolutionary dynamic multi-objective optimization: Benchmarks and algorithm comparisons,” IEEE Trans. Cybern., vol. 47, no. 1, pp. 198–211, 2017.
  5. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable Test Problems for Evolutionary Multiobjective Optimization,” in Evolutionary Multiobjective Optimization, London: Springer-Verlag, 2005, pp. 105–145.
  6. R. Azzouz, S. Bechikh, and L. Ben Said, “Multi-objective Optimization with Dynamic Constraints and Objectives,” in Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, 2015, pp. 615–622.
  7. J.-J. Ji, Y.-N. Guo, X.-Z. Gao, D.-W. Gong, and Y.-P. Wang, “Q-Learning-Based Hyperheuristic Evolutionary Algorithm for Dynamic Task Allocation of Crowdsensing,” IEEE Trans. Cybern., pp. 1–14, 2021.
  8. Z. Ma and Y. Wang, “Evolutionary Constrained Multiobjective Optimization: Test Suite Construction and Performance Comparisons,” IEEE Trans. Evol. Comput., vol. 23, no. 6, pp. 972–986, 2019.
  9. T. T. Nguyen and X. Yao, “Continuous Dynamic Constrained Optimization—The Challenges,” IEEE Trans. Evol. Comput., vol. 16, no. 6, pp. 769–786, 2012.
  10. T. T. Nguyen and X. Yao, “Benchmarking and solving dynamic constrained problems,” in Proc. 2009 IEEE Congr. Evol. Comput., 2009, pp. 690–697.
  11. R. Azzouz, S. Bechikh, L. Ben Said, and W. Trabelsi, “Handling time-varying constraints and objectives in dynamic evolutionary multi-objective optimization,” Swarm Evol. Comput., vol. 39, pp. 222–248, 2018.
  12. Q. Chen, J. Ding, S. Yang, and T. Chai, “A Novel Evolutionary Algorithm for Dynamic Constrained Multiobjective Optimization Problems,” IEEE Trans. Evol. Comput., vol. 24, no. 4, pp. 792–806, 2020.
  13. S. Jiang, S. Yang, X. Yao, K. C. Tan, M. Kaiser, and N. Krasnogor, “Benchmark problems for CEC2018 competition on dynamic multiobjective optimisation,” CEC2018 Competition, 2018.
  14. Y. Tian, R. Cheng, X. Zhang, and Y. Jin, “PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum],” IEEE Comput. Intell. Mag., vol. 12, no. 4, pp. 73–87, 2017.
  15. A. Zhou, Y. Jin, and Q. Zhang, “A Population Prediction Strategy for Evolutionary Dynamic Multiobjective Optimization,” IEEE Trans. Cybern., vol. 44, no. 1, pp. 40–53, 2014.
  16. L. While, P. Hingston, L. Barone, and S. Huband, “A faster algorithm for calculating hypervolume,” IEEE Trans. Evol. Comput., vol. 10, no. 1, pp. 29–38, 2006.