In the past two decades, many nature-inspired optimization algorithms have been developed and successfully applied to a wide range of optimization problems, including Simulated Annealing (SA), Evolutionary Algorithms (EAs), Differential Evolution (DE), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Estimation of Distribution Algorithms (EDAs), etc. Although these techniques have shown excellent search capabilities when applied to small- or medium-sized problems, they still encounter serious challenges when applied to large-scale problems, i.e., problems with several hundred to several thousand variables. The reasons appear to be two-fold. Firstly, the complexity of a problem usually increases with the number of decision variables, constraints, or objectives (for multi-objective optimization problems), and this high level of complexity may prevent a previously successful search strategy from locating the optimal solutions. Secondly, as the size of the solution space grows exponentially with the number of decision variables, there is an urgent need for more effective and efficient search strategies that can explore this vast solution space within a limited computational budget.
In recent years, research on scaling up EAs to large-scale problems has attracted significant attention, including both theoretical and practical studies.
This special session is devoted to highlighting recent advances in EAs for handling large-scale global optimization (LSGO) problems, involving single or multiple objectives; unconstrained or constrained problems; and binary/discrete, real, or mixed decision variables. More specifically, we encourage interested researchers to submit their original and unpublished work on:
- Theoretical and experimental analysis on the scalability of EAs;
- Novel approaches and algorithms for scaling up EAs to large-scale optimization problems;
- Applications of EAs to real-world large-scale optimization problems;
- Novel test suites that help researchers understand the characteristics of large-scale optimization problems.
An LSGO competition is being organized along with the special session. Nonetheless, participating in the competition is not mandatory, and any work in the LSGO field is welcome.
Manuscripts should be prepared according to the standard format and page limit of regular papers specified for CEC’2018. Instructions on the preparation of the manuscripts can be obtained at the WCCI’2018 website: http://www.ecomp.poli.br/~wcci2018/. Special session papers will be treated in the same way as regular papers and will be included in the conference proceedings. Submissions should be done by using the following link: http://ieee-cis.org/conferences/cec2018/upload.php.
Please note that the submission deadline is: January 15, 2018
Updated submission deadline: February 1, 2018
Furthermore, a companion competition on Large Scale Global Optimization (LSGO) will also be organized together with the special session. The competition allows participants to run their own algorithms on 15 benchmark functions, each of them of 1000 dimensions. Detailed information about these benchmark functions is provided in the following technical report:
X. Li, K. Tang, M. Omidvar, Z. Yang and K. Qin, “Benchmark Functions for the CEC’2013 Special Session and Competition on Large Scale Global Optimization,” Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013.
The aim of this competition is to provide a common platform that encourages fair and easy comparisons across different LSGO algorithms. Researchers are welcome to apply any kind of evolutionary computation technique to the test suite. The technique and the results can be reported in a paper for the special session (i.e., submitted via the online submission system of WCCI’2018).
The authors must provide their results as shown in the aforementioned technical report (Table 2). In particular, this table must contain the statistical information of their results at three checkpoints of the execution: 1.2E5, 6.0E5, and 3.0E6 fitness evaluations, with 3.0E6 being the maximum number of evaluations.
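The checkpoint mechanism can be sketched as follows. This is an illustrative sketch only, not the benchmark code: the random-search optimizer and the sphere objective are placeholders, while the milestone values are the ones requested above.

```python
import random

# Checkpoints requested by the competition (fitness evaluations).
MILESTONES = [1.2e5, 6.0e5, 3.0e6]

def sphere(x):
    """Placeholder objective; any benchmark function would take its place."""
    return sum(v * v for v in x)

def run_with_checkpoints(objective, dim, max_evals, milestones, seed=0):
    """Random-search placeholder that records the best fitness found so far
    whenever the evaluation counter hits one of the milestones."""
    rng = random.Random(seed)
    targets = {int(m) for m in milestones}
    best = float("inf")
    report = {}
    for evals in range(1, int(max_evals) + 1):
        x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
        best = min(best, objective(x))
        if evals in targets:
            report[evals] = best
    return report

# Small demo with reduced milestones so it runs quickly; a competition entry
# would use MILESTONES and max_evals = 3e6 instead.
demo = run_with_checkpoints(sphere, dim=10, max_evals=1000,
                            milestones=[100, 500, 1000])
print(demo)
```

Since the best-so-far value is monotone non-increasing, the reported values can only improve from one checkpoint to the next.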
In order to make it easier to obtain the results in the requested format, the original source code of the benchmark has been modified to automate this task. The modified code outputs the requested information to an external file at the desired checkpoints. Currently, the C++, Matlab, and Python versions of the benchmark support this feature. Additionally, several tools are provided to create an Excel file with the results recorded by the modified code, as well as the LaTeX table (Table 2 of the technical report), to allow their easy inclusion in the paper. However, the use of this version of the code is optional: the original code can still be used, provided that the requested information is gathered by the algorithm.
The original code (C++, Java and Matlab implementations) is available in the following link: lsgo_2013_benchmarks_original.zip
For Python users, Prof. Molina is maintaining a Python version of the test suite, which can be found in the following website:
It can be installed by simply running:
pip install cec2013lsgo
The modified source code for the C++ and Matlab implementations is available in the following link:
Also, the source code of the benchmark can be obtained from their repository.
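As a rough sketch, the Python package can be used as below. The `Benchmark` class and its `get_info`/`get_function` methods follow the package's published interface, but treat these names as assumptions and check the package documentation; the sphere fallback exists only so the sketch runs end-to-end without the package installed.

```python
# Hedged sketch of using the cec2013lsgo package; the names
# (Benchmark, get_info, get_function) are taken from the package's README
# and should be double-checked against its documentation.
try:
    import numpy as np
    from cec2013lsgo.cec2013 import Benchmark

    bench = Benchmark()
    info = bench.get_info(1)       # bounds, dimension, ... for function 1
    f1 = bench.get_function(1)     # objective callable for function 1
    x = np.random.uniform(info['lower'], info['upper'], info['dimension'])
    print("f1(x) =", f1(x))
except ImportError:
    # Package (or numpy) not installed: fall back to a plain sphere
    # function so the sketch still runs.
    def f1(x):
        return sum(v * v for v in x)
    print("f1(0) =", f1([0.0] * 10))
```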
As stated before, these modified versions save all the results into separate files named results_num.csv (one for each function num, at the different checkpoints). However, if multiple runs of the same function are conducted concurrently, the user can change the default name of the output file (please check the examples for each version of the source code).
Additionally, if all the runs are done within the same program, you have to inform the code when a new run is starting so that the values of each run are stored separately:
In the C++ implementation, you should call the method nextRun() between runs (and also before the first run).
In the Matlab implementation, you need to set the global variable initial_flag to 0 just before each run.
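The role of these per-run resets can be pictured with a small stateful recorder. This is a generic illustration, not the benchmark's actual code: each call to next_run() opens a fresh results series, mirroring what nextRun() does in the C++ version and what resetting initial_flag does in Matlab.

```python
# Generic illustration of per-run bookkeeping; next_run() plays the same
# role as nextRun() in the C++ code or initial_flag = 0 in Matlab.
class RunRecorder:
    def __init__(self):
        self.runs = []          # one list of (evals, best) pairs per run

    def next_run(self):
        # Start a fresh series; call before the first run and between runs.
        self.runs.append([])

    def record(self, evals, best):
        self.runs[-1].append((evals, best))

rec = RunRecorder()
for run in range(3):
    rec.next_run()              # forgetting this would merge runs together
    for evals in (100, 200):
        rec.record(evals, best=1.0 / (run + 1))
print(len(rec.runs))  # 3 independent runs recorded
```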
Finally, the zip file also includes a small utility in Python1, extract_data.py, which receives as a parameter the directory where the .csv files are stored and generates:
results_all.xls: An Excel file with all the results (participants are encouraged to submit their results in this format by email to the organizers, firstname.lastname@example.org). This Excel file contains not only the required milestones (1.2e5, 6e5, 3e6) but also additional values for analyzing the performance of the different algorithms.
results.tex: A LaTeX file with all the required values, as in Table 2 of the technical report, ready to be included in the paper.
Results of competition
Three algorithms participated in the competition:
BICA: “Bi-Spaces Interactive Cooperative Coevolutionary Algorithm for Large Scale Global Black-Box Optimization”, by Mongde Zhao et al. Source Code and results available here
MLSHADE-SPA: “LSHADE-SPA memetic framework for solving large-scale optimization problems”, by Anas A. Hadi, Ali W. Mohamed, and Kamal M. Jambi, Complex & Intelligent Systems, 2018. PDF, Results available here
SHADE-ILS: “SHADE with Iterative Local Search for Large-Scale Global Optimization”, by Daniel Molina and Antonio LaTorre. Results, source code, slides, and GitHub repository available
Additionally, for comparison purposes, we have included MOS, by Antonio LaTorre et al., the winner of previous competitions and considered the current state of the art in LSGO.
The final results of the competition
The ranking of the algorithms is:
1. SHADE-ILS (winner).
2. MLSHADE-SPA (second position).
3. MOS (previous winner).
4. BICA.
To summarise, after more than five years, two algorithms, SHADE-ILS and MLSHADE-SPA, obtain better results than MOS, the previous winner and, until now, unbeaten algorithm for LSGO.
Also, all the results used in the comparisons are available as Excel files: BICA Excel, MLSHADE-SPA, SHADEILS, or all together.
University of Granada, Spain.
Universidad Politécnica de Madrid, Spain.
It requires the pandas library in order to work (pip install pandas) ↩