IEEE CEC’2021 Tutorial on Large-Scale Optimization and Learning

Organizers: Dr. Mohammad Nabi Omidvar and Dr. Yuan Sun

Abstract

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytics applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning tasks. Recent advances in machine learning have likewise produced very large-scale optimization problems in the training of deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this “curse of dimensionality” that makes large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. This research gap, in both theory and practice, has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical, metaheuristic, and learning-based optimization algorithms being developed to overcome this scalability issue. This tutorial is dedicated to exploring the recent advances in this field, covering topics in large-scale black-box continuous optimization and large-scale combinatorial optimization. In particular, we focus on the synergy between machine learning and large-scale optimization, and on how evolutionary algorithms, mathematical programming, and machine learning can be combined to tackle large-scale problems.

What’s new: This tutorial differs significantly from previous years’ editions. First, we place greater emphasis on the synergy between machine learning and large-scale optimization, and show how the two fields can benefit from each other. Second, we significantly enlarge the scope of the tutorial by including combinatorial optimization problems and by demonstrating how mathematical programming, data mining, and machine learning techniques can be used to enhance the performance of evolutionary algorithms (EAs) in tackling large-scale combinatorial optimization problems.

Content

We provide an overview of the recent advances in the field of evolutionary large-scale global optimization, with emphasis on the interplay between optimization and learning. We also cover large-scale Mixed Integer Programs (MIPs), a general mathematical formulation for many real-world combinatorial optimization problems. In particular, we introduce two important categories of methods for solving large-scale MIPs. (1) Problem reduction: we briefly describe the existing matheuristics and metaheuristics that incorporate the idea of problem reduction for solving large-scale MIPs, including the Construct, Merge, Solve & Adapt, Merge Search, and Large Neighbourhood Search algorithms. We also introduce the emerging topic of leveraging machine learning and data mining for problem reduction, and show how this can be incorporated into EAs to boost their performance on large-scale MIPs. (2) Problem decomposition: we give a general description of problem decomposition techniques such as Lagrangian relaxation and Dantzig-Wolfe decomposition, and show how EAs can be combined with traditional mathematical programming techniques to solve large-scale MIPs more efficiently. Importantly, we discuss the potential of using EAs and machine learning techniques to automatically identify good decompositions for general MIPs. Overall, the tutorial takes the form of a critical survey of existing and emerging algorithms, with an emphasis on the techniques used and on future challenges and opportunities in this field.
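To make the decomposition theme concrete for the black-box continuous setting, the following minimal sketch illustrates the pairwise interaction test that underlies differential-grouping methods such as DG and RDG. This is our own simplified illustration, not the published algorithms; the test function f, the bounds, and the tolerance eps are illustrative assumptions.

    # A simplified sketch of a differential-grouping-style interaction test;
    # f, the bounds, and eps are illustrative assumptions, not tutorial material.
    import numpy as np

    def interacts(f, dim, i, j, lb=-1.0, ub=1.0, eps=1e-8):
        """Heuristically decide whether variables i and j are non-separable.

        Measure the change in f caused by perturbing x_i at two different
        values of x_j; for an additively separable pair the two deltas match.
        """
        x1 = np.full(dim, lb)
        x2 = x1.copy(); x2[i] = ub
        delta1 = f(x2) - f(x1)            # effect of moving x_i with x_j = lb

        x3 = x1.copy(); x3[j] = (lb + ub) / 2.0
        x4 = x3.copy(); x4[i] = ub
        delta2 = f(x4) - f(x3)            # the same move with x_j shifted

        return abs(delta1 - delta2) > eps

    # x0 and x1 interact through the product term; x2 is fully separable.
    f = lambda x: x[0] * x[1] + x[2] ** 2
    print(interacts(f, 3, 0, 1))   # True  -> place x0 and x1 in one group
    print(interacts(f, 3, 0, 2))   # False -> x2 can be optimized on its own

A cooperative co-evolutionary solver can then optimize each detected group as a separate, lower-dimensional subproblem.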
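Similarly, the sketch below illustrates the idea behind Lagrangian relaxation, the first of the problem decomposition techniques named above, on a tiny 0-1 knapsack. The data and the 1/k step-size schedule are illustrative assumptions: relaxing the coupling capacity constraint decomposes the problem into independent one-variable subproblems, and projected subgradient updates on the multiplier tighten the resulting dual bound.

    # A minimal sketch of Lagrangian relaxation with projected subgradient
    # updates on a tiny 0-1 knapsack; data and step sizes are illustrative.
    values   = [10, 13, 7, 8, 4]   # objective coefficients v_i
    weights  = [5, 8, 3, 4, 2]     # coupling-constraint coefficients w_i
    capacity = 10                  # right-hand side W

    def solve_relaxation(lam):
        """Maximise sum_i (v_i - lam * w_i) x_i over x_i in {0, 1}.

        With the capacity constraint relaxed, the problem decomposes into
        n independent one-variable subproblems solvable by inspection.
        """
        x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
        bound = sum((v - lam * w) * xi
                    for v, w, xi in zip(values, weights, x)) + lam * capacity
        return x, bound

    lam, best_bound = 0.0, float("inf")
    for k in range(1, 51):
        x, bound = solve_relaxation(lam)
        best_bound = min(best_bound, bound)        # best dual (upper) bound
        subgrad = capacity - sum(w * xi for w, xi in zip(weights, x))
        lam = max(0.0, lam - subgrad / k)          # projected subgradient step

    print("best Lagrangian bound:", round(best_bound, 2))

In hybrids of the kind surveyed in the tutorial, an EA can, for instance, search over the multipliers or repair relaxed solutions into feasible ones.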

Targeted Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. It is specifically targeted at Ph.D. students and early career researchers who want to gain an overview of the field and to identify its most important open questions and challenges in order to bootstrap their research in large-scale optimization. The tutorial may also interest more experienced researchers and practitioners who wish to get a glimpse of the latest developments in the field. In addition to our primary goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments in this hot research topic to the EC research community. The expected duration of the tutorial is approximately 100 minutes, plus 15-20 minutes for Q&A.

Organizers

Mohammad Nabi Omidvar (M’09, SM’20) received his first bachelor’s degree (First Class Hons.) in computer science, his second bachelor’s degree in applied mathematics, and his Ph.D. degree in computer science from RMIT University, Melbourne, VIC, Australia, in 2010, 2014, and 2016, respectively. He is currently an Academic Fellow (Assistant Professor) with the School of Computing, University of Leeds, and Leeds University Business School, working on applications of artificial intelligence in financial services. Prior to joining the University of Leeds, he was a Research Fellow with the School of Computer Science, University of Birmingham, U.K. His current research interests include large-scale global optimization, decomposition methods for optimization, multiobjective optimization, and AI in finance. Dr. Omidvar received the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his research on large-scale global optimization in 2017, the Australian Postgraduate Award in 2010, and the Best Computer Science Honours Thesis Award from the School of Computer Science and IT, RMIT University. He was also a joint winner of the CEC 2019 Competition on Large-Scale Global Optimization, and he chairs the IEEE CIS Task Force on Large-Scale Global Optimization.


Yuan Sun is a Research Fellow at the School of Mathematics, Monash University. Prior to that, he was a Postdoctoral Research Associate with the School of Computer Science and Software Engineering, RMIT University. He received his Ph.D. degree in computer science from The University of Melbourne, Australia, and his bachelor’s degree in applied mathematics from Peking University, China. Dr. Sun’s research interests include large-scale optimization, machine learning, mathematical programming, and evolutionary computation. His research has been published in prestigious journals and conferences such as IEEE TPAMI, IEEE TEVC, OR Spectrum, AAAI, ICLR, and VLDB. Dr. Sun is the vice-chair of the IEEE CIS Task Force on Large-Scale Global Optimization. He developed the Recursive Differential Grouping (RDG) method for efficiently decomposing large-scale black-box optimization problems, and his CC-RDG3 algorithm won the CEC 2019 Competition on Large-Scale Global Optimization.