Tutorial Schedule and Links to Materials
For tutorial materials, please also visit http://www.cs.usm.maine.edu/~congdon/Conferences/CEC2015/.
Tutorials Accepted for CEC 2015
|Session1||Madalina M. Drugan|
|Multi-armed bandits for evolutionary algorithms|
|A recent trend in machine learning (ML) is the transfer of knowledge from one area to another. We focus on potential synergies between multi-armed bandits (MAB) and evolutionary computation (EC). Evolutionary computation is a subfield of global optimization and is an incubator for a range of techniques (e.g., multi-objective evolutionary computation, estimation of distribution algorithms) with a wide range of applications. MAB addresses sequential decision problems in an initially unknown environment within a unitary framework that offers convergence guarantees to optimality. While MAB is a mathematical paradigm, the main strength of EC is its general applicability and computational efficiency. Although at first they seem very different, MAB and EC are two learning techniques that address basically the same problem: the maximization of the agent's reward in MAB and of the fitness function in EC, in a potentially unknown environment. The goal of this tutorial is to give an overview of MAB techniques integrated in EC and, vice versa, EC techniques used in MABs. We discuss resemblances and differences in learning and optimising with multi-armed bandits and evolutionary computation. This is an advanced tutorial that assumes some background in machine learning.|
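As a concrete illustration of one such synergy, the sketch below (our own illustrative assumption, not the presenter's material) uses an epsilon-greedy bandit to choose between two hypothetical mutation operators inside a simple EA on OneMax, rewarding each operator by the fitness gain it produces.

```python
import random

# Hedged sketch: an epsilon-greedy bandit performing adaptive operator
# selection in a (1+1)-style EA maximising OneMax. The two operators and the
# reward definition (non-negative fitness gain) are illustrative assumptions.

def onemax(bits):
    return sum(bits)

def flip_one(bits, rng):
    child = bits[:]
    child[rng.randrange(len(bits))] ^= 1
    return child

def flip_three(bits, rng):
    child = bits[:]
    for i in rng.sample(range(len(bits)), 3):
        child[i] ^= 1
    return child

def bandit_ea(n=50, steps=2000, eps=0.1, seed=1):
    rng = random.Random(seed)
    ops = [flip_one, flip_three]
    counts, rewards = [0, 0], [0.0, 0.0]
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(2)  # explore (or play an untried arm)
        else:
            arm = max(range(2), key=lambda a: rewards[a] / counts[a])  # exploit
        y = ops[arm](x, rng)
        counts[arm] += 1
        rewards[arm] += max(0, onemax(y) - onemax(x))  # reward = fitness gain
        if onemax(y) >= onemax(x):  # elitist acceptance
            x = y
    return onemax(x)
```

The bandit's average reward per arm steers the search toward whichever operator is currently most productive, which is the basic MAB-in-EC pattern the tutorial surveys.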
|A Survey of Representations for Evolutionary Computation|
|Representation is a central issue in evolutionary computation. The no free lunch theorem demonstrates that there is no intrinsic advantage in a particular algorithm when considered against complete spaces of problems. The corollary is that algorithms should be fitted to the problems they are solving. Choice of representation is the primary point in the design of an evolutionary algorithm where the designer can incorporate domain knowledge and, in effect, choose the adaptive landscape being searched. This tutorial will introduce a broad variety of representations with comments on their adaptive landscapes and domains of application. These will include novel representations for real parameter optimization, generative representations, ordered gene representations, and general modifications that can be made to a variety of representations. Familiarity with evolutionary algorithms is assumed, but no other special knowledge is required.|
|Session3||Per Kristian Lehre|
|Runtime Analysis of Population-Based Evolutionary Algorithms|
|Populations are at the heart of evolutionary algorithms. They provide the genetic variation which selection acts upon. A complete picture of evolutionary algorithms can only be obtained if we understand their population dynamics. A rich theory on the time complexity of evolutionary algorithms has been developed over the last 20 years. While initially considering simplified settings such as the (1+1) EA on simple pseudo-Boolean problems, the theory has gradually been refined to more realistic settings that take into account population dynamics. The tutorial begins with a brief overview of the population-based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses. To illustrate how these techniques can be applied, we consider several fundamental questions: When are populations necessary for efficient optimisation with EAs? What is the appropriate balance between exploration and exploitation, and how does this depend on relationships between mutation and selection rates? What is the impact of the crossover operator? What determines an EA's tolerance for uncertainty, e.g. in the form of noisy or partially available fitness?|
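The (1+1) EA mentioned above fits in a few lines, and its empirical optimisation time on OneMax can be compared against the Theta(n log n) bound from the theory. The problem size, seed and evaluation cap below are illustrative choices.

```python
import random

# Minimal (1+1) EA on OneMax: flip each bit independently with probability
# 1/n, accept if not worse. Returns the number of fitness evaluations used.
# Runtime analysis predicts Theta(n log n) expected evaluations.

def one_plus_one_ea(n=40, seed=0, max_evals=200000):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n and evals < max_evals:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        evals += 1
        if sum(y) >= sum(x):  # accept non-worsening offspring
            x = y
    return evals
```

For n = 40 the expected time is roughly e * n * ln(n), a few hundred evaluations, so the cap is far from binding.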
|Statistical Tests for Computational Intelligence|
|In this tutorial, we explain which statistical tests to use in which cases, with exercises. We do not explain their mathematical aspects but focus on their practical use in computational intelligence research to increase its reliability. When we claim, for example, that a proposed method speeds up evolutionary computation or neural networks, or increases pattern recognition rates, conclusions without adequate statistical tests cannot be accepted. Indeed, there are students and researchers whose good research was not accepted because of wrong use, or no use, of statistical tests on their experimental results. We introduce three checking points for choosing an adequate statistical test among seven (the standard t-test, analysis of variance, sign test, Wilcoxon's signed-ranks test, Mann-Whitney U-test, Kruskal-Wallis test, and Friedman test) and learn how to use them. We also learn multiple comparisons. Another feature of this tutorial is learning how to handle human subjective tests, which cannot be avoided in interactive evolutionary computation (IEC), human-computer interaction, and other human-related research. For this purpose we study Scheffé's method of paired comparison, which is a kind of ANOVA. This is an introductory tutorial for EC practitioners.|
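As a small worked example of one of the listed tests, the following sketch implements the sign test for paired results of two algorithms; the data and the tie-handling convention (discarding ties) are illustrative assumptions of ours.

```python
from math import comb

# Hedged sketch of the sign test: count wins and losses over paired runs and
# compute a two-sided p-value from the Binomial(n, 0.5) distribution.
# Ties are discarded, which is a common (but not the only) convention.

def sign_test(a, b):
    wins = sum(1 for x, y in zip(a, b) if x > y)
    losses = sum(1 for x, y in zip(a, b) if x < y)
    n = wins + losses
    k = min(wins, losses)
    # one tail: P(X <= k); doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative paired accuracies: algorithm A beats B on 9 of 10 runs.
a = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94, 0.91, 0.85]
b = [0.84, 0.86, 0.85, 0.83, 0.88, 0.84, 0.82, 0.86, 0.85, 0.83]
```

Here the two-sided p-value is 22/1024, about 0.021, so at the usual 0.05 level the difference would be judged significant.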
|Session1||Stephen L. Smith|
|Medical Applications of Evolutionary Computation|
|The use of evolutionary computation in medical applications is becoming widespread, but the nature of the application area does present special challenges, not least regarding the acquisition and reliability of data. The aim of this tutorial is to give a practical guide to applying evolutionary computation to medicine and healthcare, both for those new to evolutionary computation and for the seasoned practitioner. Subjects will range from preprocessing of data to how to establish good collaborations with world-leading medical centers. This will be illustrated through detailed consideration of three case examples: the diagnosis of Parkinson's disease, the detection of breast cancer from mammograms, and the use of Raman spectroscopy to predict cancer.
This tutorial is suitable for all who are interested in applying evolutionary computation to problems in medicine, providing both an introduction to, and detailed examples of, this exciting and rewarding application area.|
|Evolutionary Computation and Games|
|In recent years, the field of Computational Intelligence and Games (CIG) has enjoyed rapid progress and a sharp rise in popularity. In this field, algorithms from across the computational intelligence spectrum are tested on benchmarks based on, e.g., board games and video games, and new CI-based solutions are developed for problems in game development and game design. This tutorial will give an overview of key research challenges and methods of choice in CIG. The tutorial is divided into two parts, where the second part builds on methods and results introduced in the first part. Level: introductory. No particular background is assumed beyond basic knowledge of computational intelligence methods and an interest in games.|
|An Introduction to Bioinformatics for EC Researchers|
|It used to be that the bottleneck to progress in molecular biology was the time it took to sequence the DNA of an organism. With the advent of next-generation sequencing technologies, this is no longer true. The bottleneck now is data analysis. Biologists badly need computer scientists to help them with this bottleneck. Many of the evolutionary computation approaches presented at CEC are applicable to bioinformatic problems. This tutorial will explain the biological background behind some of the bioinformatic problems currently of interest to biologists, focusing on those in genomics (the study of the function and structure of the DNA in an organism) and epigenomics (the study of heritable markers on DNA that impact gene regulation). Included will be tips on how to work effectively with biologists and information about publicly available data sets and how to use them meaningfully. Examples of solutions to bioinformatic problems that use evolutionary computation will be given, and the presenter will share her experiences working in a lab that studies the model organism, Tetrahymena thermophila. This tutorial assumes the attendees have experience with EC techniques, but not necessarily with biology; only a high school level understanding of biology is needed.|
|Deploying Evolutionary Computation in Industry: Challenges and Lessons Learned|
|Evolutionary computation deals with the design, analysis and applications of stochastic problem-solving methods inspired by biological evolution and known as evolutionary algorithms. These algorithms nowadays represent a mature optimization methodology and are widely deployed in science, engineering and business. However, despite the general-purpose nature and robustness of evolutionary algorithms, solving industrial optimization problems by means of evolutionary computation can face numerous obstacles. Differences in understanding, expectations and language between the solution providers and the customers, the time-consuming process of gradually refining the optimization problem definition, and the potential for application failure due to non-technical reasons are just a few issues that are scarcely ever tackled in evolutionary computation courses and literature.
This tutorial presents the challenges faced and lessons learned in designing evolutionary algorithms for industrial optimization problems that go beyond the textbook knowledge on optimization and evolutionary computation. It starts by defining the scope of the presentation and providing motivating examples of industrial applications with diverse characteristics. The core of the tutorial is a systematic analysis of the potential challenges, illustrated with practical situations, followed by an overview of the lessons learned in dealing with these challenges when deploying evolutionary algorithms. It is the ambition of the tutorial both to promote evolutionary computation as a practical problem-solving methodology and to contribute to bridging the gap between algorithm designers and end-users.
The intended audience are students, researchers and engineers interested in designing evolutionary algorithms for industrial optimization problems, as well as end-users from various industries interested in deploying evolutionary algorithms for optimization. Basic knowledge of optimization and evolutionary algorithms may be helpful, but is not necessary to follow the tutorial.|
|An Overview of Evolutionary Algorithms and Hyper-Heuristics|
|Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions to problems within and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to solving problems in various domains including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of the tutorial is to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. Researchers will be provided with a foundation to start their own research in this area. The tutorial will firstly provide an overview of hyper-heuristics and evolutionary algorithm hyper-heuristics. The tutorial will examine the different categories of hyper-heuristics, showing how evolutionary algorithms can be used for each type of hyper-heuristic. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. The tutorial will also look at recent and emerging research directions in evolutionary algorithm hyper-heuristics. Two emerging directions are hyper-heuristics for the design of evolutionary algorithms and evolutionary algorithm hyper-heuristics for algorithm design. The tutorial will end with a discussion session on future research directions in evolutionary algorithm hyper-heuristics. The tutorial is aimed at researchers in evolutionary algorithms who have an interest in hyper-heuristics or have just started working in this area. A background in evolutionary algorithms is assumed.|
|Session2||Jerry Swan and John R. Woodward|
|Semi-Automated Algorithm Design with Genetic Programming|
|This tutorial gives a methodology for the use of constructive, generative hyper-heuristics to improve human-designed algorithms. This methodology has been successfully used by the presenters in a variety of application areas. It employs Genetic Programming, resulting in a semi-automated design process. The tutorial provides a simple step-by-step guide, introducing researchers to an exciting topic with many potential applications.
In addition to presenting the underlying theory, the tutorial will give an introduction to Templar, a framework that supports ‘Template Method Hyper-heuristics’, by which an algorithm is parameterized by a collection of heuristics which are then configured automatically. This method makes effective use of Genetic Programming to tune the algorithm to some target set of problem instances. Since the supporting algorithm skeleton can be arbitrarily complex, this allows greater scalability than can be achieved by the naive application of Genetic Programming. The Templar approach turns the creation of generative hyper-heuristics into the more procedural matter of GP parameter tuning. We describe several case studies, including how to create a hyper-heuristic template for quicksort, and show the effectiveness of the approach with a state-of-the-art application in which we optimize quicksort for power consumption. The tutorial is suitable for practitioners at every level, assuming only basic familiarity with metaheuristics and machine learning – some experience of genetic programming is helpful, but not a necessity.|
|Session3||Ke Tang and Kai Qin|
|Data-driven Evolutionary Algorithms|
|Most evolutionary algorithms (EAs) iteratively generate new candidate solutions in the solution space of optimization problems, and hence incrementally produce data as the evolutionary search progresses. In the past decades, much effort has been made to enhance EAs by exploiting the data generated in the course of the search. However, modern optimization problems, featuring fast-growing scales, complexity and uncertainty, can seldom be solved by merely hybridizing EAs with off-the-shelf data analytics techniques, and therefore call for an in-depth investigation of the synergy between optimization and data analytics. This tutorial will introduce a unified perspective on EAs that explicitly adopts data analytics as an indispensable component, namely data-driven EAs. Both the historical development and the latest advances on this topic will be covered. Through illustrative examples, we aim to provide the audience with a comprehensive understanding of how conventional EAs can benefit from advanced data analytics techniques. This is an intermediate tutorial. Those who are interested in designing novel EAs, and/or are tackling real-world problems that may not be satisfactorily addressed with conventional EAs, are welcome to attend. In general, attendees are assumed to be familiar with conventional EAs and to have basic knowledge of modern data analytics/mining techniques, e.g., regression, clustering, learning to rank, and classification.|
|Session4||Abhishek Gupta, Yew-Soon Ong and Kay Chen Tan|
|Evolutionary Multitasking and Implications for Cloud Computing|
|The design of population-based search algorithms in evolutionary computation (EC) has traditionally focused on handling a single optimization problem at a time. Despite the implicit parallelism of a population, no attempt has yet been made to harness the potential of EC methods for multitasking, i.e., to solve multiple problems simultaneously, which is of great value in today's booming cloud-based world. Cloud Computing (CC) is rapidly transforming the world around us by making state-of-the-art software and infrastructure services easily available to any individual or organization at very affordable prices. Of particular interest to this tutorial is the fact that CC platforms face a natural phenomenon wherein multiple optimization jobs are received from multiple users at the same time. Considering the foreseeable ubiquity of CC, it is becoming increasingly important to conceive platforms that can provide users with access to optimization solvers, including evolutionary algorithms, as software services. Thus, in a cloud setting, multitasking capabilities in an EC solver can be of immense value. To this end, the tutorial introduces a new paradigm in EC, which we label multifactorial optimization (MFO). The nomenclature signifies a multitasking search involving multiple optimization tasks at once, with each task contributing a unique factor influencing the evolution of individuals in a composite solution space. As shall be demonstrated through numerous experiments, multitasking can significantly reduce the makespan over a collection of optimization tasks while providing high-quality solutions. In other words, MFO can potentially increase service levels, driving CC prices down even further, without compromising service quality. The parallels of MFO to traditional single-objective optimization and multi-objective optimization shall also be drawn in the talk.|
|Evolutionary Algorithms as Dynamical Complex Systems|
|This tutorial focuses on the mutual intersection of a few interesting fields of research whose core topic is evolutionary algorithms. It discusses recent progress in viewing evolutionary algorithms as discrete dynamical complex systems with inherent nonlinear dynamics. As reported in many research papers and books, these dynamics can generate different kinds of behavior, including chaotic behavior, and can be visualized as complex geometrical structures. The tutorial will explain the relations between evolutionary dynamics, their visualization as a complex network, and novel methods for their control. The evolutionary algorithms covered will include differential evolution, genetic algorithms, particle swarm optimization, the Bee algorithm, and others. A methodology for converting evolutionary algorithms into complex networks will be introduced, including demonstrations. The relation between complex network parameters, including their time dependence, and evolutionary dynamics will be explained. This tutorial is designed as an introduction; no advanced or expert knowledge of complex networks, chaos or control is expected.|
|Session2||P. N. Suganthan|
|Differential Evolution: Recent Advances|
|Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a novel concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss proximity- and neighborhood-based DE to improve the performance of DE on multi-modal landscapes. The talk will finally highlight a few problems that pose a severe challenge to state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.|
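The scaled-difference perturbation described above is the heart of the classic DE/rand/1/bin scheme, sketched below; the control parameters F and CR and the sphere test function are illustrative choices of ours, not recommendations from the tutorial.

```python
import random

# Hedged sketch of DE/rand/1/bin: a mutant is a base vector plus the scaled
# difference of two other distinct members; binomial crossover mixes it with
# the target, and greedy one-to-one selection keeps the better of the two.

def sphere(x):
    return sum(v * v for v in x)

def de(dim=5, pop_size=20, gens=200, F=0.5, CR=0.9, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                      for d in range(dim)]
            jrand = rng.randrange(dim)  # at least one component from mutant
            trial = [mutant[d] if (rng.random() < CR or d == jrand)
                     else pop[i][d] for d in range(dim)]
            tf = sphere(trial)
            if tf <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    return min(fit)
```

Note that the only source of variation is the population itself, which is exactly the "no separate probability distribution" property the abstract highlights.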
|Session3||Stephen Chen and James Montgomery|
|Thresheld Convergence in PSO, DE, and Other Search Methods|
|Many search techniques (e.g. PSO and DE) use vectors to generate new solutions (e.g. attraction vectors in PSO and difference vectors in DE). These vectors, which can exploit gradients, provide useful information in unimodal search spaces. For the exploration of multi-modal search spaces, it is instead useful to think of the sampling of attraction basins: each local optimum defines an attraction basin which will lead to it through the use of greedy local search. Since many search techniques (such as PSO and DE) also employ elitism, the solutions which guide the search are the fittest solutions found to date. Implicitly, the most promising attraction basins to exploit (i.e. find the exact local optimum) are identified by the fitness of a known solution within them.
This identification works best when the solutions within each basin have the same relative fitness with respect to their local optima. Since local search interferes with this comparison process, the goal of thresheld convergence is to eliminate concurrent global and local search. Specifically, convergence is "held" back through the use of a threshold function. Successful applications of thresheld convergence include PSO and DE, as well as two new heuristic search techniques: Minimum Population Search, and Leaders and Followers. Observations from these applications provide useful insights into the mechanisms of PSO and DE and the operation of metaheuristics in multi-modal search spaces.
This is an intermediate tutorial which is best suited for researchers already familiar with Particle Swarm Optimization, Differential Evolution, and/or other metaheuristics, especially with their performance characteristics in unimodal and multi-modal search spaces.|
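A minimal sketch of the idea, under our own simplifying assumptions (the decay schedule and step generator below are not the presenters' code): a decaying threshold enforces a minimum step length, so early iterations sample distant attraction basins and only later iterations are allowed to refine locally.

```python
import math
import random

# Hedged sketch of thresheld convergence: candidate steps shorter than a
# threshold are disallowed, "holding" convergence back; the threshold decays
# toward zero so local refinement happens only late in the run.

def threshold(t, t_max, r0=1.0, alpha=2.0):
    # decays from r0 at t = 0 to 0 at t = t_max
    return r0 * ((t_max - t) / t_max) ** alpha

def thresheld_search(f, dim=2, t_max=500, seed=7):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = f(best)
    for t in range(t_max):
        r = threshold(t, t_max)
        step = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(s * s for s in step)) or 1.0
        scale = max(r, 0.01) / norm  # enforce the minimum step length
        cand = [b + s * scale for b, s in zip(best, step)]
        cf = f(cand)
        if cf < best_f:  # elitist acceptance
            best, best_f = cand, cf
    return best_f

sphere = lambda x: sum(v * v for v in x)  # illustrative test function
```

The key design point is that early moves cannot collapse onto the incumbent basin, which is the separation of global and local search the abstract describes.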
|Session4||Robert G Reynolds|
|Cultural Algorithms: Putting Social Networks to Work|
|Cultural Algorithms are a class of evolutionary algorithms designed to support learning in data-intensive environments across a number of disparate domains. Is there a general framework in which the problems of CAs can be expressed?
In order to answer this question, Cultural Algorithms are viewed as a framework within which to develop and test principles of Social Physics (Pentland). First, Cultural Algorithms are described within social physics as an engine that does work within a physical system and uses Big Data to direct its actions. Then, a set of Social Metrics based upon the principles of social physics is described. We demonstrate that similar metric patterns are repeated in successful Cultural Algorithm runs for problems that range from simple to chaotic in entropic terms. We then show how these patterns can be explained within the Cultural Algorithm Social Physics Framework as a means of maximizing the work done by a Cultural Algorithm. Finally, when the patterns begin to deviate from those expected when the system is working at its maximum, human intervention can be used to increase the work done by the system. Examples from real-world applications will be presented.|
|Decomposition and Cooperative Coevolution Techniques for Large Scale Global Optimization|
|Many real-world optimization problems involve a large number of decision variables. For example, in shape optimization a large number of shape design variables are often used to represent complex shapes, such as turbine blades, aircraft wings, and heat exchangers. However, existing optimization methods are ill-equipped to deal with such large-scale global optimization (LSGO) problems. A natural approach to tackling LSGO problems is to adopt a divide-and-conquer strategy. A good example is the early work on a cooperative coevolutionary (CC) algorithm by Potter and De Jong (1994), where a problem is decomposed into several subcomponents of smaller sizes, and each subcomponent is then "cooperatively coevolved" with the other subcomponents. In this tutorial we will provide an overview of the recent development of CC algorithms for LSGO problems, in particular those extended from the original Potter and De Jong CC model. One key challenge in applying CC is how best to decompose a problem such that the inter-dependency between subcomponents is kept to a minimum. Another challenge is how best to allocate a fixed computational budget among different subcomponents when there is an imbalance of contributions from these subcomponents. Equally dividing the budget among the subcomponents and optimizing each in a round-robin fashion (as in the classic CC method) may not be a wise strategy, since it can waste a great deal of computational resources. Many more research questions still remain to be answered. In recent years, several interesting decomposition methods (or variable grouping methods) have been proposed. This tutorial will survey these methods and identify their strengths and weaknesses. The tutorial will also describe a contribution-based method for better allocating computation among the subcomponents.
Finally, we will present a newly designed variable grouping method, namely differential grouping, which outperforms the previously surveyed decomposition methods. We will provide experimental results on the CEC'2010 and CEC'2013 LSGO benchmark functions to demonstrate the effectiveness of this method.|
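The classic round-robin CC loop referenced above (after Potter and De Jong) can be sketched as follows; the fixed grouping, the (1+1)-style inner optimiser, and the fully separable test function are illustrative simplifications of ours.

```python
import random

# Hedged sketch of cooperative coevolution: split the variables into fixed
# groups, then optimise each group in turn while the rest of the context
# vector is held constant. Real CC methods evolve a subpopulation per group;
# here a single Gaussian perturbation per visit stands in for that step.

def cc_optimize(f, dim=12, group_size=3, cycles=400, seed=5):
    rng = random.Random(seed)
    groups = [list(range(i, i + group_size))
              for i in range(0, dim, group_size)]
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = f(context)
    for _ in range(cycles):
        for g in groups:  # equal, round-robin budget per subcomponent
            cand = context[:]
            for d in g:
                cand[d] += rng.gauss(0, 0.5)  # perturb only this group
            cf = f(cand)
            if cf < best_f:
                context, best_f = cand, cf
    return best_f

separable = lambda x: sum(v * v for v in x)  # fully separable: ideal for CC
```

On a non-separable function the same fixed grouping would cut across variable interactions, which is exactly the decomposition problem that methods like differential grouping try to solve.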
|Session2||Ankur Sinha and Kalyanmoy Deb|
|Evolutionary Bilevel Optimization|
|Bilevel optimization problems involve two levels of optimization. A solution at the upper level may be considered feasible only if it is optimal for a parametric lower level optimization problem. Such optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the “nestedness” of one optimization task into another.
Evolutionary Algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and their ability to handle constrained search spaces efficiently. In the recent past, there has been a surge in research activity towards solving bilevel optimization problems. In this tutorial, we will introduce the principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted. This tutorial is intended for anyone, whether oriented towards practice or academic research, who is interested in evolutionary optimization and its applications. No prerequisites or prior knowledge are needed to attend this tutorial.|
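The "nestedness" described above can be made concrete with a tiny sketch: evaluating an upper-level candidate first requires solving a parametric lower-level problem. The quadratic toy problem and the grid-based solvers are illustrative assumptions of ours, chosen so the optimum can be checked by hand.

```python
# Hedged sketch of a nested bilevel evaluation. Follower: min_y (y - x)^2,
# so the best response is y*(x) = x. Leader: min_x x^2 + (y*(x) - 1)^2,
# which for y*(x) = x is minimised at x = 0.5. Grid search stands in for
# the evolutionary solvers the tutorial actually discusses.

GRID = [i / 100.0 for i in range(-200, 201)]  # candidate values in [-2, 2]

def lower_level_opt(x):
    # follower's problem is parameterised by the leader's decision x
    return min(GRID, key=lambda y: (y - x) ** 2)

def upper_level_value(x):
    y = lower_level_opt(x)  # feasibility requires lower-level optimality
    return x ** 2 + (y - 1.0) ** 2

def solve_bilevel():
    return min(GRID, key=upper_level_value)
```

Even in this toy, every upper-level evaluation costs a full lower-level solve, which is the computational burden that motivates the evolutionary approaches surveyed in the tutorial.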
|Evolutionary Computation for Dynamic Optimization Problems|
|Many real-world optimization problems are subject to dynamic environments, where changes may occur over time in the optimization objectives, decision variables, and/or constraint conditions. Such dynamic optimization problems (DOPs) are challenging for researchers and practitioners due to their inherent difficulty, yet they are important problems that decision-makers in many domains need to face and solve. Evolutionary computation (EC) is a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DOPs has attracted a lot of research effort during the last twenty years, with some promising results. However, this research area is still quite young and far from well understood. This tutorial aims to summarise the research area of EC for DOPs and attract potential young researchers into this important research area. It will provide an introduction to the research area of EC for DOPs and give an in-depth description of the state of the art in the field regarding the following five aspects: benchmark problems and generators, performance measures, algorithmic approaches, theoretical studies, and applications. The tutorial is "intermediate" and the audience is expected to have some background in EC. Some future research issues and directions regarding EC for DOPs will also be presented.|
|Session4||Luis Marti and Nayat Sanchez-Pi|
|Convergence Detection and Stopping Criteria for Evolutionary Multi-Objective Optimization|
|Most soft-computing, heuristic, non-deterministic or numerical methods have in common that they need a stopping criterion. This criterion, which is usually a heuristic itself, is responsible for minimising the waste of computational resources by detecting scenarios where it makes no sense to continue executing the method. Hence, the success or failure of any practical application relies heavily not only on the techniques applied but also on the supporting methodologies, including the stopping criterion. Paradoxically, the matter of stopping criteria and convergence detection has often been overlooked by most of the evolutionary multi-objective optimisation (EMO) community. This is probably because it plays a supporting role and, consequently, the theoretical and practical implications concerning this topic have not yet been properly studied and disseminated. However, it can be argued that many real-world applications of theoretically outstanding methods may have underperformed due to an incorrect algorithm termination scheme.|
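One simple instance of such a criterion, sketched here under our own assumptions (the window length, the tolerance, and a scalar progress indicator in place of an EMO-specific one such as hypervolume), stops the run when the best value stagnates over a sliding window of generations.

```python
from collections import deque

# Hedged sketch of a progress-based stopping criterion: stop when the best
# objective value (minimisation) has improved by less than eps over the last
# `window` generations. For EMO, a quality indicator would replace the scalar.

class StagnationStopper:
    def __init__(self, window=10, eps=1e-3):
        self.history = deque(maxlen=window)
        self.eps = eps

    def should_stop(self, best_value):
        self.history.append(best_value)
        if len(self.history) < self.history.maxlen:
            return False  # not enough evidence yet
        return self.history[0] - self.history[-1] < self.eps

# Example: a run that improves quickly, then flatlines.
stopper = StagnationStopper(window=5, eps=1e-3)
values = [10.0, 5.0, 2.0, 1.0, 0.5, 0.4999, 0.4998, 0.4997, 0.4996, 0.4996]
stopped_at = next(i for i, v in enumerate(values) if stopper.should_stop(v))
```

The trade-off a criterion like this must manage is exactly the one the abstract raises: stopping too early wastes solution quality, while stopping too late wastes computational resources.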