# Evolutionary algorithms - introduction

Evolutionary algorithms are among the best-known nature-inspired techniques. They are general optimization heuristics with wide applications in many areas, from the optimization of real-valued functions, to combinatorial optimization, to the design of electronic circuits or neural networks.

Evolutionary algorithms are typically inspired by Darwin’s theory of evolution. One could even say that evolutionary algorithms simulate the life of many generations and how they are affected by environmental selection pressure. In evolutionary algorithms, we work with a set of candidate solutions to a given problem. Each candidate is called an individual, and the whole set is called a population. An evolutionary algorithm runs in iterations (generations); in each iteration, it selects individuals, modifies them in some way (using so-called genetic operators), and then selects which individuals survive to the next generation. Preference is always given to the individuals that solve the given problem better. The quality of a solution is measured by the so-called fitness function. The population at the beginning of a generation is referred to as the parents, and the population at the end (after applying the genetic operators) as the offspring.

## Simple Genetic Algorithm

A classic example of an evolutionary algorithm is the simple genetic algorithm. In it, individuals are encoded as bit strings of fixed length. The fitness function (as always) depends on the particular problem to be solved. At the beginning of the algorithm, the individuals in the population are initialized randomly and their fitness is calculated. These individuals then form the first population of parents.

Next, the parents to which the genetic operators will be applied are selected. The selection is performed so that the probability of selecting a given individual is directly proportional to its fitness; this method of selection is called roulette-wheel selection or fitness-proportionate selection. Selection is done with repetition, so some individuals may be selected multiple times while others are not selected at all.

The genetic operators – crossover and mutation – are then applied to these selected individuals. Crossover combines two solutions to create a new one. Here, so-called single-point crossover is typically used: one point in the individual is chosen at random, and a new offspring is created by copying the part before this point from one parent and the part from this point onwards from the other parent. After the crossover, mutation is applied; in the simple genetic algorithm, this is typically a mutation that randomly flips some bits in the individual (bit-flip mutation). The goal of mutation is to improve the diversity of the population. The newly created offspring then replace the original parents, and the algorithm continues with the next generation.

The algorithm can be described using the pseudocode below:

    P_0 <- initialize the population randomly
    t <- 0
    while not happy:
        evaluate the fitness of individuals in P_t
        M_t <- select parents from P_t using roulette-wheel selection
        O_t <- create the offspring population by applying the genetic operators to M_t
        P_{t+1} <- O_t
        t <- t + 1
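This loop can be sketched in Python. The function below is a minimal illustration, not a reference implementation: the default parameter values (population size, crossover and mutation probabilities) are assumptions chosen for the example, and the fitness function is supplied by the caller (it must be non-negative with a positive sum for roulette-wheel selection to work).

```python
import random


def simple_ga(fitness, length=20, pop_size=50, generations=100,
              cx_prob=0.8, mut_prob=0.05):
    """A minimal simple genetic algorithm on fixed-length bit strings.

    pop_size must be even, since offspring are created in pairs.
    """
    # P_0: random initial population of bit strings
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        # roulette-wheel selection with repetition (fitness-proportionate)
        mating_pool = random.choices(pop, weights=fits, k=pop_size)
        offspring = []
        for i in range(0, pop_size, 2):
            p1, p2 = mating_pool[i], mating_pool[i + 1]
            if random.random() < cx_prob:
                # single-point crossover
                point = random.randrange(1, length)
                c1 = p1[:point] + p2[point:]
                c2 = p2[:point] + p1[point:]
            else:
                c1, c2 = p1[:], p2[:]
            offspring += [c1, c2]
        # bit-flip mutation: each bit flips independently with prob mut_prob
        pop = [[1 - b if random.random() < mut_prob else b for b in ind]
               for ind in offspring]
    return max(pop, key=fitness)
```

For example, `simple_ga(sum, length=10)` runs the algorithm on the OneMax problem, where the fitness of a bit string is simply the number of ones in it.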


The fitness function is given by the problem we want to solve, and evolutionary algorithms always maximize it. A typical example is the so-called OneMax problem, where we want an individual represented as a binary string to contain as many ones as possible. In this case, we can define fitness as the number of ones in the individual. This example is not very useful in practice, but in the same way we can represent the well-known subset sum problem: finding a subset of a given set of numbers such that the sum of the elements in this subset equals a given target value. A 1 in the individual at position $i$ then means that the $i$-th item is selected into the subset. Fitness can then be computed as the sum of all numbers in the set minus the absolute value of the difference between the target value and the sum of the selected elements (we want to minimize this absolute difference, but because evolutionary algorithms maximize fitness, we subtract it from some suitably large number – e.g., the sum of all the numbers).
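The two fitness functions described above can be written directly; this is a small sketch, with the function and parameter names chosen for this example.

```python
def onemax(individual):
    """OneMax fitness: the number of ones in the bit string."""
    return sum(individual)


def subset_sum_fitness(individual, numbers, target):
    """Fitness for the subset sum problem; the i-th bit selects numbers[i].

    The absolute deviation from the target is subtracted from a large
    constant (the sum of all numbers), so the result is maximized exactly
    when the deviation is zero.
    """
    selected = sum(n for n, bit in zip(numbers, individual) if bit)
    return sum(numbers) - abs(target - selected)
```

For instance, with `numbers = [5, 3, 2]` and `target = 7`, the individual `[1, 0, 1]` selects 5 and 2, hits the target exactly, and therefore gets the maximum possible fitness of 10.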

## Variants of Evolutionary Algorithms

The simple genetic algorithm is just one example of evolutionary algorithms. We can see that many parts of it can easily be replaced by some other variant. For example, we can change selection, genetic operators, and the way a new population is created from the old one. In addition, we can change the way the solution is encoded in the individual.

### Selection

So far we have mentioned roulette-wheel selection, where the probability of selecting an individual is directly proportional to its fitness - more formally, the probability $p_i$ of selecting an individual $i$ is calculated as $p_i = \frac{f_i}{\sum_{j=1}^N f_j},$ where $f_j$ is the fitness of individual $j$. The big disadvantage of this selection is that it depends directly on fitness values, so if we change the fitness by adding some large constant to it, the differences between individuals will be reduced and selection will start to behave randomly. This can of course be exploited if we want to strengthen/weaken the effect of selection.
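The formula above maps directly to code. The sketch below computes the probabilities $p_i$ explicitly and then draws with replacement; it assumes non-negative fitness values with a positive sum, as the formula requires.

```python
import random


def roulette_wheel(population, fitnesses, k):
    """Select k individuals with repetition, p_i = f_i / sum_j f_j.

    Assumes all fitness values are non-negative and at least one is positive.
    """
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]
    # random.choices draws with replacement according to the given weights
    return random.choices(population, weights=probs, k=k)
```

Note how adding a large constant to every fitness value would flatten the `probs` list toward uniform, which is exactly the weakness described above.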

There are many alternatives to roulette-wheel selection. We will mention only one for now: tournament selection. First, a few (typically 2) individuals are randomly selected, their fitness values are compared, and the better of them is selected with high probability (say, 80%). The advantage of tournament selection is that its result depends only on the fitness ranking of the individuals, and it also works for negative fitness values.
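A binary tournament can be sketched as follows; the default probability of 0.8 matches the example above and is otherwise an arbitrary choice.

```python
import random


def tournament(population, fitnesses, k, p_best=0.8):
    """Binary tournament selection: compare two random individuals and
    take the better one with probability p_best, otherwise the worse one.

    Only the fitness ranking matters, so negative fitness values are fine.
    """
    selected = []
    for _ in range(k):
        i, j = random.sample(range(len(population)), 2)
        better, worse = (i, j) if fitnesses[i] >= fitnesses[j] else (j, i)
        chosen = better if random.random() < p_best else worse
        selected.append(population[chosen])
    return selected
```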

### Genetic Operators

Genetic operators are closely related to how an individual is encoded. For binary encoding and other encodings based on fixed-length sequences, for example, the aforementioned single-point crossover is often used. It can be generalized to $n$-point crossover, where $n$ crossover points are chosen instead of one, and the children are created by alternately copying the segments between the points from one parent and the other. In addition, we sometimes consider so-called uniform crossover, where for each position we decide at random from which parent the value is copied into the child.
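The three crossover variants can be sketched for list-encoded individuals as follows; each returns a pair of children, and the uniform variant uses a 50% chance per position, which is the usual but not the only choice.

```python
import random


def one_point_crossover(p1, p2):
    """Split both parents at one random point and swap the tails."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]


def n_point_crossover(p1, p2, n):
    """Split at n random points; children alternate segments between parents."""
    points = sorted(random.sample(range(1, len(p1)), n))
    c1, c2 = [], []
    last = 0
    for idx, point in enumerate(points + [len(p1)]):
        src1, src2 = (p1, p2) if idx % 2 == 0 else (p2, p1)
        c1 += src1[last:point]
        c2 += src2[last:point]
        last = point
    return c1, c2


def uniform_crossover(p1, p2):
    """For each position, decide at random which parent supplies the value."""
    c1, c2 = [], []
    for a, b in zip(p1, p2):
        if random.random() < 0.5:
            c1.append(a); c2.append(b)
        else:
            c1.append(b); c2.append(a)
    return c1, c2
```

In all three variants, every position of the two children together holds exactly the two values the parents held at that position; the operators only redistribute the parents' genetic material.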

### Transfer of Individuals to the Next Generation

In the simple genetic algorithm, the next generation is made up directly of the offspring created in the previous generation. For this reason, the number of parents and offspring must always be the same. Another disadvantage is that the genetic operators can destroy the best solution found so far. The loss of the best solution is prevented by a technique called elitism. In algorithms with elitism, typically a small fraction of the best parents is copied directly to the next generation. We then generate only enough offspring to fill the rest of the population, or we generate more and select only some of them – again, for example, using roulette-wheel or tournament selection.
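One way to sketch elitism is shown below. Here the remaining slots are simply filled with the best offspring by truncation; as noted above, roulette-wheel or tournament selection over the offspring are equally valid alternatives. The elite fraction of 10% is an assumed default.

```python
def next_generation_with_elitism(parents, offspring, fitness, elite_frac=0.1):
    """Copy the best elite_frac of the parents unchanged, then fill the
    rest of the new population with the best offspring (by truncation)."""
    pop_size = len(parents)
    n_elite = max(1, int(elite_frac * pop_size))
    elites = sorted(parents, key=fitness, reverse=True)[:n_elite]
    rest = sorted(offspring, key=fitness, reverse=True)[:pop_size - n_elite]
    return elites + rest
```

Because the elites are copied without any mutation or crossover, the best fitness in the population can never decrease from one generation to the next.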

It is also possible to generate a different number of offspring than there are parents. We can then describe the handling of the population using the notation from evolution strategies as $(\mu, \lambda)$ or $(\mu + \lambda)$. The notation indicates that a population of $\mu$ parents generates $\lambda$ offspring. In the former case, the next population is selected as the $\mu$ best of the $\lambda$ offspring (and thus necessarily $\lambda > \mu$); in the latter case, the next population is selected as the $\mu$ best of both parents and offspring (and sometimes even $\lambda = 1$ – in so-called steady-state algorithms).
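The two environmental selection schemes can be sketched as follows; both use simple truncation by fitness, and the function names are chosen for this example.

```python
def comma_selection(parents, offspring, fitness, mu):
    """(mu, lambda) selection: the next generation consists of the mu best
    offspring only; the parents are always discarded."""
    return sorted(offspring, key=fitness, reverse=True)[:mu]


def plus_selection(parents, offspring, fitness, mu):
    """(mu + lambda) selection: the next generation consists of the mu best
    individuals among parents and offspring combined."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]
```

Note that plus-selection implicitly includes elitism (a parent is only replaced by a better offspring), while comma-selection can lose the best solution found so far, which may help escape local optima.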

In the next lecture, we will look at other ways of encoding individuals and the genetic operators associated with them.