Swarms and Colonies

In today’s lecture, we will look at other nature-inspired optimization algorithms: the Particle Swarm Optimization (PSO) algorithm, which is inspired by the movement of flocks of birds, and Ant Colony Optimization (ACO), which is inspired by the behavior of ants. Finally, we will briefly look at other algorithms inspired by nature.

Particle Swarm Optimization (PSO)

Particle Swarm Optimization (PSO) is an optimization algorithm inspired by the behavior of schools of fish or flocks of birds. It is an algorithm designed for continuous optimization that uses a set (population) of particles. Each particle is represented by two vectors in $\mathbb{R}^n$ - its position ($x$) and its velocity ($v$). Additionally, each particle remembers the position where it achieved its best fitness value so far ($p_{best}$), and the whole population shares the best position found so far by any particle ($g_{best}$).

The entire PSO algorithm is then very simple. It is based on the fact that each particle moves through the space and is attracted both to the place where it found its own best solution and to the place where the best solution found by the whole population lies. Initially, the particle positions are initialized randomly in the search space, and then the algorithm iteratively updates the velocities and positions of all particles according to the equations

\[v \gets \omega v + \varphi_p r_p (p_{best} - x) + \varphi_b r_b (g_{best} - x),\]

\[x \gets x + v,\]

where $r_p, r_b$ are random numbers between 0 and 1 (often drawn separately for each coordinate) and $\omega, \varphi_p, \varphi_b$ are parameters that determine the inertia and how strongly a particle is attracted to its own best solution and to the global best solution.
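To make the update concrete, the following is a minimal sketch of the whole loop in Python/NumPy; the swarm size, parameter values, bounds and the example objective are illustrative assumptions, not part of the algorithm’s definition.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200,
        omega=0.7, phi_p=1.5, phi_b=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch for minimizing f: R^dim -> R (illustrative parameters)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                     # velocities
    p_best = x.copy()                                    # personal best positions
    p_val = np.apply_along_axis(f, 1, x)                 # personal best values
    g_best = p_best[np.argmin(p_val)]                    # global best position

    for _ in range(iters):
        r_p = np.random.rand(n_particles, dim)           # per-coordinate random factors
        r_b = np.random.rand(n_particles, dim)
        v = omega * v + phi_p * r_p * (p_best - x) + phi_b * r_b * (g_best - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < p_val                          # update personal bests
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)]                # update global best
    return g_best, p_val.min()

# example: minimize the sphere function
best_x, best_val = pso(lambda z: np.sum(z**2), dim=5)
```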

The PSO algorithm described above uses the so-called global topology, i.e. every particle can directly communicate with all the others (and thus share information about the best solution found globally). There are also PSO variants that use a local topology, i.e. particles can only communicate with a subset of the swarm, which then shares its local best solution. There are basically two options for choosing a topology - either geometric, where particles that are close to each other in the search space communicate, or social, where the topology is fixed in advance regardless of particle positions. In the latter case, a circular (ring) topology is often used, where each particle has only two neighbors.
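With a ring topology, the velocity update uses the best position found in the particle’s neighborhood instead of $g_{best}$. A small sketch of how such a local best could be computed, reusing the personal-best arrays from the sketch above (the ring indexing is an illustrative choice):

```python
def local_bests(p_best, p_val):
    """For each particle i, return the best personal-best position among
    particles i-1, i and i+1 in a ring topology (a sketch, not a fixed API)."""
    n = len(p_best)
    l_best = p_best.copy()
    for i in range(n):
        neighbors = [(i - 1) % n, i, (i + 1) % n]   # ring: two neighbors plus itself
        best = min(neighbors, key=lambda j: p_val[j])
        l_best[i] = p_best[best]
    return l_best   # used in place of g_best in the velocity update
```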

PSO is typically used for continuous optimization, and in this case the algorithm does not need to be modified in any way - the particle positions directly represent the candidate solutions and their quality is given by the value of the optimized function. However, the algorithm can also be used to solve other types of problems - discrete and combinatorial ones. For integer optimization, real-valued positions are often used and they are rounded only when a solution is evaluated. For more general combinatorial optimization, the operations used in the update equations must be redefined in some way; one possibility is, for example, to use set operations.
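For the integer case, one simple hedged sketch is to wrap the objective so that the real-valued particle positions are rounded only when they are evaluated (the helper name is made up for illustration):

```python
import numpy as np

def integer_objective(f_int):
    """Wrap an integer-valued objective so that the continuous PSO sketch above
    can optimize it: positions stay real-valued and are rounded only for evaluation."""
    return lambda x: f_int(np.rint(x).astype(int))

# usage (hypothetical): pso(integer_objective(my_integer_function), dim=5)
```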

Ant Colony Optimization (ACO)

Ant Colony Optimization is an algorithm based on the behavior of ants, which are able to find short paths between the anthill and a source of food by depositing pheromone in the environment. When searching for food, ants deposit a small amount of pheromone along their path from the anthill. After finding food, they return to the anthill and lay a much larger amount of pheromone along the way. If ants encounter a pheromone trail during their search for food, they tend to follow it - and the chance that they follow it is higher the stronger the trail is.

The ACO algorithm is based on this very metaphor. It is typically used to solve problems that can be represented as pathfinding in a graph, such as the traveling salesman problem, or routing in computer networks. The algorithm runs iteratively and repeats two steps - solution generation and pheromone update.

The generation of a solution by each ant happens in such a way that the ant starts at some (random) vertex of the graph and decides which vertex it will visit next. The probability of the transition from vertex $x$ to vertex $y$ depends on the amount of pheromone $\tau_{xy}$ between these two vertices and on the attractiveness $\nu_{xy}$ (basically a heuristic value) of the transition between them (in both cases, a higher value is better). The transition probability is then proportional to the value $\tau_{xy}^\alpha \nu_{xy}^\beta$, where $\alpha$ and $\beta$ are constants that determine the relative influence of the two quantities.
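A minimal sketch of this choice in Python; representing $\tau$ and $\nu$ as nested dictionaries (or matrices) indexed by vertices is an assumption of the sketch:

```python
import random

def next_vertex(current, unvisited, tau, nu, alpha=1.0, beta=2.0):
    """Pick the next vertex y with probability proportional to tau[x][y]**alpha * nu[x][y]**beta."""
    candidates = list(unvisited)
    weights = [(tau[current][y] ** alpha) * (nu[current][y] ** beta) for y in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```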

The pheromone update has two steps - in the first step, part of the pheromone evaporates; in the second step, pheromone is deposited on the edges along which the ants moved, in an amount that corresponds to the quality of the solution found by each ant. Specifically, after these two steps the amount of pheromone is updated according to the relationship \(\tau_{xy} \gets (1-\rho)\tau_{xy} + \sum_k \tau_{xy}^k,\) where $\rho$ is a constant that determines the rate of pheromone evaporation, the sum runs over all ants, $Q$ is an appropriately chosen constant, $L_k$ is the cost of the solution found by ant $k$ (so better solutions deposit more pheromone), and $\tau_{xy}^k = Q/L_k$ if ant $k$ passed along the edge $(x,y)$ and 0 otherwise.
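The corresponding pheromone update could then look as follows; representing each ant’s solution as a list of traversed edges together with its cost $L_k$ is an assumption of this sketch:

```python
def update_pheromone(tau, solutions, rho=0.1, Q=1.0):
    """solutions: list of (edges, L_k) pairs, where edges are the (x, y) pairs
    traversed by ant k and L_k is the cost of its solution."""
    # evaporation on every edge
    for x in tau:
        for y in tau[x]:
            tau[x][y] *= (1.0 - rho)
    # deposit Q / L_k on the edges each ant used
    for edges, L_k in solutions:
        for (x, y) in edges:
            tau[x][y] += Q / L_k
    return tau
```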

When using ACO to solve the TSP, $L_k$ is typically the length of the tour and $\nu_{xy}$ is the inverse of the length of the edge $(x, y)$.

When applied to other problems, typically only the path generation needs to be adapted. For example, the so-called Vehicle Routing Problem (VRP) is a problem where the goal is to plan routes for the delivery of goods from a warehouse to customers, minimizing the number of vehicles needed and the total distance traveled. Vehicles are limited by capacity (and e.g. by the time spent on the road). When generating a route, the ants (vehicles) choose the next customer to visit according to the relationship above for as long as the constraints allow (the goods still fit in the vehicle and the route is not too long). If no feasible customer remains, the ant returns to the warehouse. The rest of the algorithm is then the same as for the TSP.
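A rough sketch of this constrained route construction, reusing the `next_vertex` helper from the ACO sketch above; the capacity-only feasibility check, the data layout, and the assumption that every single demand fits into one vehicle are illustrative choices (a time limit would be handled analogously):

```python
def build_vrp_routes(customers, demand, capacity, tau, nu, depot=0):
    """Construct routes for one ant: visit customers while the capacity allows,
    return to the depot when no feasible customer remains (a sketch).
    Assumes each individual demand fits within the vehicle capacity."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, current = [depot], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break                                           # nothing fits -> back to depot
            nxt = next_vertex(current, feasible, tau, nu)       # same rule as for the TSP
            route.append(nxt)
            load += demand[nxt]
            unvisited.remove(nxt)
            current = nxt
        route.append(depot)
        routes.append(route)
    return routes
```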

Artificial Bee Colony

Artificial Bee Colony (ABC) is an optimization algorithm based on the foraging behavior of bees. The bees are divided into three groups - workers, onlooker bees and explorers. Each worker processes one food source (the positions of the sources encode candidate solutions). While processing its source, the worker also visits nearby food sources, and if one of them has better quality (better fitness), she replaces her current source with the new one. Then all the workers gather in the hive and exchange information about the quality of their sources, and the onlooker bees choose some of these food sources using roulette selection. At the same time, each worker remembers how long she has been processing a given source, and if this time exceeds a set limit, she abandons the source and becomes an explorer. Explorers search the space randomly, looking for new food sources.
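The following is a rough sketch of one iteration of a simplified ABC for minimization; the neighborhood move, the fitness used for roulette selection and the abandonment limit are simple illustrative choices rather than the canonical formulation:

```python
import numpy as np

def abc_step(sources, trials, f, limit=20, bounds=(-5.0, 5.0)):
    """One iteration of a simplified ABC for minimization.
    sources: (n, dim) array of food-source positions, trials: per-source counters."""
    n, dim = sources.shape
    vals = np.apply_along_axis(f, 1, sources)

    def try_neighbor(i):
        # worker/onlooker move: perturb one coordinate toward a random other source
        j, k = np.random.randint(dim), np.random.randint(n)
        cand = sources[i].copy()
        cand[j] += np.random.uniform(-1, 1) * (sources[i][j] - sources[k][j])
        if f(cand) < vals[i]:
            sources[i], vals[i], trials[i] = cand, f(cand), 0   # better source found
        else:
            trials[i] += 1                                       # count unsuccessful tries

    for i in range(n):                                   # worker bees
        try_neighbor(i)
    fitness = 1.0 / (1.0 + np.maximum(vals, 0))          # simple fitness for roulette selection
    probs = fitness / fitness.sum()
    for _ in range(n):                                   # onlooker bees pick sources by roulette
        try_neighbor(np.random.choice(n, p=probs))
    for i in range(n):                                   # exhausted sources are abandoned
        if trials[i] > limit:
            sources[i] = np.random.uniform(*bounds, dim) # explorer finds a new random source
            trials[i] = 0
    return sources, trials
```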

Artificial Immune Systems

Artificial immune systems (AIS) model various processes that take place in the immune systems of mammals. The techniques used vary, but a typical application of AIS is in machine learning, specifically in classification or anomaly detection.

For example, negative selection is inspired by the positive and negative selection of T-lymphocytes that takes place in the thymus during their maturation; in negative selection, lymphocytes that react to the body’s own cells are removed. Algorithms based on negative selection are typically used for anomaly detection or one-class classification. In both cases, the goal is to decide whether a given pattern is normal (i.e. resembles the training data) or not. The algorithm generates a number of different rules (detectors) and removes those that respond to the data from the training set; the remaining detectors are then used to flag anomalous patterns.
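A minimal sketch of such a negative-selection detector set for anomaly detection; the use of spherical detectors with radius `r` and Euclidean matching is an illustrative assumption:

```python
import numpy as np

def train_detectors(self_data, n_detectors=100, r=0.5, bounds=(0.0, 1.0)):
    """Generate random detectors and keep only those that do NOT match
    any pattern of the training ('self') data (a sketch; assumes such detectors exist)."""
    dim = self_data.shape[1]
    detectors = []
    while len(detectors) < n_detectors:
        d = np.random.uniform(*bounds, dim)
        if np.min(np.linalg.norm(self_data - d, axis=1)) > r:   # does not cover self data
            detectors.append(d)
    return np.array(detectors)

def is_anomaly(x, detectors, r=0.5):
    """A pattern is flagged as anomalous if some detector matches (covers) it."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= r))
```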

Clonal selection, on the other hand, tries to create rules that recognize some patterns well. First, a set of possible rules is generated, then those that better correspond to the required patterns (have a higher affinity) are cloned and, depending on their affinity, a so-called hyper-mutation is applied to them. The entire clonal selection algorithm actually resembles an evolutionary algorithm without crossover.
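A rough sketch of one generation of clonal selection over real-valued antibodies; the number of clones and the hyper-mutation that weakens with higher affinity are illustrative choices:

```python
import numpy as np

def clonal_selection_step(population, affinity, n_select=10, n_clones=5, sigma=0.5):
    """One generation: select the antibodies with the highest affinity, clone them,
    hyper-mutate the clones (less mutation for higher affinity) and keep the best."""
    aff = np.array([affinity(p) for p in population])
    order = np.argsort(-aff)                              # higher affinity first
    selected = [population[i] for i in order[:n_select]]
    clones = []
    for rank, p in enumerate(selected):
        strength = sigma * (rank + 1) / n_select          # weaker mutation for better antibodies
        for _ in range(n_clones):
            clones.append(p + np.random.normal(0, strength, size=p.shape))
    everyone = list(population) + clones
    everyone.sort(key=affinity, reverse=True)
    return everyone[:len(population)]                     # keep the population size constant
```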

Criticism of nature-inspired techniques

A large number of new heuristics inspired by nature keep emerging. This is often criticized, as the proposed algorithms frequently use complicated metaphors to hide the fact that they are not actually that new. When studying these algorithms, it is always good to think about what is really new in them, and whether the metaphor merely obscures the description of the algorithm in order to make it look new and interesting.