Artificial Intelligence, computational intelligence, and their applications

Introduction and history

The field of artificial intelligence can be roughly divided into two parts: symbolic artificial intelligence and computational intelligence. Symbolic artificial intelligence solves problems using a symbolic description of the world in some formal language (e.g. logic), for example in planning and knowledge representation. In contrast, computational intelligence focuses on learning behavior from available data and observations. Nature-inspired algorithms make up a large part of computational intelligence.

Nature-inspired algorithms comprise a wide range of techniques that are rapidly gaining popularity in artificial intelligence. Among the best known are artificial neural networks and evolutionary algorithms. In particular, artificial neural networks (in the form of so-called deep learning) today achieve excellent results and surpass traditional methods in image processing, reinforcement learning, machine translation, and other fields.

Neural networks are inspired by the functioning of the nervous system. They consist of simple computing units, so-called neurons, which are connected by weighted connections. Training a neural network means adjusting these weights so that the outputs of the network match the desired outputs. Evolutionary algorithms are in turn optimization algorithms inspired by Darwinian evolution. They work with a set (population) of candidate solutions (individuals). They run in iterations (generations), and in each generation they apply genetic operators (crossover and mutation) to the population. Individuals that represent better solutions have a better chance of producing offspring.
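The generational loop described above can be sketched in a few lines of Python. Everything concrete here is an illustrative choice, not something from the text: the individuals are bit strings, the fitness is the OneMax function (the number of ones), selection is by tournament, and all parameters are arbitrary.

```python
import random

random.seed(4)  # fixed seed so the sketch is reproducible

def evolve(fitness, genome_len=20, pop_size=30, generations=50,
           crossover_prob=0.8, mutation_prob=0.05):
    """A minimal generational evolutionary algorithm over bit strings."""
    # Initial population: random candidate solutions (individuals).
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: better individuals are more likely parents.
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            # One-point crossover combines the two parents.
            if random.random() < crossover_prob:
                cut = random.randrange(1, genome_len)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation flips each bit with a small probability.
            child = [b ^ 1 if random.random() < mutation_prob else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax: maximize the number of ones in the bit string.
best = evolve(fitness=sum)
```

With these settings the population quickly converges toward the all-ones string; swapping in a different fitness function adapts the same loop to other problems.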

The above-mentioned methods are by no means new. The first experiments with artificial neural networks date back to the 1940s. In 1958, Rosenblatt presented the perceptron, which later became the basis of modern neural networks. However, the development of neural networks in the 1970s and early 1980s was marked by distrust in their abilities, based mainly on the book Perceptrons by Marvin Minsky and Seymour Papert. Further development came only in the second half of the 1980s, after the (re)discovery of the backpropagation algorithm, which is still used to train neural networks today. Currently, various architectures of (deep) neural networks achieve the best known results in many areas of machine learning.
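To give the flavor of backpropagation, the following sketch trains a single sigmoid neuron by the same principle: the error is propagated backward through the chain rule to obtain a gradient for each weight. The toy data, learning rate, and iteration count are invented for the illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: output 1 for input 1.0 and 0 for input -1.0 (illustrative data).
data = [(1.0, 1.0), (-1.0, 0.0)]

w, b, lr = 0.0, 0.0, 1.0  # weight, bias, learning rate
for _ in range(500):
    for x, target in data:
        y = sigmoid(w * x + b)              # forward pass
        # Backward pass: chain rule for the squared error (y - target)**2 / 2;
        # the sigmoid's derivative is y * (1 - y).
        delta = (y - target) * y * (1 - y)
        w -= lr * delta * x                 # gradient step on the weight
        b -= lr * delta                     # gradient step on the bias
```

In a multi-layer network, backpropagation applies the same chain rule repeatedly, passing each layer's delta back to the layer before it.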

The idea of evolutionary algorithms is not new either. The first evolution strategies (algorithms for continuous optimization, characterized by the fact that they automatically adapt their own parameters) date back to the 1960s and early 1970s. The genetic algorithm as such was introduced by John Holland in the 1970s. Today, evolutionary algorithms are used not only for optimization itself but also, for example, for the design of electronic circuits, or for training neural networks in reinforcement learning.

Machine learning and optimization

The main areas of application of nature-inspired algorithms are machine learning and optimization. In fact, as we will see, these areas have a lot in common, since the goal of machine learning is typically to adjust the parameters of a model to best fit the available data. However, the area of optimization is somewhat more general.
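This connection can be made concrete in a few lines: fitting even the simplest model is an optimization problem. The sketch below fits a line y = a*x + b to a handful of points by gradient descent on the mean squared error; the data and the step size are made up for the illustration.

```python
# Data lying exactly on the line y = 2x + 1 (illustrative).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

a, b, lr = 0.0, 0.0, 0.05  # model parameters and learning rate
for _ in range(2000):
    # Gradient of the mean squared error with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a  # step against the gradient
    b -= lr * grad_b
```

After enough steps the parameters approach a = 2 and b = 1: learning the model is nothing other than minimizing an error function over its parameters.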

The entire field of machine learning can be divided into three areas:

  1. In supervised learning, we are given inputs and their respective outputs. The goal is to find a model that assigns the correct output to a given input. Outputs can be of two types, either categorical or numerical. In the former case, the goal is to assign the object to the correct category (e.g. to decide whether a picture shows a dog or a cat), and the problem is called classification. In the latter case, the goal is to predict a numerical value from the input data (e.g. the price of a property based on information about it). Such a problem is called regression.
  2. In unsupervised learning, only inputs without desired outputs are given. The goal is typically to divide the inputs into groups of similar inputs (clustering), or to learn how the inputs are distributed and generate new, similar data (generative models).
  3. In reinforcement learning, the goal is to learn the behavior of an agent so that it solves a given problem as well as possible, based on feedback from the environment in which it acts. We can think of this as learning a new game by trying different actions and observing how they affect the score.
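As a miniature example of supervised classification, the sketch below implements a 1-nearest-neighbour classifier: a new point receives the label of the closest training example. The points and the two classes are invented for the illustration.

```python
# Tiny labelled training set: two clusters with made-up labels.
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((4.0, 4.2), "dog"), ((4.5, 3.9), "dog")]

def classify(point):
    """Return the label of the training example closest to `point`."""
    def dist2(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(train, key=dist2)[1]

print(classify((1.1, 0.9)))  # → cat
print(classify((4.2, 4.0)))  # → dog
```

Even this trivial learner illustrates the supervised setting: the training pairs define the desired outputs, and the model extends them to new inputs.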

Applications and results

Nature-inspired methods have achieved many interesting results in recent years. In the field of evolutionary algorithms, the Humies awards are presented every year for results obtained with evolutionary algorithms that are competitive in quality with human-designed solutions. One nice example is an antenna design obtained through genetic programming, which was used in a NASA test mission. Evolutionary algorithms can also be used to evolve neural networks, e.g. to create an artificial intelligence for the game Mario.

Neural networks, and deep neural networks in particular, achieve very good results, for example, in image classification and processing. One nice result is the automatic generation of captions for images. Deep neural networks also achieve very good results in reinforcement learning, where they are able to beat human players in many games, such as Go, StarCraft, or many Atari games.