Exam format and requirements
The exam consists of two parts. In the first part, you have to solve a given problem using the techniques covered in the course. It can be, for example, an optimization problem solvable with evolutionary algorithms, or a machine learning problem solvable with neural networks. In the former case, you need to design the whole evolutionary algorithm (individual encoding, operators); in the latter, you need to describe a suitable neural network architecture and explain how to train it (including the loss function). In both cases, I expect you to be able to explain your choices and discuss what other ways of solving the problem there are.
In the second part of the exam, you will get two questions covering the topics from the lecture.
You will have enough time to solve the first part (at least 30 minutes). In the second part, you will also have time to prepare your answers. The second part follows directly after the first one.
Detailed list of requirements
- Basics of machine learning
- explain the differences between the different types of machine learning (supervised learning, unsupervised learning, reinforcement learning)
- explain the differences between supervised learning tasks (regression, classification)
- describe the types of features (categorical, ordinal, numerical) and their preprocessing (encoding of categorical and ordinal features, scaling of numerical features)
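The preprocessing steps above can be sketched in a few lines; the feature values below are illustrative examples, not data from the course:

```python
# One-hot encoding for a categorical feature and min-max scaling
# for a numerical one; values are illustrative.

def one_hot(values):
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

def min_max_scale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

colors = ["red", "green", "red", "blue"]
encoded = one_hot(colors)          # columns ordered: blue, green, red
heights = [150.0, 175.0, 200.0]
scaled = min_max_scale(heights)    # maps values into [0, 1]
```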
- Reinforcement learning
- describe the reinforcement learning problem intuitively (agent, environment, strategy search)
- describe the environment as a Markov decision process
- define reinforcement learning as a problem of search for a strategy maximizing the agent’s total reward
- define the terms state value and action value
- describe the basic idea of Monte-Carlo methods
- explain Q-learning (including the formula for updating the Q matrix)
- describe the SARSA algorithm
- explain how to use Q-learning in spaces with continuous states and actions
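The tabular Q-learning update asked for above can be sketched as follows; the tiny two-state MDP and the step sizes are illustrative assumptions:

```python
# Tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Single Q-learning update for transition (s, a, r, s_next)."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Toy MDP: two states, two actions, all Q-values start at zero.
Q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
q_update(Q, s=0, a=1, r=1.0, s_next=1)   # Q[0][1] becomes 0.1 * (1 + 0) = 0.1
```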
- A simple genetic algorithm
- describe the basic version of the evolutionary algorithm with binary encoding
- describe basic genetic operators (one-point and $n$-point crossover, mutation)
- describe roulette-wheel and tournament selection
- explain the advantages and disadvantages of tournament and roulette-wheel selection
- explain what a fitness function is
- describe elitism
- describe methods for environmental selection (transfer of individuals to the next generation), $(\mu, \lambda)$ and $(\mu + \lambda)$
- describe the extension of the simple GA to problems with integer encoding
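A minimal sketch of the simple GA with binary encoding, tournament selection, one-point crossover, bit-flip mutation, and elitism; the fitness function (OneMax, counting ones), population size, and rates are illustrative choices:

```python
import random

def fitness(ind):
    return sum(ind)                                     # OneMax: count ones

def tournament(pop, k=2):
    """Pick the best of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def one_point_crossover(a, b):
    p = random.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

def evolve(pop_size=20, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)                   # elitism: keep the best
        offspring = [elite]
        while len(offspring) < pop_size:
            c1, c2 = one_point_crossover(tournament(pop), tournament(pop))
            offspring += [mutate(c1), mutate(c2)]
        pop = offspring[:pop_size]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```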
- Evolutionary algorithms for continuous optimization
- define a continuous optimization problem
- describe genetic operators used in continuous optimization (arithmetic crossover, biased and unbiased mutations)
- explain the difference between separable and non-separable functions
- describe the differential evolution algorithm and its advantages
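Differential evolution can be sketched in its common DE/rand/1/bin form; the sphere function as the objective and the parameter values (F, CR, population size) are illustrative defaults, not prescribed by the lecture:

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def differential_evolution(f, dim=5, pop_size=20, F=0.8, CR=0.9, iters=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # mutation: difference vector of two random individuals added to a third
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)          # ensure at least one new gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):               # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

random.seed(1)
best = differential_evolution(sphere)
```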
- Evolutionary algorithms for combinatorial optimization
- describe mutations for permutation encoding
- describe crossovers used in permutation encoding (OX, PMX, ER)
- explain the use of evolutionary algorithms for the traveling salesman problem
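The permutation operators above can be sketched as a swap mutation and the order crossover (OX); the fill-in order used here is one common OX variant:

```python
import random

def swap_mutation(perm):
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the rest in the order of p2."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

random.seed(2)
tour = list(range(8))
child = order_crossover(tour, list(reversed(tour)))
```

Both operators preserve the permutation property, which is why plain bit-string crossover cannot be used for TSP tours.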
- Linear and Cartesian GP, grammatical evolution
- describe linear genetic programming (individual encoding, operators)
- describe Cartesian genetic programming (individual encoding, operators)
- describe grammatical evolution (individual encoding, operators)
- explain how to solve the problem of too short or too long individuals in grammatical evolution
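The grammatical-evolution genotype-to-phenotype mapping can be sketched as below: each codon selects a production by taking it modulo the number of rules for the current nonterminal, and wrapping reuses the codons when the individual is too short. The grammar and codon values are illustrative:

```python
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]],
}

def map_genotype(codons, start="<expr>", max_wraps=3):
    seq = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in seq):
        if i == len(codons):                  # wrapping: reuse the codons
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None                   # still nonterminals left: invalid
        j = next(k for k, s in enumerate(seq) if s in GRAMMAR)
        rules = GRAMMAR[seq[j]]
        choice = rules[codons[i] % len(rules)]
        seq = seq[:j] + choice + seq[j + 1:]
        i += 1
    return "".join(seq)
```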
- Tree-based genetic programming
- describe tree-based genetic programming and its genetic operators
- describe how to deal with constants in tree-based genetic programming
- describe typed genetic programming
- describe automatically defined functions
- explain how to prevent looping/infinite recursion for automatically defined functions
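Tree-based GP individuals can be sketched as nested tuples with an evaluator and a subtree crossover; the function set (+, *) and the example trees are illustrative assumptions:

```python
import random

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):        # terminal constant
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

def subtrees(tree, path=()):
    """All (path, subtree) pairs, used to pick crossover points."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    if not path:
        return new
    return tuple(replace(c, path[1:], new) if j == path[0] else c
                 for j, c in enumerate(tree))

def crossover(t1, t2):
    """Swap a random subtree of t2 into a random position of t1."""
    p1, _ = random.choice(list(subtrees(t1)))
    _, s2 = random.choice(list(subtrees(t2)))
    return replace(t1, p1, s2)

tree = ("+", ("*", "x", "x"), 1)              # represents x*x + 1
random.seed(5)
child = crossover(tree, ("*", "x", 2))
```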
- Perceptron neural networks
- describe how one perceptron works
- describe the perceptron training algorithm and explain when it converges
- describe the structure of multilayer perceptron networks
- describe the activation functions used in multilayer perceptrons (sigmoid, tanh, ReLU)
- describe the method for training neural networks (backpropagation)
- derive the rules for adapting the weights in the backpropagation method (at least for the output layer + an idea for the hidden layers)
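The perceptron training rule above can be sketched on a linearly separable toy problem (the AND function); the learning rate and data are illustrative, and convergence on separable data is guaranteed by the perceptron convergence theorem:

```python
def train_perceptron(data, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y                  # weights change only on mistakes
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```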
- RBF networks
- describe the RBF unit and the architecture of the RBF network
- explain the principle of training of RBF networks
- describe the $k$-means algorithm
- compare the geometric properties of RBF networks and multilayer perceptrons
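The k-means algorithm used to place RBF centers can be sketched in 1D; the data and k are illustrative:

```python
import random

def k_means(points, k, iters=100):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step: nearest center
            idx = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # update step: means
                   for i, c in enumerate(clusters)]
    return sorted(centers)

random.seed(3)
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
centers = k_means(data, k=2)                  # two well-separated clusters
```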
- Convolutional networks
- describe the function of the convolution filter
- describe the function of pooling layers in a convolutional network
- describe the architecture of a convolutional neural network
- explain what adversarial examples are and how they affect the practical deployment of neural networks
- describe the FGSM algorithm for creating adversarial examples
- explain the idea of artistic style transfer
- explain the basic ideas of generative adversarial networks (according to the text on the website)
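The convolution filter from the first bullet can be sketched as a plain 2D convolution (valid padding, stride 1); the vertical-edge kernel and the step image are illustrative:

```python
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge filter responds where the image steps from dark to bright.
image = [[0, 0, 1, 1, 1]] * 4
kernel = [[-1, 0, 1]] * 3
out = conv2d(image, kernel)
```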
- Recurrent neural networks
- define what recurrent neural networks are
- explain where recurrent networks are used
- explain the training of recurrent networks using the backpropagation-through-time algorithm
- explain the problem with exploding and vanishing gradients
- describe the principle of Echo State networks
- describe the training of Echo State networks
- describe the LSTM cell and the LSTM network architecture
- explain the advantages of Echo State networks and LSTM networks compared to basic recurrent networks
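A scalar recurrent unit unrolled over a sequence can be sketched as below; the weights are illustrative. Because the same weights are reused at every step, the gradients in backpropagation through time are products over time steps, which is exactly why they can vanish or explode:

```python
import math

def rnn_forward(xs, w_x=0.5, w_h=0.9, b=0.0):
    """Unroll a one-unit recurrent network over the input sequence xs."""
    h = 0.0
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)   # same weights at every step
        states.append(h)
    return states

# An impulse at t=0 fades away in the hidden state.
states = rnn_forward([1.0, 0.0, 0.0, 0.0])
```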
- Neuroevolution
- describe the use of evolutionary algorithms for training weights in a neural network
- describe the NEAT algorithm - individual representation, operators
- explain the meaning of innovation numbers in the NEAT algorithm
- describe the idea behind the HyperNEAT algorithm
- describe the extension of the NEAT algorithm for the search of neural network architectures (coDeepNEAT algorithm)
- explain the idea of the novelty search algorithm
- compare novelty search and evolutionary algorithms and discuss their advantages/disadvantages
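Training network weights with an evolutionary algorithm instead of gradient descent (the first bullet) can be sketched with a simple (1+1)-ES: mutate the weight vector with Gaussian noise and keep the mutant if it lowers the error. The tiny 1-2-1 network, the target function, and the mutation strength are toy assumptions:

```python
import math
import random

def forward(w, x):
    """Tiny 1-2-1 network with tanh hidden units; w holds all 7 weights."""
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def mse(w, data):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

random.seed(6)
data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]   # fit y = x^2
w = [random.gauss(0, 1) for _ in range(7)]
start_err = mse(w, data)
for _ in range(3000):
    child = [wi + random.gauss(0, 0.1) for wi in w]         # Gaussian mutation
    if mse(child, data) <= mse(w, data):                    # keep improvements
        w = child
```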
- Deep reinforcement learning
- explain the deep Q-learning algorithm
- compare Q-learning and deep Q-learning
- describe tricks with experience replay and target network
- explain the effect of experience replay and target network on deep Q-learning
- describe the DDPG algorithm
- describe actor-critic approaches
- describe the advantage (in actor-critic approaches) and justify why it is used
- describe the A3C method
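The experience replay trick can be sketched as a bounded buffer of transitions sampled uniformly at random, which breaks the correlation between consecutive training samples; the capacity and the dummy transitions are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniform random minibatch, decorrelating consecutive experiences."""
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(200):                          # only the last 100 survive
    buf.push(t, 0, 0.0, t + 1, False)
batch = buf.sample(4)
```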
- Particle Swarm Optimization
- describe the update of positions and velocities in PSO
- describe the topologies used in PSO (global, geometric, social)
- explain the influence of topologies on the PSO algorithm
- explain how to use PSO for solving continuous optimization problems
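The PSO position and velocity updates can be sketched with the global (star) topology on the sphere function; the inertia weight and acceleration coefficients are common defaults, not values fixed by the lecture:

```python
import random

def pso(f, dim=2, swarm=15, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]               # personal bests
    gbest = min(pbest, key=f)[:]              # global best (star topology)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

random.seed(4)
best = pso(lambda x: sum(v * v for v in x))
```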
- Ant Colony Optimization
- explain the basic concepts of ACO - pheromone, attractiveness
- describe how solutions are generated in ACO
- describe the pheromone update based on the quality of solution
- describe the use of ACO for solving the travelling salesman problem and the vehicle routing problem
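The pheromone update can be sketched as evaporation followed by deposits proportional to solution quality; the evaporation rate rho, the constant Q, and the toy edges are illustrative:

```python
def update_pheromone(pheromone, solutions, rho=0.5, Q=1.0):
    """pheromone: dict edge -> level; solutions: list of (edges, length)."""
    for edge in pheromone:
        pheromone[edge] *= (1 - rho)          # evaporation
    for edges, length in solutions:
        for edge in edges:                    # shorter tours deposit more
            pheromone[edge] += Q / length
    return pheromone

tau = {('A', 'B'): 1.0, ('B', 'C'): 1.0, ('A', 'C'): 1.0}
ants = [([('A', 'B'), ('B', 'C')], 2.0), ([('A', 'C')], 4.0)]
update_pheromone(tau, ants)
```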
- Artificial Life
- explain the differences between individual types of artificial life (soft, hard, wet)
- explain the differences between strong and weak artificial life
- describe 1D and 2D cellular automata
- describe the Game of Life rules
- describe the rules of Langton’s ant
- explain the concept of highway in Langton’s ant
- describe the Tierra system
- describe individuals in the Tierra system
- explain the way of addressing (defining jump targets) in the Tierra system
- describe examples of individuals created in the Tierra system (parasites)
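One step of the Game of Life can be sketched on a sparse grid of live cells (cells outside the set count as dead); the "blinker" oscillator used below is a standard example:

```python
from collections import Counter

def life_step(alive):
    """alive: set of (row, col) live cells; returns the next generation."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in alive
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # a cell lives if it has 3 neighbours, or 2 and was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(1, 0), (1, 1), (1, 2)}            # horizontal line of three cells
```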