Dissertation theses
Advanced methods of evolutionary black-box optimization
Optimization tasks encountered in real-world applications increasingly often optimize objectives that are not mathematically expressed functions, but results of computer simulations or experimental measurements. That kind of optimization, called black-box optimization, presents two great challenges: 1. only the values of such a black-box objective can be obtained, not its gradient or higher derivatives; 2. the evaluation of the objective typically costs much time and/or money. As far as the first challenge is concerned, evolutionary algorithms have proven very successful during the last decades at optimization using only the values of the objective. However, they typically require a large number of evaluations, which conflicts with the second challenge. That conflict has incited intense research into evolutionary black-box optimization during the last decade, bringing a broad spectrum of topics suitable for PhD theses.
Artificial neural networks in black-box optimization
As black-box optimization, we refer to optimization in which the mathematical expression of the optimized function is not used (typically because no such expression is known), and the optimization algorithm only has its values at specific points available. These values are usually obtained empirically by measurement or through experiments, either physical or in the form of simulations. Black-box optimization uses algorithms that make almost no assumptions about the mathematical properties of the optimized function, most often evolutionary algorithms and other nature-inspired algorithms such as particle swarms. Since these algorithms work only with the values of the optimized function, they approach its optimum much more slowly than optimization methods for smooth functions, which also use information about the gradient or second derivatives of the optimized function. This property is particularly disadvantageous in conjunction with the fact that empirically obtaining a value of the optimized function is usually quite expensive and time-consuming.

However, evolutionary algorithms can be significantly accelerated if the empirical black-box function is used only occasionally for evaluating the optimized function, whereas mostly only its sufficiently accurate regression model is evaluated. Among the regression models used for this purpose, artificial neural networks have been employed for about 20 years, first multilayer perceptrons and later networks with radial basis functions. Due to the current popularity of modern types of neural networks, often referred to as deep neural networks, new approaches to speeding up black-box optimization based on modern neural networks have been proposed in recent years, such as deep Gaussian processes, Bayesian neural networks, optimization in a lower-dimensional latent space that a generative neural network maps into the space in which the inputs of the optimized black-box function lie, or networks of the GAN type (generative adversarial networks), whose two components are used for the exploration and exploitation components of the optimization.
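For illustration, the following minimal sketch (in Python) shows the surrogate-assisted loop described above, with scikit-learn's MLPRegressor standing in for the neural-network regression model. The function expensive_f, the population sizes, and all other settings are illustrative assumptions, not a particular published method: in each generation, many offspring are pre-screened on the cheap surrogate and only a few of the most promising ones are evaluated with the expensive black-box function.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dim, n_true_evals_per_gen = 5, 5

def expensive_f(x):            # stands in for a costly simulation or measurement
    return float(np.sum(x ** 2))

# initial design, evaluated with the true black-box function
X = rng.uniform(-5, 5, size=(40, dim))
y = np.array([expensive_f(x) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)

for gen in range(20):
    surrogate.fit(X, y)                        # (re)train the regression model
    parents = X[np.argsort(y)[:10]]            # truncation selection
    idx = rng.integers(0, len(parents), size=200)
    offspring = parents[idx] + rng.normal(0, 0.5, size=(200, dim))   # mutation
    pred = surrogate.predict(offspring)        # cheap surrogate pre-screening
    best = offspring[np.argsort(pred)[:n_true_evals_per_gen]]
    y_new = np.array([expensive_f(x) for x in best])    # few true evaluations
    X, y = np.vstack([X, best]), np.concatenate([y, y_new])

print("best value found:", y.min())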
Explainability of graph neural networks
Graph data are used to store knowledge about many important areas of the present world, such as computer networks, social networks, chemical molecules, communication systems, industrial processes, or text documents. However, data-based methods for data analysis and modelling, such as classification, regression, and clustering, have been developed primarily for vector data, so graph data first need to be represented in some vector space before those methods can be used. Artificial neural networks are most successful at such representation. The need to learn graph representations gave rise to a specific kind of neural networks, called graph neural networks. However, graph neural networks share the property of the overwhelming majority of neural networks that the transformation of network inputs to outputs is a black-box mapping, which does not make it possible to explain the output for a given network input. In connection with traditional neural networks, primarily multilayer perceptrons and radial basis function networks, attention has been paid since the 1990s to methods that describe the dependence of the network output on its input by means of logical implications and equivalences, or that explain the value of the output for a given input in some other way. In the case of graph neural networks, however, research into explainability methods is only starting. The proposed thesis should also contribute to it.
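As a concrete illustration of one of the simplest explanation ideas, the following Python sketch scores each edge of a toy graph by how much its removal changes the output of a one-layer message-passing network; the tiny network, the graph, and the graph-level readout are all illustrative assumptions rather than an established explainability method.

import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_feat = 5, 4
Xn = rng.normal(size=(n_nodes, n_feat))              # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]     # undirected toy graph
W = rng.normal(size=(n_feat, 1))                     # fixed "trained" weights

def gnn_output(active_edges):
    A = np.eye(n_nodes)                              # adjacency with self-loops
    for i, j in active_edges:
        A[i, j] = A[j, i] = 1.0
    A = A / A.sum(axis=1, keepdims=True)             # mean aggregation
    H = np.tanh(A @ Xn @ W)                          # one message-passing layer
    return float(H.mean())                           # graph-level readout

full = gnn_output(edges)
for k, e in enumerate(edges):
    reduced = gnn_output(edges[:k] + edges[k + 1:])  # drop one edge
    print(f"edge {e}: importance {abs(full - reduced):.4f}")

Edges whose removal changes the output most are reported as the most important for the prediction, which is the basic intuition behind perturbation-based explainers for graph neural networks.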
Making use of active learning in optimization
Evolutionary algorithms have been, during the last 20 years, among the most successful methods for solving non-traditional optimization problems, such as the search for the most suitable documents containing required information, the discovery of the most interesting knowledge in available data, or other kinds of optimization tasks in which the values of the objective function can be obtained only empirically. Because evolutionary algorithms employ only the values of the objective function, they approach its optimum much more slowly than optimization methods for smooth functions, which also make use of information about the gradient of the objective function, and possibly about its second derivatives. This property of evolutionary algorithms is particularly disadvantageous in the context of a costly and time-consuming empirical way of obtaining the values of the objective function. However, evolutionary algorithms can be substantially sped up if they evaluate the empirical objective function only occasionally, and mostly evaluate only a sufficiently accurate regression model of that function. It is the accuracy of the model that determines how successful a surrogate of the original empirical function it will be. Therefore, the model is made more accurate after each new generation of points in which the empirical function has been evaluated, through re-learning that includes those points. However, it is possible to go even further and to consider, when choosing points for empirical evaluation, not only the value of the empirical function, but also how the points contribute to making the model more accurate during re-learning. Such an approach is termed active learning. The use of active learning to accelerate evolutionary algorithms is, however, only at its very beginning, and the proposed thesis should also support it.
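One possible form of such a selection criterion is sketched below in Python: candidate points are ranked by combining their predicted objective value with the disagreement of a small bootstrap committee of models, so that points which are expected to improve the model also get a chance to be evaluated empirically. The committee size, the trade-off weight beta, and the model choice are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def select_for_evaluation(X_train, y_train, candidates, n_select=5, beta=1.0):
    rng = np.random.default_rng(0)
    committee = []
    for seed in range(5):                      # bootstrap committee of models
        idx = rng.integers(0, len(X_train), size=len(X_train))
        m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=seed).fit(X_train[idx], y_train[idx])
        committee.append(m)
    preds = np.stack([m.predict(candidates) for m in committee])
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    # for minimization: a low predicted value or a high committee
    # disagreement (model uncertainty) both make a candidate attractive
    score = mean - beta * std
    return candidates[np.argsort(score)[:n_select]]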
Transfer learning in black-box optimization
Transfer learning is a method of algorithmically extracting knowledge from a solved problem and incorporating it into the solution of another problem. It began to be studied more deeply about 20 years ago, in connection with the development of modern types of neural networks, often referred to as deep networks. It allows using a network trained on a large amount of data to train another network of similar quality on a much smaller amount of data.
In recent years, there have also been attempts to use transfer learning in black-box optimization. This is optimization in which the mathematical expression of the optimized function is not used (typically because no such expression is known); only its values at specific points are available to the optimization algorithm. These values are usually obtained empirically by measurement or by means of experiments, whether physical or in the form of simulations. Black-box optimization uses algorithms that make almost no assumptions about the mathematical properties of the optimized function, most often evolutionary algorithms and other nature-inspired algorithms such as particle swarm optimization. As for transfer learning, it turns out that, just as it allows training a neural network of the same quality with a smaller amount of training data, in black-box optimization it makes it possible to find the optimum based on a smaller number of values of the black-box function. This is very promising because empirically obtaining the value of the optimized function is usually quite expensive and time-consuming.
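One simple way such transfer may look in practice is sketched below in Python: the surrogate model of a new black-box task is warm-started with down-weighted evaluations of a previously optimized, related function. The weighting scheme and the resampling workaround are illustrative assumptions, not a specific published method.

import numpy as np
from sklearn.neural_network import MLPRegressor

def warm_start_surrogate(X_source, y_source, X_target, y_target, w_source=0.3):
    X = np.vstack([X_source, X_target])
    y = np.concatenate([y_source, y_target])
    # sample weights: trust the few target evaluations more than source data
    w = np.concatenate([np.full(len(y_source), w_source),
                        np.ones(len(y_target))])
    # MLPRegressor does not accept sample weights, so this sketch emulates the
    # weighting by resampling the data proportionally to w (a crude but
    # self-contained workaround)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y), size=4 * len(y), p=w / w.sum())
    return MLPRegressor(hidden_layer_sizes=(32, 32),
                        max_iter=2000).fit(X[idx], y[idx])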
However, research on methods for transfer learning in black-box optimization is still at a very early stage. The proposed thesis should also contribute to it.