
Differential evolution (#1062)

* Differential evolution

---------

Co-authored-by: Matt Borland <matt@mattborland.com>
Authored by Nick, 2024-01-01 17:09:16 -08:00; committed by GitHub.
parent 56b6ae1b94
commit 4ee83916c5
6 changed files with 629 additions and 0 deletions

View File

@@ -0,0 +1,110 @@
[/
Copyright (c) 2023 Nick Thompson
Use, modification and distribution are subject to the
Boost Software License, Version 1.0. (See accompanying file
LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
]
[section:differential_evolution Differential Evolution]
[heading Synopsis]
``
#include <boost/math/tools/differential_evolution.hpp>
namespace boost::math::tools {
template <typename ArgumentContainer> struct differential_evolution_parameters {
using Real = typename ArgumentContainer::value_type;
ArgumentContainer lower_bounds;
ArgumentContainer upper_bounds;
Real F = static_cast<Real>(0.65);
double crossover_ratio = 0.5;
// Population in each generation:
size_t NP = 200;
size_t max_generations = 1000;
size_t threads = std::thread::hardware_concurrency();
ArgumentContainer const * initial_guess = nullptr;
};
template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer differential_evolution(
const Func cost_function, differential_evolution_parameters<ArgumentContainer> const &de_params, URBG &g,
std::invoke_result_t<Func, ArgumentContainer> target_value = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
std::atomic<bool> *cancellation = nullptr,
std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr,
std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr);
} // namespaces
``
The `differential_evolution` function provides an implementation of the (classical) differential evolution optimization algorithm, often referred to by the label `de/rand/1/bin`.
It is capable of minimizing a cost function defined on a continuous space represented by a set of bounds.
This function has been designed more for progress observability, graceful cancellation, and post-hoc data analysis than for speed of convergence.
We justify this design choice by reference to the "No free lunch" theorem of Wolpert and Macready, which establishes "that for any algorithm, any elevated performance over one class of problems is offset by performance over another class".
[heading Parameters]
`lower_bounds`: A container representing the lower bounds of the optimization space along each dimension. The `.size()` of the bounds should return the dimension of the problem.
`upper_bounds`: A container representing the upper bounds of the optimization space along each dimension. It should have the same size as `lower_bounds`, and each element should be >= the corresponding element of `lower_bounds`.
`F`: The scale factor controlling the rate at which the population evolves (default is 0.65).
`crossover_ratio`: The crossover ratio determining the trade-off between exploration and exploitation (default is 0.5).
`NP`: The population size (default is 200). Parallelization occurs over the population, so this should be "large".
`max_generations`: The maximum number of generations for the optimization (default is 1000).
`threads`: The number of threads to use for parallelization (default is the hardware concurrency). If the objective function is already multithreaded, then this should be set to 1 to prevent oversubscription.
`initial_guess`: An optional guess for where we should start looking for solutions.
The defaults were chosen from a reading of Price, Storn, and Lampinen, chapter 3, with the exception of the population size, which we have chosen somewhat larger than they recommend: core counts have increased since publication of that reference, and parallelization occurs over the population within each generation.
Note that there is a tradeoff between finding global minima and convergence speed.
The most robust way of decreasing the probability of getting stuck in a local minimum is to increase the population size.
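For instance, a minimal sketch of a parameter setup for a 4-dimensional search over [-1, 1]^4 (the container type, bounds, and values are purely illustrative):
``
using boost::math::tools::differential_evolution_parameters;
auto de_params = differential_evolution_parameters<std::vector<double>>();
de_params.lower_bounds.resize(4, -1.0); // the dimension of the problem is the size of the bounds
de_params.upper_bounds.resize(4, 1.0);
de_params.NP = 500;    // larger populations are more robust against local minima
de_params.threads = 1; // e.g., if the cost function is itself multithreaded
``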
[heading The function]
``
template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer differential_evolution(const Func cost_function,
differential_evolution_parameters<ArgumentContainer> const &de_params,
URBG &g,
std::invoke_result_t<Func, ArgumentContainer> value_to_reach = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
std::atomic<bool> *cancellation = nullptr,
std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr,
std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr)
``
Parameters:
`cost_function`: The cost function to be minimized.
`de_params`: The parameters to the algorithm as described above.
`rng`: A uniform random bit generator, like `std::mt19937_64`.
`value_to_reach`: An optional value that, if reached, stops the optimization. This is the most robust way to terminate the calculation, but in most cases the optimal value of the cost function is unknown. If it is known, use it! See the referenced book for clear examples of when target values can be deduced.
`cancellation`: An optional atomic boolean to allow the user to stop the computation and gracefully return the best result found up to that point. N.B.: Cancellation is not immediate; the in-progress generation finishes.
`queries`: An optional vector to store intermediate results during optimization. This is useful for debugging and perhaps volume rendering of the objective function after the calculation is complete.
`current_minimum_cost`: An optional atomic variable to store the current minimum cost during optimization. This allows developers to (e.g.) plot the progress of the minimization over time and, in conjunction with the `cancellation` argument, halt the computation when progress stagnates. Refer to Price, Storn, and Lampinen, Section 3.2 for caveats with this approach.
Returns:
The argument vector corresponding to the minimum cost found by the optimization.
N.B.: The termination criteria are combined with "or", not "and".
So if the maximum number of generations is reached, the iteration stops, even if (say) a `value_to_reach` has not been attained.
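For example, a sketch of driving the optimizer from a worker thread with graceful cancellation and progress monitoring (here `cost` and `de_params` stand for a cost function and parameter struct as above, and `<future>`, `<atomic>`, and `<random>` are assumed to be included):
``
std::atomic<bool> cancel = false;
std::atomic<double> current_cost = std::numeric_limits<double>::quiet_NaN();
std::vector<std::pair<std::vector<double>, double>> queries;
std::mt19937_64 rng(12345);
auto fut = std::async(std::launch::async, [&]() {
    return differential_evolution(cost, de_params, rng,
                                  std::numeric_limits<double>::quiet_NaN(), // no target value
                                  &cancel, &queries, &current_cost);
});
// ... poll current_cost from this thread; once progress stagnates:
cancel = true;
auto best = fut.get(); // the best argument vector found before cancellation
``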
[h4:examples Examples]
An example use can be found [@../../example/differential_evolution.cpp here].
More examples and API use cases can be studied in [@../../test/differential_evolution_test.cpp differential_evolution_test.cpp].
[h5:caveats Caveats]
We have decided to only support classic `de/rand/1/bin` because there are too many parameters for this class as it stands, and we have not seen benchmarks that indicate that other variants of the algorithm perform better.
If a compelling use case is provided, support for the `de/x/y/z` variants can be added.
Supported termination criteria are explicit requests for termination, value-to-reach, and max generations.
Price, Storn, and Lampinen, Section 2.8 also list population statistics and lack of accepted trials over many generations as sensible termination criteria.
These could be supported if there is demand.
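In the meantime, a stagnation-based stop can be layered on externally through the `current_minimum_cost` and `cancellation` hooks; a minimal sketch, reusing the `current_cost` and `cancel` atomics from the example above (the one-second polling interval and the patience of 10 polls are illustrative):
``
double last = std::numeric_limits<double>::infinity();
int stagnant_polls = 0;
while (stagnant_polls < 10) {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    double now = current_cost.load();
    if (!std::isnan(now) && now < last) {
        stagnant_polls = 0;
        last = now;
    } else {
        ++stagnant_polls;
    }
}
cancel = true; // the in-progress generation finishes, then the best point found is returned
``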
[h4:references References]
* Price, Kenneth, Rainer M. Storn, and Jouni A. Lampinen. ['Differential evolution: a practical approach to global optimization.] Springer Science & Business Media, 2006.
* D. H. Wolpert and W. G. Macready, ['No free lunch theorems for optimization.] IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, April 1997, doi: 10.1109/4235.585893.
[endsect] [/section:differential_evolution Differential Evolution]

View File

@@ -21,6 +21,7 @@ There are several fully-worked __root_finding_examples, including:
[include quartic_roots.qbk]
[include root_finding_examples.qbk]
[include minima.qbk]
[include differential_evolution.qbk]
[include root_comparison.qbk]
[/ roots_overview.qbk

View File

@@ -0,0 +1,41 @@
/*
* Copyright Nick Thompson, 2023
* Use, modification and distribution are subject to the
* Boost Software License, Version 1.0. (See accompanying file
* LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#include <iostream>
#include <random>
#include <vector>
#include <boost/math/tools/differential_evolution.hpp>
using boost::math::tools::differential_evolution_parameters;
using boost::math::tools::differential_evolution;
double rosenbrock(std::vector<double> const & x) {
double result = 0;
for (size_t i = 0; i < x.size() - 1; ++i) {
double tmp = x[i+1] - x[i]*x[i];
result += 100*tmp*tmp + (1-x[i])*(1-x[i]);
}
return result;
}
int main() {
auto de_params = differential_evolution_parameters<std::vector<double>>();
constexpr const size_t dimension = 10;
// Search on [0, 2]^dimension:
de_params.lower_bounds.resize(dimension, 0);
de_params.upper_bounds.resize(dimension, 2);
// This is a challenging function, increase the max generations 10x from default so we don't terminate prematurely:
de_params.max_generations *= 10;
std::random_device rd;
std::mt19937_64 rng(rd());
// The global minimum is exactly zero, but some leeway is required:
double value_to_reach = 1e-5;
auto local_minima = differential_evolution(rosenbrock, de_params, rng, value_to_reach);
std::cout << "Minima: {";
for (auto l : local_minima) {
std::cout << l << ", ";
}
std::cout << "}\n";
std::cout << "Value of cost function at minima: " << rosenbrock(local_minima) << "\n";
}

View File

@@ -0,0 +1,329 @@
/*
* Copyright Nick Thompson, 2023
* Use, modification and distribution are subject to the
* Boost Software License, Version 1.0. (See accompanying file
* LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#ifndef BOOST_MATH_TOOLS_DIFFERENTIAL_EVOLUTION_HPP
#define BOOST_MATH_TOOLS_DIFFERENTIAL_EVOLUTION_HPP
#include <algorithm>
#include <array>
#include <atomic>
#include <cmath>
#include <limits>
#if __has_include(<omp.h>)
#include <omp.h>
#endif
#include <random>
#include <sstream>
#include <stdexcept>
#include <thread>
#include <type_traits>
#include <typeinfo>
#include <utility>
#include <vector>
namespace boost::math::tools {
namespace detail {
template <typename T, typename = void> struct has_resize : std::false_type {};
template <typename T>
struct has_resize<T, std::void_t<decltype(std::declval<T>().resize(size_t{}))>> : std::true_type {};
template <typename T> constexpr bool has_resize_v = has_resize<T>::value;
} // namespace detail
// Storn, R., Price, K. (1997). Differential evolution - a simple and efficient heuristic for global optimization
// over continuous spaces. Journal of Global Optimization, 11, 341-359.
// See:
// https://www.cp.eng.chula.ac.th/~prabhas//teaching/ec/ec2012/storn_price_de.pdf
// We provide the parameters in a struct; there are too many of them, and they are too unwieldy to pass individually:
template <typename ArgumentContainer> struct differential_evolution_parameters {
using Real = typename ArgumentContainer::value_type;
ArgumentContainer lower_bounds;
ArgumentContainer upper_bounds;
Real F = static_cast<Real>(0.65);
double crossover_ratio = 0.5;
// Population in each generation:
size_t NP = 200;
size_t max_generations = 1000;
#if defined(_OPENMP)
size_t threads = std::thread::hardware_concurrency();
#else
size_t threads = 1;
#endif
ArgumentContainer const *initial_guess = nullptr;
};
template <typename ArgumentContainer>
void validate_differential_evolution_parameters(differential_evolution_parameters<ArgumentContainer> const &de_params) {
using std::isfinite;
using std::isnan;
std::ostringstream oss;
if (de_params.threads == 0) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": Requested zero threads to perform the calculation, but at least 1 is required.";
throw std::invalid_argument(oss.str());
}
if (de_params.lower_bounds.size() == 0) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The dimension of the problem cannot be zero.";
throw std::domain_error(oss.str());
}
if (de_params.upper_bounds.size() != de_params.lower_bounds.size()) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": There must be the same number of lower bounds as upper bounds, but given ";
oss << de_params.upper_bounds.size() << " upper bounds, and " << de_params.lower_bounds.size() << " lower bounds.";
throw std::domain_error(oss.str());
}
for (size_t i = 0; i < de_params.lower_bounds.size(); ++i) {
auto lb = de_params.lower_bounds[i];
auto ub = de_params.upper_bounds[i];
if (lb > ub) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The upper bound must be greater than or equal to the lower bound, but the upper bound is " << ub
<< " and the lower is " << lb << ".";
throw std::domain_error(oss.str());
}
if (!isfinite(lb)) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The lower bound must be finite, but got " << lb << ".";
oss << " For infinite bounds, emulate with std::numeric_limits<Real>::lower() or use a standard infinite->finite "
"transform.";
throw std::domain_error(oss.str());
}
if (!isfinite(ub)) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The upper bound must be finite, but got " << ub << ".";
oss << " For infinite bounds, emulate with std::numeric_limits<Real>::max() or use a standard infinite->finite "
"transform.";
throw std::domain_error(oss.str());
}
}
if (de_params.NP < 4) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The population size must be at least 4, but requested population size of " << de_params.NP << ".";
throw std::invalid_argument(oss.str());
}
if (de_params.threads > de_params.NP) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": There must be more individuals in the population than threads.";
throw std::invalid_argument(oss.str());
}
// From: "Differential Evolution: A Practical Approach to Global Optimization (Natural Computing Series)"
// > The scale factor, F in (0,1+), is a positive real number that controls the rate at which the population evolves.
// > While there is no upper limit on F, effective values are seldom greater than 1.0.
// ...
// Also see "Limits on F", Section 2.5.1:
// > This discontinuity at F = 1 reduces the number of mutants by half and can result in erratic convergence...
auto F = de_params.F;
if (isnan(F) || F >= 1 || F <= 0) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": F in (0, 1) is required, but got F=" << F << ".";
throw std::domain_error(oss.str());
}
if (de_params.max_generations < 1) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": There must be at least one generation.";
throw std::invalid_argument(oss.str());
}
if (de_params.initial_guess) {
auto dimension = de_params.lower_bounds.size();
if (de_params.initial_guess->size() != dimension) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": The initial guess must have the same dimensions as the problem,";
oss << ", but the problem size is " << dimension << " and the initial guess has "
<< de_params.initial_guess->size() << " elements.";
throw std::domain_error(oss.str());
}
auto const &guess = *de_params.initial_guess;
for (size_t i = 0; i < dimension; ++i) {
auto lb = de_params.lower_bounds[i];
auto ub = de_params.upper_bounds[i];
if (guess[i] < lb || guess[i] > ub) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": At index " << i << " the initial guess " << guess[i] << " is not in the bounds [" << lb << ", " << ub
<< "].";
throw std::domain_error(oss.str());
}
}
}
#if !defined(_OPENMP)
if (de_params.threads != 1) {
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": If OpenMP is not available, then there algorithm must run on a single thread, but requested "
<< de_params.threads << " threads.";
throw std::invalid_argument(oss.str());
}
#endif
}
template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer differential_evolution(
const Func cost_function, differential_evolution_parameters<ArgumentContainer> const &de_params, URBG &g,
std::invoke_result_t<Func, ArgumentContainer> target_value =
std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
std::atomic<bool> *cancellation = nullptr,
std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr,
std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr) {
using Real = typename ArgumentContainer::value_type;
validate_differential_evolution_parameters(de_params);
constexpr bool has_resize = detail::has_resize_v<ArgumentContainer>;
using ResultType = std::invoke_result_t<Func, ArgumentContainer>;
using std::clamp;
using std::isnan;
using std::round;
auto const NP = de_params.NP;
std::vector<ArgumentContainer> population(NP);
auto const dimension = de_params.lower_bounds.size();
for (size_t i = 0; i < population.size(); ++i) {
if constexpr (has_resize) {
population[i].resize(dimension);
} else {
// Argument type must be known at compile-time; like std::array:
if (population[i].size() != dimension) {
std::ostringstream oss;
oss << __FILE__ << ":" << __LINE__ << ":" << __func__;
oss << ": For containers which do not have resize, the default size must be the same as the dimension, ";
oss << "but the default container size is " << population[i].size() << " and the dimension of the problem is "
<< dimension << ".";
oss << " The function argument container type is " << typeid(ArgumentContainer).name() << ".\n";
throw std::runtime_error(oss.str());
}
}
}
// Why don't we provide an option to initialize with (say) a Gaussian distribution?
// > If the optimum's location is fairly well known,
// > a Gaussian distribution may prove somewhat faster, although it
// > may also increase the probability that the population will converge prematurely.
// > In general, uniform distributions are preferred, since they best reflect
// > the lack of knowledge about the optimum's location.
// - Differential Evolution: A Practical Approach to Global Optimization
// That said, scipy uses Latin Hypercube sampling and says self-avoiding sequences are preferable.
// So this is something that could be investigated and potentially improved.
using std::uniform_real_distribution;
uniform_real_distribution<Real> dis(Real(0), Real(1));
for (size_t i = 0; i < population.size(); ++i) {
for (size_t j = 0; j < dimension; ++j) {
auto const &lb = de_params.lower_bounds[j];
auto const &ub = de_params.upper_bounds[j];
population[i][j] = lb + dis(g) * (ub - lb);
}
}
if (de_params.initial_guess) {
population[0] = *de_params.initial_guess;
}
std::atomic<bool> target_attained = false;
std::vector<ResultType> cost(NP, std::numeric_limits<ResultType>::quiet_NaN());
#if defined(_OPENMP)
#pragma omp parallel for num_threads(de_params.threads)
#endif
for (size_t i = 0; i < cost.size(); ++i) {
cost[i] = cost_function(population[i]);
if (!isnan(target_value) && cost[i] <= target_value) {
target_attained = true;
}
if (current_minimum_cost && cost[i] < *current_minimum_cost) {
*current_minimum_cost = cost[i];
}
if (queries) {
#if defined(_OPENMP) // get rid of -Wunknown-pragmas when OpenMP is not available:
#pragma omp critical
#endif
queries->push_back(std::make_pair(population[i], cost[i]));
}
}
// This probably won't show up on any performance metrics, but just convert everything to integer ops:
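// (Comparing gen() < crossover_int below is equivalent, assuming URBG::min() == 0 as with
// std::mt19937_64, to drawing u ~ U[0,1) and testing u < crossover_ratio, but without a
// floating-point draw per component.)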
const auto crossover_int =
static_cast<decltype(g())>(round(static_cast<double>((URBG::max)() - (URBG::min)()) * de_params.crossover_ratio));
std::vector<URBG> generators(de_params.threads);
for (size_t i = 0; i < de_params.threads; ++i) {
generators[i].seed(g());
}
for (size_t generation = 0; generation < de_params.max_generations; ++generation) {
if (cancellation && *cancellation) {
break;
}
if (target_attained) {
break;
}
#if defined(_OPENMP)
#pragma omp parallel for num_threads(de_params.threads)
#endif
for (size_t i = 0; i < NP; ++i) {
#if defined(_OPENMP)
size_t thread_idx = omp_get_thread_num();
#else
size_t thread_idx = 0;
#endif
auto &gen = generators[thread_idx];
size_t r1, r2, r3;
do {
r1 = gen() % NP;
} while (r1 == i);
do {
r2 = gen() % NP;
} while (r2 == i || r2 == r1);
do {
r3 = gen() % NP;
} while (r3 == i || r3 == r2 || r3 == r1);
// Hopefully the compiler optimizes this so that it's not allocated on every iteration:
ArgumentContainer trial_vector;
if constexpr (has_resize) {
trial_vector.resize(dimension);
}
// See equation (4) of the reference: one randomly chosen index is always mutated,
// so the trial vector is guaranteed to differ from population[i] in at least one coordinate:
auto guaranteed_changed_idx = gen() % dimension;
for (size_t j = 0; j < dimension; ++j) {
if (gen() < crossover_int || j == guaranteed_changed_idx) {
auto tmp = population[r1][j] + de_params.F * (population[r2][j] - population[r3][j]);
auto const &lb = de_params.lower_bounds[j];
auto const &ub = de_params.upper_bounds[j];
// Some others recommend regenerating out-of-bounds components rather than clamping,
// but regeneration could loop for a long time before landing back in bounds:
trial_vector[j] = clamp(tmp, lb, ub);
} else {
trial_vector[j] = population[i][j];
}
}
auto trial_cost = cost_function(trial_vector);
if (queries) {
#if defined(_OPENMP)
#pragma omp critical
#endif
queries->push_back(std::make_pair(trial_vector, trial_cost));
}
if (isnan(trial_cost)) {
continue;
}
if (trial_cost < cost[i] || isnan(cost[i])) {
std::swap(population[i], trial_vector);
cost[i] = trial_cost;
if (!isnan(target_value) && cost[i] <= target_value) {
target_attained = true;
// In a single-threaded context, I'd put a break statement here,
// but OpenMP does not allow break statements in for loops.
// We'll just have to wait until the end of this generation.
}
if (current_minimum_cost && cost[i] < *current_minimum_cost) {
*current_minimum_cost = cost[i];
}
}
}
}
auto it = std::min_element(cost.begin(), cost.end());
return population[std::distance(cost.begin(), it)];
}
} // namespace boost::math::tools
#endif

View File

@@ -1255,6 +1255,7 @@ test-suite interpolators :
[ run compile_test/interpolators_catmull_rom_incl_test.cpp compile_test_main : : : [ requires cxx11_hdr_array cxx11_hdr_initializer_list ] ]
[ run compile_test/interpolators_catmull_rom_concept_test.cpp compile_test_main : : : [ requires cxx11_hdr_array cxx11_hdr_initializer_list ] ]
[ run test_standalone_asserts.cpp ]
[ run differential_evolution_test.cpp : : : [ requires cxx17_if_constexpr cxx17_std_apply ] ]
;
test-suite quadrature :

View File

@@ -0,0 +1,147 @@
/*
* Copyright Nick Thompson, 2023
* Use, modification and distribution are subject to the
* Boost Software License, Version 1.0. (See accompanying file
* LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#include "math_unit_test.hpp"
#include <boost/math/constants/constants.hpp>
#include <boost/math/tools/differential_evolution.hpp>
#include <random>
using boost::math::constants::e;
using boost::math::constants::two_pi;
using boost::math::tools::differential_evolution;
using boost::math::tools::differential_evolution_parameters;
using std::cbrt;
using std::cos;
using std::exp;
using std::sqrt;
// Taken from: https://en.wikipedia.org/wiki/Test_functions_for_optimization
template <typename Real> Real ackley(std::array<Real, 2> const &v) {
Real x = v[0];
Real y = v[1];
Real arg1 = -sqrt((x * x + y * y) / 2) / 5;
Real arg2 = cos(two_pi<Real>() * x) + cos(two_pi<Real>() * y);
return -20 * exp(arg1) - exp(arg2 / 2) + 20 + e<Real>();
}
template <class Real> void test_ackley() {
using ArgType = std::array<Real, 2>;
auto de_params = differential_evolution_parameters<ArgType>();
de_params.lower_bounds = {-5, -5};
de_params.upper_bounds = {5, 5};
std::mt19937_64 gen(12345);
auto local_minima = differential_evolution(ackley<Real>, de_params, gen);
CHECK_LE(std::abs(local_minima[0]), 10 * std::numeric_limits<Real>::epsilon());
CHECK_LE(std::abs(local_minima[1]), 10 * std::numeric_limits<Real>::epsilon());
// Does it work with a lambda?
auto ack = [](std::array<Real, 2> const &x) { return ackley<Real>(x); };
local_minima = differential_evolution(ack, de_params, gen);
CHECK_LE(std::abs(local_minima[0]), 10 * std::numeric_limits<Real>::epsilon());
CHECK_LE(std::abs(local_minima[1]), 10 * std::numeric_limits<Real>::epsilon());
// Test that if an initial guess is the exact solution, the returned solution is the exact solution:
std::array<Real, 2> initial_guess{0, 0};
de_params.initial_guess = &initial_guess;
local_minima = differential_evolution(ack, de_params, gen);
CHECK_EQUAL(local_minima[0], Real(0));
CHECK_EQUAL(local_minima[1], Real(0));
}
template <typename Real> auto rosenbrock_saddle(std::array<Real, 2> const &v) {
auto x = v[0];
auto y = v[1];
return 100 * (x * x - y) * (x * x - y) + (1 - x) * (1 - x);
}
template <class Real> void test_rosenbrock_saddle() {
using ArgType = std::array<Real, 2>;
auto de_params = differential_evolution_parameters<ArgType>();
de_params.lower_bounds = {0.5, 0.5};
de_params.upper_bounds = {2.048, 2.048};
std::mt19937_64 gen(234568);
auto local_minima = differential_evolution(rosenbrock_saddle<Real>, de_params, gen);
CHECK_ABSOLUTE_ERROR(Real(1), local_minima[0], 10 * std::numeric_limits<Real>::epsilon());
CHECK_ABSOLUTE_ERROR(Real(1), local_minima[1], 10 * std::numeric_limits<Real>::epsilon());
// Does cancellation work?
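// Setting cancel before the call: the initial population is still evaluated, but no generations run,
// so the returned point comes from the random initialization and should be far from the optimum: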
std::atomic<bool> cancel = true;
gen.seed(12345);
local_minima =
differential_evolution(rosenbrock_saddle<Real>, de_params, gen, std::numeric_limits<Real>::quiet_NaN(), &cancel);
CHECK_GE(std::abs(local_minima[0] - Real(1)), std::sqrt(std::numeric_limits<Real>::epsilon()));
}
template <class Real> Real rastrigin(std::vector<Real> const &v) {
Real A = 10;
Real y = 10 * v.size();
for (auto x : v) {
y += x * x - A * cos(two_pi<Real>() * x);
}
return y;
}
template <class Real> void test_rastrigin() {
using ArgType = std::vector<Real>;
auto de_params = differential_evolution_parameters<ArgType>();
de_params.lower_bounds.resize(8, static_cast<Real>(-5.12));
de_params.upper_bounds.resize(8, static_cast<Real>(5.12));
std::mt19937_64 gen(34567);
auto local_minima = differential_evolution(rastrigin<Real>, de_params, gen);
for (auto x : local_minima) {
CHECK_ABSOLUTE_ERROR(x, Real(0), Real(2e-4));
}
// By definition, when a target value is provided, the cost at the returned argument must be <= target_value.
Real target_value = 1e-3;
local_minima = differential_evolution(rastrigin<Real>, de_params, gen, target_value);
CHECK_LE(rastrigin(local_minima), target_value);
}
double sphere(std::vector<float> const &v) {
double r = 0.0;
for (auto x : v) {
double x_ = static_cast<double>(x);
r += x_ * x_;
}
if (r >= 1) {
return std::numeric_limits<double>::quiet_NaN();
}
return r;
}
// Tests NaN return types and return type != input type:
void test_sphere() {
using ArgType = std::vector<float>;
auto de_params = differential_evolution_parameters<ArgType>();
de_params.lower_bounds.resize(8, -1);
de_params.upper_bounds.resize(8, 1);
de_params.NP *= 10;
de_params.max_generations *= 10;
std::mt19937_64 gen(56789);
auto local_minima = differential_evolution(sphere, de_params, gen);
for (auto x : local_minima) {
CHECK_ABSOLUTE_ERROR(0.0f, x, 2e-4f);
}
}
int main() {
// GCC <= 8 rejects the function call syntax we use here, so skip these tests on older GCC:
#if !defined(__GNUC__) || defined(__clang__) || __GNUC__ >= 9
test_ackley<float>();
test_ackley<double>();
test_rosenbrock_saddle<double>();
test_rastrigin<float>();
#endif
test_sphere();
return boost::math::test::report_errors();
}