class FitnessFunctions(object):
Collection of objective functions.
Static Method | binval |
return sum_i(0 if (optimum[0] <= x[i] <= optimum[1]) else 2**i) |
Static Method | leadingones |
return len(x) - number of leading ones in x, to be minimized |
Static Method | onemax |
return sum_i(0 if (optimum[0] <= x[i] <= optimum[1]) else 1) |
Method | __init__ |
Undocumented |
Method | absplussin |
multimodal function with the global optimum at x_i = -1.152740846 |
Method | branin |
Undocumented |
Method | bukin |
Bukin function from Wikipedia, generalized simplistically from 2-D. |
Method | cigar |
Cigar test objective function |
Method | cigtab |
Cigtab test objective function |
Method | cigtab2 |
cigtab with 1 + 5% long and short axes. |
Method | cornerelli |
Undocumented |
Method | cornerellirot |
Undocumented |
Method | cornersphere |
Sphere (squared norm) test objective function constrained to the corner
Method | diagonal |
Undocumented |
Method | diffpow |
Diffpow test objective function |
Method | elli |
Ellipsoid test objective function |
Method | elliconstraint |
Ellipsoid test objective function with "constraints"
Method | ellihalfrot |
return ellirot(x[:N2]) + elli(x[N2:]) where N2 is roughly frac*len(x) |
Method | ellirot |
Undocumented |
Method | elliwithoneconstraint |
Undocumented |
Method | epslow |
Undocumented |
Method | epslowsphere |
TODO: define as wrapper |
Method | flat |
Undocumented |
Method | fun |
fun_as_arg(x, fun, *more_args) calls fun(x, *more_args). |
Method | goldsteinprice |
Undocumented |
Method | grad |
Undocumented |
Method | grad |
Undocumented |
Method | grad |
symmetric gradient |
Method | grad |
Undocumented |
Method | grad |
Undocumented |
Method | grad |
Undocumented |
Method | grad |
Undocumented |
Method | griewank |
Undocumented |
Method | halfelli |
Undocumented |
Method | happycat |
a difficult sharp ridge type function. |
Method | hyperelli |
Undocumented |
Method | levy |
a rather benign multimodal function. |
Method | lincon |
ridge like linear function with one linear constraint |
Method | linear |
Undocumented |
Method | lineard |
Undocumented |
Method | noise |
Undocumented |
Method | noise |
Undocumented |
Method | noisysphere |
noise=10 does not work with the default popsize; cma.NoiseHandler(dimension, 1e7) helps
Method | normal |
Undocumented |
Method | optprob |
Undocumented |
Method | partsphere |
Sphere (squared norm) test objective function |
Method | pnorm |
Undocumented |
Method | powel |
Undocumented |
Method | rand |
Random test objective function |
Method | rastrigin |
Rastrigin test objective function |
Method | ridge |
Undocumented |
Method | ridgecircle |
a difficult sharp ridge type function. |
Method | rosen |
Rosenbrock test objective function, x0=0 |
Method | rosen0 |
Rosenbrock test objective function with optimum in all-zeros, x0=-1 |
Method | rosen |
Undocumented |
Method | rosen |
needs an exponential number of steps in a non-increasing f-sequence.
Method | rosenelli |
Undocumented |
Method | rot |
returns fun(rotation(x), *args), i.e., fun applied to a rotated argument
Method | schaffer |
Schaffer function x0 in [-100..100] |
Method | schwefel2 |
Schwefel 2.22 function |
Method | schwefelelli |
Undocumented |
Method | schwefelmult |
multimodal Schwefel function with domain -500..500 |
Method | sectorsphere |
asymmetric Sphere (squared norm) test objective function |
Method | somenan |
returns sometimes np.nan, otherwise fun(x) |
Method | sphere |
Sphere (squared norm) test objective function |
Method | sphere |
Sphere (squared norm) test objective function |
Method | spherew |
Sphere (squared norm) with sum x_i = 1 test objective function |
Method | spherewithnconstraints |
Undocumented |
Method | spherewithoneconstraint |
Undocumented |
Method | styblinski |
in [-5, 5], found also in Lazar and Jarre 2016; optimum f(-2.903534...) = 0
Method | subspace |
No summary |
Method | tablet |
Tablet test objective function |
Method | trid |
Undocumented |
Method | twoaxes |
Two-axes test objective function
Method | xinsheyang2 |
a multimodal function which is rather unsolvable in larger dimension. |
Class Variable | binary |
default f-offset for binary functions at the optimum. |
Class Variable | binary |
default interval where the optimum is assumed on binary functions. |
Class Variable | evaluations |
Undocumented |
Property | BBOB |
Undocumented |
return sum_i(0 if (optimum[0] <= x[i] <= optimum[1]) else 2**i), to be minimized.

Details: the result is computed as an int, because in dimension > 54 a float representation cannot account for the least significant bit anymore. Because we minimize, this is not necessarily a big problem.
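A minimal Python sketch of the formula above; the default interval (-0.5, 1.5) mirrors the binary optimum interval described at the bottom of this page and is an assumption here:

```python
def binval(x, optimum=(-0.5, 1.5)):
    """Sum of 2**i over all coordinates i outside the optimum interval.

    Python ints are exact, so the least significant bit is preserved
    even in dimension > 54, where a float representation would lose it.
    """
    return sum(2**i for i, xi in enumerate(x)
               if not optimum[0] <= xi <= optimum[1])
```

The all-int arithmetic is what makes the "dimension > 54" remark moot for this sketch.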
return len(x) - number of leading ones in x, to be minimized, where only values in [optimum[0], optimum[1]] are considered to be "equal to" 1.
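A sketch matching this description, with the same assumed interval default:

```python
def leadingones(x, optimum=(-0.5, 1.5)):
    """len(x) minus the number of leading coordinates that count as 1,
    i.e. that lie in [optimum[0], optimum[1]]; the minimum value is 0."""
    n_leading = 0
    for xi in x:
        if not optimum[0] <= xi <= optimum[1]:
            break  # first coordinate not "equal to" 1 ends the prefix
        n_leading += 1
    return len(x) - n_leading
```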
Bukin function from Wikipedia, generalized simplistically from 2-D.
http://en.wikipedia.org/wiki/Test_functions_for_optimization
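For reference, the classical 2-D Bukin function N.6 from the cited Wikipedia page; how the class generalizes it beyond 2-D is not shown here:

```python
import math

def bukin6_2d(x):
    """Bukin N.6: 100 * sqrt(|x2 - 0.01 * x1**2|) + 0.01 * |x1 + 10|.

    Global minimum f(-10, 1) = 0; the usual search domain is
    [-15, -5] x [-3, 3]."""
    return 100 * math.sqrt(abs(x[1] - 0.01 * x[0]**2)) + 0.01 * abs(x[0] + 10)
```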
cigtab with 1 + 5% long and short axes.

n_axes: int, if > 0, sets the number of long as well as short axes to n_axes, respectively.
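A hypothetical sketch of how such a function could look: 1 + 5% of the axes get weight 1 (long), the same number get the full condition number (short), and the rest its square root. The condition number 1e8 and the exact weighting are assumptions, not taken from the source:

```python
import math

def cigtab2(x, condition=1e8, n_axes=None):
    """Sketch only: first k axes are long (weight 1), last k axes are
    short (weight `condition`), the middle is weighted sqrt(condition);
    k = n_axes if given and > 0, else 1 + 5% of the dimension."""
    k = n_axes if n_axes and n_axes > 0 else 1 + len(x) // 20
    w_mid = math.sqrt(condition)
    return (sum(xi**2 for xi in x[:k])
            + condition * sum(xi**2 for xi in x[-k:])
            + w_mid * sum(xi**2 for xi in x[k:-k]))
```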
Ellipsoid test objective function
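The ellipsoid is conventionally defined in the CMA-ES test-function literature as below; the condition-number default of 1e6 is an assumption here:

```python
def elli(x, cond=1e6):
    """Ellipsoid: sum_i cond**(i/(n-1)) * x[i]**2, with axis weights
    spaced geometrically from 1 to cond."""
    n = len(x)
    if n == 1:  # avoid division by zero in the exponent
        return x[0]**2
    return sum(cond**(i / (n - 1)) * x[i]**2 for i in range(n))
```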
fun_as_arg(x, fun, *more_args) calls fun(x, *more_args).

Use case:

fmin(cma.fun_as_arg, args=(fun,), gradf=grad_numerical)

calls fun_as_arg(x, args) and grad_numerical(x, fun, args=args)
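A sketch of the calling convention just described; the signature mirrors the docstring, and `sphere` is used as a stand-in objective:

```python
def fun_as_arg(x, fun, *more_args):
    """Evaluate fun(x, *more_args): the objective `fun` is itself
    passed as an argument rather than bound beforehand."""
    return fun(x, *more_args)

def sphere(x):
    """Squared Euclidean norm, used here as the wrapped objective."""
    return sum(xi**2 for xi in x)
```

An optimizer that forwards its `args` tuple, roughly as in `fmin(fun_as_arg, x0, sigma0, args=(sphere,))`, then effectively evaluates `sphere(x)` at each candidate `x`.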
needs an exponential number of steps in a non-increasing f-sequence.

x0 = (-1, 1, ..., 1). See Jarre (2011), "On Nesterov's Smooth Chebyshev-Rosenbrock Function".
a multimodal function which is rather unsolvable in larger dimension.
>>> import functools
>>> import numpy as np
>>> import cma
>>> f = functools.partial(cma.ff.xinsheyang2, termination_friendly=False)
>>> X = [(i * [0] + (4 - i) * [1.24]) for i in range(5)]
>>> for x in X: print(x)
[1.24, 1.24, 1.24, 1.24]
[0, 1.24, 1.24, 1.24]
[0, 0, 1.24, 1.24]
[0, 0, 0, 1.24]
[0, 0, 0, 0]
>>> ' '.join(['{0:.3}'.format(f(x)) for x in X])  # [np.round(f(x), 3) for x in X]
'0.091 0.186 0.336 0.456 0.0'
One needs to solve a trinary deceptive function whose f-value (to be minimized) decreases monotonically with increasing distance to the global optimum >= 1. That is, the global optimum is surrounded by 3^n - 1 local optima that have the better values the further they are away from the global optimum.
Conclusion: it is a rather suspicious sign if an algorithm finds the global optimum of this function in larger dimension.
See also http://benchmarkfcns.xyz/benchmarkfcns/xinsheyangn2fcn.html
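The values in the doctest above are consistent with the standard Xin-She Yang N.2 definition, sketched here without the termination_friendly handling:

```python
import math

def xinsheyang2(x):
    """Xin-She Yang N.2: (sum_i |x_i|) * exp(-sum_i sin(x_i**2));
    the global minimum is 0 at the origin."""
    return (sum(abs(xi) for xi in x)
            * math.exp(-sum(math.sin(xi**2) for xi in x)))
```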
default f-offset for binary functions at the optimum.
Changing this default is only effective before import.
default interval where the optimum is assumed on binary functions.
The interval is chosen such that the value from round(.) or floor(. + 1/2) or int(. + 1/2) is in the interval middle. This prevents some unexpected outcomes with algorithms that search on the continuous values. The most logical domain boundary values are now [-0.5, 1.5] or [0, 1].
Details: Changing this default is only effective before import.