module documentation

CMA-ES (evolution strategy), the main sub-module of cma providing in particular CMAOptions, CMAEvolutionStrategy, and fmin2

Class CMAEvolutionStrategy CMA-ES stochastic optimizer class with ask-and-tell interface.
Class CMAEvolutionStrategyResult A results tuple from CMAEvolutionStrategy property result.
Class CMAOptions a dictionary with the available options and their default values for class CMAEvolutionStrategy.
Class MetaParameters collection of many meta parameters.
Exception InjectionWarning Injected solutions are not passed to tell as expected
Function cma_default_options_ use this function to get keyword completion for CMAOptions.
Function fmin functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.
Function fmin2 wrapper around cma.fmin returning the tuple (xbest, es),
Function fmin_con Deprecated: use cma.ConstrainedFitnessAL or cma.fmin_con2 instead.
Function fmin_con2 optimize f with inequality constraints g.
Function fmin_lq_surr minimize objective_function with lq-CMA-ES.
Function fmin_lq_surr2 minimize objective_function with lq-CMA-ES.
Function is_feasible default to check feasibility of f-values.
Function no_constraints Undocumented
Function safe_str return a string safe to eval or raise an exception.
Variable all_stoppings Undocumented
Variable cma_allowed_options_keys Undocumented
Variable cma_default_options Undocumented
Variable cma_versatile_options Undocumented
Variable meta_parameters Undocumented
Variable use_archives speed-up for very large population sizes: use_archives prevents the need for an inverse gp-transformation. It relies on the collections module; what happens when it is set to False is untested.
Class _CMAEvolutionStrategyResult A results tuple from CMAEvolutionStrategy property result.
Class _CMAParameters strategy parameters like population size and learning rates.
Class _CMASolutionDict_empty a hack to get most code examples running
Class _CMASolutionDict_functional No class docstring; 0 of 2 instance variables and 1 of 2 methods documented.
Class _CMAStopDict keep and update a termination condition dictionary.
Function _al_set_logging try to figure a good logging value from various verbosity options
Variable _assertions_cubic Undocumented
Variable _assertions_quadratic Undocumented
Variable _CMASolutionDict Undocumented
Variable _debugging Undocumented
Variable _depreciated Undocumented
Variable _new_injections Undocumented
def cma_default_options_(AdaptSigma='True # or False or any CMAAdaptSigmaBase class e.g. CMAAdaptSigmaTPA, CMAAdaptSigmaCSA', CMA_active='True # negative update, conducted after the original update', CMA_active_injected='0 #v weight multiplier for negative weights of injected solutions', CMA_cmean='1 # learning rate for the mean value', CMA_const_trace='False # normalize trace, 1, True, "arithm", "geom", "aeig", "geig" are valid', CMA_diagonal='0*100*N/popsize**0.5 # nb of iterations with diagonal covariance matrix, True for always', CMA_diagonal_decoding='0 # multiplier for additional diagonal update', CMA_eigenmethod='np.linalg.eigh # or cma.utilities.math.eig or pygsl.eigen.eigenvectors', CMA_elitist='False #v or "initial" or True, elitism likely impairs global search performance', CMA_injections_threshold_keep_len='1 #v keep length if Mahalanobis length is below the given relative threshold', CMA_mirrors='popsize < 6 # values <0.5 are interpreted as fraction, values >1 as numbers (rounded), for `True` about 0.16 is used', CMA_mirrormethod='2 # 0=unconditional, 1=selective, 2=selective with delay', CMA_mu='None # parents selection parameter, default is popsize // 2', CMA_on='1 # multiplier for all covariance matrix updates', CMA_sampler='None # a class or instance that implements the interface of `cma.interfaces.StatisticalModelSamplerWithZeroMeanBaseClass`', CMA_sampler_options='{} # options passed to `CMA_sampler` class init as keyword arguments', CMA_rankmu='1.0 # multiplier for rank-mu update learning rate of covariance matrix', CMA_rankone='1.0 # multiplier for rank-one update learning rate of covariance matrix', CMA_recombination_weights='None # a list, see class RecombinationWeights, overwrites CMA_mu and popsize options', CMA_dampsvec_fac='np.Inf # tentative and subject to changes, 0.5 would be a "default" damping for sigma vector update', CMA_dampsvec_fade='0.1 # tentative fading out parameter for sigma vector update', CMA_teststds='None # factors for non-isotropic initial distr. of C, mainly for test purpose, see CMA_stds for production', CMA_stds='None # multipliers for sigma0 in each coordinate (not represented in C), or use `cma.ScaleCoordinates` instead', CSA_dampfac='1 #v positive multiplier for step-size damping, 0.3 is close to optimal on the sphere', CSA_damp_mueff_exponent='0.5 # zero would mean no dependency of damping on mueff, useful with CSA_disregard_length option', CSA_disregard_length='False #v True is untested, also changes respective parameters', CSA_clip_length_value='None #v poorly tested, [0, 0] means const length N**0.5, [-1, 1] allows a variation of +- N/(N+2), etc.', CSA_squared='False #v use squared length for sigma-adaptation', BoundaryHandler='BoundTransform # or BoundPenalty, unused when ``bounds in (None, [None, None])``', bounds='[None, None] # lower (=bounds[0]) and upper domain boundaries, each a scalar or a list/vector', conditioncov_alleviate='[1e8, 1e12] # when to alleviate the condition in the coordinates and in main axes', eval_final_mean='True # evaluate the final mean, which is a favorite return candidate', fixed_variables='None # dictionary with index-value pairs like {0:1.1, 2:0.1} that are not optimized', ftarget='-inf #v target function value, minimization', integer_variables='[] # index list, invokes basic integer handling: prevent std dev to become too small in the given variables', is_feasible='is_feasible #v a function that computes feasibility, by default lambda x, f: f not in (None, np.NaN)', maxfevals='inf #v maximum number of function evaluations', maxiter='100 + 150 * (N+3)**2 // popsize**0.5 #v maximum number of iterations', mean_shift_line_samples='False #v sample two new solutions colinear to previous mean shift', mindx='0 #v minimal std in any arbitrary direction, cave interference with tol*', minstd='0 #v minimal std (scalar or vector) in any coordinate direction, cave interference with tol*', maxstd='None #v maximal std (scalar or vector) in any coordinate direction', maxstd_boundrange='1/3 # maximal std relative to bound_range per coordinate, overruled by maxstd', pc_line_samples='False #v one line sample along the evolution path pc', popsize='4 + 3 * np.log(N) # population size, AKA lambda, int(popsize) is the number of new solutions per iteration', popsize_factor='1 # multiplier for popsize, convenience option to increase default popsize', randn='np.random.randn #v randn(lam, N) must return an np.array of shape (lam, N), see also cma.utilities.math.randhss', scaling_of_variables='None # deprecated, rather use fitness_transformations.ScaleCoordinates instead (or CMA_stds). Scale for each variable in that effective_sigma0 = sigma0*scaling. Internally the variables are divided by scaling_of_variables and sigma is unchanged, default is `np.ones(N)`', seed='time # random number seed for `numpy.random`; `None` and `0` equate to `time`, `np.nan` means "do nothing", see also option "randn"', signals_filename='cma_signals.in # read versatile options from this file (use `None` or `""` for no file) which contains a single options dict, e.g. ``{"timeout": 0}`` to stop, string-values are evaluated, e.g. "np.inf" is valid', termination_callback='[] #v a function or list of functions returning True for termination, called in `stop` with `self` as argument, could be abused for side effects', timeout='inf #v stop if timeout seconds are exceeded, the string "2.5 * 60**2" evaluates to 2 hours and 30 minutes', tolconditioncov='1e14 #v stop if the condition of the covariance matrix is above `tolconditioncov`', tolfacupx='1e3 #v termination when step-size increases by tolfacupx (diverges). That is, the initial step-size was chosen far too small and better solutions were found far away from the initial solution x0', tolupsigma='1e20 #v sigma/sigma0 > tolupsigma * max(eigenvals(C)**0.5) indicates "creeping behavior" with usually minor improvements', tolflatfitness='1 #v iterations tolerated with flat fitness before termination', tolfun='1e-11 #v termination criterion: tolerance in function value, quite useful', tolfunhist='1e-12 #v termination criterion: tolerance in function value history', tolfunrel='0 #v termination criterion: relative tolerance in function value: Delta f current < tolfunrel * (median0 - median_min)', tolstagnation='int(100 + 100 * N**1.5 / popsize) #v termination if no improvement over tolstagnation iterations', tolx='1e-11 #v termination criterion: tolerance in x-changes', transformation='None # depreciated, use cma.fitness_transformations.FitnessTransformation instead.\n [t0, t1] are two mappings, t0 transforms solutions from CMA-representation to f-representation (tf_pheno),\n t1 is the (optional) back transformation, see class GenoPheno', typical_x='None # used with scaling_of_variables', updatecovwait='None #v number of iterations without distribution update, name is subject to future changes', verbose='3 #v verbosity e.g. of initial/final message, -1 is very quiet, -9 maximally quiet, may not be fully implemented', verb_append='0 # initial evaluation counter, if append, do not overwrite output files', verb_disp='100 #v verbosity: display console output every verb_disp iteration', verb_disp_overwrite='inf #v start overwriting after given iteration', verb_filenameprefix=CMADataLogger.default_prefix+' # output path (folder) and filenames prefix', verb_log='1 #v verbosity: write data to files every verb_log iteration, writing can be time critical on fast to evaluate functions', verb_log_expensive='N * (N <= 50) # allow to execute eigendecomposition for logging every verb_log_expensive iteration, 0 or False for never', verb_plot='0 #v in fmin2(): plot() is called every verb_plot iteration', verb_time='True #v output timings on console', vv='{} #? versatile set or dictionary for hacking purposes, value found in self.opts["vv"]'):

use this function to get keyword completion for CMAOptions.

cma.CMAOptions('substr') additionally provides substring search.

returns default options as a dict (not a cma.CMAOptions dict).
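
For example, a minimal sketch (assuming the module is imported as shown; the dict maps option names to their default values, mostly strings):

>>> import cma
>>> defaults = cma.evolution_strategy.cma_default_options_()
>>> isinstance(defaults, dict) and 'tolfun' in defaults
True
>>> cma.CMAOptions('tolfun')  #doctest: +ELLIPSIS
{...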

def fmin(objective_function, x0, sigma0, options=None, args=(), gradf=None, restarts=0, restart_from_best='False', incpopsize=2, eval_initial_x=False, parallel_objective=None, noise_handler=None, noise_change_sigma_exponent=1, noise_kappa_exponent=0, bipop=False, callback=None):

functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.

fmin2 provides the cleaner return values, namely the tuple (xbest, es).

Calling Sequences

fmin(objective_function, x0, sigma0)
minimizes objective_function starting at x0 with standard deviation sigma0 (step-size).
fmin(objective_function, x0, sigma0, options={'ftarget': 1e-5})
minimizes objective_function up to target function value 1e-5, which is typically useful for benchmarking.
fmin(objective_function, x0, sigma0, args=('f',))
minimizes objective_function called with an additional argument 'f'.
fmin(objective_function, x0, sigma0, options={'ftarget':1e-5, 'popsize':40})
uses additional options ftarget and popsize.
fmin(objective_function, esobj, None, options={'maxfevals': 1e5})
uses the CMAEvolutionStrategy object instance esobj to optimize objective_function, similar to esobj.optimize().

Arguments

objective_function
called as objective_function(x, *args) to be minimized. x is a one-dimensional numpy.ndarray. See also the parallel_objective argument. objective_function can return numpy.NaN, which is interpreted as outright rejection of solution x and invokes an immediate resampling and (re-)evaluation of a new solution not counting as function evaluation. The attribute variable_annotations is passed into the CMADataLogger.persistent_communication_dict.
x0
list or numpy.ndarray, initial guess of minimum solution before the application of the geno-phenotype transformation according to the transformation option. It can also be a callable that is called (without input argument) before each restart to yield the initial guess, such that each restart may start from a different place. x0 can also be a cma.CMAEvolutionStrategy object instance; in that case sigma0 can be None.
sigma0
scalar, initial standard deviation in each coordinate. sigma0 should be about 1/4th of the search domain width (where the optimum is to be expected). The variables in objective_function should be scaled such that they presumably have similar sensitivity. See also ScaleCoordinates.
options
a dictionary with additional options passed to the constructor of class CMAEvolutionStrategy, see cma.CMAOptions() for a list of available options.
args=()
arguments to be used to call the objective_function
gradf=None
gradient of f, where len(gradf(x, *args)) == len(x). gradf is called once in each iteration if gradf is not None.
restarts=0
number of restarts with increasing population size (see also parameter incpopsize), implementing the IPOP-CMA-ES restart strategy; see also parameter bipop. To restart from different points (recommended), pass x0 as a callable (see the x0 argument above) rather than a fixed point.
restart_from_best=False
which point to restart from: if True, restart from the best evaluated solution so far instead of from x0
incpopsize=2
multiplier for increasing the population size popsize before each restart
parallel_objective
an objective function that accepts a list of numpy.ndarray as input and returns a list of f-values; it is used instead of objective_function for most evaluations. objective_function, if callable, is still used for the initial (also initial elitist) and the final evaluations. If parallel_objective is given, objective_function (the first argument) may be None.
eval_initial_x=False
whether to evaluate the initial solution x0; with None, it is evaluated only if the elitist option is active
noise_handler=None
a NoiseHandler class or instance or None. Example: cma.fmin(f, 6 * [1], 1, noise_handler=cma.NoiseHandler(6)) see help(cma.NoiseHandler).
noise_change_sigma_exponent=1
exponent for the sigma increment provided by the noise handler for additional noise treatment. 0 means no sigma change.
noise_kappa_exponent=0
instead of applying reevaluations, the "number of evaluations" is (ab)used as scaling factor kappa (experimental).
bipop=False
if bool(bipop) is True, run as BIPOP-CMA-ES; BIPOP is a special restart strategy switching between two population sizings: small (relative to the large population size, with varying initial sigma; the first run is accounted on the "small" budget) and large (progressively increased as in IPOP). This makes the algorithm potentially able to solve both functions with many regularly arranged local optima and functions with irregularly arranged local optima (the latter by frequently restarting with small populations). Small populations are (re-)started as long as the cumulated budget_small is smaller than bipop * max(1, budget_large). For the bipop parameter to actually conduct restarts also with the larger population size, select a non-zero number of (IPOP) restarts; the recommended setting is restarts <= 9 with x0 passed as a callable (e.g. using np.random.rand) to generate different initial solutions. Small-population restarts do not count towards this total restart count.
callback=None
callable or list of callables called at the end of each iteration with the current CMAEvolutionStrategy instance as argument.
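
For example, the callback argument can record the evolution of the step-size (a minimal sketch; the list sigmas is our own bookkeeping, not part of the API):

sigmas = []
res = cma.fmin(cma.ff.sphere, 3 * [1], 0.5, {'verbose': -9, 'maxiter': 10},
               callback=lambda es: sigmas.append(es.sigma))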

Optional Arguments

All values in the options dictionary are evaluated if they are of type str, except for verb_filenameprefix; see class CMAOptions for details. The full list is available by calling cma.CMAOptions().

>>> import cma
>>> cma.CMAOptions()  #doctest: +ELLIPSIS
{...

Subsets of options can be displayed, for example like cma.CMAOptions('tol'), or cma.CMAOptions('bound'), see also class CMAOptions.

Return

Return the tuple provided by CMAEvolutionStrategy.result, appended with the termination conditions, the OOOptimizer instance, and a BaseDataLogger instance:

res = es.result + (es.stop(), es, logger)
where
  • res[0] (xopt) -- best evaluated solution
  • res[1] (fopt) -- respective function value
  • res[2] (evalsopt) -- respective number of function evaluations
  • res[3] (evals) -- number of overall conducted objective function evaluations
  • res[4] (iterations) -- number of overall conducted iterations
  • res[5] (xmean) -- mean of the final sample distribution
  • res[6] (stds) -- effective stds of the final sample distribution
  • res[-3] (stop) -- termination condition(s) in a dictionary
  • res[-2] (cmaes) -- class CMAEvolutionStrategy instance
  • res[-1] (logger) -- class CMADataLogger instance
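
For instance, a sketch of accessing the returned tuple by index:

res = cma.fmin(cma.ff.sphere, 2 * [1], 0.5, {'verbose': -9})
xopt, fopt = res[0], res[1]    # best solution and its f-value
es, logger = res[-2], res[-1]  # CMAEvolutionStrategy and logger instances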

Details

This function is an interface to the class CMAEvolutionStrategy. The latter class should be used when full control over the iteration loop of the optimizer is desired.

Examples

The following example calls fmin optimizing the Rosenbrock function in 10-D with initial solution 0.1 and initial step-size 0.3. The options are specified for use with the doctest module.

>>> import cma
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'CMA_diagonal':100, 'seed':1234, 'verb_time':0}
>>>
>>> res = cma.fmin(cma.ff.rosen, [0.1] * 10, 0.3, options)  #doctest: +ELLIPSIS
(5_w,10)-aCMA-ES (mu_w=3.2,w_1=45%) in dimension 10 (seed=1234...)
   Covariance matrix is diagonal for 100 iterations (1/ccov=26...
Iterat #Fevals   function value  axis ratio  sigma ...
    1     10 ...
termination on tolfun=1e-11 ...
final/bestever f-value = ...
>>> assert res[1] < 1e-12  # f-value of best found solution
>>> assert res[2] < 8000  # evaluations

The above call is essentially equivalent to the slightly more verbose call:

res = cma.CMAEvolutionStrategy([0.1] * 10, 0.3,
            options=options).optimize(cma.ff.rosen).result

where optimize returns a CMAEvolutionStrategy instance. The following example calls fmin optimizing the Rastrigin function in 3-D with a random initial solution in [-1, 1], initial step-size 0.5 and the BIPOP restart strategy (which progressively increases the population size). The options are specified for use with the doctest module.

>>> import cma
>>> import numpy as np  # np is used in the x0 callable below
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'seed':12345, 'verb_time':0, 'ftarget': 1e-8}
>>>
>>> res = cma.fmin(cma.ff.rastrigin, lambda : 2. * np.random.rand(3) - 1, 0.5,
...                options, restarts=9, bipop=True)  #doctest: +ELLIPSIS
(3_w,7)-aCMA-ES (mu_w=2.3,w_1=58%) in dimension 3 (seed=12345...

In either case, the method:

cma.plot();

(based on matplotlib.pyplot) produces a plot of the run and, if necessary:

cma.s.figshow()

shows the plot in a window. Finally:

cma.s.figsave('myfirstrun')  # figsave from matplotlib.pyplot

will save the figure as a png file.

We can use the gradient like

>>> import cma
>>> import numpy as np
>>> res = cma.fmin(cma.ff.rosen, np.zeros(10), 0.1,
...             options = {'ftarget':1e-8,},
...             gradf=cma.ff.grad_rosen,
...         )  #doctest: +ELLIPSIS
(5_w,...
>>> assert cma.ff.rosen(res[0]) < 1e-8
>>> assert res[2] < 3600  # 1% are > 3300
>>> assert res[3] < 3600  # 1% are > 3300

If solutions can only be ranked comparatively, either use CMAEvolutionStrategy directly or pass an objective that accepts a list of solutions as input:

>>> def parallel_sphere(X): return [cma.ff.sphere(x) for x in X]
>>> x, es = cma.fmin2(None, 3 * [0], 0.1, {'verbose': -9},
...                   parallel_objective=parallel_sphere)
>>> assert es.result[1] < 1e-9

See Also
CMAEvolutionStrategy, OOOptimizer.optimize, plot, CMAOptions, scipy.optimize.fmin

def fmin2(objective_function, x0, sigma0, options=None, args=(), gradf=None, restarts=0, restart_from_best='False', incpopsize=2, eval_initial_x=False, parallel_objective=None, noise_handler=None, noise_change_sigma_exponent=1, noise_kappa_exponent=0, bipop=False, callback=None):

wrapper around cma.fmin returning the tuple (xbest, es),

and with the same input arguments as fmin. Hence a typical calling pattern may be:

x, es = cma.fmin2(...)  # recommended pattern
es = cma.fmin2(...)[1]  # `es` contains all available information
x = cma.fmin2(...)[0]   # keep only the best evaluated solution

fmin2 is equivalent to:

res = fmin(...)
return res[0], res[-2]

Conversely, fmin expressed via fmin2 is:

es = fmin2(...)[1]  # fmin2(...)[0] is es.result[0]
return es.result + (es.stop(), es, es.logger)

The best found solution is equally available under:

fmin(...)[0]
fmin2(...)[0]
fmin2(...)[1].result[0]
fmin2(...)[1].result.xbest
fmin2(...)[1].best.x

The incumbent, the current best estimate for the optimum, is available under:

fmin(...)[5]
fmin2(...)[1].result[5]
fmin2(...)[1].result.xfavorite
def fmin_con(objective_function, x0, sigma0, g=no_constraints, h=no_constraints, post_optimization=False, archiving=True, **kwargs):

Deprecated: use cma.ConstrainedFitnessAL or cma.fmin_con2 instead.

Optimize f with constraints g (inequalities) and h (equalities).

Construct an Augmented Lagrangian instance f_aug_lag of the type cma.constraints_handler.AugmentedLagrangian from objective_function and g and h.

Equality constraints should preferably be passed as two inequality constraints like [h - eps, -h - eps], with eps >= 0. When eps > 0, also feasible solution tracking can succeed.

Return the tuple (es.result.xfavorite, es) of types (numpy.ndarray, CMAEvolutionStrategy), where es == cma.fmin2(f_aug_lag, x0, sigma0, **kwargs)[1].

Depending on kwargs['logging'] and on the verbosity settings in kwargs['options'], the AugmentedLagrangian writes (hidden) logging files.

The second return value (a CMAEvolutionStrategy) has an additional attribute best_feasible which contains information about the best feasible solution in the best_feasible.info dictionary, provided any feasible solution was found. This only works with inequality constraints (equality constraints are wrongly interpreted as inequality constraints).

If post_optimization is set to True, the attribute best_feasible of the second return value is updated with the best feasible solution, obtained by minimizing the sum of the squared positive constraint values starting from the point es.result.xfavorite. Additionally, the first return value is then the best feasible solution obtained in post-optimization.

When equality constraints are present and a "feasible" solution is requested, post_optimization must be a strictly positive float indicating the error on the inequality constraints.

The second return value also has a con_archives attribute, which is nonempty if archiving is set. The last element of each archive is the best feasible solution, if any was found.

See cma.fmin for further parameters **kwargs.

>>> import cma
>>> x, es = cma.evolution_strategy.fmin_con(
...             cma.ff.sphere, 3 * [0], 1, g=lambda x: [1 - x[0]**2, -(1 - x[0]**2) - 1e-6],
...             options={'termination_callback': lambda es: -1e-5 < sum(es.mean**2) - 1 < 1e-5,
...                      'verbose':-9})
>>> assert 'callback' in es.stop()
>>> assert es.result.evaluations < 1500  # 10%-ish above 1000, 1%-ish above 1300
>>> assert (sum(es.mean**2) - 1)**2 < 1e-9, es.mean
>>> x, es = cma.evolution_strategy.fmin_con(
...             cma.ff.sphere, 2 * [0], 1, g=lambda x: [1 - x[0]**2],
...             options={'termination_callback': lambda es: -1e-8 < sum(es.mean**2) - 1 < 1e-8,
...                      'seed':1, 'verbose':-9})
>>> assert es.best_feasible.f < 1 + 1e-5, es.best_feasible.f
>>> ".info attribute dictionary keys: {}".format(sorted(es.best_feasible.info))
".info attribute dictionary keys: ['f', 'g', 'g_al', 'x']"

Details: this is a versatile function subject to changes. It is possible to access the AugmentedLagrangian instance like

>>> al = es.augmented_lagrangian
>>> isinstance(al, cma.constraints_handler.AugmentedLagrangian)
True
>>> # al.logger.plot()  # plots the evolution of AL coefficients
>>> x, es = cma.evolution_strategy.fmin_con(
...             cma.ff.sphere, 2 * [0], 1, g=lambda x: [y+1 for y in x],
...             post_optimization=True, options={"verbose": -9})
>>> assert all(y <= -1 for y in x)  # assert feasibility of x
def fmin_con2(objective_function, x0, sigma0, constraints=no_constraints, find_feasible_first=False, find_feasible_final=False, kwargs_confit=None, **kwargs_fmin):

optimize f with inequality constraints g.

constraints is a function that returns a list of constraint values, where feasibility means all values <= 0. An equality constraint h(x) == 0 can be expressed as two inequality constraints like [h(x) - eps, -h(x) - eps] with eps >= 0, as sketched below.
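
For instance, a hypothetical equality constraint h(x) = x[0] + x[1] - 1 == 0 can be relaxed into the required form like this (eps is the admissible violation; the names are illustrative only):

eps = 1e-6
h = lambda x: x[0] + x[1] - 1
constraints = lambda x: [h(x) - eps, -h(x) - eps]  # both <= 0 iff |h(x)| <= eps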

The find_feasible_... arguments toggle searching for a feasible solution before and after, respectively, the constrained problem is optimized. Because this cannot work with equality constraints, where the feasible domain has zero volume, both are off by default.

kwargs_confit are keyword arguments used to instantiate constraints_handler.ConstrainedFitnessAL, which is the function actually optimized; it is returned as the objective_function attribute of the second return value (of type CMAEvolutionStrategy).

Any further keyword arguments (**kwargs_fmin) are passed to cma.fmin2.

Consider using ConstrainedFitnessAL directly instead of fmin_con2.
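
A minimal calling sketch (the constraint function is hypothetical; feasibility here requires x[0] >= 1):

x, es = cma.fmin_con2(cma.ff.sphere, 3 * [1], 0.5,
                      constraints=lambda x: [1 - x[0]],
                      options={'verbose': -9})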

def fmin_lq_surr(objective_function, x0, sigma0, options=None, **kwargs):

minimize objective_function with lq-CMA-ES.

See help(cma.fmin) for the input parameter descriptions where parallel_objective is not available and noise-related options may fail.

Returns the tuple xbest, es similar to fmin2, however xbest takes into account only some of the recent history and not all evaluations. es.result is partly based on surrogate f-values and may hence be confusing. In particular, es.best contains the solution with the best _surrogate_ value (which is usually of little interest). See fmin_lq_surr2 for a fix.

As in general, es.result.xfavorite is considered the best available estimate of the optimal solution.

Example code

>>> import cma
>>> x, es = cma.fmin_lq_surr(cma.ff.rosen, 2 * [0], 0.1,
...                          {'verbose':-9,  # verbosity for doctesting
...                           'ftarget':1e-2, 'seed':11})
>>> assert 'ftarget' in es.stop(), (es.stop(), es.result_pretty())
>>> assert es.result.evaluations < 90, es.result.evaluations  # can be 137 depending on seed

Details

lq-CMA-ES builds a linear or quadratic (global) model as a surrogate to try to circumvent evaluations of the objective function, see link below.

This function calls fmin2 with a surrogate as parallel_objective argument. The model is kept the same for each restart. Use fmin_lq_surr2 if this is not desirable.

kwargs['callback'] is modified by appending a callable that injects model.xopt. This can be prevented by passing callback=False or adding False as an element of the callback list (see also cma.fmin).

parallel_objective is assigned to a surrogate model instance of cma.fitness_models.SurrogatePopulation.

es.countevals is updated from the evaluations attribute of the constructed surrogate to count only "true" evaluations.

See https://cma-es.github.io/lq-cma for references and details about the algorithm.

def fmin_lq_surr2(objective_function, x0, sigma0, options=None, inject=True, restarts=0, incpopsize=2, keep_model=False, not_evaluated=np.isnan, callback=None):

minimize objective_function with lq-CMA-ES.

x0 is the initial solution or can be a callable that returns an initial solution (different for each restarted run). See cma.fmin for further input documentations and cma.CMAOptions() for the available options.

inject determines whether the best solution of the model is reinjected in each iteration. By default, a new surrogate model is used after each restart (keep_model=False) and the population size is multiplied by a factor of two (incpopsize=2) like in IPOP-CMA-ES (see also help(cma.fmin)).

Returns the tuple xbest, es like fmin2. As in general, es.result.xfavorite (and es.mean as genotype) is considered the best available estimate of the optimal solution.

Example code

>>> import cma
>>> x, es = cma.fmin_lq_surr2(cma.ff.rosen, 2 * [0], 0.1,
...                           {'verbose':-9,  # verbosity for doctesting
...                            'ftarget':1e-2, 'seed':3})
>>> assert 'ftarget' in es.stop(), (es.stop(), es.result_pretty())
>>> assert es.result.evaluations < 90, es.result.evaluations  # can be >130? depending on seed
>>> assert es.countiter < 60, es.countiter

Details

lq-CMA-ES builds a linear or quadratic (global) model as a surrogate to circumvent evaluations of the objective function, see link below.

This code uses the ask-and-tell interface to CMA-ES via the class CMAEvolutionStrategy, to which the options dict is passed.

To pass additional arguments to the objective function, use functools.partial, as sketched below.
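
For example (a sketch; the objective f and its extra scale argument are hypothetical):

import functools
def f(x, scale): return scale * cma.ff.sphere(x)
x, es = cma.fmin_lq_surr2(functools.partial(f, scale=2.0), 2 * [1], 0.5,
                          {'verbose': -9, 'ftarget': 1e-8})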

not_evaluated must return True if a value indicates (by convention of cma.fitness_models.SurrogatePopulation.EvaluationManager.fvalues) a missing "true" evaluation of the objective_function.

See https://cma-es.github.io/lq-cma for references and details about the algorithm.

def is_feasible(x, f):

default to check feasibility of f-values.

Used for rejection sampling in method ask_and_eval.
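
Per the is_feasible entry in cma_default_options_ above, the default behaves like this sketch:

def is_feasible(x, f):
    # reject f-values of None or np.NaN; everything else counts as feasible
    return f not in (None, np.NaN)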

See Also
CMAOptions, CMAOptions('feas').
def no_constraints(x):

Undocumented

def safe_str(s):

return a string safe to eval or raise an exception.

Selected words and chars are considered safe such that all default string-type option values from CMAOptions() pass. This function is implemented for convenience, to keep the default option format backwards compatible, and to be able to pass, for example, 3 * N. Function or class names other than those from the default values cannot be passed as strings (any more) but only as the function or class themselves.
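
A sketch of the intended contract (not the actual implementation; the unsafe input is a made-up example):

from cma.evolution_strategy import safe_str
s = safe_str('3 * N')          # returns a string considered safe to eval
safe_str('__import__("os")')   # expected to raise an exception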

all_stoppings: list =

Undocumented

cma_allowed_options_keys =

Undocumented

cma_default_options =

Undocumented

cma_versatile_options =

Undocumented

meta_parameters =

Undocumented

use_archives =

speed-up for very large population sizes: use_archives prevents the need for an inverse gp-transformation. It relies on the collections module; what happens when it is set to False is untested.

def _al_set_logging(al, kwargs, *more_kwargs):

try to figure a good logging value from various verbosity options

_assertions_cubic: bool =

Undocumented

_assertions_quadratic: bool =

Undocumented

_CMASolutionDict =

Undocumented

_debugging: bool =

Undocumented

_depreciated: bool =

Undocumented

_new_injections: bool =

Undocumented