CMA-ES (evolution strategy), the main sub-module of cma, providing in particular CMAOptions, CMAEvolutionStrategy, and fmin2.
Class CMAEvolutionStrategy - CMA-ES stochastic optimizer class with ask-and-tell interface.
Class CMAEvolutionStrategyResult - a results tuple from CMAEvolutionStrategy property result.
Class CMAOptions - a dictionary with the available options and their default values for class CMAEvolutionStrategy.
Class MetaParameters - collection of many meta parameters.
Exception InjectionWarning - injected solutions are not passed to tell as expected.
Function cma_default_options_ - use this function to get keyword completion for CMAOptions.
Function fmin - functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.
Function fmin2 - wrapper around cma.fmin returning the tuple (xbest, es).
Function fmin_con - deprecated: use cma.ConstrainedFitnessAL or cma.fmin_con2 instead.
Function fmin_con2 - optimize f with inequality constraints g.
Function fmin_lq_surr - minimize objective_function with lq-CMA-ES.
Function fmin_lq_surr2 - minimize objective_function with lq-CMA-ES.
Function is_feasible - default to check feasibility of f-values.
Function no… - Undocumented.
Function safe_str - return a string safe to eval or raise an exception.
Variable __all__ - Undocumented.
Variable cma… - Undocumented.
Variable cma… - Undocumented.
Variable cma… - Undocumented.
Variable meta_parameters - Undocumented.
Variable use_archives - speed up for very large population size; use_archives prevents the need for an inverse gp-transformation, relies on the collections module, not sure what happens if set to False.
Class _CMAEvolutionStrategyResult - a results tuple from CMAEvolutionStrategy property result.
Class _CMAParameters - strategy parameters like population size and learning rates.
Class _… - a hack to get most code examples running.
Class _… - no class docstring; 0/2 instance variables, 1/2 methods documented.
Class _CMAStopDict - keep and update a termination condition dictionary.
Function _al… - try to figure a good logging value from various verbosity options.
Variable _assertions… - Undocumented.
Variable _assertions… - Undocumented.
Variable _… - Undocumented.
Variable _debugging - Undocumented.
Variable _depreciated - Undocumented.
Variable _new… - Undocumented.
use this function to get keyword completion for CMAOptions.

cma.CMAOptions('substr') provides even substring search.

Returns default options as a dict (not a cma.CMAOptions dict).
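The substring lookup can be pictured with a plain dict standing in for the real options dictionary; the keys and default-value strings below are illustrative placeholders, not the actual cma defaults:

```python
# Hypothetical sketch of the lookup that cma.CMAOptions('substr') performs:
# return the subset of options whose key contains the given substring.
defaults = {
    'tolfun': '1e-11  # tolerance in function value (placeholder)',
    'tolx': '1e-11  # tolerance in x-changes (placeholder)',
    'popsize': '4 + 3 * log(N)  # population size (placeholder)',
}

def subset(options, substr):
    """return the key-value pairs of `options` whose key contains `substr`."""
    return {k: v for k, v in options.items() if substr in k}

print(sorted(subset(defaults, 'tol')))  # the two 'tol*' entries
```

The real CMAOptions behaves like a dict with this substring filtering built into its constructor.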
functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.

fmin2 provides the cleaner return values.

Calling Sequences
    fmin(objective_function, x0, sigma0)
        minimizes objective_function starting at x0 and with standard deviation sigma0 (step-size)
    fmin(objective_function, x0, sigma0, options={'ftarget': 1e-5})
        minimizes objective_function up to target function value 1e-5, which is typically useful for benchmarking.
    fmin(objective_function, x0, sigma0, args=('f',))
        minimizes objective_function called with an additional argument 'f'.
    fmin(objective_function, x0, sigma0, options={'ftarget': 1e-5, 'popsize': 40})
        uses the additional options ftarget and popsize
    fmin(objective_function, esobj, None, options={'maxfevals': 1e5})
        uses the CMAEvolutionStrategy object instance esobj to optimize objective_function, similar to esobj.optimize().
Arguments
    objective_function
        called as objective_function(x, *args) to be minimized. x is a
        one-dimensional numpy.ndarray. See also the parallel_objective
        argument. objective_function can return numpy.NaN, which is
        interpreted as outright rejection of solution x and invokes an
        immediate resampling and (re-)evaluation of a new solution not
        counting as function evaluation. The attribute
        variable_annotations is passed into the
        CMADataLogger.persistent_communication_dict.
    x0
        list or numpy.ndarray, initial guess of minimum solution before
        the application of the geno-phenotype transformation according
        to the transformation option. It can also be a callable that is
        called (without input argument) before each restart to yield
        the initial guess, such that each restart may start from a
        different place. Otherwise, x0 can also be a
        cma.CMAEvolutionStrategy object instance, in which case sigma0
        can be None.
    sigma0
        scalar, initial standard deviation in each coordinate. sigma0
        should be about 1/4th of the search domain width (where the
        optimum is to be expected). The variables in objective_function
        should be scaled such that they presumably have similar
        sensitivity. See also ScaleCoordinates.
    options
        a dictionary with additional options passed to the constructor
        of class CMAEvolutionStrategy, see cma.CMAOptions() for a list
        of available options.
    args=()
        arguments to be used to call the objective_function
    gradf=None
        gradient of f, where len(gradf(x, *args)) == len(x). gradf is
        called once in each iteration if gradf is not None.
    restarts=0
        number of restarts with increasing population size, see also
        parameter incpopsize, implementing the IPOP-CMA-ES restart
        strategy, see also parameter bipop; to restart from different
        points (recommended), pass x0 as a string.
    restart_from_best=False
        which point to restart from
    incpopsize=2
        multiplier for increasing the population size popsize before
        each restart
    parallel_objective
        an objective function that accepts a list of numpy.ndarray as
        input and returns a list, which is mostly used instead of
        objective_function, but for the initial (also initial elitist)
        and the final evaluations unless not
        callable(objective_function). If parallel_objective is given,
        the objective_function (first argument) may be None.
    eval_initial_x=None
        evaluate initial solution, for None only with elitist option
    noise_handler=None
        a NoiseHandler class or instance or None. Example:
        cma.fmin(f, 6 * [1], 1, noise_handler=cma.NoiseHandler(6)),
        see help(cma.NoiseHandler).
    noise_change_sigma_exponent=1
        exponent for the sigma increment provided by the noise handler
        for additional noise treatment. 0 means no sigma change.
    noise_evaluations_as_kappa=0
        instead of applying reevaluations, the "number of evaluations"
        is (ab)used as scaling factor kappa (experimental).
    bipop=False
        if bool(bipop) is True, run as BIPOP-CMA-ES; BIPOP is a special
        restart strategy switching between two population sizings:
        small (relative to the large population size and with varying
        initial sigma; the first run is accounted on the "small"
        budget) and large (progressively increased as in IPOP). This
        makes the algorithm potentially solve both functions with many
        regularly arranged local optima and functions with irregularly
        arranged local optima (the latter by frequently restarting with
        small populations). Small populations are (re-)started as long
        as the cumulated budget_small is smaller than
        bipop * max(1, budget_large). For the bipop parameter to
        actually conduct restarts also with the larger population size,
        select a non-zero number of (IPOP) restarts; the recommended
        setting is restarts<=9 and x0 passed as a string using
        numpy.rand to generate initial solutions. Small-population
        restarts do not count into this total restart count.
    callback=None
        callable or list of callables called at the end of each
        iteration with the current CMAEvolutionStrategy instance as
        argument.
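The BIPOP budget rule quoted above can be sketched in a few lines; the function name and signature are illustrative, not cma's internal implementation:

```python
# Sketch of the BIPOP budget rule: small-population runs keep being
# (re-)started while the cumulated small budget stays below
# bipop * max(1, budget_large). bipop=True behaves as the factor 1.
def start_small_run(budget_small, budget_large, bipop=True):
    return budget_small < float(bipop) * max(1, budget_large)

assert start_small_run(0, 0)       # the first run is on the "small" budget
assert not start_small_run(10, 5)  # small budget exhausted relative to large
```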
Optional Arguments

All values in the options dictionary are evaluated if they are of type
str, besides verb_filenameprefix, see class CMAOptions for details. The
full list is available by calling cma.CMAOptions().

>>> import cma
>>> cma.CMAOptions()  #doctest: +ELLIPSIS
{...

Subsets of options can be displayed, for example like
cma.CMAOptions('tol') or cma.CMAOptions('bound'), see also class
CMAOptions.
Return

Return the list provided in CMAEvolutionStrategy.result, appended with
termination conditions, an OOOptimizer and a BaseDataLogger:

    res = es.result + (es.stop(), es, logger)

where
    - res[0] (xopt): best evaluated solution
    - res[1] (fopt): respective function value
    - res[2] (evalsopt): respective number of function evaluations
    - res[3] (evals): number of overall conducted objective function evaluations
    - res[4] (iterations): number of overall conducted iterations
    - res[5] (xmean): mean of the final sample distribution
    - res[6] (stds): effective stds of the final sample distribution
    - res[-3] (stop): termination condition(s) in a dictionary
    - res[-2] (cmaes): class CMAEvolutionStrategy instance
    - res[-1] (logger): class CMADataLogger instance
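The layout above can be mocked with plain placeholders to show how the positive and negative indices line up (the real tuple comes from cma.fmin; all values here are stand-ins):

```python
# Mock of the fmin result tuple layout: seven result fields followed by
# the stop dict, the strategy object and the logger at negative indices.
xopt, fopt = [0.0], 0.0
stop, cmaes, logger = {'tolfun': 1e-11}, object(), object()
res = (xopt, fopt, 3, 10, 5, [0.0], [1.0], stop, cmaes, logger)

assert res[0] is xopt and res[1] == fopt
assert res[-3] is stop    # termination condition(s)
assert res[-2] is cmaes   # the CMAEvolutionStrategy instance
assert res[-1] is logger  # the data logger
```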
Details

This function is an interface to the class CMAEvolutionStrategy. The
latter class should be used when full control over the iteration loop
of the optimizer is desired.
Examples

The following example calls fmin optimizing the Rosenbrock function in
10-D with initial solution 0.1 and initial step-size 0.3. The options
are specified for the usage with the doctest module.

>>> import cma
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'CMA_diagonal': 100, 'seed': 1234, 'verb_time': 0}
>>>
>>> res = cma.fmin(cma.ff.rosen, [0.1] * 10, 0.3, options)  #doctest: +ELLIPSIS
(5_w,10)-aCMA-ES (mu_w=3.2,w_1=45%) in dimension 10 (seed=1234...)
   Covariance matrix is diagonal for 100 iterations (1/ccov=26...
Iterat #Fevals   function value  axis ratio  sigma ...
    1     10 ...
termination on tolfun=1e-11 ...
final/bestever f-value = ...
>>> assert res[1] < 1e-12  # f-value of best found solution
>>> assert res[2] < 8000  # evaluations
The above call is pretty much equivalent with the slightly more verbose
call:

    res = cma.CMAEvolutionStrategy([0.1] * 10, 0.3,
                                   options=options).optimize(cma.ff.rosen).result

where optimize returns a CMAEvolutionStrategy instance. The following
example calls fmin optimizing the Rastrigin function in 3-D with random
initial solution in [-2,2], initial step-size 0.5 and the BIPOP restart
strategy (that progressively increases population). The options are
specified for the usage with the doctest module.

>>> import cma
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'seed': 12345, 'verb_time': 0, 'ftarget': 1e-8}
>>>
>>> res = cma.fmin(cma.ff.rastrigin, lambda : 2. * np.random.rand(3) - 1, 0.5,
...                options, restarts=9, bipop=True)  #doctest: +ELLIPSIS
(3_w,7)-aCMA-ES (mu_w=2.3,w_1=58%) in dimension 3 (seed=12345...
In either case, the method:

    cma.plot();

(based on matplotlib.pyplot) produces a plot of the run and, if
necessary:

    cma.s.figshow()

shows the plot in a window. Finally:

    cma.s.figsave('myfirstrun')  # figsave from matplotlib.pyplot

will save the figure in a png.

We can use the gradient like

>>> import cma
>>> res = cma.fmin(cma.ff.rosen, np.zeros(10), 0.1,
...     options = {'ftarget': 1e-8,},
...     gradf=cma.ff.grad_rosen,
... )  #doctest: +ELLIPSIS
(5_w,...
>>> assert cma.ff.rosen(res[0]) < 1e-8
>>> assert res[2] < 3600  # 1% are > 3300
>>> assert res[3] < 3600  # 1% are > 3300
If solutions can only be comparatively ranked, either use
CMAEvolutionStrategy directly or use an objective function that accepts
a list of solutions as input:

>>> def parallel_sphere(X): return [cma.ff.sphere(x) for x in X]
>>> x, es = cma.fmin2(None, 3 * [0], 0.1, {'verbose': -9},
...                   parallel_objective=parallel_sphere)
>>> assert es.result[1] < 1e-9

See Also
    CMAEvolutionStrategy, OOOptimizer.optimize, plot, CMAOptions,
    scipy.optimize.fmin
wrapper around cma.fmin returning the tuple (xbest, es), and with the
same input arguments as fmin. Hence a typical calling pattern may be:

    x, es = cma.fmin2(...)  # recommended pattern
    es = cma.fmin2(...)[1]  # `es` contains all available information
    x = cma.fmin2(...)[0]   # keep only the best evaluated solution

fmin2 is an alias for:

    res = fmin(...)
    return res[0], res[-2]

Conversely, the fmin return value is available from the fmin2 return
value via:

    es = fmin2(...)[1]  # fmin2(...)[0] is es.result[0]
    return es.result + (es.stop(), es, es.logger)

The best found solution is equally available under:

    fmin(...)[0]
    fmin2(...)[0]
    fmin2(...)[1].result[0]
    fmin2(...)[1].result.xbest
    fmin2(...)[1].best.x

The incumbent, current estimate for the optimum, is available under:

    fmin(...)[5]
    fmin2(...)[1].result[5]
    fmin2(...)[1].result.xfavorite
Deprecated: use cma.ConstrainedFitnessAL or cma.fmin_con2 instead.

Optimize f with constraints g (inequalities) and h (equalities).
Construct an Augmented Lagrangian instance f_aug_lag of the type
cma.constraints_handler.AugmentedLagrangian from objective_function and
g and h.

Equality constraints should preferably be passed as two inequality
constraints like [h - eps, -h - eps], with eps >= 0. When eps > 0, also
feasible solution tracking can succeed.
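The equality-to-inequality transformation can be written out directly; the constraint h and the eps value below are illustrative:

```python
# An equality constraint h(x) == 0 passed as the two inequalities
# [h(x) - eps, -h(x) - eps]: both are <= 0 exactly when |h(x)| <= eps.
eps = 1e-6

def h(x):  # illustrative equality constraint: x[0] + x[1] == 1
    return x[0] + x[1] - 1.0

def g(x):
    v = h(x)
    return [v - eps, -v - eps]

assert all(gi <= 0 for gi in g([0.5, 0.5]))  # |h| = 0 <= eps: feasible
assert any(gi > 0 for gi in g([1.0, 1.0]))   # |h| = 1 >  eps: infeasible
```

With eps > 0 the feasible domain has positive volume, which is why feasible solution tracking can then succeed.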
Return a tuple

    (es.results.xfavorite: numpy.array, es: CMAEvolutionStrategy),

where es == cma.fmin2(f_aug_lag, x0, sigma0, **kwargs)[1].

Depending on kwargs['logging'] and on the verbosity settings in
kwargs['options'], the AugmentedLagrangian writes (hidden) logging
files.

The second return value of type CMAEvolutionStrategy has an
(additional) attribute best_feasible which contains the information
about the best feasible solution in the best_feasible.info dictionary,
given any feasible solution was found. This only works with inequality
constraints (equality constraints are wrongly interpreted as inequality
constraints).
If post_optimization is set to True, then the attribute best_feasible
of the second return value will be updated with the best feasible
solution obtained by optimizing the sum of the positive constraint
values squared, starting from the point es.results.xfavorite.
Additionally, the first return value will be the best feasible solution
obtained in post-optimization.

In case equality constraints are present and a "feasible" solution is
requested, post_optimization must be a strictly positive float
indicating the error on the inequality constraints.
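The post-optimization objective described above, the sum of the squared positive constraint values, is zero exactly on the feasible domain. A minimal sketch (the constraint function here is illustrative, not part of cma):

```python
# Sum of squared constraint violations: max(0, g_i(x))**2 summed over i.
# Minimizing this drives x toward the feasible domain, where it is 0.
def infeasibility(x, g):
    return sum(max(0.0, gi)**2 for gi in g(x))

g = lambda x: [x[0] + 1.0]  # feasible iff x[0] <= -1 (illustrative)
assert infeasibility([-2.0], g) == 0.0  # feasible point: no penalty
assert infeasibility([0.0], g) == 1.0   # violation 1, squared
```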
The second return value of type CMAEvolutionStrategy also has a
con_archives attribute, which is non-empty when archiving is active.
The last element of each archive is the best feasible solution if there
was any.

See cma.fmin for further parameters **kwargs.
>>> import cma
>>> x, es = cma.evolution_strategy.fmin_con(
...     cma.ff.sphere, 3 * [0], 1, g=lambda x: [1 - x[0]**2, -(1 - x[0]**2) - 1e-6],
...     options={'termination_callback': lambda es: -1e-5 < sum(es.mean**2) - 1 < 1e-5,
...              'verbose': -9})
>>> assert 'callback' in es.stop()
>>> assert es.result.evaluations < 1500  # 10%-ish above 1000, 1%-ish above 1300
>>> assert (sum(es.mean**2) - 1)**2 < 1e-9, es.mean
>>> x, es = cma.evolution_strategy.fmin_con(
...     cma.ff.sphere, 2 * [0], 1, g=lambda x: [1 - x[0]**2],
...     options={'termination_callback': lambda es: -1e-8 < sum(es.mean**2) - 1 < 1e-8,
...              'seed': 1, 'verbose': -9})
>>> assert es.best_feasible.f < 1 + 1e-5, es.best_feasible.f
>>> ".info attribute dictionary keys: {}".format(sorted(es.best_feasible.info))
".info attribute dictionary keys: ['f', 'g', 'g_al', 'x']"
Details: this is a versatile function subject to changes. It is
possible to access the AugmentedLagrangian instance like

>>> al = es.augmented_lagrangian
>>> isinstance(al, cma.constraints_handler.AugmentedLagrangian)
True
>>> # al.logger.plot()  # plots the evolution of AL coefficients
>>> x, es = cma.evolution_strategy.fmin_con(
...     cma.ff.sphere, 2 * [0], 1, g=lambda x: [y + 1 for y in x],
...     post_optimization=True, options={"verbose": -9})
>>> assert all(y <= -1 for y in x)  # assert feasibility of x
optimize f with inequality constraints g.

constraints is a function that returns a list of constraint values,
where feasibility means <= 0. An equality constraint h(x) == 0 can be
expressed as two inequality constraints like [h(x) - eps, -h(x) - eps]
with eps >= 0.

The find_feasible_... arguments toggle to search for a feasible
solution before and after the constrained problem is optimized. Because
this cannot work with equality constraints, where the feasible domain
has zero volume, find-feasible is off by default.

kwargs_confit are keyword arguments to instantiate
constraints_handler.ConstrainedFitnessAL, which is optimized and
returned as the objective_function attribute of the second return
argument (of type CMAEvolutionStrategy).

Other and further keyword arguments are passed (in **kwargs_fmin) to
cma.fmin2.

Consider using ConstrainedFitnessAL directly instead of fmin_con2.
minimize objective_function with lq-CMA-ES.

See help(cma.fmin) for the input parameter descriptions, where
parallel_objective is not available and noise-related options may fail.

Returns the tuple (xbest, es) similar to fmin2, however xbest takes
into account only some of the recent history and not all evaluations.
es.result is partly based on surrogate f-values and may hence be
confusing. In particular, es.best contains the solution with the best
_surrogate_ value (which is usually of little interest). See
fmin_lq_surr2 for a fix.

As in general, es.result.xfavorite is considered the best available
estimate of the optimal solution.
Example code
>>> import cma
>>> x, es = cma.fmin_lq_surr(cma.ff.rosen, 2 * [0], 0.1,
...                          {'verbose': -9,  # verbosity for doctesting
...                           'ftarget': 1e-2, 'seed': 11})
>>> assert 'ftarget' in es.stop(), (es.stop(), es.result_pretty())
>>> assert es.result.evaluations < 90, es.result.evaluations  # can be 137 depending on seed
Details

lq-CMA-ES builds a linear or quadratic (global) model as a surrogate to
try to circumvent evaluations of the objective function, see link
below.

This function calls fmin2 with a surrogate as parallel_objective
argument. The model is kept the same for each restart. Use
fmin_lq_surr2 if this is not desirable.

kwargs['callback'] is modified by appending a callable that injects
model.xopt. This can be prevented by passing callback=False or adding
False as an element of the callback list (see also cma.fmin).

parallel_objective is assigned to a surrogate model instance of
cma.fitness_models.SurrogatePopulation.

es.countevals is updated from the evaluations attribute of the
constructed surrogate to count only "true" evaluations.

See https://cma-es.github.io/lq-cma for references and details about
the algorithm.
minimize objective_function with lq-CMA-ES.

x0 is the initial solution or can be a callable that returns an
initial solution (different for each restarted run). See cma.fmin for
further input documentation and cma.CMAOptions() for the available
options.

inject determines whether the best solution of the model is reinjected
in each iteration. By default, a new surrogate model is used after each
restart (keep_model=False) and the population size is multiplied by a
factor of two (incpopsize=2) like in IPOP-CMA-ES (see also
help(cma.fmin)).

Returns the tuple (xbest, es) like fmin2. As in general,
es.result.xfavorite (and es.mean as genotype) is considered the best
available estimate of the optimal solution.
Example code
>>> import cma
>>> x, es = cma.fmin_lq_surr2(cma.ff.rosen, 2 * [0], 0.1,
...                           {'verbose': -9,  # verbosity for doctesting
...                            'ftarget': 1e-2, 'seed': 3})
>>> assert 'ftarget' in es.stop(), (es.stop(), es.result_pretty())
>>> assert es.result.evaluations < 90, es.result.evaluations  # can be >130? depending on seed
>>> assert es.countiter < 60, es.countiter
Details

lq-CMA-ES builds a linear or quadratic (global) model as a surrogate to
circumvent evaluations of the objective function, see link below.

This code uses the ask-and-tell interface to CMA-ES via the class
CMAEvolutionStrategy, to which the options dict is passed.

To pass additional arguments to the objective function use
functools.partial.
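Fixing extra arguments with functools.partial looks like this; the objective below is an illustrative shifted sphere, not a cma function:

```python
import functools

def objective(x, a):
    """shifted sphere; `a` is the extra argument to be fixed."""
    return sum((xi - a)**2 for xi in x)

# bind the additional argument once, then pass `f` as the objective
f = functools.partial(objective, a=1.0)
assert f([1.0, 1.0]) == 0.0
assert f([0.0, 0.0]) == 2.0
```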
not_evaluated must return True if a value indicates (by convention of
cma.fitness_models.SurrogatePopulation.EvaluationManager.fvalues) a
missing "true" evaluation of the objective_function.

See https://cma-es.github.io/lq-cma for references and details about
the algorithm.
default to check feasibility of f-values.

Used for rejection sampling in method ask_and_eval.

See Also
    CMAOptions, CMAOptions('feas').
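A hypothetical re-implementation of the documented default semantics (a sketch under the assumption that None and NaN mark infeasible f-values; the real predicate lives in cma):

```python
import math

# An f-value counts as infeasible when it is None or NaN; in rejection
# sampling such solutions are resampled instead of being told to the optimizer.
def is_feasible(x, f):
    return f is not None and not (isinstance(f, float) and math.isnan(f))

assert is_feasible([0.0], 1.2)
assert not is_feasible([0.0], float('nan'))
assert not is_feasible([0.0], None)
```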
return a string safe to eval or raise an exception.

Selected words and chars are considered safe such that all default
string-type option values from CMAOptions() pass. This function is
implemented for convenience, to keep the default option format
backwards compatible, and to be able to pass, for example, 3 * N.

Function or class names other than those from the default values cannot
be passed as strings (any more) but only as the function or class
themselves.
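The whitelist idea can be illustrated with a toy sanitizer (a hypothetical re-implementation in the spirit of the description; the word list is an assumption, not cma's actual one):

```python
import re

# Strip known-safe words, then reject the string if any letters remain:
# whatever is left could name an arbitrary function or attribute.
def toy_safe_str(s, known_words=('N', 'dim', 'np.log', 'int')):
    stripped = s
    for w in sorted(known_words, key=len, reverse=True):
        stripped = stripped.replace(w, '')
    if re.search(r'[A-Za-z_]', stripped):
        raise ValueError('unsafe string: %r' % (s,))
    return s

assert toy_safe_str('3 * N') == '3 * N'  # only whitelisted words: passes
```

Strings like `"__import__('os')"` leave letters behind after stripping and therefore raise, which is the point of the whitelist.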
speed up for very large population size. use_archives prevents the need
for an inverse gp-transformation; relies on the collections module; not
sure what happens if set to False.