Module purecma

Source Code for Module purecma

#!/usr/bin/env python
"""A minimalistic implementation of CMA-ES without using `numpy`.

The Covariance Matrix Adaptation Evolution Strategy, CMA-ES, serves for
numerical nonlinear function minimization.

The **main functionality** is implemented in

1. class `CMAES`, and

2. function `fmin`, which is a small single-line-usage wrapper around
   `CMAES`.

This code has two **purposes**:

1. to READ and UNDERSTAND the basic flow and the details of the
   CMA-ES *algorithm*. The source code is meant to be read. For a quick
   glance, study the first few code lines of `fmin` and the code of
   method `CMAES.tell`, where all the real work is done in about 20 lines
   of code (search "def tell" in the source). Otherwise, reading from
   the top is a feasible option, where the code of `fmin`,
   `CMAES.__init__`, `CMAES.ask`, and `CMAES.tell` is of particular
   interest.

2. to apply CMA-ES when the Python module `numpy` is not available.
   When `numpy` is available, `cma.fmin` or `cma.CMAEvolutionStrategy` are
   preferred for running "serious" simulations. The latter code has many
   more lines, but usually executes faster, offers a richer user
   interface, better termination options, boundary and noise handling,
   injection, automated restarts...

Dependencies: `math.exp`, `math.log`, and `random.normalvariate` (modules
`matplotlib.pylab` and `sys` are optional).

Testing: call ``python purecma.py`` at the OS shell. Tested with
Python 2.6, 2.7, 3.3, 3.5, and 3.6.

URL: http://github.com/CMA-ES/pycma

Last change: September, 2017, version 3.0.0

:Author: Nikolaus Hansen, 2010-2011, 2017

This code is released into the public domain (that is, you may
use and modify it however you like).

"""
from __future__ import division  # such that 1/2 != 0
from __future__ import print_function  # available since 2.6, not needed

from sys import stdout as _stdout  # not strictly necessary
from math import log, exp
from random import normalvariate as random_normalvariate

try:
    from .interfaces import OOOptimizer, BaseDataLogger as _BaseDataLogger
except (ImportError, ValueError):
    OOOptimizer, _BaseDataLogger = object, object
try:
    from .recombination_weights import RecombinationWeights
except (ImportError, ValueError):
    RecombinationWeights = None
del division, print_function  #, absolute_import, unicode_literals, with_statement

__version__ = '3.0.0'
__author__ = 'Nikolaus Hansen'
__docformat__ = 'reStructuredText'

def fmin(objective_fct, xstart, sigma,
         args=(),
         maxfevals='1e3 * N**2', ftarget=None,
         verb_disp=100, verb_log=1, verb_save=1000):
    """non-linear non-convex minimization procedure, a functional
    interface to CMA-ES.

    Parameters
    ==========
    `objective_fct`: `callable`
        a function that takes as input a `list` of floats (like
        [3.0, 2.2, 1.1]) and returns a single `float` (a scalar).
        The objective is to find ``x`` with ``objective_fct(x)``
        to be as small as possible.
    `xstart`: `list` or sequence
        list of numbers (like `[3.2, 2, 1]`), initial solution vector
    `sigma`: `float`
        initial step-size, standard deviation in any coordinate
    `args`: `tuple` or sequence
        arguments to `objective_fct`
    `ftarget`: `float`
        target function value
    `maxfevals`: `int` or `str`
        maximal number of function evaluations, a string
        is evaluated with ``N`` being the search space dimension
    `verb_disp`: `int`
        display on console every `verb_disp` iteration, 0 for never
    `verb_log`: `int`
        data logging every `verb_log` iteration, 0 for never
    `verb_save`: `int`
        save logged data every ``verb_save * verb_log`` iteration

    Return
    ======
    The `tuple` (``xmin``:`list`, ``es``:`CMAES`), where ``xmin`` is the
    best seen (evaluated) solution and ``es`` is the corresponding `CMAES`
    instance. Consult ``help(es.result)`` of property `result` for further
    results.

    Example
    =======
    The following example minimizes the function `ff.elli`::

        >> from cma import purecma, ff
        >> res = purecma.fmin(ff.elli, 3 * [0.5], 0.3, verb_disp=100)
        evals: ax-ratio max(std)   f-value
            7:     1.0  3.4e-01  240.2716966
           14:     1.0  3.9e-01  2341.50170536
          700:   247.9  2.4e-01  0.629102574062
         1400:  1185.9  5.3e-07  4.83466373808e-13
         1421:  1131.2  2.9e-07  5.50167024417e-14
        termination by {'tolfun': 1e-12}
        best f-value = 2.72976881789e-14
        solution = [5.284564665206811e-08, 2.4608091035303e-09, -1.3582873173543187e-10]
        >> print(res[0])
        [5.284564665206811e-08, 2.4608091035303e-09, -1.3582873173543187e-10]
        >> print(res[1].result[1])
        2.72976881789e-14
        >> res[1].logger.plot()  # needs pylab/matplotlib to be installed

    Details
    =======
    This call::

        >> import cma.purecma as pcma
        >> pcma.fmin(pcma.ff.elli, 10 * [0.5], 0.3, verb_save=0)

    and these lines::

        >> import cma.purecma as pcma
        >> es = pcma.CMAES(10 * [0.5], 0.3)
        >> es.logger = pcma.CMAESDataLogger()
        >> es.optimize(pcma.ff.elli, callback=es.logger.add)

    do pretty much the same. The `verb_save` parameter to `fmin` adds
    the possibility to plot the saved data *during* the execution from a
    different Python shell like ``pcma.CMAESDataLogger().load().plot()``.
    For example, with ``verb_save == 3`` every third time the logger
    records data they are saved to disk as well.

    :See: `CMAES`, `OOOptimizer`.
    """
    es = CMAES(xstart, sigma, maxfevals=maxfevals, ftarget=ftarget)
    if verb_log:  # prepare data logging
        es.logger = CMAESDataLogger(verb_log).add(es, force=True)
    while not es.stop():
        X = es.ask()  # get a list of sampled candidate solutions
        fit = [objective_fct(x, *args) for x in X]  # evaluate candidates
        es.tell(X, fit)  # update distribution parameters

        # that's it! The remainder is managing output behavior only.
        es.disp(verb_disp)
        if verb_log:
            if es.counteval / es.params.lam % verb_log < 1:
                es.logger.add(es)
                if verb_save and (es.counteval / es.params.lam
                                  % (verb_save * verb_log) < 1):
                    es.logger.save()

    if verb_disp:  # do not print by default to allow silent verbosity
        es.disp(1)
        print('termination by', es.stop())
        print('best f-value =', es.result[1])
        print('solution =', es.result[0])
    if verb_log:
        es.logger.add(es, force=True)
        es.logger.save() if verb_save else None
    return [es.best.x if es.best.f < objective_fct(es.xmean) else
            es.xmean, es]


class CMAESParameters(object):
    """static "internal" parameter setting for `CMAES`

    """
    default_popsize = '4 + int(3 * log(N))'

    def __init__(self, N, popsize=None,
                 RecombinationWeights=None):
        """set static, fixed "strategy" parameters once and for all.

        Input parameter ``RecombinationWeights`` may be set to the class
        `RecombinationWeights`.
        """
        self.dimension = N
        self.chiN = N**0.5 * (1 - 1. / (4 * N) + 1. / (21 * N**2))

        # Strategy parameter setting: Selection
        self.lam = eval(safe_str(popsize if popsize else
                                 CMAESParameters.default_popsize,
                                 {'int': 'int', 'log': 'log', 'N': N}))
        self.mu = int(self.lam / 2)  # number of parents/points/solutions for recombination
        if RecombinationWeights:
            self.weights = RecombinationWeights(self.lam)
            self.mueff = self.weights.mueff
        else:  # set non-negative recombination weights "manually"
            _weights = [log(self.mu + 0.5) - log(i + 1) if i < self.mu else 0
                        for i in range(self.lam)]
            w_sum = sum(_weights[:self.mu])
            self.weights = [w / w_sum for w in _weights]  # sum is one now
            self.mueff = sum(self.weights[:self.mu])**2 / \
                sum(w**2 for w in self.weights[:self.mu])  # variance-effectiveness of sum w_i x_i

        # Strategy parameter setting: Adaptation
        self.cc = (4 + self.mueff / N) / (N + 4 + 2 * self.mueff / N)  # time constant for cumulation for C
        self.cs = (self.mueff + 2) / (N + self.mueff + 5)  # time constant for cumulation for sigma control
        self.c1 = 2 / ((N + 1.3)**2 + self.mueff)  # learning rate for rank-one update of C
        self.cmu = min([1 - self.c1, 2 * (self.mueff - 2 + 1 / self.mueff)
                        / ((N + 2)**2 + self.mueff)])  # and for rank-mu update
        self.damps = 2 * self.mueff / self.lam + 0.3 + self.cs  # damping for sigma, usually close to 1

        if RecombinationWeights:
            self.weights.finalize_negative_weights(N, self.c1, self.cmu)
        # gap to postpone eigendecomposition to achieve O(N**2) per eval
        # 0.5 is chosen such that eig takes 2 times the time of tell in >=20-D
        self.lazy_gap_evals = 0.5 * N * self.lam * (self.c1 + self.cmu)**-1 / N**2

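The "manual" branch above sets positive recombination weights proportional to ``log(mu + 0.5) - log(i + 1)`` and normalizes them to sum to one. A minimal standalone sketch of just that computation (the function name `default_weights` is invented for illustration, not part of the module):

```python
from math import log

def default_weights(lam):
    """positive recombination weights, normalized to sum to one,
    mirroring the "manual" branch of CMAESParameters.__init__"""
    mu = int(lam / 2)
    w = [log(mu + 0.5) - log(i + 1) if i < mu else 0 for i in range(lam)]
    s = sum(w[:mu])
    return [wi / s for wi in w]

weights = default_weights(10)    # lam = 10, hence mu = 5
mueff = sum(weights[:5])**2 / sum(wi**2 for wi in weights[:5])
print(round(sum(weights[:5]), 12))  # 1.0, positive weights sum to one
```

The variance-effective selection mass `mueff` computed from these weights always lies between 1 and mu.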
class CMAES(OOOptimizer):  # could also inherit from object
    """class for non-linear non-convex numerical minimization with CMA-ES.

    The class implements the interface defined in `OOOptimizer`, namely
    the methods `__init__`, `ask`, `tell`, `stop`, `disp`, and property
    `result`.

    Examples
    --------

    The Jupyter notebook or IPython are the favorite environments to
    execute these examples, both in ``%pylab`` mode. All examples
    minimize the function `elli`; output is not shown.

    First we need to import the module we want to use. We import `purecma`
    from `cma` as (aliased to) ``pcma``::

        from cma import purecma as pcma

    The shortest example uses the inherited method
    `OOOptimizer.optimize`::

        es = pcma.CMAES(8 * [0.1], 0.5).optimize(pcma.ff.elli)

    See method `CMAES.__init__` for a documentation of the input
    parameters to `CMAES`. We might have a look at the result::

        print(es.result[0])  # best solution and
        print(es.result[1])  # its function value

    `result` is a property of `CMAES`. In order to display more exciting
    output, we may use the `CMAESDataLogger` instance in the `logger`
    attribute of `CMAES`::

        es.logger.plot()  # if matplotlib is available

    Virtually the same example can be written with an explicit loop
    instead of using `optimize`, see also `fmin`. This gives insight
    into the `CMAES` class interface and entire control over the
    iteration loop::

        pcma.fmin??  # print source, works in jupyter/ipython only
        es = pcma.CMAES(9 * [0.5], 0.3)  # calls CMAES.__init__()

        # this loop resembles the method optimize
        while not es.stop():  # iterate
            X = es.ask()      # get candidate solutions
            f = [pcma.ff.elli(x) for x in X]  # evaluate solutions
            es.tell(X, f)     # do all the real work
            es.disp(20)       # display info every 20th iteration
            es.logger.add(es) # log another "data line"

        # final output
        print('termination by', es.stop())
        print('best f-value =', es.result[1])
        print('best solution =', es.result[0])

        print('potentially better solution xmean =', es.result[5])
        print("let's check f(xmean) = ", pcma.ff.elli(es.result[5]))
        es.logger.plot()  # if matplotlib is available

    A very similar example, which may also save the logged data within
    the loop, is the implementation of function `fmin`.

    Details
    -------
    Most of the work is done in the method `tell`. The property
    `result` contains more useful output.

    :See: `fmin`, `OOOptimizer.optimize`

    """
    def __init__(self, xstart, sigma,  # mandatory
                 popsize=CMAESParameters.default_popsize,
                 ftarget=None,
                 maxfevals='100 * popsize + '  # 100 iterations plus...
                           '150 * (N + 3)**2 * popsize**0.5',
                 randn=random_normalvariate):
        """Instantiate `CMAES` object instance using `xstart` and `sigma`.

        Parameters
        ----------
        `xstart`: `list`
            of numbers (like ``[3, 2, 1.2]``), initial
            solution vector
        `sigma`: `float`
            initial step-size (standard deviation in each coordinate)
        `popsize`: `int` or `str`
            population size, number of candidate samples per iteration
        `maxfevals`: `int` or `str`
            maximal number of function evaluations, a string is
            evaluated with ``N`` as search space dimension
        `ftarget`: `float`
            target function value
        `randn`: `callable`
            normal random number generator, by default
            `random.normalvariate`

        Details: this method initializes the dynamic state variables and
        creates a `CMAESParameters` instance for static parameters.
        """
        # process some input parameters and set static parameters
        N = len(xstart)  # number of objective variables/problem dimension
        self.params = CMAESParameters(N, popsize)
        self.maxfevals = eval(safe_str(maxfevals,
                                       known_words={'N': N,
                                                    'popsize': self.params.lam}))
        self.ftarget = ftarget  # stop if fitness <= ftarget
        self.randn = randn

        # initializing dynamic state variables
        self.xmean = xstart[:]  # initial point, distribution mean, a copy
        self.sigma = sigma
        self.pc = N * [0]  # evolution path for C
        self.ps = N * [0]  # and for sigma
        self.C = DecomposingPositiveMatrix(N)  # covariance matrix
        self.counteval = 0  # countiter should be equal to counteval / lam
        self.fitvals = []   # for bookkeeping output and termination
        self.best = BestSolution()
        self.logger = CMAESDataLogger()  # for convenience and output

    def ask(self):
        """sample lambda candidate solutions

        distributed according to::

            m + sigma * Normal(0,C) = m + sigma * B * D * Normal(0,I)
                                    = m + B * D * sigma * Normal(0,I)

        and return a `list` of the sampled "vectors".
        """
        self.C.update_eigensystem(self.counteval,
                                  self.params.lazy_gap_evals)
        candidate_solutions = []
        for k in range(self.params.lam):  # repeat lam times
            z = [self.sigma * eigenval**0.5 * self.randn(0, 1)
                 for eigenval in self.C.eigenvalues]
            y = dot(self.C.eigenbasis, z)
            candidate_solutions.append(plus(self.xmean, y))
        return candidate_solutions
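The sampling scheme in `ask` draws ``m + B * (sigma * sqrt(d_i) * z_i)`` with standard normal ``z_i``, eigenbasis B, and eigenvalues d of C. A minimal standalone sketch, assuming a diagonal C so that B is the identity (the function name `sample` is invented for illustration):

```python
import random

def sample(xmean, sigma, eigenvalues, randn=random.normalvariate):
    """one candidate ~ Normal(xmean, sigma**2 * C) for diagonal C,
    mirroring CMAES.ask with the identity as eigenbasis"""
    return [m + sigma * ev**0.5 * randn(0, 1)
            for m, ev in zip(xmean, eigenvalues)]

# deterministic check with a stubbed "normal" generator returning 1.0:
x = sample([0.0, 0.0], 2.0, [4.0, 9.0], randn=lambda mu, sd: 1.0)
print(x)  # [4.0, 6.0], i.e. sigma * sqrt(eigenvalue) per coordinate
```

With the real generator, each coordinate has standard deviation ``sigma * eigenvalue**0.5``, which is exactly what the `stds` entry of `result` reports.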

    def tell(self, arx, fitvals):
        """update the evolution paths and the distribution parameters m,
        sigma, and C within CMA-ES.

        Parameters
        ----------
        `arx`: `list` of "row vectors"
            a list of candidate solution vectors, presumably from
            calling `ask`. ``arx[k][i]`` is the i-th element of
            solution vector k.
        `fitvals`: `list`
            the corresponding objective function values, to be
            minimised
        """
        ### bookkeeping and convenience short cuts
        self.counteval += len(fitvals)  # evaluations used within tell
        N = len(self.xmean)
        par = self.params
        xold = self.xmean  # not a copy, xmean is assigned anew later

        ### Sort by fitness
        arx = [arx[k] for k in argsort(fitvals)]  # sorted arx
        self.fitvals = sorted(fitvals)  # used for termination and display only
        self.best.update(arx[0], self.fitvals[0], self.counteval)

        ### recombination, compute new weighted mean value
        self.xmean = dot(arx[0:par.mu], par.weights[:par.mu], transpose=True)
        #  = [sum(self.weights[k] * arx[k][i] for k in range(self.mu))
        #     for i in range(N)]

        ### Cumulation: update evolution paths
        y = minus(self.xmean, xold)
        z = dot(self.C.invsqrt, y)  # == C**(-1/2) * (xnew - xold)
        csn = (par.cs * (2 - par.cs) * par.mueff)**0.5 / self.sigma
        for i in range(N):  # update evolution path ps
            self.ps[i] = (1 - par.cs) * self.ps[i] + csn * z[i]
        ccn = (par.cc * (2 - par.cc) * par.mueff)**0.5 / self.sigma
        # turn off rank-one accumulation when sigma increases quickly
        hsig = (sum(x**2 for x in self.ps)  # squared length of ps
                / (1 - (1 - par.cs)**(2 * self.counteval / par.lam)) / N
                < 2 + 4. / (N + 1))
        for i in range(N):  # update evolution path pc
            self.pc[i] = (1 - par.cc) * self.pc[i] + ccn * hsig * y[i]

        ### Adapt covariance matrix C
        # minor adjustment for the variance loss from hsig
        c1a = par.c1 * (1 - (1 - hsig**2) * par.cc * (2 - par.cc))
        self.C.multiply_with(1 - c1a - par.cmu * sum(par.weights))  # C *= 1 - c1 - cmu * sum(w)
        self.C.addouter(self.pc, par.c1)  # C += c1 * pc * pc^T, so-called rank-one update
        for k, wk in enumerate(par.weights):  # so-called rank-mu update
            if wk < 0:  # guaranty positive definiteness
                wk *= N * (self.sigma
                           / self.C.mahalanobis_norm(minus(arx[k], xold)))**2
            self.C.addouter(minus(arx[k], xold),  # C += wk * cmu * dx * dx^T
                            wk * par.cmu / self.sigma**2)

        ### Adapt step-size sigma
        cn, sum_square_ps = par.cs / par.damps, sum(x**2 for x in self.ps)
        self.sigma *= exp(min(1, cn * (sum_square_ps / N - 1) / 2))
        # self.sigma *= exp(min(1, cn * (sum_square_ps**0.5 / par.chiN - 1)))

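The step-size adaptation at the end of `tell` increases sigma when the evolution path ``ps`` is longer than expected under random selection (``||ps||**2`` greater than its expectation N) and decreases it otherwise. A standalone numeric sketch of just those two lines (the function name `csa_sigma_update` and the parameter values are invented for illustration):

```python
from math import exp

def csa_sigma_update(sigma, ps, cs, damps):
    """one cumulative step-size adaptation update,
    as in the last lines of CMAES.tell"""
    N = len(ps)
    cn = cs / damps
    sum_square_ps = sum(x**2 for x in ps)
    return sigma * exp(min(1, cn * (sum_square_ps / N - 1) / 2))

# when ||ps||**2 equals its expectation N, sigma is unchanged:
print(csa_sigma_update(0.3, [1.0, 1.0, 1.0], cs=0.4, damps=1.1))  # 0.3
# a longer-than-expected path increases sigma:
print(csa_sigma_update(0.3, [2.0, 2.0, 2.0], cs=0.4, damps=1.1) > 0.3)  # True
```

The ``min(1, ...)`` caps the increase to a factor of e per iteration, which guards against unstable blow-up of the step-size.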
    def stop(self):
        """return satisfied termination conditions in a dictionary,

        generally speaking like ``{'termination_reason': value, ...}``,
        for example ``{'tolfun': 1e-12}``, or the empty `dict` ``{}``.
        """
        res = {}
        if self.counteval <= 0:
            return res
        if self.counteval >= self.maxfevals:
            res['maxfevals'] = self.maxfevals
        if self.ftarget is not None and len(self.fitvals) > 0 \
                and self.fitvals[0] <= self.ftarget:
            res['ftarget'] = self.ftarget
        if self.C.condition_number > 1e14:
            res['condition'] = self.C.condition_number
        if len(self.fitvals) > 1 \
                and self.fitvals[-1] - self.fitvals[0] < 1e-12:
            res['tolfun'] = 1e-12
        if self.sigma * max(self.C.eigenvalues)**0.5 < 1e-11:
            # remark: max(D) >= max(diag(C))**0.5
            res['tolx'] = 1e-11
        return res

    @property
    def result(self):
        """the `tuple` ``(xbest, f(xbest), evaluations_xbest, evaluations,
        iterations, xmean, stds)``
        """
        return (self.best.x,
                self.best.f,
                self.best.evals,
                self.counteval,
                int(self.counteval / self.params.lam),
                self.xmean,
                [self.sigma * C_ii**0.5 for C_ii in self.C.diag])

    def disp(self, verb_modulo=1):
        """`print` some iteration info to `stdout`
        """
        if verb_modulo is None:
            verb_modulo = 20
        if not verb_modulo:
            return
        iteration = self.counteval / self.params.lam

        if iteration == 1 or iteration % (10 * verb_modulo) < 1:
            print('evals: ax-ratio max(std) f-value')
        if iteration <= 2 or iteration % verb_modulo < 1:
            print(str(self.counteval).rjust(5) + ': ' +
                  ' %6.1f %8.1e ' % (self.C.condition_number**0.5,
                                     self.sigma * max(self.C.diag)**0.5) +
                  str(self.fitvals[0]))
            _stdout.flush()

# -----------------------------------------------
class CMAESDataLogger(_BaseDataLogger):  # could also inherit from object
    """data logger for class `CMAES` that can record and plot data.

    Examples
    ========

    Load and plot previously generated data::

        >>> import cma.purecma as pcma
        >>> logger = pcma.CMAESDataLogger().load()
        >>> logger.filename == "_CMAESDataLogger_datadict.py"
        True

        >> logger.plot()

    The data may come from `fmin` or `CMAES` and the simulation may
    still be running in a different Python shell.

    Use the default logger from `CMAES`::

        >>> import cma.purecma as pcma
        >>> es = pcma.CMAES(3 * [0.1], 1)
        >>> isinstance(es.logger, pcma.CMAESDataLogger)  # type(es.logger)
        True
        >>> while not es.stop():
        ...     X = es.ask()
        ...     es.tell(X, [pcma.ff.elli(x) for x in X])
        ...     es.logger.add(es)  # doctest: +ELLIPSIS
        <cma...

        >> es.logger.plot()

    TODO: the recorded data are kept in memory and keep growing, which
    may well lead to performance issues for (very?) long runs. Ideally,
    it should be possible to dump data to a file, clear the memory, and
    also to downsample the data to prevent plotting of long runs from
    taking forever. ``"], 'key': "`` or ``"]}"`` is the place where to
    prepend/append new data in the file.
    """

    plotted = 0
    """plot count for all instances"""

    def __init__(self, verb_modulo=1):
        """`verb_modulo` controls whether and when logging takes place
        for each call to the method `add`

        """
        # _BaseDataLogger.__init__(self)  # not necessary
        self.filename = "_CMAESDataLogger_datadict.py"
        self.optim = None
        self.modulo = verb_modulo
        self._data = {'eval': [], 'iter': [], 'stds': [], 'D': [],
                      'sigma': [], 'fit': [], 'xmean': [], 'more_data': []}
        self.counter = 0  # number of calls of add

    def add(self, es=None, force=False, more_data=None):
        """append some logging data from CMAES class instance `es`,
        if ``number_of_times_called modulo verb_modulo`` equals zero
        """
        es = es or self.optim
        if not isinstance(es, CMAES):
            raise RuntimeWarning('logged object must be a CMAES instance,'
                                 ' was %s' % type(es))
        dat = self._data  # a convenient alias
        self.counter += 1
        if force and self.counter == 1:
            self.counter = 0
        if (self.modulo
                and (len(dat['eval']) == 0
                     or es.counteval != dat['eval'][-1])
                and (self.counter < 4 or force
                     or int(self.counter) % self.modulo == 0)):
            dat['eval'].append(es.counteval)
            dat['iter'].append(es.counteval / es.params.lam)
            dat['stds'].append([es.C[i][i]**0.5
                                for i in range(len(es.C))])
            dat['D'].append(sorted(ev**0.5 for ev in es.C.eigenvalues))
            dat['sigma'].append(es.sigma)
            dat['fit'].append(es.fitvals[0] if hasattr(es, 'fitvals')
                              and es.fitvals
                              else None)
            dat['xmean'].append([x for x in es.xmean])
            if more_data is not None:
                dat['more_data'].append(more_data)
        return self

    def plot(self, fig_number=322):
        """plot the stored data in figure `fig_number`.

        Dependencies: `matplotlib.pylab`
        """
        from matplotlib import pylab
        from matplotlib.pylab import (
            gca, figure, plot, xlabel, grid, semilogy, text, draw, show,
            subplot, tight_layout, rcParamsDefault, xlim, ylim
        )

        def title_(*args, **kwargs):
            kwargs.setdefault('size', rcParamsDefault['axes.labelsize'])
            pylab.title(*args, **kwargs)

        def subtitle(*args, **kwargs):
            kwargs.setdefault('horizontalalignment', 'center')
            text(0.5 * (xlim()[1] - xlim()[0]), 0.9 * ylim()[1],
                 *args, **kwargs)

        def legend_(*args, **kwargs):
            kwargs.setdefault('framealpha', 0.3)
            kwargs.setdefault('fancybox', True)
            kwargs.setdefault('fontsize', rcParamsDefault['font.size'] - 2)
            pylab.legend(*args, **kwargs)

        figure(fig_number)

        dat = self._data  # dictionary with entries as given in __init__
        if not dat:
            return
        try:  # a hack to get the presumable population size lambda
            strpopsize = ' (evaluations / %s)' % str(dat['eval'][-2] -
                                                     dat['eval'][-3])
        except IndexError:
            strpopsize = ''

        # plot fit, Delta fit, sigma
        subplot(221)
        gca().clear()
        if dat['fit'][0] is None:  # plot is fine with None, but
            dat['fit'][0] = dat['fit'][1]  # computations need numbers
            # should be reverted later, but let's be lazy
        assert dat['fit'].count(None) == 0
        fmin = min(dat['fit'])
        imin = dat['fit'].index(fmin)
        dat['fit'][imin] = max(dat['fit']) + 1
        fmin2 = min(dat['fit'])
        dat['fit'][imin] = fmin
        semilogy(dat['iter'], [f - fmin if f - fmin > 1e-19 else None
                               for f in dat['fit']],
                 'c', linewidth=1, label='f-min(f)')
        semilogy(dat['iter'], [max((fmin2 - fmin, 1e-19)) if f - fmin <= 1e-19
                               else None for f in dat['fit']], 'C1*')
        semilogy(dat['iter'], [abs(f) for f in dat['fit']], 'b',
                 label='abs(f-value)')
        semilogy(dat['iter'], dat['sigma'], 'g', label='sigma')
        semilogy(dat['iter'][imin], abs(fmin), 'r*', label='abs(min(f))')
        if dat['more_data']:
            gca().twinx()
            plot(dat['iter'], dat['more_data'])
        grid(True)
        legend_(*[[v[i] for i in [1, 0, 2, 3]]  # just a reordering
                  for v in gca().get_legend_handles_labels()])

        # plot xmean
        subplot(222)
        gca().clear()
        plot(dat['iter'], dat['xmean'])
        for i in range(len(dat['xmean'][-1])):
            text(dat['iter'][0], dat['xmean'][0][i], str(i))
            text(dat['iter'][-1], dat['xmean'][-1][i], str(i))
        subtitle('mean solution')
        grid(True)

        # plot squareroot of eigenvalues
        subplot(223)
        gca().clear()
        semilogy(dat['iter'], dat['D'], 'm')
        xlabel('iterations' + strpopsize)
        title_('Axis lengths')
        grid(True)

        # plot stds
        subplot(224)
        # if len(gcf().axes) > 1:
        #     sca(pylab.gcf().axes[1])
        # else:
        #     twinx()
        gca().clear()
        semilogy(dat['iter'], dat['stds'])
        for i in range(len(dat['stds'][-1])):
            text(dat['iter'][-1], dat['stds'][-1][i], str(i))
        title_('Coordinate-wise STDs w/o sigma')
        grid(True)
        xlabel('iterations' + strpopsize)
        _stdout.flush()
        tight_layout()
        draw()
        show()
        CMAESDataLogger.plotted += 1

    def save(self, name=None):
        """save data to file `name` or ``self.filename``"""
        with open(name or self.filename, 'w') as f:
            f.write(repr(self._data))

    def load(self, name=None):
        """load data from file `name` or ``self.filename``"""
        from ast import literal_eval
        with open(name or self.filename, 'r') as f:
            self._data = literal_eval(f.read())
        return self

#_____________________________________________________________________
#_________________ Fitness (Objective) Functions _____________________

class ff(object):  # instead of a submodule
    """versatile collection of test functions in static methods"""

    @staticmethod  # syntax available since 2.4
    def elli(x):
        """ellipsoid test objective function"""
        n = len(x)
        aratio = 1e3
        return sum(x[i]**2 * aratio**(2. * i / (n - 1)) for i in range(n))

    @staticmethod
    def sphere(x):
        """sphere, ``sum(x**2)``, test objective function"""
        return sum(x[i]**2 for i in range(len(x)))

    @staticmethod
    def tablet(x):
        """tablet (discus) test objective function"""
        return sum(xi**2 for xi in x) + (1e6 - 1) * x[0]**2

    @staticmethod
    def rosenbrock(x):
        """Rosenbrock test objective function"""
        n = len(x)
        if n < 2:
            raise ValueError('dimension must be greater than one')
        return sum(100 * (x[i]**2 - x[i+1])**2 + (x[i] - 1)**2
                   for i in range(n - 1))

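These test functions have well-known reference values that make quick sanity checks easy, e.g. the Rosenbrock function attains its global minimum 0 at the all-ones vector. A small standalone sketch, re-implementing `sphere` and `rosenbrock` as plain functions for illustration:

```python
def sphere(x):
    """sum of squares, minimum 0 at the origin"""
    return sum(xi**2 for xi in x)

def rosenbrock(x):
    """classic banana-shaped valley, minimum 0 at (1, ..., 1)"""
    return sum(100 * (x[i]**2 - x[i+1])**2 + (x[i] - 1)**2
               for i in range(len(x) - 1))

print(sphere([3, 4]))          # 25
print(rosenbrock(3 * [1.0]))   # 0.0
print(rosenbrock([0.0, 0.0]))  # 1.0
```

Such point checks are a cheap way to validate an objective function before handing it to `fmin` or `CMAES`.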
#_____________________________________________________________________
#_______________________ Helper Class&Functions ______________________
#
class BestSolution(object):
    """container to keep track of the best solution seen"""
    def __init__(self, x=None, f=None, evals=None):
        """take `x`, `f`, and `evals` to initialize the best solution
        """
        self.x, self.f, self.evals = x, f, evals

    def update(self, x, f, evals=None):
        """update the best solution if ``f < self.f``
        """
        if self.f is None or f < self.f:
            self.x = x
            self.f = f
            self.evals = evals
        return self

    @property
    def all(self):
        """``(x, f, evals)`` of the best seen solution"""
        return self.x, self.f, self.evals

class SquareMatrix(list):  # inheritance from numpy.ndarray is not recommended
    """rudimentary square matrix class"""
    def __init__(self, dimension):
        """initialize with identity matrix"""
        for i in range(dimension):
            self.append(dimension * [0])
            self[i][i] = 1

    def multiply_with(self, factor):
        """multiply matrix in place with `factor`"""
        for row in self:
            for j in range(len(row)):
                row[j] *= factor
        return self

    def addouter(self, b, factor=1):
        """add in place `factor` times the outer product of vector `b`,

        without any dimensional consistency checks.
        """
        for i, row in enumerate(self):
            for j in range(len(row)):
                row[j] += factor * b[i] * b[j]
        return self

    @property
    def diag(self):
        """diagonal of the matrix as a copy (safe to change)
        """
        return [self[i][i] for i in range(len(self)) if i < len(self[i])]

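The covariance adaptation in `tell` relies on exactly this kind of in-place rank-one update, ``C += factor * b * b^T``. A standalone sketch of the same operation on plain nested lists:

```python
def addouter(C, b, factor=1):
    """in-place rank-one update C += factor * outer(b, b),
    as in SquareMatrix.addouter (no dimension checks)"""
    for i, row in enumerate(C):
        for j in range(len(row)):
            row[j] += factor * b[i] * b[j]
    return C

C = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 identity
addouter(C, [1.0, 2.0])        # add the outer product of (1, 2)
print(C)  # [[2.0, 2.0], [2.0, 5.0]]
```

Adding a scaled outer product stretches the distribution along the direction of ``b``, which is how the rank-one update aligns C with the evolution path ``pc``.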
class DecomposingPositiveMatrix(SquareMatrix):
    """symmetric matrix maintaining its own eigendecomposition.

    If ``isinstance(C, DecomposingPositiveMatrix)``,
    the eigendecomposition (the return value of `eig`) is stored in
    the attributes `eigenbasis` and `eigenvalues` such that the i-th
    eigenvector is::

        [row[i] for row in C.eigenbasis]  # or equivalently
        [C.eigenbasis[j][i] for j in range(len(C.eigenbasis))]

    with eigenvalue ``C.eigenvalues[i]`` and hence::

        C = C.eigenbasis x diag(C.eigenvalues) x C.eigenbasis^T

    """
    def __init__(self, dimension):
        SquareMatrix.__init__(self, dimension)
        self.eigenbasis = eye(dimension)
        self.eigenvalues = dimension * [1]
        self.condition_number = 1
        self.invsqrt = eye(dimension)
        self.updated_eval = 0

    def update_eigensystem(self, current_eval, lazy_gap_evals):
        """execute eigendecomposition of `self` if
        ``current_eval > lazy_gap_evals + last_updated_eval``.

        Assumes (for the sake of simplicity) that `self` is positive
        definite and hence raises a `RuntimeError` otherwise.
        """
        if current_eval <= self.updated_eval + lazy_gap_evals:
            return self
        self._enforce_symmetry()  # probably not necessary with eig
        self.eigenvalues, self.eigenbasis = eig(self)  # O(N**3)
        if min(self.eigenvalues) <= 0:
            raise RuntimeError(
                "The smallest eigenvalue is <= 0 after %d evaluations!"
                "\neigenvectors:\n%s \neigenvalues:\n%s"
                % (current_eval, str(self.eigenbasis), str(self.eigenvalues)))
        self.condition_number = max(self.eigenvalues) / min(self.eigenvalues)

        # now compute invsqrt(C) = C**(-1/2) = B D**(-1/2) B'
        # this is O(n^3) and takes about 25% of the time of eig
        for i in range(len(self)):
            for j in range(i + 1):
                self.invsqrt[i][j] = self.invsqrt[j][i] = sum(
                    self.eigenbasis[i][k] * self.eigenbasis[j][k]
                    / self.eigenvalues[k]**0.5 for k in range(len(self)))
        self.updated_eval = current_eval
        return self

    def mahalanobis_norm(self, dx):
        """return ``(dx^T * C^-1 * dx)**0.5``
        """
        return sum(xi**2 for xi in dot(self.invsqrt, dx))**0.5

    def _enforce_symmetry(self):
        for i in range(len(self)):
            for j in range(i):
                self[i][j] = self[j][i] = (self[i][j] + self[j][i]) / 2
        return self

def eye(dimension):
    """return identity matrix as `list` of "vectors" (lists themselves)"""
    m = [dimension * [0] for i in range(dimension)]
    # m = N * [N * [0]] fails because it gives N times the same reference
    for i in range(dimension):
        m[i][i] = 1
    return m

def dot(A, b, transpose=False):
    """usual dot product of "matrix" A with "vector" b.

    ``A[i]`` is the i-th row of A. With ``transpose=True``, A transposed
    is used.
    """
    if not transpose:
        return [sum(A[i][j] * b[j] for j in range(len(b)))
                for i in range(len(A))]
    else:
        return [sum(A[j][i] * b[j] for j in range(len(b)))
                for i in range(len(A[0]))]
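`dot` treats ``A[i]`` as the i-th row; with ``transpose=True`` it computes ``A^T b`` instead, which `tell` exploits to form the weighted mean of the mu best solution rows in one call. A quick standalone check of both modes:

```python
def dot(A, b, transpose=False):
    """matrix-vector product A b, or A^T b with transpose=True,
    where A is a list of rows (as in purecma)"""
    if not transpose:
        return [sum(A[i][j] * b[j] for j in range(len(b)))
                for i in range(len(A))]
    return [sum(A[j][i] * b[j] for j in range(len(b)))
            for i in range(len(A[0]))]

A = [[1, 2], [3, 4]]
print(dot(A, [1, 1]))                  # [3, 7]
print(dot(A, [1, 1], transpose=True))  # [4, 6]
# transpose=True with rows as solutions gives a weighted sum of rows:
print(dot([[1, 2], [3, 4]], [0.5, 0.5], transpose=True))  # [2.0, 3.0]
```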

def plus(a, b):
    """add vectors, return a + b"""
    return [a[i] + b[i] for i in range(len(a))]

def minus(a, b):
    """subtract vectors, return a - b"""
    return [a[i] - b[i] for i in range(len(a))]

def argsort(a):
    """return index list to get `a` in order, i.e.
    ``a[argsort(a)[i]] == sorted(a)[i]``
    """
    return sorted(range(len(a)), key=a.__getitem__)  # a.__getitem__(i) is a[i]
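`argsort` returns the permutation that sorts its argument, which `tell` uses to reorder the candidate solutions by fitness before recombination. A quick standalone check of the documented property:

```python
def argsort(a):
    """indices that sort a: a[argsort(a)[i]] == sorted(a)[i]"""
    return sorted(range(len(a)), key=a.__getitem__)

fitvals = [3.0, 1.0, 2.0]
order = argsort(fitvals)
print(order)                        # [1, 2, 0]
print([fitvals[i] for i in order])  # [1.0, 2.0, 3.0]
# reordering "solutions" the same way tell reorders arx:
X = ['x0', 'x1', 'x2']
print([X[i] for i in order])        # ['x1', 'x2', 'x0']
```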

def safe_str(s, known_words=None):
    """return ``s`` as `str` safe to `eval` or raise an exception.

    Strings in the `dict` `known_words` are replaced by their values
    surrounded with a space, which the caller considers safe to evaluate
    with `eval` afterwards.

    Known issues::

        >>> from cma.purecma import safe_str
        >>> safe_str('int(p)', {'int': 'int', 'p': 3.1})  # fine
        ' int ( 3.1 )'
        >>> safe_str('int(n)', {'int': 'int', 'n': 3.1})  # unexpected
        ' i 3.1 t ( 3.1 )'

    """
    safe_chars = ' 0123456789.,+-*()[]e'
    if s != str(s):
        return str(s)
    if not known_words:
        known_words = {}
    stest = s[:]  # test this string
    sret = s[:]  # return this string
    for word in sorted(known_words.keys(), key=len, reverse=True):
        stest = stest.replace(word, ' ')
        sret = sret.replace(word, " %s " % known_words[word])
    for c in stest:
        if c not in safe_chars:
            raise ValueError('"%s" is not a safe string'
                             ' (known words are %s)' % (s, str(known_words)))
    return sret

#____________________________________________________________
#____________________________________________________________
#
# C and B are arrays rather than matrices, because they are
# addressed via B[i][j], matrices can only be addressed via B[i,j]

# tred2(N, B, diagD, offdiag);
# tql2(N, diagD, offdiag, B);

# Symmetric Householder reduction to tridiagonal form, translated from
# JAMA package.

def eig(C):
    """eigendecomposition of a symmetric matrix.

    Return the eigenvalues and an orthonormal basis
    of the corresponding eigenvectors, ``(EVals, Basis)``, where

    - ``Basis[i]``: `list`, is the i-th row of ``Basis``
    - the i-th column of ``Basis``, i.e.
      ``[Basis[j][i] for j in range(len(Basis))]``,
      is the i-th eigenvector with eigenvalue ``EVals[i]``

    Details: much slower than `numpy.linalg.eigh`.
    """
    # class eig(object):
    #     def __call__(self, C):

    # Householder transformation of a symmetric matrix V into tridiagonal
    # form.
    # -> n          : dimension
    # -> V          : symmetric nxn-matrix
    # <- V          : orthogonal transformation matrix:
    #                 tridiag matrix == V * V_in * V^t
    # <- d          : diagonal
    # <- e[0..n-1]  : off diagonal (elements 1..n-1)

    # Symmetric tridiagonal QL algorithm, iterative
    # Computes the eigensystem from a tridiagonal matrix in roughly 3N^3
    # operations
    # -> n          : dimension
    # -> d          : diagonal of tridiagonal matrix
    # -> e[1..n-1]  : off-diagonal, output from Householder
    # -> V          : matrix output from Householder
    # <- d          : eigenvalues
    # <- e          : garbage?
    # <- V          : basis of eigenvectors, according to d

    # tred2(N, B, diagD, offdiag);  B=C on input
    # tql2(N, diagD, offdiag, B);

    #import numpy as np
    #return np.linalg.eigh(C)  # return sorted EVs
    try:
        num_opt = False  # True doesn't work (yet)
        if num_opt:
            import numpy as np
    except ImportError:
        num_opt = False

    # private void tred2 (int n, double V[][], double d[], double e[]) {
    def tred2(n, V, d, e):
        # This is derived from the Algol procedures tred2 by
        # Bowdler, Martin, Reinsch, and Wilkinson, Handbook for
        # Auto. Comp., Vol.ii-Linear Algebra, and the corresponding
        # Fortran subroutine in EISPACK.
        # num_opt = False  # factor 1.5 in 30-D

        d[:] = V[n-1][:]  # d is output argument
        if num_opt:
            # V = np.asarray(V, dtype=float)
            e = np.asarray(e, dtype=float)

        # Householder reduction to tridiagonal form.

        for i in range(n-1, 0, -1):
            # Scale to avoid under/overflow.
            h = 0.0
            if not num_opt:
                scale = 0.0
                for k in range(i):
                    scale = scale + abs(d[k])
            else:
                scale = sum(np.abs(d[0:i]))

            if scale == 0.0:
                e[i] = d[i-1]
                for j in range(i):
                    d[j] = V[i-1][j]
                    V[i][j] = 0.0
                    V[j][i] = 0.0
            else:

                # Generate Householder vector.
                if not num_opt:
                    for k in range(i):
                        d[k] /= scale
                        h += d[k] * d[k]
                else:
                    d[:i] /= scale
                    h = np.dot(d[:i], d[:i])

                f = d[i-1]
                g = h**0.5

                if f > 0:
                    g = -g

                e[i] = scale * g
                h -= f * g
                d[i-1] = f - g
                if not num_opt:
                    for j in range(i):
                        e[j] = 0.0
                else:
                    e[:i] = 0.0

                # Apply similarity transformation to remaining columns.
                for j in range(i):
                    f = d[j]
                    V[j][i] = f
                    g = e[j] + V[j][j] * f
                    if not num_opt:
                        for k in range(j+1, i):
                            g += V[k][j] * d[k]
                            e[k] += V[k][j] * f
                        e[j] = g
                    else:
                        e[j+1:i] += V.T[j][j+1:i] * f
                        e[j] = g + np.dot(V.T[j][j+1:i], d[j+1:i])

                f = 0.0
                if not num_opt:
                    for j in range(i):
                        e[j] /= h
                        f += e[j] * d[j]
                else:
                    e[:i] /= h
                    f += np.dot(e[:i], d[:i])

                hh = f / (h + h)
                if not num_opt:
                    for j in range(i):
                        e[j] -= hh * d[j]
                else:
                    e[:i] -= hh * d[:i]

                for j in range(i):
                    f = d[j]
                    g = e[j]
                    if not num_opt:
                        for k in range(j, i):
                            V[k][j] -= (f * e[k] + g * d[k])
                    else:
                        V.T[j][j:i] -= (f * e[j:i] + g * d[j:i])

                    d[j] = V[i-1][j]
                    V[i][j] = 0.0

            d[i] = h
            # end for i--

        # Accumulate transformations.
        for i in range(n-1):
            V[n-1][i] = V[i][i]
            V[i][i] = 1.0
            h = d[i+1]
            if h != 0.0:
                if not num_opt:
                    for k in range(i+1):
                        d[k] = V[k][i+1] / h
                else:
                    d[:i+1] = V.T[i+1][:i+1] / h

                for j in range(i+1):
                    if not num_opt:
                        g = 0.0
                        for k in range(i+1):
                            g += V[k][i+1] * V[k][j]
                        for k in range(i+1):
                            V[k][j] -= g * d[k]
                    else:
                        g = np.dot(V.T[i+1][0:i+1], V.T[j][0:i+1])
                        V.T[j][:i+1] -= g * d[:i+1]

            if not num_opt:
                for k in range(i+1):
                    V[k][i+1] = 0.0
            else:
                V.T[i+1][:i+1] = 0.0

        if not num_opt:
            for j in range(n):
                d[j] = V[n-1][j]
                V[n-1][j] = 0.0
        else:
            d[:n] = V[n-1][:n]
            V[n-1][:n] = 0.0

        V[n-1][n-1] = 1.0
        e[0] = 0.0

    # Symmetric tridiagonal QL algorithm, taken from JAMA package.
    # private void tql2 (int n, double d[], double e[], double V[][]) {
    # needs roughly 3N^3 operations
    def tql2(n, d, e, V):
        # This is derived from the Algol procedures tql2, by
        # Bowdler, Martin, Reinsch, and Wilkinson, Handbook for
        # Auto. Comp., Vol.ii-Linear Algebra, and the corresponding
        # Fortran subroutine in EISPACK.

        # num_opt = False  # True doesn't work

        if not num_opt:
            for i in range(1, n):  # (int i = 1; i < n; i++):
                e[i-1] = e[i]
        else:
            e[0:n-1] = e[1:n]
        e[n-1] = 0.0

        f = 0.0
        tst1 = 0.0
        eps = 2.0**-52.0
        for l in range(n):  # (int l = 0; l < n; l++) {

            # Find small subdiagonal element

            tst1 = max(tst1, abs(d[l]) + abs(e[l]))
            m = l
            while m < n:
                if abs(e[m]) <= eps * tst1:
                    break
                m += 1

            # If m == l, d[l] is an eigenvalue,
            # otherwise, iterate.

            if m > l:
                iiter = 0
                while 1:  # do {
                    iiter += 1  # (Could check iteration count here.)

                    # Compute implicit shift

                    g = d[l]
                    p = (d[l+1] - g) / (2.0 * e[l])
                    r = (p**2 + 1)**0.5  # hypot(p, 1.0)
                    if p < 0:
                        r = -r

                    d[l] = e[l] / (p + r)
                    d[l+1] = e[l] * (p + r)
                    dl1 = d[l+1]
                    h = g - d[l]
                    if not num_opt:
                        for i in range(l+2, n):
                            d[i] -= h
                    else:
                        d[l+2:n] -= h

                    f = f + h

                    # Implicit QL transformation.
                    p = d[m]
                    c = 1.0
                    c2 = c
                    c3 = c
                    el1 = e[l+1]
                    s = 0.0
                    s2 = 0.0

                    # hh = V.T[0].copy()  # only with num_opt
                    for i in range(m-1, l-1, -1):
                        # (int i = m-1; i >= l; i--) {
                        c3 = c2
                        c2 = c
                        s2 = s
                        g = c * e[i]
                        h = c * p
                        r = (p**2 + e[i]**2)**0.5  # hypot(p, e[i])
                        e[i+1] = s * r
                        s = e[i] / r
                        c = p / r
                        p = c * d[i] - s * g
                        d[i+1] = h + s * (c * g + s * d[i])

                        # Accumulate transformation.

                        if not num_opt:  # overall factor 3 in 30-D
                            for k in range(n):  # (int k = 0; k < n; k++){
                                h = V[k][i+1]
                                V[k][i+1] = s * V[k][i] + c * h
                                V[k][i] = c * V[k][i] - s * h
                        else:  # about 20% faster in 10-D
                            hh = V.T[i+1].copy()
                            # hh[:] = V.T[i+1][:]
                            V.T[i+1] = s * V.T[i] + c * hh
                            V.T[i] = c * V.T[i] - s * hh
                            # V.T[i] *= c
                            # V.T[i] -= s * hh

                    p = -s * s2 * c3 * el1 * e[l] / dl1
                    e[l] = s * p
                    d[l] = c * p

                    # Check for convergence.
                    if abs(e[l]) <= eps * tst1:
                        break
                    # } while (Math.abs(e[l]) > eps*tst1);

            d[l] += f
            e[l] = 0.0

        # Sort eigenvalues and corresponding vectors.
        if 11 < 3:
            for i in range(n-1):  # (int i = 0; i < n-1; i++) {
                k = i
                p = d[i]
                for j in range(i+1, n):  # (int j = i+1; j < n; j++) {
                    if d[j] < p:  # NH find smallest k > i
                        k = j
                        p = d[j]

                if k != i:
                    d[k] = d[i]  # swap k and i
                    d[i] = p
                    for j in range(n):  # (int j = 0; j < n; j++) {
                        p = V[j][i]
                        V[j][i] = V[j][k]
                        V[j][k] = p
        # tql2

    N = len(C[0])
    V = [C[i][:] for i in range(N)]
    d = N * [0]
    e = N * [0]
    tred2(N, V, d, e)
    tql2(N, d, e, V)
    return d, V  # sorting of V-columns in place is non-trivial
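The column convention documented in `eig`'s docstring (eigenvectors sit in the *columns* of `Basis`) can be checked against a hand-computed 2x2 case. The matrix and its eigenpairs below are my own illustration, not taken from the module:

```python
from math import sqrt

# C = [[2, 1], [1, 2]] has eigenvalues 1 and 3 with orthonormal
# eigenvectors (1, -1)/sqrt(2) and (1, 1)/sqrt(2).
C = [[2.0, 1.0], [1.0, 2.0]]
evals = [1.0, 3.0]
basis = [[1 / sqrt(2), 1 / sqrt(2)],   # basis[i] is the i-th *row*;
         [-1 / sqrt(2), 1 / sqrt(2)]]  # eigenvectors are the *columns*
for k in range(2):
    v = [basis[j][k] for j in range(2)]  # k-th column = k-th eigenvector
    # multiply C v "by hand" and compare with evals[k] * v
    Cv = [sum(C[i][j] * v[j] for j in range(2)) for i in range(2)]
    assert all(abs(Cv[i] - evals[k] * v[i]) < 1e-12 for i in range(2))
print("C v_k == EVals[k] v_k holds for k = 0, 1")
```

`eig(C)` (and `numpy.linalg.eigh`) should return eigenpairs spanning the same subspaces, though possibly with flipped signs.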

def test():
    """test of the `purecma` module, called ``if __name__ == "__main__"``.

    Currently only based on `doctest`::

        >>> try: import cma.purecma as pcma
        ... except ImportError: import purecma as pcma
        >>> import random
        >>> random.seed(3)
        >>> xmin, es = pcma.fmin(pcma.ff.rosenbrock, 4 * [0.5], 0.5,
        ...                      verb_disp=0, verb_log=1)
        >>> print(es.counteval)
        1680
        >>> print(es.best.evals)
        1664
        >>> assert es.best.f < 1e-12
        >>> random.seed(5)
        >>> es = pcma.CMAES(4 * [0.5], 0.5)
        >>> es.params = pcma.CMAESParameters(es.params.dimension,
        ...                                  es.params.lam,
        ...                                  pcma.RecombinationWeights)
        >>> while not es.stop():
        ...     X = es.ask()
        ...     es.tell(X, [pcma.ff.rosenbrock(x) for x in X])
        >>> print("%s, %d" % (pcma.ff.rosenbrock(es.result[0]) < 1e13,
        ...                   es.result[2]))
        True, 1584

    Large population size::

        >>> try: import cma.purecma as pcma
        ... except ImportError: import purecma as pcma
        >>> import random
        >>> random.seed(4)
        >>> es = pcma.CMAES(3 * [1], 1)
        >>> es.params = pcma.CMAESParameters(es.params.dimension, 300,
        ...                                  pcma.RecombinationWeights)
        >>> es.logger = pcma.CMAESDataLogger()
        >>> try:
        ...     es = es.optimize(pcma.ff.elli, verb_disp=0)
        ... except AttributeError:  # OOOptimizer.optimize is not available
        ...     while not es.stop():
        ...         X = es.ask()
        ...         es.tell(X, [pcma.ff.elli(x) for x in X])
        >>> assert es.result[1] < 1e13
        >>> print(es.result[2])
        9300

    """
    import doctest
    print('launching doctest...')
    print(doctest.testmod(report=True, verbose=0))  # module test

#_____________________________________________________________________
#_____________________________________________________________________
#
if __name__ == "__main__":

    test()

    # fmin(ff.rosenbrock, 10 * [0.5], 0.5)