`class AugmentedLagrangian(object):`

Augmented Lagrangian with adaptation of the coefficients for
minimization, implemented after Atamna et al FOGA 2017, Algorithm 1
and Sect 8.2, https://hal.inria.fr/hal-01455379/document.

`cma.ConstrainedFitnessAL` provides a (more) user-friendly interface to
the `AugmentedLagrangian` class.

Input `dimension` is the search space dimension; boolean `equality`
may be an iterable whose length equals the number of constraints,
indicating the type of each constraint.

Below, the objective function value is denoted as `f = f(x)`, the
constraint values as `g = g(x) <= 0`, and the penalties compute to
`penalties(x) = self(g(x)) = lam g + mu g^2 / 2` (if `g > -lam / mu`) for
each element of g, as returned by calling the instance with g as argument.

The penalized "fitness" value `f + sum(self(g))` shall be minimized.
`lam` and `mu` are the Lagrange multipliers or coefficients.
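For illustration, the elementwise penalty described above can be sketched in a few lines. This is a minimal standalone sketch, not the library's implementation; the constant branch for `g <= -lam / mu` (the minimum of the quadratic, `-lam**2 / (2 mu)`) is an assumption consistent with the lower bound on the penalty stated further below.

```python
import numpy as np

def al_penalties(g, lam, mu):
    """Elementwise augmented-Lagrangian penalties for inequality
    constraints g(x) <= 0, following the formula in the text:
    lam*g + mu*g**2/2 where g > -lam/mu, and the constant value
    -lam**2/(2*mu) (the minimum of the quadratic) otherwise."""
    g = np.asarray(g, dtype=float)
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    quadratic = lam * g + mu * g**2 / 2
    return np.where(g > -lam / mu, quadratic, -lam**2 / (2 * mu))
```

For example, with `lam = mu = 1` the penalty is `0` at `g = 0`, `1.5` at `g = 1`, and saturates at `-0.5` for any `g <= -1`.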

An additional method, `set_coefficients`, allows initializing the
Lagrange multipliers from data.

A short example (and doctest):

>>> import cma
>>> from cma.constraints_handler import AugmentedLagrangian, PopulationEvaluator
>>> m = 2  # number of constraints
>>> def objective(x):
...     return sum(x[m:]**2) + sum((x[:m] - 1)**2) - m
>>> def constraints(x):
...     return x[:m]
>>> es = cma.CMAEvolutionStrategy(3 * [1], 1, {
...     'termination_callback': lambda es: sum(es.mean**2) < 1e-8})  #doctest: +ELLIPSIS
(3_w,7)-aCMA-ES...
>>> al = AugmentedLagrangian(es.N)  # lam and mu still need to be set
>>> # al.chi_domega = 1.15  # is the new default, which seems to give better results than the original value
>>> # al.lam, al.mu = ...  # we could set the initial Lagrange coefficients here
>>> while not es.stop():
...     eva = PopulationEvaluator(objective, constraints)(es.ask(), m=es.mean)
...     al.set_coefficients(eva.F, eva.G)  # set lam and mu, not part of the original algorithm
...     al.update(eva.m['f'], eva.m['g'])
...     es.tell(eva.X, [f + sum(al(g)) for f, g in zip(eva.F, eva.G)])
>>> if es.result.evaluations > 3100:
...     print("evaluations %d !< 3100. 1500 is normal, 2700 happens rarely" % es.result.evaluations)
>>> assert 'callback' in es.stop()
>>> assert len(eva.feasibility_ratios) == m
>>> assert sum(eva.feasibility_ratios < 0) == sum(eva.feasibility_ratios > 1) == 0

Details: the input `dimension` is needed to compute the default change
rate `chi_domega` (if `chi_domega is None`), to compute initial
coefficients, and to compare between h and g when updating mu. The
default dependency of `chi_domega` on the dimension seems, however, to
be suboptimal. Setting `self.chi_domega = 1.15`, as is the current
default, seems to give better results than the original setting.

More testing based on the simpler `ConstrainedFitnessAL` interface:

>>> import cma
>>> for algorithm, evals in zip((0, 1, 2, 3), (2000, 2200, 1500, 1800)):
...     alf = cma.ConstrainedFitnessAL(cma.ff.sphere, lambda x: [x[0] + 1], 3,
...                                    find_feasible_first=True)
...     _ = alf.al.set_algorithm(algorithm)
...     alf.al.logging = False
...     x, es = cma.fmin2(alf, 3 * [1], 0.1, {'verbose':-9, 'seed':algorithm},  # verbosity+seed for doctest only
...                       callback=alf.update)
...     assert sum((es.mean - [-1, 0, 0])**2) < 1e-9, (algorithm, es.mean)
...     assert es.countevals < evals, (algorithm, es.countevals)
...     assert alf.best_feas.f < 10, (algorithm, str(alf.best_feas))
...     # print(algorithm, es.countevals, )  #alf.best_feas.__dict__)

| Kind | Name | Description |
|------|------|-------------|
| Method | `__call__` | return `list` of AL penalties for constraints values in `g` |
| Method | `__init__` | use `set_algorithm()` and `set_...()` methods to change defaults |
| Method | `muminus2` | provisional function to correct the mu minus condition, original is `True` |
| Method | `muminus3` | return `True` if mu should be reduced, else `False` |
| Method | `muminus4` | return `True` if mu should be reduced, else `False` |
| Method | `muplus2` | provisional function to correct the mu plus condition, original is `True` |
| Method | `muplus3` | return `True` if mu should be increased, else `False` |
| Method | `muplus4` | return `True` if mu should be increased, else `False` |
| Method | `set_algorithm` | if algorithm is not `None`, set it and return `self` |
| Method | `set` | Atamna et al 2017 parameter settings |
| Method | `set_coefficients` | compute initial coefficients based on some f- and g-values |
| Method | `set` | Dufosse & Hansen 2020 parameter settings |
| Method | `set` | initialize attributes which depend on the number of constraints |
| Method | `update` | f is a scalar, g is a vector |
| Instance Variable | `algorithm` | Undocumented |
| Instance Variable | `chi_domega` | Undocumented |
| Instance Variable | `count` | Undocumented |
| Instance Variable | `count` | Undocumented |
| Instance Variable | `count` | number of times g induced a penalty in `__call__` since last update |
| Instance Variable | `count` | number of same changes of mu, -3 means it decreased in all last three iterations |
| Instance Variable | `counts` | Undocumented |
| Instance Variable | `dgamma` | Undocumented |
| Instance Variable | `dimension` | Undocumented |
| Instance Variable | `f` | Undocumented |
| Instance Variable | `F` | Undocumented |
| Instance Variable | `g` | Undocumented |
| Instance Variable | `G` | Undocumented |
| Instance Variable | `g` | all g-values from calling the class, recorded but not in use |
| Instance Variable | `g` | g-values from calling update |
| Instance Variable | `k1` | Undocumented |
| Instance Variable | `k2` | Undocumented |
| Instance Variable | `lam` | Undocumented |
| Instance Variable | `lam` | Undocumented |
| Instance Variable | `logger` | Undocumented |
| Instance Variable | `loggers` | Undocumented |
| Instance Variable | `logging` | Undocumented |
| Instance Variable | `mu` | Undocumented |
| Instance Variable | `mucdf3` | number of data to compute cdf for mu adaptation |
| Property | `count` | number of constraints with initialized coefficients |
| Property | `feasibility` | or bias for equality constraints, versatile interface may change |
| Property | `is` | Undocumented |
| Property | `isequality` | bool array, `True` if `i`-th constraint is an equality constraint |
| Property | `m` | number of constraints, raise `TypeError` if not set yet |
| Method | `_check` | in case the user set the attributes |
| Method | `_init_` | allow to reset the logger with a single call |
| Method | `_set` | set parameters based on the value of `self.algorithm` |
| Instance Variable | `_equality` | Undocumented |
| Instance Variable | `_initialized` | Undocumented |

return `list` of AL penalties for constraints values in `g`.

Penalties are zero in the optimum and can be negative down to
`-lam**2 / mu / 2`.

if algorithm is not `None`, set it and return `self`, otherwise return
the current algorithm value, which should be an integer.

Values < 1 are the (new) default, 1 == Atamna et al 2017, 2 == modified 1.

compute initial coefficients based on some f- and g-values.

The formulas to set the coefficients:

    lam = iqr(f) / (sqrt(n) * iqr(g))
    mu = 2 * iqr(f) / (5 * n * (iqr(g) + iqr(g**2)))

are taken out of thin air and not thoroughly tested. They are
additionally protected against division by zero.

Each row of `G` represents the constraints of one sample measure.

Set lam and mu until a population contains more than 10% infeasible
and more than 10% feasible at the same time. Afterwards, this at least...?...
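A standalone sketch of these formulas, assuming `iqr` denotes the interquartile range and using an arbitrary additive guard `eps` against division by zero (the library's actual safeguard may differ):

```python
import numpy as np

def iqr(a):
    """Interquartile range, a robust estimate of scale."""
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

def initial_coefficients(f_values, g_values, n, eps=1e-22):
    """Sketch of the stated formulas for the initial lam and mu,
    guarded against division by zero; eps is an assumed safeguard,
    not taken from the library."""
    g = np.asarray(g_values, dtype=float)
    lam = iqr(f_values) / (np.sqrt(n) * iqr(g) + eps)
    mu = 2 * iqr(f_values) / (5 * n * (iqr(g) + iqr(g**2)) + eps)
    return lam, mu
```

For a single constraint, `g_values` is a vector of that constraint's values over the population; with degenerate data (`iqr(g) == 0`) the guard keeps both coefficients finite.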

f is a scalar, g is a vector.

Update Lagrange multipliers based on Atamna et al 2017. f and g are
supposed to have been computed from the distribution mean.

Details: do nothing if Lagrange coefficients `lam` were not yet set.
