class LQModel(object):
Constructor: LQModel(max_relative_size_init, max_relative_size_end, min_relative_size, max_absolute_size, sorted_index)
Up to a full quadratic model, using the pseudoinverse to compute the model coefficients.
The full model has 1 + 2n + n(n-1)/2 = n(n+3)/2 + 1 parameters. Model building "works" with any number of data.
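As a sketch of this construction (not the class's actual code, and using only assumed helper names): the full model expands each x into 1 + 2n + n(n-1)/2 = n(n+3)/2 + 1 features and fits the coefficients by least squares via the pseudoinverse:

```python
import numpy as np

def quadratic_features(x):
    """[1, x_1..x_n, x_1^2..x_n^2, x_i*x_j for i < j]: n*(n+3)/2 + 1 features."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    cross = [x[i] * x[j] for i in range(n) for j in range(i + 1, n)]
    return np.concatenate(([1.0], x, x ** 2, cross))

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((30, n))
weights = 10.0 ** np.arange(n)                    # separable, ill-conditioned quadratic
Y = np.array([np.sum(weights * (x - 1.2) ** 2) for x in X])

Z = np.array([quadratic_features(x) for x in X])  # design matrix
coefs = np.linalg.pinv(Z).dot(Y)                  # least-squares fit via pseudoinverse

assert Z.shape[1] == n * (n + 3) // 2 + 1         # 10 parameters for n=3
# for this separable test function the minimizer is -b_i / (2 a_i) coordinate-wise
b = coefs[1:1 + n]                                # linear coefficients
a = coefs[1 + n:1 + 2 * n]                        # diagonal quadratic coefficients
assert np.allclose(-b / (2 * a), 1.2)
```

With 30 data and 10 parameters the quadratic is recovered exactly, which mirrors why the doctests below can assert exact coefficient values.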
Model size 1.0 doesn't work well on bbob-f10; 1.1, however, works fine.
TODO: change self.types: List[str] to self.type: str with only one entry
>>> import numpy as np
>>> import cma
>>> import cma.fitness_models as fm
>>> # fm.Logger, Logger = fm.LoggerDummy, fm.Logger
>>> m = fm.LQModel()
>>> for i in range(30):
...     x = np.random.randn(3)
...     y = cma.ff.elli(x - 1.2)
...     _ = m.add_data_row(x, y)
>>> assert np.allclose(m.coefficients, [
...     1.44144144e+06,
...     -2.40000000e+00, -2.40000000e+03, -2.40000000e+06,
...     1.00000000e+00, 1.00000000e+03, 1.00000000e+06,
...     -4.65661287e-10, -6.98491931e-10, 1.97906047e-09,
...     ], atol=1e-5)
>>> assert np.allclose(m.xopt, [1.2, 1.2, 1.2])
>>> assert np.allclose(m.xopt, [1.2, 1.2, 1.2])
Check the same before the full model is built:
>>> m = fm.LQModel()
>>> m.settings.min_relative_size = 3 * m.settings.truncation_ratio
>>> for i in range(30):
...     x = np.random.randn(4)
...     y = cma.ff.elli(x - 1.2)
...     _ = m.add_data_row(x, y)
>>> print(m.types)
['quadratic']
>>> assert np.allclose(m.coefficients, [
...     1.45454544e+06,
...     -2.40000000e+00, -2.40000000e+02, -2.40000000e+04, -2.40000000e+06,
...     1.00000000e+00, 1.00000000e+02, 1.00000000e+04, 1.00000000e+06,
...     ])
>>> assert np.allclose(m.xopt, [1.2, 1.2, 1.2, 1.2])
>>> assert np.allclose(m.xopt, [1.2, 1.2, 1.2, 1.2])
Check the Hessian in the rotated case:
>>> fitness = cma.fitness_transformations.Rotated(cma.ff.elli)
>>> m = fm.LQModel(2, 2)
>>> for i in range(30):
...     x = np.random.randn(4) - 5
...     y = fitness(x - 2.2)
...     _ = m.add_data_row(x, y)
>>> R = fitness[1].dicMatrices[4]
>>> H = np.dot(np.dot(R.T, np.diag([1, 1e2, 1e4, 1e6])), R)
>>> assert np.all(np.isclose(H, m.hessian))
>>> assert np.allclose(m.xopt, 4 * [2.2])
>>> m.set_xoffset([2.335, 1.2, 2, 4])
>>> assert np.all(np.isclose(H, m.hessian))
>>> assert np.allclose(m.xopt, 4 * [2.2])
Check a simple linear case; the optimum is not necessarily at the expected position (the Hessian matrix is chosen somewhat arbitrarily):
>>> m = fm.LQModel()
>>> m.settings.min_relative_size = 4
>>> _ = m.add_data_row([1, 1, 1], 220 + 10)
>>> _ = m.add_data_row([2, 1, 1], 220)
>>> print(m.types)
[]
>>> assert np.allclose(m.coefficients, [80, -10, 80, 80])
>>> assert np.allclose(m.xopt, [22, -159, -159])  # [50, -400, -400] depends on Hessian
>>> # fm.Logger = Logger
For results see:
Hansen (2019). A Global Surrogate Model for CMA-ES. In Genetic and Evolutionary Computation Conference (GECCO 2019), Proceedings, ACM.
lq-CMA-ES at http://lq-cma.gforge.inria.fr/ppdata-archives/pap-gecco2019/figure5/
Method | __init__ | Increase model complexity if the number of data exceeds max(min_relative_size * df_biggest_model_type, self.min_absolute_size).
Method | adapt | Undocumented
Method | add | add a sequence of x- and y-data, sorted by y-data (best last)
Method | add | add x to self if `force` or x not in self
Method | eval | return Model value of x
Method | evalpop | never used; return Model values of x for x in X
Method | expand | Undocumented
Method | index | Undocumented
Method | isin | return False if x is not (anymore) in the model archive
Method | kendall | return Kendall tau between true F-values (Y) and model values
Method | mahalanobis | caveat: this can be negative because the Hessian is not guaranteed to be positive definite
Method | old | return weighted Z; worst entries are clipped if possible
Method | optimize | this works very poorly, e.g., on Rosenbrock
Method | prune | prune data depending on size parameters
Method | reset | Undocumented
Method | reset_ | set x-values Z attribute
Method | set | Undocumented
Method | sort | sort last `number` entries
Method | sorted | regression weights in decreasing order
Method | type | one of the known model types, depending on self.size
Method | update | model type/size depends on the number of observed data
Method | weighted | return weighted Z; worst entries are clipped if possible
Method | xmean | Undocumented
Class Variable | complexity | Undocumented
Class Variable | known | Undocumented
Instance Variable | count | Undocumented
Instance Variable | counts | Undocumented
Instance Variable | F | Undocumented
Instance Variable | hashes | Undocumented
Instance Variable | log | Undocumented
Instance Variable | logger | Undocumented
Instance Variable | max | Undocumented
Instance Variable | number | Undocumented
Instance Variable | settings | Undocumented
Instance Variable | tau | Undocumented
Instance Variable | type | Undocumented
Instance Variable | types | Undocumented
Instance Variable | X | Undocumented
Instance Variable | Y | Undocumented
Instance Variable | Z | Undocumented
Property | b | Undocumented
Property | coefficients | model coefficients that are linear in self.expand(.)
Property | current | degrees of freedom (number of parameters) of the current model
Property | dim | Undocumented
Property | eigenvalues | eigenvalues of the Hessian of the model
Property | hessian | Undocumented
Property | logging | some data of the current state which may be interesting to display
Property | max | Undocumented
Property | max | Undocumented
Property | min | smallest f-values in data queue
Property | pinv | return pseudoinverse, computed unconditionally (not lazily)
Property | size | number of data available to build the model
Property | xopt | Undocumented
Method | _hash | Undocumented
Method | _prune | deprecated
Method | _sort | old? sort last `number` entries; TODO: for some reason this seems not to pass the doctest
Class Variable | _complexities | Undocumented
Instance Variable | _coefficients | Undocumented
Instance Variable | _current | Undocumented
Instance Variable | _fieldnames | Undocumented
Instance Variable | _type | the model can have several types, for the time being
Instance Variable | _xoffset | Undocumented
Instance Variable | _xopt | Undocumented
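The caveat noted for the mahalanobis method above can be illustrated with a minimal sketch (the matrix below is a hypothetical stand-in for a fitted model Hessian, not produced by LQModel itself): when the fitted Hessian is indefinite, the quadratic form that plays the role of a squared distance can be negative.

```python
import numpy as np

# A quadratic model fitted to few or noisy data can have an indefinite
# Hessian; the induced "squared norm" x^T H x is then negative in some
# directions, so it is not a valid squared distance.
H = np.array([[1.0, 0.0],
              [0.0, -0.5]])       # hypothetical indefinite model Hessian
x = np.array([0.1, 2.0])
squared = float(x.dot(H).dot(x))  # 0.1**2 * 1.0 + 2.0**2 * (-0.5) = -1.99
assert squared < 0
```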
__init__: Increase model complexity if the number of data exceeds max(min_relative_size * df_biggest_model_type, self.min_absolute_size). Limit the number of kept data to max(max_absolute_size, max_relative_size * max_df).
optimize: this works very poorly, e.g., on Rosenbrock:

    x, m = Model().optimize(cma.ff.rosen, [0.1, -0.1], 13)

TODO (implemented, next: test): account for xopt not changing.
type: one of the known model types, depending on self.size. This may replace `types`, but is not in use yet.
pinv: return pseudoinverse, computed unconditionally (not lazily). `pinv` is usually not used directly but via the `coefficients` property. Should this depend on something and/or become lazy?
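The Kendall tau that the kendall method reports (rank agreement between true F-values and model values) can be sketched as follows; this is a generic tau-a pair count, not necessarily the exact variant the class computes:

```python
def kendall_tau(y_true, y_model):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs, in [-1, 1]."""
    n = len(y_true)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (y_true[i] - y_true[j]) * (y_model[i] - y_model[j])
            s += (prod > 0) - (prod < 0)   # +1 concordant, -1 discordant, 0 tied
    return 2.0 * s / (n * (n - 1))

y = [3.0, 1.0, 4.0, 1.5, 5.0]
assert kendall_tau(y, y) == 1.0                 # perfect rank agreement
assert kendall_tau(y, [-v for v in y]) == -1.0  # ranks fully reversed
```

A tau near 1 means the model ranks candidate solutions like the true fitness does, which is what matters for surrogate-assisted selection.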