`class GaussDiagonalSampler(GaussSampler):`

Multi-variate normal distribution with zero mean and diagonal covariance matrix.

Provides methods to `sample` from and `update` a multi-variate normal distribution with zero mean and diagonal covariance matrix.

## Arguments to `__init__`

`standard_deviations` (required) define the diagonal of the initial covariance matrix, and consequently also the dimensionality (attribute `dim`) of the normal distribution. If `standard_deviations` is an `int`, `np.ones(standard_deviations)` is used.

`constant_trace='None'`: 'arithmetic' or 'geometric' or 'aeigen' or 'geigen' (geometric mean of eigenvalues) are available to be kept constant.

`randn=np.random.randn` is used to generate N(0,1) numbers.
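A minimal construction sketch (the `constant_trace` value shown is just one of the options listed above; `dimension` refers to the instance variable documented in the table below):

    >>> import cma, numpy as np
    >>> s1 = cma.sampler.GaussDiagonalSampler(5)              # int: np.ones(5) is used
    >>> s2 = cma.sampler.GaussDiagonalSampler([1., 0.5, 2.])  # explicit standard deviations
    >>> assert s1.dimension == 5 and s2.dimension == 3
    >>> s3 = cma.sampler.GaussDiagonalSampler(5, constant_trace='arithmetic')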

    >>> import cma, numpy as np
    >>> s = cma.sampler.GaussDiagonalSampler(np.ones(4))
    >>> z = s.sample(1)[0]
    >>> assert s.norm([1,0,0,0]) == 1
    >>> s.update([[1., 0., 0., 0]], [.9])
    >>> assert s.norm([1,0,0,0]) == 1
    >>> s.update([[4., 0., 0., 0]], [.5])
    >>> s *= 2

## TODO

o DONE implement CMA_diagonal with samplers

o Clean up `CMAEvolutionStrategy` attributes related to sampling (like usage of B, C, D, dC, sigma_vec; these are pretty substantial changes). In particular this should become compatible with any `StatisticalModelSampler`. Plan: keep B, C, D, dC for the time being as output-info attributes; keep sigma_vec (55 appearances) either as constant scaling or as a class. Current favorite: make a class (DONE).

o combination of sigma_vec and C (a rough sketch follows below):
  - update sigma_vec with y (this is wrong: use "z")
  - rescale y according to the inverse update of sigma_vec (as if y is expressed in the new sigma_vec while C is in the old)
  - update C with the "new" y.
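A rough, purely illustrative sketch of this combination step; the names `sigma_vec`, `C_sampler` and the exponential scaling rule are placeholders, not pycma's implementation:

    import numpy as np

    def combined_update(sigma_vec, C_sampler, vectors, weights):
        """Illustrative only: update the separable scaling first, then
        re-express the steps before updating C (see the list above)."""
        ys = np.asarray(vectors, dtype=float)
        zs = ys / sigma_vec                     # use "z": steps in sigma_vec coordinates
        # placeholder multiplicative update of the separable scaling sigma_vec
        sigma_vec_new = sigma_vec * np.exp(0.5 * np.dot(weights, zs**2 - 1))
        # rescale ys as if expressed in the new sigma_vec while C is still in the old one
        ys_rescaled = ys * sigma_vec / sigma_vec_new
        C_sampler.update(ys_rescaled, weights)  # update C with the "new" ys
        return sigma_vec_new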

| Kind | Member | Summary |
| --- | --- | --- |
| Method | `__imul__` | `sm *= factor` is a shortcut for `sm = sm.__imul__(factor)`. |
| Method | `__init__` | declarative init, doesn't need to be executed |
| Method | `correlation` | return correlation between variables i and j. |
| Method | `multiply_C` | multiply `self.C` with `factor` updating internal states. |
| Method | `norm` | compute the Mahalanobis norm that is induced by the statistical model / sample distribution, specifically by covariance matrix `C`; the expected Mahalanobis norm is about `sqrt(dimension)`. |
| Method | `reset` | reset distribution while keeping all other parameters |
| Method | `sample` | return list of i.i.d. samples. |
| Method | `to_correlation_matrix` | "re-scale" `C` to a correlation matrix and return the scaling factors as standard deviations. |
| Method | `to_linear_transformation` | return associated linear transformation. |
| Method | `to_linear_transformation_inverse` | return associated inverse linear transformation. |
| Method | `transform` | apply linear transformation `C**0.5` to `x`. |
| Method | `transform_inverse` | apply inverse linear transformation `C**-0.5` to `x`. |
| Method | `update` | update/learn by natural gradient ascent. |
| Instance Variable | `C` | covariance matrix diagonal |
| Instance Variable | `constant_trace` | Undocumented |
| Instance Variable | `count` | Undocumented |
| Instance Variable | `dimension` | Undocumented |
| Instance Variable | `quadratic` | Undocumented |
| Instance Variable | `randn` | Undocumented |
| Property | `condition_number` | Undocumented |
| Property | `correlation_matrix` | return correlation matrix of the distribution. |
| Property | `covariance_matrix` | Undocumented |
| Property | `variances` | vector of coordinate-wise (marginal) variances |

Inherited from `GaussSampler`:

| Kind | Member | Summary |
| --- | --- | --- |
| Method | `set_H` | set Hessian w.r.t. which to compute the eigen spectrum. |
| Method | `set_H_by_f` | set Hessian from f at x0. |
| Property | `chin` | approximation of the expected length when isotropic with variance 1. |
| Property | `corr_condition` | condition number of the correlation matrix |
| Property | `eigenspectrum` | return eigen spectrum w.r.t. H like sqrt(H) C sqrt(H) |
| Instance Variable | `_left` | Undocumented |
| Instance Variable | `_right` | Undocumented |

Inherited from `StatisticalModelSamplerWithZeroMeanBaseClass` (via `GaussSampler`):

| Kind | Member | Summary |
| --- | --- | --- |
| Method | `inverse_hessian_scalar_correction` | return scalar correction alpha such that X and f fit to f(x) = (x-mean) (alpha * C)**-1 (x-mean) |
| Method | `parameters` | return `dict` with (default) parameters, e.g., `c1` and `cmu`. |
| Instance Variable | `_lam` | Undocumented |
| Instance Variable | `_mueff` | Undocumented |
| Instance Variable | `_parameters` | Undocumented |

`sm *= factor` is a shortcut for `sm = sm.__imul__(factor)`.

Multiplies the covariance matrix with `factor`.
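For example (a sketch, not a verified doctest; it assumes, as documented above, that the factor scales the covariance rather than the standard deviations):

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussDiagonalSampler(np.ones(3))
    >>> sm *= 4   # covariance diagonal becomes [4., 4., 4.]
    >>> assert np.allclose(sm.variances, 4)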

`cma.sampler.GaussSampler.__init__`

declarative init, doesn't need to be executed

multiply `self.C` with `factor` updating internal states.

`factor` can be a scalar, a vector or a matrix. The vector is used as outer product, i.e. `multiply_C(diag(C)**-0.5)` generates a correlation matrix.
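A sketch of that use with a vector factor (not a verified doctest; `GaussFullSampler` is used here, as in the `norm` example below):

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussFullSampler([1., 2., 3.])
    >>> sm.multiply_C(np.asarray(sm.variances)**-0.5)  # vector factor, used as outer product
    >>> assert np.allclose(sm.variances, 1)            # C is now a correlation matrix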

compute the Mahalanobis norm that is induced by the
statistical model / sample distribution, specifically by
covariance matrix `C`. The expected Mahalanobis norm is
about `sqrt(dimension)`.

## Example

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussFullSampler(np.ones(10))
    >>> x = np.random.randn(10)
    >>> d = sm.norm(x)

`d` is the norm "in" the true sample distribution; sampled points have a typical distance of `sqrt(2*sm.dim)`, where `sm.dim` is the dimension, and an expected distance of close to `dim**0.5` to the sample mean zero. In the example, `d` is the Euclidean distance, because C = I.
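For a diagonal covariance matrix, this norm presumably reduces to a coordinate-wise rescaled Euclidean norm; a small sketch (not a verified doctest):

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussDiagonalSampler([1., 2., 4.])
    >>> x = np.random.randn(3)
    >>> d_manual = np.sum(x**2 / np.asarray(sm.variances))**0.5
    >>> assert np.isclose(sm.norm(x), d_manual)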

return list of i.i.d. samples.

Parameters:

| Parameter | Description |
| --- | --- |
| `number` | is the number of samples. |
| `same_length` | Undocumented |
| `update` | controls a possibly lazy update of the sampler. |
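A usage sketch; the `(7, 4)` shape assumes samples are returned row-wise, consistent with the `s.sample(1)[0]` pattern in the class example above:

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussDiagonalSampler(np.ones(4))
    >>> X = sm.sample(7)   # 7 i.i.d. samples from N(0, C)
    >>> assert np.shape(X) == (7, 4)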

"re-scale" C to a correlation matrix and return the scaling factors as standard deviations.

See also: `to_linear_transformation`.

return associated linear transformation.

If `B = sm.to_linear_transformation()` and z ~ N(0, I), then np.dot(B, z) ~ Normal(0, sm.C) and sm.C and B have the same eigenvectors. With `reset=True`, also `np.dot(B, sm.sample(1)[0])` obeys the same distribution after the call.

See also: `to_unit_matrix`
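A sketch of the stated relation (not a verified doctest):

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussFullSampler([1., 2., 3.])
    >>> B = sm.to_linear_transformation()
    >>> y = np.dot(B, np.random.randn(3))   # y ~ Normal(0, sm.C), per the relation above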

`cma.interfaces.StatisticalModelSamplerWithZeroMeanBaseClass.to_linear_transformation_inverse`

return associated inverse linear transformation.

If `B = sm.to_linear_transformation_inverse()` and z ~ Normal(0, sm.C), then np.dot(B, z) ~ Normal(0, I) and sm.C and B have the same eigenvectors. With `reset=True`, also `sm.sample(1)[0] ~ Normal(0, I)` after the call.

See also: `to_unit_matrix`
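And the inverse direction, whitening a sample (again only a sketch):

    >>> import cma, numpy as np
    >>> sm = cma.sampler.GaussFullSampler([1., 2., 3.])
    >>> Binv = sm.to_linear_transformation_inverse()
    >>> z = np.dot(Binv, sm.sample(1)[0])   # z ~ Normal(0, I), per the relation above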

update/learn by natural gradient ascent.

The natural gradient used for the update of the coordinate-wise variances is:

    np.dot(weights, vectors**2)

Details: The weights include the learning rate and `-1 <= sum(weights[idx]) <= 1` must be `True` for `idx = weights > 0` and for `idx = weights < 0`. The content of `vectors` with negative weights is changed.
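Only as a hedged sketch (this is not pycma's implementation), a rank-mu style step for the variance vector that is consistent with the gradient expression above, assuming the weights already contain the learning rates:

    import numpy as np

    def diagonal_update_sketch(C_diag, vectors, weights):
        """Illustrative coordinate-wise variance update; not the library's code."""
        vectors = np.asarray(vectors, dtype=float)   # sampled steps, one per row
        weights = np.asarray(weights, dtype=float)   # assumed to include the learning rate
        # shrink the old variances by the total weight mass, then add the
        # natural-gradient term quoted above: np.dot(weights, vectors**2)
        return (1 - np.sum(weights)) * C_diag + np.dot(weights, vectors**2)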