queens.iterators package#

Iterators.

Modules for parameter studies, uncertainty quantification, sensitivity analysis, Bayesian inverse analysis, and optimization.

Subpackages#

Submodules#

queens.iterators.adaptive_sampling_iterator module#

Adaptive sampling iterator.

class AdaptiveSamplingIterator(model, parameters, global_settings, likelihood_model, initial_train_iterator, solving_iterator, num_new_samples, num_steps, seed=41, restart_file=None, cs_div_criterion=0.01)[source]#

Bases: Iterator

Adaptive sampling iterator.

likelihood_model#

Likelihood model (only Gaussian likelihood models are supported).

Type:

Model

initial_train_iterator#

Iterator to draw initial training samples (e.g. MC, LHS)

Type:

Iterator

solving_iterator#

Iterator to solve inverse problem (SequentialMonteCarloChopinIterator, MetropolisHastingsIterator and GridIterator supported)

Type:

Iterator

num_new_samples#

Number of new training samples in each adaptive step

Type:

int

num_steps#

Number of adaptive sampling steps

Type:

int

seed#

Seed for random number generation

Type:

int, optional

restart_file#

Result file path for restarts

Type:

str, optional

cs_div_criterion#

Cauchy-Schwarz divergence stopping criterion threshold

Type:

float

x_train#

Training input samples

Type:

np.ndarray

x_train_new#

Newly drawn training samples

Type:

np.ndarray

y_train#

Training likelihood output samples

Type:

np.ndarray

model_outputs#

Training model output samples

Type:

np.ndarray

choose_new_samples(particles, weights)[source]#

Choose new training samples.

Choose new training samples from approximated posterior distribution.

Parameters:
  • particles (np.ndarray) – Particles of approximated posterior

  • weights (np.ndarray) – Particle weights of approximated posterior

Returns:

x_train_new (np.ndarray) – New training samples

core_run()[source]#

Core run.

eval_log_likelihood()[source]#

Evaluate log likelihood.

Returns:

log_likelihood (np.ndarray) – Log likelihood

get_particles_and_weights()[source]#

Get particles and weights of solving iterator.

Returns:
  • particles (np.ndarray) – particles from approximated posterior distribution

  • weights (np.ndarray) – weights corresponding to particles

  • log_posterior (np.ndarray) – log_posterior values corresponding to particles

post_run()[source]#

Post run.

pre_run()[source]#

Pre run.

write_results(particles, weights, log_posterior, iteration)[source]#

Write results to output file and calculate cs_div.

Parameters:
  • particles (np.ndarray) – Particles of approximated posterior

  • weights (np.ndarray) – Particle weights of approximated posterior

  • log_posterior (np.ndarray) – Log posterior value of particles

  • iteration (int) – Iteration count

Returns:

cs_div (float) – Maximum Cauchy-Schwarz divergence between marginals of the current and previous step

cauchy_schwarz_divergence(samples_1, samples_2)[source]#

Maximum Cauchy-Schwarz divergence between marginals of two sample sets.

Parameters:
  • samples_1 (np.ndarray) – Sample set 1

  • samples_2 (np.ndarray) – Sample set 2

Returns:

cs_div_max (np.ndarray) – Maximum Cauchy-Schwarz divergence between marginals of two sample sets.
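
For intuition, a minimal sketch of how such a maximum marginal Cauchy-Schwarz divergence could be estimated with Gaussian kernel density estimates; this is an illustrative assumption, not necessarily the module's actual implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def max_marginal_cs_divergence(samples_1, samples_2, grid_size=200):
    """Illustrative estimate of the maximum Cauchy-Schwarz divergence over 1D marginals."""
    cs_divs = []
    for dim in range(samples_1.shape[1]):  # assumes 2D arrays (n_samples x n_dims)
        s1, s2 = samples_1[:, dim], samples_2[:, dim]
        grid = np.linspace(min(s1.min(), s2.min()), max(s1.max(), s2.max()), grid_size)
        p, q = gaussian_kde(s1)(grid), gaussian_kde(s2)(grid)
        # D_CS(p, q) = -log( int p*q / sqrt(int p^2 * int q^2) ), approximated on the grid
        num = np.trapz(p * q, grid)
        den = np.sqrt(np.trapz(p**2, grid) * np.trapz(q**2, grid))
        cs_divs.append(-np.log(num / den))
    return np.max(cs_divs)
```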

queens.iterators.black_box_variational_bayes module#

Black box variational inference iterator.

class BBVIIterator(model, parameters, global_settings, result_description, variational_distribution, n_samples_per_iter, random_seed, max_feval, control_variates_scaling_type, loo_control_variates_scaling, stochastic_optimizer, variational_transformation=None, variational_parameter_initialization=None, memory=0, natural_gradient=True, FIM_dampening=True, decay_start_iteration=50, dampening_coefficient=0.01, FIM_dampening_lower_bound=1e-08, model_eval_iteration_period=1000, resample=False, verbose_every_n_iter=10)[source]#

Bases: VariationalInferenceIterator

Black box variational inference (BBVI) iterator.

For Bayesian inverse problems. BBVI does not require model gradients and can hence be used with any simulation model, without the need for adjoint implementations. The algorithm is based on [1]. The expectations for the gradient computations are computed using an importance sampling approach where the IS distribution is constructed as a mixture of the variational distributions from previous iterations (similar to [2]).

Keep in mind: This algorithm requires the logpdf of the variational distribution to be differentiable w.r.t. the variational parameters. This is not the case for certain distributions, e.g. the uniform distribution, which can therefore not be used in combination with this algorithm (see [3], page 13)!
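
To make the score-function requirement above concrete, here is a minimal, self-contained sketch of a BBVI-style ELBO gradient estimate for a mean-field Gaussian variational distribution (purely illustrative; the function names are assumptions and this is not the iterator's actual implementation):

```python
import numpy as np

def bbvi_elbo_gradient(mu, log_sigma, log_posterior, n_samples=100, seed=0):
    """Score-function (REINFORCE) estimate of the ELBO gradient for a mean-field Gaussian."""
    rng = np.random.default_rng(seed)
    sigma = np.exp(log_sigma)
    samples = mu + sigma * rng.standard_normal((n_samples, mu.size))
    z = (samples - mu) / sigma
    # log q(x) and its gradient (score function) w.r.t. the variational parameters
    log_q = -0.5 * np.sum(z**2 + 2.0 * log_sigma + np.log(2.0 * np.pi), axis=1)
    score = np.hstack([z / sigma, z**2 - 1.0])      # d log q / d(mu, log_sigma)
    elbo_integrand = (log_posterior(samples) - log_q)[:, None]
    return np.mean(score * elbo_integrand, axis=0)  # MC estimate of the ELBO gradient

# Toy usage: standard normal target, variational parameters (mu, log_sigma)
grad = bbvi_elbo_gradient(np.zeros(2), np.zeros(2),
                          lambda x: -0.5 * np.sum(x**2, axis=1))
```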

References

[1]: Ranganath, Rajesh, Sean Gerrish, and David M. Blei. “Black Box Variational Inference.” Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics. 2014.

[2]: Arenz, Neumann & Zhong. “Efficient Gradient-Free Variational Inference using Policy Search.” Proceedings of the 35th International Conference on Machine Learning (2018) in PMLR 80:234-243.

[3]: Mohamed et al. “Monte Carlo Gradient Estimation in Machine Learning”. Journal of Machine Learning Research. 21(132):1−62, 2020.

control_variates_scaling_type#

Flag to decide how to compute control variate scaling.

Type:

str

loo_cv_bool#

True if leave-one-out procedure is used for the control variate scaling estimations. Is quite slow!

Type:

boolean

random_seed#

Seed for the random number generators.

Type:

int

max_feval#

Maximum number of simulation runs for this analysis.

Type:

int

memory#

Number of previous iterations that should be included in the MC ELBO gradient estimations. For memory=0 the algorithm reduces to the standard BBVI algorithm. (A better variable name is welcome.)

Type:

int

model_eval_iteration_period#

If the iteration number is a multiple of this number, the probabilistic model is sampled independently of the other conditions.

Type:

int

resample#

True if resampling should be used.

Type:

bool

log_variational_mat#

Logpdf evaluations of the variational distribution.

Type:

np.array

grad_params_log_variational_mat#

Column-wise grad params logpdf (score function) of the variational distribution.

Type:

np.array

log_posterior_unnormalized#

Row-vector logarithmic probabilistic model evaluation (generally unnormalized).

Type:

np.array

samples_list#

List of samples from previous iterations for the ISMC gradient.

Type:

list

parameter_list#

List of parameters from previous iterations for the ISMC gradient.

Type:

list

log_posterior_unnormalized_list#

List of probabilistic model evaluations from previous iterations for the ISMC gradient.

Type:

list

ess#

Effective sample size of the current iteration (in case IS is used).

Type:

float

sampling_bool#

True if the probabilistic model has to be sampled. If importance sampling is used, the forward model might not be evaluated in every iteration.

Type:

bool

sample_set#

Set of samples used to evaluate the probabilistic model (not needed in other VI methods).

Type:

np.ndarray

core_run()[source]#

Core run for black-box variational inference.

eval_log_likelihood(samples)[source]#

Calculate the log-likelihood of the observation data.

Evaluation of the likelihood model for all inputs of the sample batch will trigger the actual forward simulation.

Parameters:

samples (np.array) – Samples (n_samples x n_dimension)

Returns:
log_likelihood (np.array) – Vector of the log-likelihood function for all input samples of the current batch

get_importance_sampling_weights(variational_params_list, samples)[source]#

Get the importance sampling weights for the MC gradient estimation.

Uses a special computation of the weights using the logpdfs to reduce numerical issues:

\(w=\frac{q_i}{\sum_{j=0}^{memory+1}\frac{1}{memory+1}q_j}=\frac{memory+1}{\sum_{j=0}^{memory+1}\exp(\ln(q_j)-\ln(q_i))}\)

and is therefore slightly slower. Assumes the mixture coefficients are all equal.

Parameters:
  • variational_params_list (list) – Variational parameters list of the current and the desired previous iterations

  • samples (np.array) – Samples (n_samples x n_dimension)

Returns:
weights (np.array) – (Unnormalized) weights for the ISMC evaluated for the given samples (1 x n_samples)
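
A minimal numpy sketch of this weight computation (the array layout is an assumption made for illustration, not the method's actual interface):

```python
import numpy as np

def mixture_is_weights(logpdfs):
    """Unnormalized IS weights for an equally weighted mixture of proposal distributions.

    logpdfs: array of shape (memory + 1, n_samples); row j holds ln(q_j) evaluated at
    the samples, with the last row belonging to the distribution the samples were drawn
    from (q_i in the formula above).
    """
    log_q_i = logpdfs[-1]
    # w = (memory + 1) / sum_j exp(ln(q_j) - ln(q_i)), evaluated in log space for stability
    return logpdfs.shape[0] / np.sum(np.exp(logpdfs - log_q_i), axis=0)
```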

get_log_posterior_unnormalized(samples)[source]#

Calculate the unnormalized log posterior for a sample batch.

Parameters:

samples (np.array) – Samples (n_samples x n_dimension)

Returns:
unnormalized_log_posterior (np.array) – Values of the unnormalized log posterior distribution at the positions of the sample batch

get_log_prior(samples)[source]#

Evaluate the log prior of the model for a sample batch.

The samples are transformed according to the selected transformation.

Parameters:

samples (np.array) – Samples (n_samples x n_dimension)

Returns:

log_prior (np.array) – log-prior vector evaluated for current sample batch

queens.iterators.bmfia_iterator module#

Iterator for Bayesian multi-fidelity inverse analysis.

class BMFIAIterator(parameters, global_settings, features_config, hf_model, lf_model, initial_design, X_cols=None, num_features=None, coord_cols=None)[source]#

Bases: Iterator

Bayesian multi-fidelity inverse analysis iterator.

Iterator for Bayesian multi-fidelity inverse analysis. Here, we build the multi-fidelity probabilistic surrogate, determine optimal training points X_train and evaluate the low- and high-fidelity model for these training inputs, to yield Y_LF_train and Y_HF_train training data. The actual inverse problem is not solved or iterated in this module but instead we iterate over the training data to approximate the probabilistic mapping p(yhf|ylf).

X_train#

Input training matrix for HF and LF model.

Type:

np.array

Y_LF_train#

Corresponding LF model response to X_train input.

Type:

np.array

Y_HF_train#

Corresponding HF model response to X_train input.

Type:

np.array

Z_train#

Corresponding LF informative features to X_train input.

Type:

np.array

features_config#

Type of feature selection method.

Type:

str

hf_model#

High-fidelity model object.

Type:

obj

lf_model#

Low-fidelity model object.

Type:

obj

coords_experimental_data#

Coordinates of the experimental data.

Type:

np.array

time_vec#

Time vector of experimental observations.

Type:

np.array

y_obs_vec#

Output data of experimental observations.

Type:

np.array

x_cols#

List of columns for features taken from input variables.

Type:

list

num_features#

Number of features to be selected.

Type:

int

coord_cols#

List of columns for coordinates taken from input variables.

Type:

list

Returns:

BMFIAIterator (obj) – Instance of the BMFIAIterator

classmethod calculate_initial_x_train(initial_design_dict, parameters)[source]#

Optimal training data set for probabilistic model.

Based on the selected design method, determine the optimal set of input points X_train to run the HF and the LF model on for the construction of the probabilistic surrogate.

Parameters:
  • initial_design_dict (dict) – Dictionary with description of initial design.

  • model (obj) – A model object on which the calculation is performed (only needed for interfaces here. The model is not evaluated here)

  • parameters (obj) – Parameters object

Returns:

x_train (np.array) – Optimal training input samples

core_run()[source]#

Trigger main or core run of the BMFIA iterator.

It comprises the actual evaluation of the HF and LF models for the training inputs and the determination of the LF informative features.

Returns:
  • Z_train (np.array) – Matrix with low-fidelity feature training data

  • Y_HF_train (np.array) – Matrix with HF training data

eval_model()[source]#

Evaluate the LF and HF model for the training inputs X_train.

evaluate_HF_model_for_X_train()[source]#

Evaluate the high-fidelity model for the X_train input data-set.

evaluate_LF_model_for_X_train()[source]#

Evaluate the low-fidelity model for the X_train input data-set.

expand_training_data(additional_x_train, additional_y_lf_train=None)[source]#

Update or expand the training data.

Data is appended by an additional input/output vector of data.

Parameters:
  • additional_x_train (np.array) – Additional input vector

  • additional_y_lf_train (np.array, optional) – Additional LF model response corresponding to additional input vector. Default to None

Returns:
  • Z_train (np.array) – Matrix with low-fidelity feature training data

  • Y_HF_train (np.array) – Matrix with HF training data

classmethod get_design_method(initial_design_dict)[source]#

Get the design method for initial training data.

Select the method for the generation of the initial training data for the probabilistic regression model.

Parameters:

initial_design_dict (dict) – Dictionary with description of initial design.

Returns:

run_design_method (obj) – Design method for selecting the HF training set

static random_design(initial_design_dict, parameters)[source]#

Generate a uniformly random design strategy.

Get a random initial design using the Monte-Carlo sampler with a uniform distribution.

Parameters:
  • initial_design_dict (dict) – Dictionary with description of initial design.

  • model (obj) – A model object on which the calculation is performed (only needed for interfaces here. The model is not evaluated here)

  • parameters (obj) – Parameters object

Returns:

x_train (np.array) – Optimal training input samples

set_feature_strategy(y_lf_mat, x_mat, coords_mat)[source]#

Get the low-fidelity feature matrix.

Compose the low-fidelity feature matrix that consists of the low-fidelity model outputs and the low-fidelity informative features.

Parameters:
  • y_lf_mat (np.array) – Low-fidelity output matrix with row-wise model realizations. Columns are different dimensions of the output.

  • x_mat (np.array) – Input matrix for the simulation model with row-wise input points and column-wise variable dimensions.

  • coords_mat (np.array) – Coordinate matrix for the observations with row-wise coordinate points and different dimensions per column.

Returns:

z_mat (np.array) – Extended low-fidelity matrix containing informative feature dimensions. Every row is one data point with dimensions per column.

update_probabilistic_mapping_with_features()[source]#

Update multi-fidelity mapping with optimal lf-features.

queens.iterators.bmfmc_iterator module#

Iterator for Bayesian multi-fidelity UQ.

class BMFMCIterator(model, parameters, global_settings, result_description, initial_design, plotting_options=None)[source]#

Bases: Iterator

Iterator for the Bayesian multi-fidelity Monte-Carlo method.

The iterator fulfills the following tasks:

  1. Load the low-fidelity Monte Carlo data.

  2. Based on low-fidelity data, calculate optimal X_train to evaluate the high-fidelity model.

  3. Based on X_train return the corresponding Y_LFs_train.

  4. Initialize the BMFMC_model (this is not the high-fidelity model but the probabilistic mapping) with X_train and Y_LFs_train. Note that the BMFMC_model itself triggers the computation of the high-fidelity training data Y_HF_train.

  5. Trigger the evaluation of the BMFMC_model. Here evaluation refers to computing the posterior statistics of the high-fidelity model. This is implemented in the BMFMC_model itself.

model#

Instance of the BMFMCModel.

Type:

obj

result_description#

Dictionary containing settings for plotting and saving data/results.

Type:

dict

X_train#

Corresponding input for the simulations that are used to train the probabilistic mapping.

Type:

np.array

Y_LFs_train#

Outputs of the low-fidelity models that correspond to the training inputs X_train.

Type:

np.array

output#

Dictionary containing the output quantities:

  • Z_mc: Corresponding Monte-Carlo point in LF informative feature space

  • m_f_mc: Corresponding Monte-Carlo points of posterior mean of

    the probabilistic mapping

  • var_y_mc: Corresponding Monte-Carlo posterior variance samples of the

    probabilistic mapping

  • y_pdf_support: Support vector for QoI output distribution

  • p_yhf_mean: Vector containing mean function of HF output

    posterior distribution

  • p_yhf_var: Vector containing posterior variance function of HF output

    distribution

  • p_yhf_mean_BMFMC: Vector containing mean function of HF output

    posterior distribution calculated without informative features \(\boldsymbol{\gamma}\)

  • p_yhf_var_BMFMC: Vector containing posterior variance function of HF

    output distribution calculated without informative features \(\boldsymbol{\gamma}\)

  • p_ylf_mc: Vector with low-fidelity output distribution (kde from MC

    data)

  • p_yhf_mc: Vector with reference HF output distribution (kde from MC

    reference data)

  • Z_train: Corresponding training data in LF feature space

  • Y_HF_train: Outputs of the high-fidelity model that correspond to the

    training inputs X_train such that \(Y_{HF}=y_{HF}(X)\)

  • X_train: Corresponding input for the simulations that are used to

    train the probabilistic mapping

Type:

dict

initial_design#

Dictionary containing settings for the selection strategy/initial design of training points for the probabilistic mapping.

Type:

dict

visualization#

Visualization object for BMFMC.

Type:

BMFMCVisualization

calculate_optimal_X_train()[source]#

Calculate the optimal model inputs X_train.

Based on the low-fidelity sampling data, calculate the optimal model inputs X_train, on which the high-fidelity model should be evaluated to construct the training data set for BMFMC. This selection is performed based on the following method options:

  • random: Divides the \(y_{LF}\) data set in bins and selects training candidates randomly from each bin until \(n_{train}\) is reached.

  • diverse subset: Determine the most important input features \(\gamma_i\) (this information is provided by the BMFMCModel), and find a space filling subset (diverse subset), given the LF sampling data with respect to the most important features \(\gamma_i\). The number of features to be considered can be set in the input file.

    Remark: An optimization routine for the optimal number of features to be considered will be added in the future.

core_run()[source]#

Main run of the BMFMCIterator.

The BMFMCIterator covers the following points:

  1. Reading the sampling data from the low-fidelity model in QUEENS.

  2. Based on LF data, determine optimal X_train for which the high-fidelity model should be evaluated \(Y_{HF}=y_{HF}(X)\).

  3. Update the BMFMCModel with the partial training data set of X_train, Y_LF_train (Y_HF_train is determined in the BMFMCModel).

  4. Evaluate the BMFMCModel, which means that the posterior statistics \(\mathbb{E}_{f}\left[p(y_{HF}^*|f,\mathcal{D})\right]\) and \(\mathbb{V}_{f}\left[p(y_{HF}^*|f,\mathcal{D})\right]\) are computed based on the BMFMC algorithm, which is implemented in the BMFMCModel.

diverse_subset_design(n_points)[source]#

Calculate the HF training points based on psa_select.

Calculate the HF training points from the large LF-MC data set using the diverse subset strategy, based on the psa_select method from diversipy.

Parameters:

n_points (int) – Number of HF training points to be selected

get_design_method(design_method)[source]#

Get the design method for selecting the HF data.

Get the design method for selecting the HF data from the LF MC dataset.

Parameters:

design_method (str) – Design method specified in input file

Returns:

run_design_method (obj) – Design method for selecting the HF training set

post_run()[source]#

Saving and plotting the results.

random_design(n_points)[source]#

Calculate the HF training points based on random selection.

Calculate the HF training points from large LF-MC data-set based on random selection from bins over y_LF.

Parameters:

n_points (int) – Number of HF training points to be selected

queens.iterators.classification module#

Binary classification iterator.

This iterator trains a classification algorithm based on a forward and classification model.

class ClassificationIterator(model, parameters, global_settings, result_description, num_sample_points, num_model_calls, random_sampling_frequency, classifier, seed, classification_function=<function default_classification_function>)[source]#

Bases: Iterator

Iterator for machine leaning based classification.

result_description#

Description of desired results

Type:

dict

num_sample_points#

Total number of sample points.

Type:

int

num_model_calls#

Total number of model calls.

Type:

int

random_sampling_frequency#

In case of active sampling, the samples are selected randomly in every iteration whose index is a multiple of this number.

Type:

int

classifier#

QUEENS classifier object.

Type:

obj

visualization_obj#

Object for visualization.

Type:

obj

classification_function#

Function that classifies the model output.

Type:

fun

samples#

Samples at which the model was evaluated.

Type:

np.array

classified_outputs#

Classified outputs of the model evaluated at the samples.

Type:

np.array

binarize(samples)[source]#

Classify the output.

Parameters:

samples (np.array) – Samples where to evaluate the model

Returns:

np.array – classified output

core_run()[source]#

Evaluate the samples on model and classify them.

post_run()[source]#

Analyze the results.

default_classification_function(features)[source]#

Default classification function checking for NaNs.

Parameters:

features (np.array) – input array containing the values that should be classified

Returns:

np.array – Boolean array of predictions where True represents non-NaN values and False represents NaN values in the input array features
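
A minimal sketch of such a NaN-based classification function (illustrative; not necessarily identical to the shipped default):

```python
import numpy as np

def classify_non_nan(features):
    """Return True for rows without NaN values and False for rows containing NaNs."""
    return ~np.isnan(np.atleast_2d(features)).any(axis=1)

classify_non_nan(np.array([[1.0, 2.0], [np.nan, 0.5]]))  # -> array([ True, False])
```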

queens.iterators.control_variates_iterator module#

Monte Carlo Control Variates Iterator.

class ControlVariatesIterator(model, control_variate, parameters, global_settings, seed, num_samples, expectation_cv=None, num_samples_cv=None, use_optimal_num_samples=False, cost_model=None, cost_cv=None)[source]#

Bases: Iterator

Monte Carlo control variates iterator.

The control variates method in the context of Monte Carlo is used to quantify uncertainty in a model when input parameters are uncertain and only expressed as probability distributions. The Monte Carlo control variates method uses so-called low-fidelity models as control variates to make the quantification more precise. In the context of Monte Carlo, the control variate method is sometimes also called control variable method.

The estimator for the Monte Carlo control variates method with a single control variate is given by \(\hat{\mu}_{f}= \underbrace{\frac{1}{N} \sum\limits_{i=1}^{N} \Big [ f(x^{(i)}) - \alpha \Big(g(x^{(i)}) - \hat\mu_{g} \Big) \Big]}_\textrm{cross-model estimator}\) where \(f\) represents the model, \(g\) the control variate, and \(\hat\mu_{g}\) the expectation of the control variate. \(N\) represents the number of samples on the cross-model estimator and \(x^{(i)}\) are random parameter samples.

In case the mean of the control variate is known, \(\hat\mu_{g}\) can be passed to the iterator as expectation_cv. Otherwise, \(\hat\mu_{g}\) is estimated with the Monte Carlo method.
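
A minimal numpy sketch of this estimator with a single control variate, using the variance-optimal coefficient \(\alpha = \mathrm{Cov}(f, g)/\mathrm{Var}(g)\) (a standard choice; the function and variable names are illustrative and not the iterator's interface):

```python
import numpy as np

def cv_estimate(f_vals, g_vals, mu_g):
    """Single-control-variate estimate of E[f] with the variance-optimal coefficient."""
    alpha = np.cov(f_vals, g_vals)[0, 1] / np.var(g_vals, ddof=1)  # Cov(f, g) / Var(g)
    return np.mean(f_vals - alpha * (g_vals - mu_g))

# Toy usage: f(x) = exp(x) with the cheap control variate g(x) = x, whose mean is 0.5
rng = np.random.default_rng(0)
x = rng.uniform(size=10_000)
mean_estimate = cv_estimate(np.exp(x), x, mu_g=0.5)
```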

The implementation is based on chapter 9.3 in [1] and uses one control variate.

References

[1] D. P. Kroese, Z. I. Botev, and T. Taimre. “Handbook of Monte Carlo Methods”. Wiley, 2011.

model#

Main model. The uncertainties are quantified for this model.

Type:

Model

control_variate#

Control variate model.

Type:

Model

seed#

Seed for random samples.

Type:

int

num_samples#

Number of samples on the cross-model estimator.

Type:

int

expectation_cv#

Expectation of the control variate. If the expectation is None, it will be estimated via MC sampling.

Type:

float

output#

Output dict with the following entries:

  • mean (float): Cross-model estimator.

  • std (float): Estimated standard deviation of the cross-model estimator.

  • num_samples_cv (int): Number of samples to estimate the control variate

    mean.

  • mean_cv (float): Mean of control variate.

  • std_cv_mean_estimator (float): Standard deviation of control variate

    mean estimation.

  • cv_influence_coeff (float): Method specific parameter that determines

    the influence of the control variate.

  • sample_ratio (float): Ratio of number of samples on control variate to

    number of samples on main model. Is only part of output if use_optimal_num_samples is True.

Type:

dict

num_samples_cv#

Number of samples to use for computing the expectation of the control variate if this expectation is unknown.

Type:

int

samples#

Samples for the control variates estimator.

Type:

np.array

use_optimal_num_samples#

Determines whether the iterator calculates and uses the optimal number of samples to estimate the control variate mean such that the variance of the control variates estimator is minimized.

Type:

bool

cost_model#

Cost of evaluating the model.

Type:

float

cost_cv#

Cost of evaluating the control variate.

Type:

float

variance_cv_mean_estimator#

Variance of the control variate mean estimator.

Type:

float

core_run()[source]#

Core run of iterator.

Computes the cross-model estimator and its standard deviation.

post_run()[source]#

Write results to result file.

pre_run()[source]#

Draw samples for the core run.

queens.iterators.data_iterator module#

Data iterator.

class DataIterator(path_to_data, result_description, global_settings, parameters=None)[source]#

Bases: Iterator

Basic Data Iterator to enable restarts from data.

samples#

Array with all samples.

Type:

np.array

output#

Array with all model outputs.

Type:

np.array

eigenfunc#

Function for computing eigenfunctions or transformations applied to the data. This attribute is a placeholder and may be updated in future versions (refer to Issue #45).

Type:

obj

path_to_data#

Path to pickle file containing data.

Type:

string

result_description#

Description of desired results.

Type:

dict

core_run()[source]#

Read data from file.

post_run()[source]#

Analyze the results.

read_pickle_file()[source]#

Read in data from a pickle file.

The main reason for putting this functionality in a separate method is to make it easy to mock the reading of input data during testing.

Returns:
np.array, np.array – Two arrays, the first contains input samples, the second the corresponding output samples

queens.iterators.elementary_effects_iterator module#

Elementary Effects iterator module.

Elementary Effects (also called Morris method) is a global sensitivity analysis method, which can be used for parameter fixing (ranking).
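
For orientation, a minimal sketch of a single elementary effect and the mu_star measure (illustrative only; the iterator delegates the actual computation to SALib):

```python
import numpy as np

def elementary_effect(func, x, i, delta):
    """Elementary effect of parameter i at point x for a perturbation of size delta."""
    x_perturbed = x.copy()
    x_perturbed[i] += delta
    return (func(x_perturbed) - func(x)) / delta

# mu_star is the mean absolute elementary effect over several trajectories/start points
rng = np.random.default_rng(1)
starts = rng.uniform(size=(5, 3))
effects = [elementary_effect(lambda x: np.sum(x**2), x0, i=0, delta=0.1) for x0 in starts]
mu_star = np.mean(np.abs(effects))
```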

class ElementaryEffectsIterator(model, parameters, global_settings, num_trajectories, local_optimization, num_optimal_trajectories, number_of_levels, seed, confidence_level, num_bootstrap_samples, result_description)[source]#

Bases: Iterator

Iterator to compute Elementary Effects (Morris method).

num_trajectories#

Number of trajectories to generate.

Type:

int

local_optimization#

Flag whether to use local optimization according to Ruano et al. (2012). Speeds up the process tremendously for larger numbers of trajectories and levels. If set to False, the brute force method is used.

Type:

bool

num_optimal_trajectories#

Number of optimal trajectories to sample (between 2 and N).

Type:

int

num_levels#

Number of grid levels.

Type:

int

seed#

Seed for random number generation.

Type:

int

confidence_level#

Size of confidence interval.

Type:

float

num_bootstrap_samples#

Number of bootstrap samples used to compute confidence intervals for sensitivity measures.

Type:

int

result_description#

Dictionary with desired result description.

Type:

dict

samples#

Samples at which the model is evaluated.

Type:

np.array

output#

Results at samples.

Type:

np.array

salib_problem#

Dictionary with SALib problem description.

Type:

dict

si#

Dictionary with all sensitivity indices.

Type:

dict

visualization#

Visualization object for SA.

Type:

SAVisualization

core_run()[source]#

Run Analysis on model.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples for subsequent analysis and update model.

print_results(results)[source]#

Print results to log.

Parameters:

results (dict) – Dictionary with the results of the sensitivity analysis, including:

  • 'parameter_names': List of parameter names.

  • 'sensitivity_indices': Contains indices like:

    • 'names': Parameter names.

    • 'mu_star': Mean absolute effect.

    • 'mu': Mean effect.

    • 'mu_star_conf': Confidence interval for 'mu_star'.

    • 'sigma': Standard deviation of the effect.

process_results()[source]#

Write all results to self contained dictionary.

queens.iterators.grid_iterator module#

Grid Iterator.

class GridIterator(model, parameters, global_settings, result_description, grid_design)[source]#

Bases: Iterator

Grid Iterator to enable meshgrid evaluations.

Different axis scalings are possible: linear, log10, or ln.
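
A minimal sketch of how a meshgrid with per-axis scaling could be assembled (the helper and its argument layout are illustrative assumptions, not the iterator's internals):

```python
import numpy as np

def build_grid(axis_specs):
    """Build meshgrid samples from per-axis (min, max, n_points, scale) specifications."""
    axes = []
    for lower, upper, n_points, scale in axis_specs:
        if scale == "lin":
            axes.append(np.linspace(lower, upper, n_points))
        elif scale == "log10":
            axes.append(np.logspace(np.log10(lower), np.log10(upper), n_points))
        elif scale == "ln":
            axes.append(np.exp(np.linspace(np.log(lower), np.log(upper), n_points)))
        else:
            raise ValueError(f"Unknown scale type: {scale}")
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.column_stack([m.ravel() for m in mesh])

samples = build_grid([(1e-2, 1e2, 5, "log10"), (0.0, 1.0, 4, "lin")])
```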

grid_dict#

Dictionary containing grid information.

Type:

dict

result_description#

Description of desired results.

Type:

dict

samples#

Array with all samples.

Type:

np.array

output#

Array with all model outputs.

Type:

np.array

num_grid_points_per_axis#

List with number of grid points for each grid axis.

Type:

list

num_parameters#

Number of parameters to be varied.

Type:

int

scale_type#

List with string entries denoting scaling type for each grid axis.

Type:

list

visualization#

Visualization object for the grid iterator.

Type:

GridIteratorVisualization

core_run()[source]#

Evaluate the meshgrid on model.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples based on description in grid_dict.

queens.iterators.hmc_iterator module#

HMC algorithm.

The Hamiltonian Monte Carlo sampler is a gradient-based MCMC algorithm. It is used to sample from arbitrary probability distributions.

class HMCIterator(model, parameters, global_settings, num_samples, seed, num_burn_in=100, num_chains=1, discard_tuned_samples=True, result_description=None, summary=True, pymc_sampler_stats=False, as_inference_dict=False, use_queens_prior=False, progressbar=False, max_steps=100, target_accept=0.65, path_length=2.0, step_size=0.25, scaling=None, is_cov=False, init_strategy='auto', advi_iterations=50000)[source]#

Bases: PyMCIterator

Iterator based on HMC algorithm.

The HMC sampler is a state-of-the-art MCMC sampler. It is based on Hamiltonian mechanics.

max_steps#

Maximum of leapfrog steps to take in one iteration

Type:

int

target_accept#

Target acceptance rate which should be consistent after burn-in.

Type:

float

path_length#

Maximum length of particle trajectory

Type:

float

step_size#

Step size, scaled by 1/(parameter dimension)**0.25.

Type:

float

scaling#

The inverse mass, or precision matrix

Type:

np.array

is_cov#

Setting if the scaling is a mass or covariance matrix

Type:

boolean

init_strategy#

Strategy to tune mass damping matrix

Type:

str

advi_iterations#

Number of iteration steps of ADVI based init strategies

Type:

int

Returns:

hmc_iterator (obj) – Instance of HMC Iterator

init_mcmc_method()[source]#

Init the PyMC MCMC Model.

Returns:

step (obj) – The MCMC Method within the PyMC Model

queens.iterators.iterator module#

Base module for iterators or methods.

class Iterator(model, parameters, global_settings)[source]#

Bases: object

Base class for Iterator hierarchy.

This Iterator class is the base class for one of the primary class hierarchies in QUEENS. The job of the iterator hierarchy is to coordinate and execute simulations/function evaluations.

model#

Model to be evaluated by iterator.

Type:

obj

parameters#

Parameters object

global_settings#

settings of the QUEENS experiment including its name and the output directory

Type:

GlobalSettings

abstract core_run()[source]#

Core part of the run, implemented by all derived classes.

post_run()[source]#

Optional post-run portion of run.

E.g. for doing some post processing.

pre_run()[source]#

Optional pre-run portion of run.

run()[source]#

Orchestrate pre/core/post phases.

queens.iterators.lhs_iterator module#

Latin hypercube sampling iterator.

class LHSIterator(model, parameters, global_settings, seed, num_samples, result_description=None, num_iterations=10, criterion='maximin')[source]#

Bases: Iterator

Basic LHS Iterator to enable Latin Hypercube sampling.

seed#

Seed for numpy random number generator.

Type:

int

num_samples#

Number of samples to compute.

Type:

int

num_iterations#

Number of optimization iterations of design.

Type:

int

result_description#

Description of desired results.

Type:

dict

criterion#

Allowable values are:

  • center or c

  • maximin or m

  • centermaximin or cm

  • correlation or corr

Type:

str

samples#

Array with all samples.

Type:

np.array

output#

Array with all model outputs.

Type:

np.array

core_run()[source]#

Run LHS Analysis on model.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples for subsequent LHS analysis.

queens.iterators.lm_iterator module#

Levenberg Marquardt iterator.

class LMIterator(model, parameters, global_settings, result_description, initial_guess=None, bounds=None, jac_rel_step=0.0001, jac_abs_step=0.0, init_reg=1.0, update_reg='grad', convergence_tolerance=1e-06, max_feval=1, verbose_output=False)[source]#

Bases: Iterator

Iterator for Levenberg-Marquardt deterministic optimization problems.

Implements the Levenberg-Marquardt algorithm for optimization, adapted from the 4C gen_inv_analysis approach. This class focuses on a simplified yet controlled optimization process.
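
The core Levenberg-Marquardt update solves a regularized normal-equation system; a minimal sketch of one such step (illustrative, not the class's exact update rule):

```python
import numpy as np

def lm_step(jacobian, residual, reg_param):
    """One Levenberg-Marquardt update: solve (J^T J + lambda * I) delta = -J^T r."""
    jtj = jacobian.T @ jacobian
    delta = np.linalg.solve(jtj + reg_param * np.eye(jtj.shape[0]), -jacobian.T @ residual)
    return delta  # proposed step; next iterate is param_current + delta
```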

initial_guess#

Initial guess for the optimization parameters.

Type:

np.ndarray

bounds#

Tuple specifying the lower and upper bounds for parameters. If None, no bounds are applied.

Type:

tuple

havebounds#

Flag indicating if bounds are provided.

Type:

bool

param_current#

Current parameter values being optimized.

Type:

np.ndarray

jac_rel_step#

Relative step size used for finite difference approximation of the Jacobian.

Type:

float

max_feval#

Maximum number of allowed function evaluations.

Type:

int

result_description#

Configuration for result file handling and plotting.

Type:

dict

jac_abs_step#

Absolute step size used for finite difference approximation of the Jacobian.

Type:

float

reg_param#

Regularization parameter for the Levenberg-Marquardt algorithm.

Type:

float

init_reg#

Initial value for the regularization parameter.

Type:

float

update_reg#

Strategy for updating the regularization parameter (“res” for residual-based, “grad” for gradient-based).

Type:

str

tolerance#

Convergence tolerance for the optimization process.

Type:

float

verbose_output#

If True, provides detailed output during optimization.

Type:

bool

iter_opt#

The iteration number at which the lowest error was achieved.

Type:

int

lowesterror#

The minimum error encountered during the optimization process.

Type:

float or None

param_opt#

Parameter values corresponding to the minimum error.

Type:

np.ndarray

solution#

The final optimized parameter values.

Type:

np.ndarray

checkbounds(param_delta, i)[source]#

Check if proposed step is in bounds.

Otherwise double regularization and compute new step.

Parameters:
  • param_delta (numpy.ndarray) – Parameter step

  • i (int) – Iteration number

Returns:

stepisoutside (bool) – Flag if proposed step is out of bounds

core_run()[source]#

Core run of Levenberg Marquardt iterator.

get_positions_raw_2pointperturb(x0)[source]#

Get parameter sets for objective function evaluations.

Parameters:

x0 (numpy.ndarray) – Vector with current parameters

Returns:
  • positions (numpy.ndarray) – Parameter batch for function evaluation

  • delta_positions (numpy.ndarray) – Parameter perturbations for finite difference scheme

jacobian_and_residual(x0)[source]#

Evaluate Jacobian and residual of objective function at x0.

For LM we can restrict to “2-point”.

Parameters:

x0 (numpy.ndarray) – Vector with current parameters

Returns:
  • jacobian_matrix (numpy.ndarray) – Jacobian Matrix approximation from finite differences

  • f0 (numpy.ndarray) – Residual of objective function at x0

post_run()[source]#

Post run.

Write solution to the console and optionally create .html plot from result file.

pre_run()[source]#

Initialize run.

Print console output and optionally open .csv file for results and write header.

printstep(i, resnorm, gradnorm, param_delta)[source]#

Print iteration data to console and optionally to file.

Opens file in append mode, so that file is updated frequently.

Parameters:
  • i (int) – Iteration number

  • resnorm (float) – Residual norm

  • gradnorm (float) – Gradient norm

  • param_delta (numpy.ndarray) – Parameter step

queens.iterators.metropolis_hastings_iterator module#

Metropolis-Hastings algorithm.

“The Metropolis-Hastings algorithm is a Markov Chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult.” [1]

References

[1]: https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm

class MetropolisHastingsIterator(model, parameters, global_settings, result_description, proposal_distribution, num_samples, seed, tune=False, tune_interval=100, scale_covariance=1.0, num_burn_in=0, num_chains=1, as_smc_rejuvenation_step=False, temper_type='bayes')[source]#

Bases: Iterator

Iterator based on Metropolis-Hastings algorithm.

The Metropolis-Hastings algorithm can be considered the benchmark Markov Chain Monte Carlo (MCMC) algorithm. It may be used to sample from complex, intractable probability distributions from which direct sampling is difficult or impossible. The implemented version is a random walk Metropolis-Hastings algorithm.
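
A minimal sketch of one random walk Metropolis-Hastings step for a single chain (illustrative, omitting the tuning and tempering features described below):

```python
import numpy as np

def mh_step(x, log_posterior, proposal_cov, rng):
    """One random walk Metropolis-Hastings step; returns the new state and acceptance flag."""
    proposal = rng.multivariate_normal(x, proposal_cov)
    log_accept_prob = log_posterior(proposal) - log_posterior(x)
    if np.log(rng.uniform()) < log_accept_prob:
        return proposal, True
    return x, False
```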

num_chains#

Number of independent chains to run.

Type:

int

num_samples#

Number of samples to draw per chain.

Type:

int

proposal_distribution#

Proposal distribution used for generating candidate samples.

Type:

obj

result_description#

Description of the desired results.

Type:

dict

as_smc_rejuvenation_step#

Indicates whether this iterator is used as a rejuvenation step for an SMC iterator or as the main iterator itself.

Type:

bool

tune#

Whether to tune the scale of the proposal distribution’s covariance.

Type:

bool

scale_covariance#

Scale of covariance matrix.

Type:

float or np.array

num_burn_in#

Number of initial samples to discard as burn-in.

Type:

int

temper#

Tempering function used in the acceptance probability calculation.

Type:

function

gamma#

Tempering parameter.

Type:

float

tune_interval#

Interval for tuning the scale of the covariance matrix.

Type:

int

tot_num_samples#

Total number of samples to be drawn per chain, including burn-in.

Type:

int

chains#

Array storing all the samples drawn across all chains.

Type:

np.array

log_likelihood#

Logarithms of the likelihood of the samples.

Type:

np.array

log_prior#

Logarithms of the prior probabilities of the samples.

Type:

np.array

log_posterior#

Logarithms of the posterior probabilities of the samples.

Type:

np.array

seed#

Seed for random number generation.

Type:

int

accepted#

Number of accepted proposals per chain.

Type:

np.array

accepted_interval#

Number of proposals per chain in current tuning interval.

Type:

np.array

core_run()[source]#

Core run of Metropolis-Hastings iterator.

  1. Burn-in phase

  2. Sampling phase

do_mh_step(step_id)[source]#

Metropolis (Hastings) step.

Parameters:

step_id (int) – Current step index for the MCMC run.

eval_log_likelihood(samples)[source]#

Evaluate natural logarithm of likelihood at samples of chains.

Parameters:

samples (np.array) – Samples for which to evaluate the likelihood.

Returns:

np.array – Logarithms of the likelihood for each sample.

eval_log_prior(samples)[source]#

Evaluate natural logarithm of prior at samples of chains.

Parameters:

samples (np.array) – Samples for which to evaluate the prior.

Returns:

np.array – Logarithms of the prior probabilities for each sample.

post_run()[source]#

Analyze the resulting chain.

pre_run(initial_samples=None, initial_log_like=None, initial_log_prior=None, gamma=1.0, cov_mat=None)[source]#

Draw initial sample.

Parameters:
  • initial_samples (np.array, optional) – Initial samples for the chains.

  • initial_log_like (np.array, optional) – Initial log-likelihood values.

  • initial_log_prior (np.array, optional) – Initial log-prior values.

  • gamma (float, optional) – Tempering parameter for the posterior calculation.

  • cov_mat (np.array, optional) – Covariance matrix for the proposal distribution.

queens.iterators.metropolis_hastings_pymc_iterator module#

Metropolis Hastings algorithm.

The Metropolis-Hastings algorithm is a gradient-free MCMC algorithm. It implements a random walk.

class MetropolisHastingsPyMCIterator(model, parameters, global_settings, num_samples, seed, num_burn_in=100, num_chains=1, discard_tuned_samples=True, result_description=None, summary=True, pymc_sampler_stats=False, as_inference_dict=False, use_queens_prior=False, progressbar=False, covariance=None, tune_interval=100, scaling=1.0)[source]#

Bases: PyMCIterator

Iterator based on MH-MCMC algorithm.

The Metropolis Hastings sampler is a basic MCMC sampler.

covariance#

Covariance for proposal distribution

Type:

np.array

tune_interval#

frequency of tuning

scaling#

Initial scale factor for proposal

Type:

float

Returns:

metropolis_hastings_iterator (obj) – Instance of Metropolis-Hastings Iterator

eval_log_likelihood(samples)[source]#

Evaluate the log-likelihood.

Parameters:

samples (np.array) – Samples to evaluate the likelihood at

Returns:

log_likelihood (np.array) – Log-likelihoods

eval_log_likelihood_grad(samples)[source]#

Evaluate the gradient of the log-likelihood.

eval_log_prior_grad(samples)[source]#

Evaluate the gradient of the log-prior.

init_distribution_wrapper()[source]#

Init the PyMC wrapper for the QUEENS distributions.

init_mcmc_method()[source]#

Init the PyMC MCMC Model.

Returns:

step (obj) – The MCMC Method within the PyMC Model

post_run()[source]#

Additional post run for MH.

queens.iterators.mlmc_iterator module#

Multilevel Monte Carlo Iterator.

class MLMCIterator(models, parameters, global_settings, seed, num_samples, cost_models=None, use_optimal_num_samples=False, num_bootstrap_samples=0)[source]#

Bases: Iterator

Multilevel Monte Carlo Iterator.

The equations were taken from [1]. This iterator can be used in two different modes by setting the truth value of the parameter use_optimal_num_samples. When set to false, the iterator uses the number of samples provided by the user. When set to true, the iterator calculates and uses the optimal number of samples on each estimator. The iterator does this by calculating the optimal ratio of samples between the estimators. The number of samples on the highest-fidelity model is set by the user.

The multilevel Monte Carlo (MLMC) estimator is given by \(\hat{\mu}_\mathrm{MLMC} = \underbrace{\frac{1}{N_{0}} \sum_{i=1}^{N_{0}} f_{0}(x^{(0, i)})}_\textrm{estimator 0} + \sum_{l=1}^{L} \underbrace{\bigg \{ \frac{1}{N_{l}} \sum_{i=1}^{N_ {l}} \Big ( f_{l}(x^{(l, i)}) - f_{l-1}(x^{(l, i)}) \Big ) \bigg \}}_{\textrm{estimator }l}\) where \(f_{l}\) are the models with increasing fidelity as \(l\) increases. \(N_{l}\) are the number of samples on the \(l\)-th estimator and \(x^{(l,i)}\) is the \(i\)-th sample on the \(l\)-th estimator.
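
A minimal sketch of this telescoping-sum estimator (illustrative; the function and argument names are assumptions, not the iterator's interface):

```python
import numpy as np

def mlmc_estimate(models, samples_per_level):
    """MLMC estimate: level-0 mean plus the telescoping difference estimators."""
    estimate = np.mean([models[0](x) for x in samples_per_level[0]])
    for level in range(1, len(models)):
        diffs = [models[level](x) - models[level - 1](x) for x in samples_per_level[level]]
        estimate += np.mean(diffs)
    return estimate
```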

References

[1] M. B. Giles. “Multilevel Monte Carlo methods”. Acta Numerica, 2018.

seed#

Seed for random number generation.

Type:

int

models#

Models of different fidelity to use for evaluation. The model fidelity and model cost increases with increasing index.

Type:

list(Model)

num_samples#

The number of samples to evaluate each estimator with. If use_optimal_num_samples is False (default), the values represent the final number of model evaluations on each estimator. If use_optimal_num_samples is True, the values represent the initial number of model evaluations on each estimator needed to estimate the variance of each estimator, after which the optimal number of samples of each estimator is computed. The i-th entry of the list corresponds to the i-th estimator.

Type:

list(int)

samples#

List of samples for each estimator.

Type:

list(np.array)

output#

Output dict with the following entries:

  • mean (float): MLMC estimator.

  • var (float): Variance of the MLMC estimator.

  • std (float): Standard deviation of the MLMC estimator.

  • result (np.array): Evaluated samples of each estimator.

  • mean_estimators (list): Estimated mean of each estimator.

  • var_estimators (list): Variance of each estimator.

  • num_samples (list): Number of evaluated samples of each estimator.

  • std_bootstrap (float): Bootstrap approximation of the calculated MLMC

    estimator standard deviation. This value is not computed if num_bootstrap_samples is 0.

Type:

dict

cost_estimators#

The relative cost of each estimator. The i-th entry of the list corresponds to the i-th estimator.

Type:

list(float)

use_optimal_num_samples#

Sets the mode of the iterator to either using num_samples as the number of model evaluations on each estimator or using num_samples as initial samples to calculate the optimal number of samples from.

Type:

bool

num_bootstrap_samples#

Number of resamples to use for bootstrap estimate of standard deviation of this estimator. If set to 0, the iterator won’t compute a bootstrap estimate.

Type:

int

core_run()[source]#

Perform multilevel Monte Carlo analysis.

post_run()[source]#

Write results to result file.

pre_run()[source]#

Generate samples for subsequent MLMC analysis.

queens.iterators.monte_carlo_iterator module#

Monte Carlo iterator.

class MonteCarloIterator(model, parameters, global_settings, seed, num_samples, result_description=None)[source]#

Bases: Iterator

Basic Monte Carlo Iterator to enable MC sampling.

seed#

Seed for random number generation.

Type:

int

num_samples#

Number of samples to compute.

Type:

int

result_description#

Description of desired results.

Type:

dict

samples#

Array with all samples.

Type:

np.array

output#

Array with all model outputs.

Type:

np.array

core_run()[source]#

Run Monte Carlo Analysis on model.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples for subsequent MC analysis and update model.

queens.iterators.nuts_iterator module#

No-U-Turn algorithm.

The No-U-Turn sampler is a gradient-based MCMC algorithm. It builds on the Hamiltonian Monte Carlo sampler to sample from arbitrary (high-dimensional) probability distributions.

class NUTSIterator(model, parameters, global_settings, num_samples, seed, num_burn_in=100, num_chains=1, discard_tuned_samples=True, result_description=None, summary=True, pymc_sampler_stats=False, as_inference_dict=False, use_queens_prior=False, progressbar=False, max_treedepth=10, early_max_treedepth=8, step_size=0.25, target_accept=0.8, scaling=None, is_cov=False, init_strategy='auto', advi_iterations=500000)[source]#

Bases: PyMCIterator

Iterator based on the NUTS algorithm.

References

[1]: Hoffman et al. The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. 2011.

The No-U-Turn sampler is a state-of-the-art MCMC sampler. It is based on the Hamiltonian Monte Carlo sampler but eliminates the need for a specified number of integration steps by checking whether the trajectory turns around. The algorithm builds up a tree and selects a random node as the proposal.

max_treedepth#

Maximum depth for the tree-search

Type:

int

early_max_treedepth#

Max tree depth of first 200 tuning samples

Type:

int

step_size#

Step size, scaled by 1/(parameter dimension)**0.25.

Type:

float

target_accept#

Target acceptance rate which should be consistent after burn-in.

Type:

float

scaling#

The inverse mass, or precision matrix

Type:

np.array

is_cov#

Setting if the scaling is a mass or covariance matrix

Type:

boolean

init_strategy#

Strategy to tune mass damping matrix

Type:

str

advi_iterations#

Number of iteration steps of ADVI based init strategies

Type:

int

Returns:

nuts_iterator (obj) – Instance of NUTS Iterator

init_mcmc_method()[source]#

Init the PyMC MCMC Model.

Returns:

step (obj) – The MCMC Method within the PyMC Model

queens.iterators.optimization_iterator module#

Deterministic optimization toolbox.

class OptimizationIterator(model, parameters, global_settings, initial_guess, result_description, verbose_output=False, bounds=Bounds(array([-inf]), array([inf])), constraints=None, max_feval=None, algorithm='L-BFGS-B', jac_method='2-point', jac_rel_step=None, objective_and_jacobian=None)[source]#

Bases: Iterator

Iterator for deterministic optimization problems.

Based on the scipy.optimize optimization toolbox [1].

References

[1]: https://docs.scipy.org/doc/scipy/reference/optimize.html

algorithm#

String that defines the optimization algorithm to be used:

  • CG: Conjugate gradient optimization (unconstrained), using Jacobian

  • BFGS: Broyden–Fletcher–Goldfarb–Shanno algorithm (quasi-Newton) for

    optimization (iterative method for unconstrained nonlinear optimization), using Jacobian

  • L-BFGS-B: Limited memory Broyden–Fletcher–Goldfarb–Shanno algorithm

    with box constraints (for large number of variables)

  • TNC: Truncated Newton method (Hessian free) for nonlinear

    optimization with bounds involving a large number of variables. Jacobian necessary

  • SLSQP: Sequential Least Squares Programming minimization with bounds

    and constraints using Jacobian

  • LSQ: Nonlinear least squares with bounds using Jacobian

  • COBYLA: Constrained Optimization BY Linear Approximation

    (no Jacobian)

  • NELDER-MEAD: Downhill-simplex search method

    (unconstrained, unbounded) without the need for a Jacobian

  • POWELL: Powell’s conjugate direction method (unconstrained) without

    the need for a Jacobian. Minimizes the function by a bidirectional search along each search vector

Type:

str

bounds#

Bounds on variables for Nelder-Mead, L-BFGS-B, TNC, SLSQP, Powell, and trust-constr methods. There are two ways to specify the bounds:

  1. Instance of Bounds class.

  2. A sequence with 2 elements. The first element corresponds to a sequence of lower bounds and the second element to a sequence of upper bounds. The length of each of the two subsequences must be equal to the number of variables.

Type:

sequence, Bounds

cons#

Nonlinear constraints for the optimization. Only for COBYLA, SLSQP and trust-constr (see SciPy documentation for details)

Type:

np.array

initial_guess#

Initial guess, i.e. start point of optimization.

Type:

np.array

jac_method#

Method to calculate a finite difference based approximation of the Jacobian matrix:

  • ‘2-point’: a one-sided scheme by definition

  • ‘3-point’: more exact but needs twice as many function evaluations

Type:

str

jac_rel_step#

Relative step size to use for finite difference approximation of Jacobian matrix. If None (default) then it is selected automatically. (see SciPy documentation for details)

Type:

array_like

max_feval#

Maximum number of function evaluations.

Type:

int

result_description#

Description of desired post-processing.

Type:

dict

verbose_output#

Integer encoding which kind of verbose information should be printed by the optimizers.

Type:

int

precalculated_positions#

Dictionary containing precalculated positions and corresponding model responses.

Type:

dict

solution#

Solution obtained from the optimization process.

Type:

np.array

objective_and_jacobian#

If true, the Jacobian is evaluated every time the objective is evaluated. This leads to improved batching, but can lead to unnecessary evaluations of the Jacobian during line search. This option is only available for gradient-based methods. Default is True for 'LSQ' and False for the remaining gradient methods.

Type:

bool

Returns:

OptimizationIterator (obj) – Instance of the OptimizationIterator

check_precalculated(position)[source]#

Check if the model was already evaluated at defined position.

Parameters:

position (np.ndarray) – Position at which the model should be evaluated

Returns:

np.ndarray – Precalculated model response or None

core_run()[source]#

Core run of Optimization iterator.

eval_model(positions)[source]#

Evaluate model at defined positions.

Parameters:

positions (np.ndarray) – Positions at which the model is evaluated

Returns:

f_batch (np.ndarray) – Model response

evaluate_fd_positions(x0)[source]#

Evaluate objective function at finite difference positions.

Parameters:

x0 (np.array) – Position at which the Jacobian is computed.

Returns:
  • f0 (ndarray) – Objective function value at x0

  • f_perturbed (np.array) – Perturbed function values

  • delta_positions (np.array) – Delta between positions used to approximate Jacobian

  • use_one_sided (np.array) – Whether to switch to one-sided scheme due to closeness to bounds. Informative only for 3-point method.

jacobian(x0)[source]#

Evaluate Jacobian of objective function at x0.

Parameters:

x0 (np.array) – position to evaluate Jacobian at

Returns:

jacobian (np.array) – Jacobian matrix evaluated at x0

objective(x0)[source]#

Evaluate objective function at x0.

Parameters:

x0 (np.array) – position to evaluate objective at

Returns:

f0 (float) – Objective function evaluated at x0

post_run()[source]#

Analyze the resulting optimum.

pre_run()[source]#

Pre run of Optimization iterator.

queens.iterators.points_iterator module#

Iterator to run a model on predefined input points.

class PointsIterator(model, parameters, global_settings, points, result_description)[source]#

Bases: Iterator

Iterator at given input points.

result_description#

Settings for storing

Type:

dict

output#

Array with all model outputs

Type:

np.array

points#

Dictionary with name and samples

Type:

dict

points_array#

Array with all samples

Type:

np.ndarray

core_run()[source]#

Run model.

post_run()[source]#

Write results.

pre_run()[source]#

Prerun.

queens.iterators.polynomial_chaos_iterator module#

Polynomial chaos iterator.

A wrapper around the chaospy library.
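
As rough orientation, a pseudo-spectral polynomial chaos expansion with chaospy typically looks like the following hand-written sketch (not this iterator's code; the exact chaospy calls may differ between library versions):

```python
import chaospy as cp
import numpy as np

# Joint input distribution and quadrature-based collocation points
distribution = cp.J(cp.Normal(0, 1), cp.Uniform(-1, 1))
nodes, weights = cp.generate_quadrature(4, distribution, rule="gaussian")

# Evaluate the forward model at the collocation points (toy model here)
evaluations = np.array([np.sum(node**2) for node in nodes.T])

# Fit the polynomial chaos expansion and extract statistics
expansion = cp.generate_expansion(3, distribution)
approximation = cp.fit_quadrature(expansion, nodes, weights, evaluations)
mean, variance = cp.E(approximation, distribution), cp.Var(approximation, distribution)
```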

class PolynomialChaosIterator(model, parameters, global_settings, num_collocation_points, polynomial_order, approach, result_description, sparse=None, sampling_rule=None, seed=42)[source]#

Bases: Iterator

Collocation-based polynomial chaos iterator.

seed#

Seed for random number generation.

Type:

int

num_collocation_points#

Number of samples to compute.

Type:

int

sampling_rule#

Rule according to which samples are drawn.

Type:

dict

polynomial_order#

Order of polynomial expansion.

Type:

int

result_description#

Description of desired results.

Type:

dict

sparse#

For the pseudo-spectral approach: if True, sparse collocation points are used.

Type:

bool

polynomial_chaos_approach#

Approach used for the polynomial chaos expansion.

Type:

str

distribution#

Joint input distribution.

Type:

cp.distribution

samples#

Sample points used in the computation.

result_dict#

Dictionary storing results including expansion, mean, and covariance.

core_run()[source]#

Core run for the polynomial chaos iterator.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Initialize run.

create_chaospy_distribution(distribution)[source]#

Create chaospy distribution object from queens distribution.

Parameters:

distribution (obj) – Queens distribution object

Returns:

distribution – Distribution object in chaospy format

create_chaospy_joint_distribution(parameters)[source]#

Get random variables in chaospy distribution format.

Parameters:

parameters (obj) – Parameters object

Returns:

chaospy distribution

queens.iterators.pymc_iterator module#

PyMC Iterators base class.

class PyMCIterator(model, parameters, global_settings, num_burn_in, num_chains, num_samples, discard_tuned_samples, result_description, summary, pymc_sampler_stats, as_inference_dict, seed, use_queens_prior, progressbar)[source]#

Bases: Iterator

Iterator based on PyMC.

References

[1]: Salvatier et al. “Probabilistic programming in Python using PyMC3”. PeerJ Computer Science. 2016.

result_description#

Settings for storing and visualizing the results

Type:

dict

discard_tuned_samples#

Setting to discard the samples of the burn-in period.

Type:

boolean

num_chains#

Number of chains to sample

Type:

int

num_burn_in#

Number of burn-in steps

Type:

int

num_samples#

Number of samples to generate per chain, excluding burn-in period

Type:

int

chains#

Array with all samples

Type:

np.array

seed#

Seed for the random number generators

Type:

int

pymc_model#

PyMC Model as inference environment

Type:

obj

step#

PyMC MCMC method to be used for sampling

Type:

obj

use_queens_prior#

Setting for using the PyMC priors or the QUEENS prior functions

Type:

boolean

progressbar#

Setting for printing progress bar while sampling

Type:

boolean

log_prior#

Function to evaluate the QUEENS joint log-prior

Type:

fun

log_like#

Function to evaluate QUEENS log-likelihood

Type:

fun

results#

PyMC inference object with sampling results

Type:

obj

results_dict#

PyMC inference results as dict

Type:

dict

summary#

Print sampler summary

Type:

bool

pymc_sampler_stats#

Compute additional sampler statistics

Type:

bool

as_inference_dict#

Return inference_data object instead of trace object

Type:

bool

initvals#

Dict with distribution names and starting point of chains

Type:

dict

model_fwd_evals#

Number of model forward calls

Type:

int

model_grad_evals#

Number of model gradient calls

Type:

int

buffered_samples#

Most recently evaluated samples of the likelihood function

Type:

np.array

buffered_gradients#

Gradients of the most recently evaluated samples

Type:

np.array

buffered_likelihoods#

Most recently evaluated likelihood values

Type:

np.array

core_run()[source]#

Core run of PyMC iterator.

eval_log_likelihood(samples)[source]#

Evaluate the log-likelihood.

Parameters:

samples (np.array) – Samples to evaluate the likelihood at

Returns:

log_likelihood (np.array) – log-likelihoods

eval_log_likelihood_grad(samples)[source]#

Evaluate the gradient of the log-likelihood.

Parameters:

samples (np.array) – Samples to evaluate the gradient at

Returns:

gradient (np.array) – Gradients of the log likelihood

eval_log_prior(samples)[source]#

Evaluate natural logarithm of prior at samples of chains.

Parameters:

samples (np.array) – Samples to evaluate the prior at

Returns:

log_prior (np.array) – Prior log-pdf

eval_log_prior_grad(samples)[source]#

Evaluate the gradient of the log-prior.

Parameters:

samples (np.array) – Samples to evaluate the gradient at

Returns:

log_prior_grad (np.array) – Gradients of the log prior

init_distribution_wrapper()[source]#

Init the PyMC wrapper for the QUEENS distributions.

abstract init_mcmc_method()[source]#

Init the PyMC MCMC Model.

post_run()[source]#

Post-processing of results.

pre_run()[source]#

Prepare MCMC run.

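The sampling settings of this iterator map closely onto the arguments of pm.sample. A minimal, stand-alone PyMC sketch with a toy Gaussian model (not the QUEENS likelihood wrapper) illustrates the correspondence:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
observed = rng.normal(loc=1.0, scale=0.5, size=20)  # toy data

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)                # prior
    pm.Normal("obs", mu=mu, sigma=0.5, observed=observed)  # likelihood

    # num_samples, num_burn_in, num_chains, discard_tuned_samples, seed and
    # progressbar of the iterator roughly correspond to the arguments below.
    idata = pm.sample(
        draws=1000,
        tune=500,
        chains=2,
        discard_tuned_samples=True,
        random_seed=42,
        progressbar=False,
    )

print(idata.posterior["mu"].mean())
```
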
queens.iterators.reparameteriztion_based_variational_inference module#

Reparameterization trick based variational inference.

class RPVIIterator(model, parameters, global_settings, result_description, variational_distribution, n_samples_per_iter, random_seed, max_feval, stochastic_optimizer, variational_transformation=None, variational_parameter_initialization=None, natural_gradient=True, FIM_dampening=True, decay_start_iteration=50, dampening_coefficient=0.01, FIM_dampening_lower_bound=1e-08, score_function_bool=False, verbose_every_n_iter=10)[source]#

Bases: VariationalInferenceIterator

Reparameterization based variational inference (RPVI).

Iterator for Bayesian inverse problems. This variational inference approach requires model gradients/Jacobians w.r.t. the parameters/the parameterization of the inverse problem. The latter can be provided by:

  • A finite differences approximation of the gradient/Jacobian, which requires in the simplest case d+1 additional solver calls

  • An externally provided gradient/Jacobian that was, e.g. calculated via adjoint methods or automated differentiation

The current implementation does not support the importance sampling of the MC gradient.

The mathematical details of the algorithm can be found in [1], [2], [3].

References

[1]: Kingma, D. P., Salimans, T., & Welling, M. (2015). Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, 28, 2575–2583.

[2]: Roeder, G., Wu, Y., & Duvenaud, D. (2017). Sticking the landing: Simple, lower-variance gradient estimators for variational inference. arXiv preprint arXiv:1703.09194.

[3]: Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518), 859–877.

score_function_bool#

Boolean flag to decide whether the score-function term should be considered in the ELBO gradient. If True, the score-function term is included.

Type:

bool

Returns:

rpvi_obj (obj) – Instance of the RPVIIterator

core_run()[source]#

Core run for variational inference with reparameterization trick.

evaluate_and_gradient(sample_batch)[source]#

Calculate log-likelihood of observation data and its gradient.

Evaluation of the likelihood model and its gradient for all inputs of the sample batch will trigger the actual forward simulation (can be executed in parallel as batch-sequential procedure).

Parameters:

sample_batch (np.array) – Sample-batch with samples row-wise

Returns:
  • log_likelihood (np.array) – Vector of log-likelihood values for different input samples.

  • grad_log_likelihood_batch (np.array) – Row-wise gradients of log-Likelihood w.r.t. latent input samples.

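To illustrate the reparameterization trick itself, independent of the QUEENS implementation, the following NumPy sketch estimates the ELBO gradient for a one-dimensional Gaussian variational distribution and a hypothetical log-joint; all names and the target density are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(x):
    """Hypothetical unnormalized log-posterior (standard normal target)."""
    return -0.5 * x**2

def grad_log_joint(x):
    """Gradient of the log-joint w.r.t. the latent variable x."""
    return -x

# Variational distribution q(x) = N(mu, sigma^2), with sigma parameterized as exp(log_sigma).
mu, log_sigma = 0.5, np.log(2.0)
n_samples = 100

# Reparameterization: x = mu + sigma * eps with eps ~ N(0, 1), so the gradient w.r.t.
# (mu, log_sigma) flows through the deterministic transformation of eps.
eps = rng.standard_normal(n_samples)
sigma = np.exp(log_sigma)
x = mu + sigma * eps

# Pathwise (reparameterization) Monte-Carlo estimate of the ELBO gradient,
# ignoring the score-function term (cf. score_function_bool):
# dELBO/dmu        ~ mean( dlogp/dx * dx/dmu ),        with dx/dmu = 1
# dELBO/dlog_sigma ~ mean( dlogp/dx * dx/dlog_sigma ) + 1 (entropy term of the Gaussian)
grad_mu = np.mean(grad_log_joint(x))
grad_log_sigma = np.mean(grad_log_joint(x) * sigma * eps) + 1.0
print(grad_mu, grad_log_sigma)
```
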
queens.iterators.sequential_monte_carlo_chopin module#

Sequential Monte Carlo implementation using particles package.

class ParticlesChopinDistribution(queens_distribution)[source]#

Bases: ProbDist

Distribution interfacing QUEENS distributions to particles.

property dim#

Dimension of the distribution.

Returns:

int – dimension of the RV

logpdf(x)[source]#

Logpdf of the distribution.

Parameters:

x (np.ndarray) – Input locations

Returns:

np.ndarray – logpdf values

pdf(x)[source]#

Pdf of the distribution.

Parameters:

x (np.ndarray) – Input locations

Returns:

np.ndarray – pdf values

ppf(u)[source]#

Ppf of the distribution.

Parameters:

u (np.ndarray) – Input locations

Returns:

np.ndarray – ppf values

rvs(size=None)[source]#

Draw samples of the distribution.

Here, size is essentially the number of samples to draw.

Parameters:

size (np.ndarray, optional) – Shape of the outputs. Defaults to None.

Returns:

np.ndarray – samples of the distribution

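A rough sketch of how such an interfacing distribution can be written against the particles library, here delegating to scipy.stats instead of a QUEENS distribution; the base-class import path particles.distributions.ProbDist is assumed, and the class is a toy example rather than the QUEENS implementation.

```python
import numpy as np
from scipy import stats
from particles import distributions as dists

class ScipyNormalDist(dists.ProbDist):
    """Toy example: expose a scipy normal distribution to the particles package."""

    def __init__(self, loc=0.0, scale=1.0):
        self.scipy_dist = stats.norm(loc=loc, scale=scale)

    @property
    def dim(self):
        return 1  # dimension of the random variable

    def logpdf(self, x):
        return self.scipy_dist.logpdf(x)

    def pdf(self, x):
        return self.scipy_dist.pdf(x)

    def ppf(self, u):
        return self.scipy_dist.ppf(u)

    def rvs(self, size=None):
        return self.scipy_dist.rvs(size=size)

prior = ScipyNormalDist(loc=0.0, scale=2.0)
print(prior.logpdf(np.array([0.0, 1.0])))
```
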
class SequentialMonteCarloChopinIterator(model, parameters, global_settings, result_description, num_particles, max_feval, seed, resampling_threshold, resampling_method, feynman_kac_model, num_rejuvenation_steps, waste_free)[source]#

Bases: Iterator

Sequential Monte Carlo algorithm from Chopin et al.

Sequential Monte Carlo algorithm based on the book [1] (especially chapter 17) and the particles library (nchopin/particles).

References

[1]: Chopin, N. and Papaspiliopoulos, O. (2020), An Introduction to Sequential Monte Carlo, Springer. doi: 10.1007/978-3-030-47845-2.

result_description#

Settings for storing and visualizing the results.

Type:

dict

seed#

Seed for random number generator.

Type:

int

num_particles#

Number of particles.

Type:

int

num_variables#

Number of primary variables.

Type:

int

n_sims#

Number of model calls.

Type:

int

max_feval#

Maximum number of model calls.

Type:

int

prior#

Particles Prior object.

Type:

object

smc_obj#

Particles SMC object.

Type:

object

resampling_threshold#

Ratio of ESS to particle number at which to resample.

Type:

float

resampling_method#

Resampling method implemented in particles.

Type:

str

feynman_kac_model#

Feynman-Kac model for the SMC object.

Type:

str

num_rejuvenation_steps#

Number of rejuvenation steps (e.g. MCMC steps).

Type:

int

waste_free#

If True, all intermediate Markov steps are kept.

Type:

bool

core_run()[source]#

Core run of Sequential Monte Carlo iterator.

The particles library is generator based. Hence, one step of the SMC algorithm is done using next(self.smc). Since next() is called inside the for loop, we only need to add some logging and check whether the maximum number of model runs has been exceeded (see the sketch below).

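A stand-alone sketch of this generator-based usage of the particles library on a toy problem (not the QUEENS setup); the StaticModel, AdaptiveTempering, and SMC constructor arguments shown here are assumptions about the particles API and may differ between library versions.

```python
import numpy as np
from scipy import stats
import particles
from particles import distributions as dists
from particles import smc_samplers as ssp

# Toy Bayesian problem: infer the mean "mu" of normally distributed observations.
rng = np.random.default_rng(1)
data = rng.normal(loc=1.0, scale=0.5, size=30)

class ToyModel(ssp.StaticModel):
    def logpyt(self, theta, t):
        # Log-likelihood contribution of the t-th observation given the particles theta.
        return stats.norm.logpdf(self.data[t], loc=theta["mu"], scale=0.5)

prior = dists.StructDist({"mu": dists.Normal(loc=0.0, scale=2.0)})
fk_model = ssp.AdaptiveTempering(model=ToyModel(data=data, prior=prior))
smc = particles.SMC(fk=fk_model, N=500, ESSrmin=0.5, verbose=False)

# Generator-based stepping, as described above: one SMC step per next() call, so
# logging and a budget check can be inserted between steps.
max_feval, n_sims = 10_000, 0
while n_sims < max_feval:
    try:
        next(smc)           # one step of the SMC algorithm
    except StopIteration:   # sampler finished (tempering exponent reached 1)
        break
    n_sims += smc.N         # rough bookkeeping of likelihood evaluations
```
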
eval_log_likelihood(samples)[source]#

Evaluate natural logarithm of likelihood at sample.

Parameters:

samples (np.array) – Samples/particles of the SMC.

Returns:

log_likelihood (np.array) – Value of log-likelihood for samples.

initialize_feynman_kac(static_model)[source]#

Initialize the Feynman-Kac model for the SMC approach.

Parameters:

static_model (StaticModel) – Static model from the particles library

Returns:

feynman_kac_model (FKSMCsampler) – Feynman-Kac model for the SMC object

post_run()[source]#

Analyze the resulting importance sample.

pre_run()[source]#

Draw initial sample.

queens.iterators.sequential_monte_carlo_iterator module#

Sequential Monte Carlo algorithm.

References

[1]: Del Moral, P., Doucet, A. and Jasra, A. (2007) ‘Sequential Monte Carlo for Bayesian computation’, in Bernardo, J. M. et al. (eds) Bayesian Statistics 8. Oxford University Press, pp. 1–34.

[2]: Koutsourelakis, P. S. (2009) ‘A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters’, Journal of Computational Physics, 228(17), pp. 6184–6211. doi: 10.1016/j.jcp.2009.05.016.

[3]: Minson, S. E., Simons, M. and Beck, J. L. (2013) ‘Bayesian inversion for finite fault earthquake source models I-theory and algorithm’, Geophysical Journal International, 194(3), pp. 1701–1726. doi: 10.1093/gji/ggt180.

[4]: Del Moral, P., Doucet, A. and Jasra, A. (2006) ‘Sequential Monte Carlo samplers’, Journal of the Royal Statistical Society. Series B: Statistical Methodology. Blackwell Publishing Ltd, 68(3), pp. 411–436. doi: 10.1111/j.1467-9868.2006.00553.x.

class SequentialMonteCarloIterator(model, parameters, global_settings, num_particles, result_description, seed, temper_type, mcmc_proposal_distribution, num_rejuvenation_steps, plot_trace_every=0)[source]#

Bases: Iterator

Iterator based on Sequential Monte Carlo algorithm.

The Sequential Monte Carlo algorithm is a very general algorithm for sampling from complex, intractable probability distributions from which direct sampling is difficult or impossible. The implemented version is based on [1, 2, 3, 4].

plot_trace_every#

Plot the current trace every plot_trace_every-th iteration. Default: 0 (do not plot the trace).

Type:

int

result_description#

Description of desired results.

Type:

dict

seed#

Seed for random number generator.

Type:

int

mcmc_kernel#

Forward kernel for the rejuvenation steps.

Type:

MetropolisHastingsIterator

num_particles#

Number of particles.

Type:

int

num_variables#

Number of primary variables.

Type:

int

particles#

Array holding the current particles.

Type:

ndarray

weights#

Array holding the current weights.

Type:

ndarray

log_likelihood#

log of pdf of likelihood at particles.

Type:

ndarray

log_prior#

log of pdf of prior at particles.

Type:

ndarray

log_posterior#

log of pdf of posterior at particles.

Type:

ndarray

ess#

List storing the values of the effective sample size.

Type:

list

ess_cur#

Current effective sample size.

Type:

float

temper#

Tempering function that defines the transition to the goal distribution.

Type:

function

gamma_cur#

Current tempering parameter, sometimes called (reciprocal) temperature.

Type:

float

gammas#

List to store values of the tempering parameter.

Type:

list

a#

Parameter for the scaling of the covariance matrix of the proposal distribution of the MCMC kernel.

Type:

float

b#

Parameter for the scaling of the covariance matrix of the proposal distribution of the MCMC kernel.

Type:

float

calc_new_ess(gamma_new, gamma_old)[source]#

Calculate predicted Effective Sample Size at gamma_new.

Parameters:
  • gamma_new (float) – New gamma value.

  • gamma_old (float) – Previous gamma value.

Returns:

ess (float) – Effective sample size for the new gamma value.

calc_new_gamma(gamma_cur)[source]#

Calculate the new gamma value.

Based on the current gamma, calculate the new gamma such that the ESS at the new gamma equals zeta times the current ESS. This ensures only a small reduction of the ESS.

Parameters:

gamma_cur (float) – Current gamma value.

Returns:

gamma_new (float) – Updated gamma value.

calc_new_weights(gamma_new, gamma_old)[source]#

Calculate the weights at new gamma value.

This is a core equation of the SMC algorithm. See for example:

  • Eq.(22) with Eq.(14) in [1]

  • Table 1: (2) in [2]

  • Eq.(31) with Eq.(11) in [4]

We use the exp-log trick here to avoid numerical problems and normalize the weights in this method.

Parameters:
  • gamma_new (float) – New value of the gamma blending parameter

  • gamma_old (float) – Old value of the gamma blending parameter

Returns:

weights_new (np.array) – New and normalized weights

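A small NumPy sketch of this reweighting step and the associated effective sample size, using the exp-log trick via scipy's logsumexp; this is a stand-alone illustration, not the QUEENS implementation, and the function names are only chosen to mirror the methods above.

```python
import numpy as np
from scipy.special import logsumexp

def calc_new_weights(log_weights_old, log_likelihood, gamma_new, gamma_old):
    """Reweight particles when the tempering parameter moves from gamma_old to gamma_new."""
    # w_new is proportional to w_old * L(x)**(gamma_new - gamma_old),
    # computed in log space for numerical stability.
    log_weights_new = log_weights_old + (gamma_new - gamma_old) * log_likelihood
    log_weights_new -= logsumexp(log_weights_new)  # normalize via the exp-log trick
    return np.exp(log_weights_new)

def effective_sample_size(weights):
    """ESS of normalized weights: 1 / sum(w_i^2)."""
    return 1.0 / np.sum(weights**2)

# Example: 1000 particles with synthetic log-likelihood values. calc_new_gamma would
# search (e.g. by bisection) for the gamma_new at which this predicted ESS equals
# zeta times the current ESS.
rng = np.random.default_rng(0)
log_like = rng.normal(size=1000)
log_w_old = np.full(1000, -np.log(1000))  # uniform weights in log space
weights = calc_new_weights(log_w_old, log_like, gamma_new=0.2, gamma_old=0.0)
print(effective_sample_size(weights))
```
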
core_run()[source]#

Core run of Sequential Monte Carlo iterator.

draw_trace(step)[source]#

Plot the trace of the current particle approximation.

Parameters:

step (int) – Current step index

eval_log_likelihood(sample_batch)[source]#

Evaluate natural logarithm of likelihood at sample batch.

Parameters:

sample_batch (np.array) – Batch of samples

Returns:

log_likelihood (np.array) – Logarithm of likelihood for the sample batch.

eval_log_prior(sample_batch)[source]#

Evaluate natural logarithm of prior at sample.

Parameters:

sample_batch (np.array) – Array of input samples

Returns:

log_prior_array (np.array) – Array of log-prior values for input samples

post_run()[source]#

Analyze the resulting importance sample.

pre_run()[source]#

Draw initial sample.

resample()[source]#

Resample particle distribution based on their weights.

Resampling reduces the variance of the particle approximation by eliminating particles with small weights and duplicating particles with large weights (see 2.2.1 in [2]).

Returns:

Tuple of updated particles, resampled weights, log-likelihood, and log-prior.

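For illustration, a minimal multinomial resampling sketch; the scheme actually implemented may differ (e.g. systematic resampling), and the function signature is only chosen to mirror the returned tuple described above.

```python
import numpy as np

def resample(particles, weights, log_likelihood, log_prior, rng=None):
    """Multinomial resampling; weights are assumed to be normalized."""
    rng = rng or np.random.default_rng()
    num_particles = weights.shape[0]
    # Draw indices with probability equal to the weights: particles with large weights
    # are duplicated, particles with small weights tend to be dropped.
    idx = rng.choice(num_particles, size=num_particles, p=weights)
    uniform_weights = np.full(num_particles, 1.0 / num_particles)
    return particles[idx], uniform_weights, log_likelihood[idx], log_prior[idx]
```
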
update_ess(resampled=False)[source]#

Update effective sample size (ESS) and store current value.

Based on the current weights, calculate the corresponding ESS and store the new value. If resampling has occurred, the weights were reset in the current time step, so the ESS has to be reset as well.

Parameters:

resampled (bool) – Indicates whether the current weights are based on a resampling step

update_gamma(gamma_new)[source]#

Update the current gamma value and store old value.

Parameters:

gamma_new (float) – New gamma value to update.

update_weights(weights_new)[source]#

Update the weights to their new values.

Parameters:

weights_new (np.array) – New weights for the particles.

queens.iterators.sobol_index_gp_uncertainty_iterator module#

Iterator for Sobol indices with GP uncertainty.

class SobolIndexGPUncertaintyIterator(model, parameters, global_settings, result_description, num_procs=2, second_order=False, third_order=False, **additional_options)[source]#

Bases: Iterator

Iterator for Sobol indices with metamodel uncertainty.

This iterator estimates first- and total-order Sobol indices based on Monte-Carlo integration and the use of a Gaussian process as surrogate model. Additionally, uncertainty estimates for the Sobol index estimates are calculated: the total uncertainty as well as the separate uncertainties due to Monte-Carlo integration and due to the use of the Gaussian process as a surrogate model. Second-order indices can optionally be estimated.

Alternatively, a specific third-order Sobol index can be estimated for one combination of three parameters (specified as third_order_parameters in the input file).

The approach is based on:

Le Gratiet, Loic, Claire Cannamela, and Bertrand Iooss. ‘A Bayesian Approach for Global Sensitivity Analysis of (Multifidelity) Computer Codes’. SIAM/ASA Journal on Uncertainty Quantification 2, no. 1 (1 January 2014): 336–63. https://doi.org/10.1137/130926869.

Further details can be found in:

Wirthl, B., Brandstaeter, S., Nitzler, J., Schrefler, B. A., & Wall, W. A. (2023). Global sensitivity analysis based on Gaussian-process metamodelling for complex biomechanical problems. International Journal for Numerical Methods in Biomedical Engineering, 39(3), e3675. https://doi.org/10.1002/cnm.3675

result_description#

Dictionary with desired result description.

Type:

dict

num_procs#

Number of processors.

Type:

int

sampler#

Sampler object.

Type:

Sampler object

predictor#

Metamodel predictor object.

Type:

Predictor object

index_estimator#

Estimator object.

Type:

SobolIndexEstimator object

statistics#

List of statistics objects.

Type:

list

calculate_second_order#

True if second-order indices are calculated.

Type:

bool

calculate_third_order#

True if only third-order indices are calculated.

Type:

bool

results#

Dictionary for results.

Type:

dict

calculate_index()[source]#

Calculate Sobol indices.

Run sensitivity analysis based on:

Le Gratiet, Loic, Claire Cannamela, and Bertrand Iooss. ‘A Bayesian Approach for Global Sensitivity Analysis of (Multifidelity) Computer Codes’. SIAM/ASA Journal on Uncertainty Quantification 2, no. 1 (1 January 2014): 336–63. https://doi.org/10.1137/130926869.

core_run()[source]#

Core-run.

evaluate_statistics(estimates)[source]#

Evaluate statistics of Sobol index estimates.

Parameters:

estimates (dict) – Dictionary of Sobol index estimates of different order

post_run()[source]#

Post-run.

pre_run()[source]#

Pre-run.

queens.iterators.sobol_index_iterator module#

Estimate Sobol indices.

class SobolIndexIterator(model, parameters, global_settings, seed, num_samples, calc_second_order, num_bootstrap_samples, confidence_level, result_description, skip_values=None)[source]#

Bases: Iterator

Sobol Index Iterator.

This class essentially provides a wrapper around the SALib library.

seed#

Seed for random number generator.

Type:

int

num_samples#

Number of samples.

Type:

int

calc_second_order#

Whether to calculate second-order sensitivities.

Type:

bool

skip_values#

Number of points in the Sobol’ sequence to skip, ideally a power of 2 (default: 1024).

Type:

int or None

num_bootstrap_samples#

Number of bootstrap samples for confidence intervals.

Type:

int

confidence_level#

Confidence level for the intervals.

Type:

float

result_description#

Description of the desired results.

Type:

dict

samples#

Samples used for analysis.

Type:

np.array

output#

Model outputs corresponding to samples.

Type:

dict

salib_problem#

Problem definition for SALib.

Type:

dict

num_params#

Number of parameters.

Type:

int

parameter_names#

List of parameter names.

Type:

list

sensitivity_indices#

Sensitivity indices from Sobol analysis.

Type:

dict

core_run()[source]#

Run analysis on the model.

get_all_samples()[source]#

Return all samples.

plot_results(results)[source]#

Create bar graph of first order sensitivity indices.

Parameters:

results (dict) – Dictionary with Sobol indices and confidence intervals

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples for subsequent analysis and update model.

print_results(results)[source]#

Print results.

Parameters:

results (dict) – Dictionary with Sobol indices and confidence intervals

process_results()[source]#

Write all results to a self-contained dictionary.

Returns:

results (dict) – Dictionary with Sobol indices and confidence intervals

extract_parameters_of_parameter_distributions(parameters)[source]#

Extract the parameters of the parameter distributions.

Parameters:

parameters (Parameters) – QUEENS Parameters object containing the metadata

Returns:
  • distribution_types (list) – list with distribution types of the parameter distributions

  • distribution_parameters (list) – list with parameters of the parameter distributions

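Roughly, the SALib workflow wrapped by this iterator looks as follows; the two-parameter model is hypothetical, and exact SALib module paths and defaults may differ between library versions.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Problem definition (counterpart of salib_problem).
problem = {
    "num_vars": 2,
    "names": ["x1", "x2"],
    "bounds": [[0.0, 1.0], [0.0, 1.0]],
}

# Saltelli sampling: N * (2 * num_vars + 2) model evaluations when second-order
# indices are requested.
param_values = saltelli.sample(problem, 1024, calc_second_order=True)

# Hypothetical model evaluated at all sample points.
outputs = param_values[:, 0] ** 2 + 0.3 * param_values[:, 1]

# Sobol analysis with bootstrap confidence intervals.
sensitivity_indices = sobol.analyze(
    problem,
    outputs,
    calc_second_order=True,
    num_resamples=100,   # cf. num_bootstrap_samples
    conf_level=0.95,     # cf. confidence_level
)
print(sensitivity_indices["S1"], sensitivity_indices["ST"])
```
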
queens.iterators.sobol_sequence_iterator module#

Sobol sequence iterator.

class SobolSequenceIterator(model, parameters, global_settings, seed, number_of_samples, result_description, randomize=False)[source]#

Bases: Iterator

Sobol sequence in multiple dimensions.

seed#

Seed for the scrambling. If specified, the random number generator is seeded with this value; otherwise, a random seed is used.

Type:

int

number_of_samples#

Number of samples to compute.

Type:

int

randomize#

If True, scrambled Sobol sequences are produced. Scrambling can improve the uniformity properties of the sequence.

Type:

bool

result_description#

Description of desired results.

Type:

dict

samples#

Array with all samples.

Type:

np.array

output#

Array with all model outputs.

Type:

np.array

core_run()[source]#

Run Sobol sequence analysis on model.

post_run()[source]#

Analyze the results.

pre_run()[source]#

Generate samples for subsequent Sobol sequence analysis.

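For reference, SciPy's quasi-Monte Carlo module exposes the same notions of scrambling and seeding; a minimal sketch (the iterator's actual backend may differ, and the parameter bounds are hypothetical):

```python
from scipy.stats import qmc

# Scrambled Sobol sequence in 3 dimensions with a fixed seed (randomize=True, seed=42).
sampler = qmc.Sobol(d=3, scramble=True, seed=42)
samples = sampler.random(n=128)  # 128 quasi-random points in [0, 1)^3

# Scale the unit-hypercube samples to hypothetical parameter bounds.
scaled = qmc.scale(samples, l_bounds=[0.0, -1.0, 0.0], u_bounds=[2.0, 1.0, 5.0])
print(scaled.shape)
```
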
queens.iterators.variational_inference module#

Base class for variational inference iterator.

class VariationalInferenceIterator(model, parameters, global_settings, result_description, variational_distribution, variational_params_initialization, n_samples_per_iter, variational_transformation, random_seed, max_feval, natural_gradient, FIM_dampening, decay_start_iter, dampening_coefficient, FIM_dampening_lower_bound, stochastic_optimizer, iteration_data, verbose_every_n_iter=10)[source]#

Bases: Iterator

Stochastic variational inference iterator.

References

[1]: Mohamed et al. “Monte Carlo Gradient Estimation in Machine Learning”. Journal of Machine Learning Research. 21(132):1−62, 2020.

[2]: Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. (2017). Variational Inference: A Review for Statisticians. Journal of the American Statistical Association, 112(518), 859–877. https://doi.org/10.1080/01621459.2017.1285773

[3]: Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1), 1303–1347.

result_description#

Settings for storing and visualizing the results.

Type:

dict

variational_params_initialization_approach#

Flag to decide how to initialize the variational parameters.

Type:

str

n_samples_per_iter#

Batch size per iteration (number of simulations per iteration to estimate the involved expectations).

Type:

int

variational_transformation#

String encoding the transformation that will be applied to the variational density.

Type:

str

natural_gradient_bool#

True if natural gradient should be used.

Type:

boolean

fim_decay_start_iter#

Iteration at which the FIM dampening is started.

Type:

float

fim_dampening_coefficient#

Initial nugget term value for the FIM dampening.

Type:

float

fim_dampening_lower_bound#

Lower bound on the FIM dampening coefficient.

Type:

float

fim_dampening_bool#

True if FIM dampening should be used.

Type:

boolean

random_seed#

Seed for the random number generators.

Type:

int

max_feval#

Maximum number of simulation runs for this analysis.

Type:

int

num_parameters#

Actual number of model input parameters that should be calibrated.

Type:

int

stochastic_optimizer#

QUEENS stochastic optimizer object.

Type:

obj

variational_distribution#

Variational distribution object.

Type:

VariationalDistribution

n_sims#

Number of probabilistic model calls.

Type:

int

variational_params#

Row vector containing the variational parameters.

Type:

np.array

elbo#

Evidence lower bound.

nan_in_gradient_counter#

Counter for how many consecutive times NaNs appeared in the gradient estimate.

Type:

int

iteration_data#

Object to store iteration data if desired.

Type:

CollectionObject

verbose_every_n_iter#

Number of iterations between printing, plotting, and saving

Type:

int

core_run()[source]#

Core run for stochastic variational inference.

get_gradient_function()[source]#

Select the gradient function for the stochastic optimizer.

Two options exist: with or without the natural gradient (see the sketch below).

Returns:

obj – function to evaluate the gradient

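A minimal sketch of the difference between the two options, where the natural gradient preconditions the standard gradient with a dampened Fisher information matrix (FIM); the dampening rule and all names are illustrative, not the QUEENS implementation.

```python
import numpy as np

def dampened_fim(fim, iteration, decay_start_iter=50, dampening_coefficient=1e-2,
                 lower_bound=1e-8):
    """Add a (decaying) nugget term to the FIM diagonal to keep it well conditioned."""
    if iteration >= decay_start_iter:
        # Illustrative decay rule; the actual dampening schedule may differ.
        dampening = max(dampening_coefficient * 0.95 ** (iteration - decay_start_iter),
                        lower_bound)
    else:
        dampening = dampening_coefficient
    return fim + dampening * np.eye(fim.shape[0])

def natural_gradient(gradient, fim, iteration):
    """Precondition the standard gradient with the inverse of the dampened FIM."""
    return np.linalg.solve(dampened_fim(fim, iteration), gradient)

# Option 1: use the standard gradient directly. Option 2: precondition it with the FIM.
grad = np.array([0.3, -1.2])
fim = np.array([[2.0, 0.1], [0.1, 0.5]])
print(natural_gradient(grad, fim, iteration=60))
```
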
handle_gradient_nan(gradient_function)[source]#

Handle NaN in gradient estimations.

Parameters:

gradient_function (function) – Function that estimates the gradient

Returns:

function – Gradient function wrapped with the counter

post_run()[source]#

Write results and potentially visualize them.

pre_run()[source]#

Initialize the prior model and variational parameters.