queens.models package#
Models.
Modules for multi-query mapping of inputs to outputs, such as parameter samples to model evaluations.
Subpackages#
- queens.models.likelihood_models package
- Submodules
- queens.models.likelihood_models.bayesian_mf_gaussian_likelihood module
BMFGaussianModel
BMFGaussianModel.coords_mat
BMFGaussianModel.time_vec
BMFGaussianModel.output_label
BMFGaussianModel.coord_labels
BMFGaussianModel.mf_interface
BMFGaussianModel.mf_subiterator
BMFGaussianModel.normal_distribution
BMFGaussianModel.noise_var
BMFGaussianModel.likelihood_counter
BMFGaussianModel.num_refinement_samples
BMFGaussianModel.likelihood_evals_for_refinement
BMFGaussianModel.build_approximation()
BMFGaussianModel.evaluate()
BMFGaussianModel.evaluate_from_output()
BMFGaussianModel.evaluate_mf_likelihood()
BMFGaussianModel.grad()
BMFGaussianModel.grad_log_pdf_d_ylf()
BMFGaussianModel.initialize_bmfia_iterator()
BMFGaussianModel.partial_grad_evaluate()
BmfiaInterface
BmfiaInterface.num_processors_multi_processing
BmfiaInterface.evaluate_method
BmfiaInterface.evaluate_and_gradient_method
BmfiaInterface.instantiate_probabilistic_mappings
BmfiaInterface.probabilistic_mapping_obj_lst
BmfiaInterface.update_mappings_method
BmfiaInterface.coord_labels
BmfiaInterface.time_vec
BmfiaInterface.coords_mat
BmfiaInterface.build_approximation()
BmfiaInterface.check_coordinates_return_dimensions()
BmfiaInterface.evaluate()
BmfiaInterface.evaluate_and_gradient()
BmfiaInterface.evaluate_and_gradient_per_coordinate()
BmfiaInterface.evaluate_and_gradient_per_time_step()
BmfiaInterface.evaluate_per_coordinate()
BmfiaInterface.evaluate_per_time_step()
BmfiaInterface.instantiate_per_coordinate()
BmfiaInterface.instantiate_per_time_step()
BmfiaInterface.iterate_over_time_steps()
BmfiaInterface.optimize_hyper_params()
BmfiaInterface.prepare_z_lf_for_time_steps()
BmfiaInterface.set_optimized_state_of_probabilistic_mappings()
BmfiaInterface.train_probabilistic_mappings_in_parallel()
BmfiaInterface.update_mappings_per_coordinate()
BmfiaInterface.update_mappings_per_time_step()
BmfiaInterface.valid_probabilistic_mappings_configurations
- queens.models.likelihood_models.gaussian_likelihood module
- queens.models.likelihood_models.likelihood_model module
- queens.models.surrogate_models package
- Subpackages
- Submodules
- queens.models.surrogate_models.bayesian_neural_network module
GaussianBayesianNeuralNetworkModel
GaussianBayesianNeuralNetworkModel.num_posterior_samples
GaussianBayesianNeuralNetworkModel.num_samples_statistics
GaussianBayesianNeuralNetworkModel.bnn_model
GaussianBayesianNeuralNetworkModel.num_epochs
GaussianBayesianNeuralNetworkModel.optimizer_seed
GaussianBayesianNeuralNetworkModel.verbosity_on
GaussianBayesianNeuralNetworkModel.model_realizations_lst
GaussianBayesianNeuralNetworkModel.adams_training_rate
GaussianBayesianNeuralNetworkModel.nodes_per_hidden_layer_lst
GaussianBayesianNeuralNetworkModel.activation_per_hidden_layer_lst
GaussianBayesianNeuralNetworkModel.grad()
GaussianBayesianNeuralNetworkModel.mean_field_variational_distribution()
GaussianBayesianNeuralNetworkModel.negative_log_likelihood()
GaussianBayesianNeuralNetworkModel.predict()
GaussianBayesianNeuralNetworkModel.predict_f()
GaussianBayesianNeuralNetworkModel.predict_f_samples()
GaussianBayesianNeuralNetworkModel.predict_y()
GaussianBayesianNeuralNetworkModel.predict_y_samples()
GaussianBayesianNeuralNetworkModel.prior_trainable()
GaussianBayesianNeuralNetworkModel.setup()
GaussianBayesianNeuralNetworkModel.train()
- queens.models.surrogate_models.gaussian_neural_network module
GaussianNeuralNetworkModel
GaussianNeuralNetworkModel.nn_model
GaussianNeuralNetworkModel.num_epochs
GaussianNeuralNetworkModel.optimizer_seed
GaussianNeuralNetworkModel.verbosity_on
GaussianNeuralNetworkModel.batch_size
GaussianNeuralNetworkModel.scaler_x
GaussianNeuralNetworkModel.scaler_y
GaussianNeuralNetworkModel.loss_plot_path
GaussianNeuralNetworkModel.num_refinements
GaussianNeuralNetworkModel.refinement_epochs_decay
GaussianNeuralNetworkModel.mean_function
GaussianNeuralNetworkModel.gradient_mean_function
GaussianNeuralNetworkModel.adams_training_rate
GaussianNeuralNetworkModel.nodes_per_hidden_layer
GaussianNeuralNetworkModel.activation_per_hidden_layer
GaussianNeuralNetworkModel.kernel_initializer
GaussianNeuralNetworkModel.nugget_std
GaussianNeuralNetworkModel.grad()
GaussianNeuralNetworkModel.negative_log_likelihood()
GaussianNeuralNetworkModel.predict()
GaussianNeuralNetworkModel.predict_and_gradient()
GaussianNeuralNetworkModel.predict_y()
GaussianNeuralNetworkModel.setup()
GaussianNeuralNetworkModel.train()
GaussianNeuralNetworkModel.update_training_data()
- queens.models.surrogate_models.gp_approximation_gpflow module
GPFlowRegressionModel
GPFlowRegressionModel.number_posterior_samples
GPFlowRegressionModel.number_restarts
GPFlowRegressionModel.number_training_iterations
GPFlowRegressionModel.number_input_dimensions
GPFlowRegressionModel.restart_min_value
GPFlowRegressionModel.restart_max_value
GPFlowRegressionModel.model
GPFlowRegressionModel.dimension_lengthscales
GPFlowRegressionModel.train_likelihood_variance
GPFlowRegressionModel.scaler_x
GPFlowRegressionModel.scaler_y
GPFlowRegressionModel.assign_hyperparameters()
GPFlowRegressionModel.get_dimension_hyperparameters()
GPFlowRegressionModel.grad()
GPFlowRegressionModel.predict()
GPFlowRegressionModel.setup()
GPFlowRegressionModel.train()
GPFlowRegressionModel.transform_hyperparameters()
- queens.models.surrogate_models.gp_approximation_gpflow_svgp module
GPflowSVGPModel
GPflowSVGPModel.number_posterior_samples
GPflowSVGPModel.mini_batch_size
GPflowSVGPModel.number_training_iterations
GPflowSVGPModel.training_data
GPflowSVGPModel.number_input_dimensions
GPflowSVGPModel.model
GPflowSVGPModel.scaler_x
GPflowSVGPModel.scaler_y
GPflowSVGPModel.dimension_output
GPflowSVGPModel.seed
GPflowSVGPModel.num_inducing_points
GPflowSVGPModel.dimension_lengthscales
GPflowSVGPModel.train_inducing_points_location
GPflowSVGPModel.train_likelihood_variance
GPflowSVGPModel.grad()
GPflowSVGPModel.predict()
GPflowSVGPModel.setup()
GPflowSVGPModel.train()
- queens.models.surrogate_models.gp_approximation_jitted module
GPJittedModel
GPJittedModel.k_mat_inv
GPJittedModel.cholesky_k_mat
GPJittedModel.k_mat
GPJittedModel.partial_derivatives_hyper_params
GPJittedModel.mean_function
GPJittedModel.gradient_mean_function
GPJittedModel.stochastic_optimizer
GPJittedModel.scaler_x
GPJittedModel.scaler_y
GPJittedModel.grad_log_evidence_value
GPJittedModel.hyper_params
GPJittedModel.noise_variance_lower_bound
GPJittedModel.plot_refresh_rate
GPJittedModel.kernel_type
GPJittedModel.get_state()
GPJittedModel.grad()
GPJittedModel.gradient_identity_multi_fidelity_mean_fun()
GPJittedModel.gradient_zero_mean_fun()
GPJittedModel.identity_multi_fidelity_mean_fun()
GPJittedModel.log_evidence()
GPJittedModel.predict()
GPJittedModel.set_state()
GPJittedModel.setup()
GPJittedModel.train()
GPJittedModel.valid_kernels_dict
GPJittedModel.zero_mean_fun()
- queens.models.surrogate_models.gp_heteroskedastic_gpflow module
HeteroskedasticGPModel
HeteroskedasticGPModel.num_posterior_samples
HeteroskedasticGPModel.num_inducing_points
HeteroskedasticGPModel.model
HeteroskedasticGPModel.optimizer
HeteroskedasticGPModel.adams_training_rate
HeteroskedasticGPModel.num_epochs
HeteroskedasticGPModel.random_seed
HeteroskedasticGPModel.posterior_cov_mat_y
HeteroskedasticGPModel.num_samples_stats
HeteroskedasticGPModel.grad()
HeteroskedasticGPModel.predict()
HeteroskedasticGPModel.predict_f_samples()
HeteroskedasticGPModel.predict_y()
HeteroskedasticGPModel.setup()
HeteroskedasticGPModel.train()
- queens.models.surrogate_models.surrogate_model module
SurrogateModel
SurrogateModel.training_iterator
SurrogateModel.testing_iterator
SurrogateModel.eval_fit
SurrogateModel.error_measures
SurrogateModel.is_trained
SurrogateModel.x_train
SurrogateModel.y_train
SurrogateModel.visualization
SurrogateModel.build_approximation()
SurrogateModel.compute_error()
SurrogateModel.compute_error_measures()
SurrogateModel.compute_nash_sutcliffe_efficiency()
SurrogateModel.cross_validate()
SurrogateModel.eval_surrogate_accuracy()
SurrogateModel.eval_surrogate_accuracy_cv()
SurrogateModel.evaluate()
SurrogateModel.grad()
SurrogateModel.predict()
SurrogateModel.setup()
SurrogateModel.train()
Submodules#
queens.models.bmfmc_model module#
Bayesian multi-fidelity Monte-Carlo model.
- class BMFMCModel(parameters, global_settings, probabilistic_mapping, features_config, predictive_var, BMFMC_reference, y_pdf_support_max, y_pdf_support_min, X_cols=None, num_features=None, hf_model=None, path_to_lf_mc_data=None, path_to_hf_mc_reference_data=None)[source]#
Bases:
Model
Bayesian multi-fidelity Monte-Carlo model.
Bayesian multi-fidelity Monte-Carlo model for uncertainty quantification, which builds a probabilistic mapping between a high-fidelity simulation model (HF) and one or more low-fidelity simulation models (LFs), together with informative features \((\gamma)\) from the input space. Based on this mapping and the LF samples \(\mathcal{D}_{LF}^*=\{Z^*,Y_{LF}^*\}\), the BMFMC model computes the posterior statistics:
\(\mathbb{E}_{f^*}\left[p(y_{HF}^*|f^*,D_f)\right]\), equation (14) in [1] and
\(\mathbb{V}_{f^*}\left[p(y_{HF}^*|f^*,D_f)\right]\), equation (15) in [1]
of the HF model’s output uncertainty.
The BMFMC model is designed to be constructed upon the sampling data of an LF model \(\mathcal{D}_{LF}^*=\{Z^*,Y_{LF}^*\}\), which is provided via pickle or CSV files, and then offers different options to obtain the HF data:
Provide HF training data in a file. (Attention: the user must ensure that this training set is representative and that its input \(Z\) is a subset of the LF model sampling set: \(Z\subset Z^*\).)
Run optimal HF simulations based on LF data. This requires a suitable simulation sub-model and sub-iterator for the HF model.
Provide HF sampling data (as a file), calculated with the same \(Z^*\) as the LF simulation runs, and select the optimal HF training set from this data. This is only helpful for scientific benchmarking, when a ground-truth solution for the HF output uncertainty has been sampled beforehand and the data exists anyway. A hedged construction sketch is shown below.
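For illustration, a construction sketch for the file-based options; parameters, global_settings, and probabilistic_mapping stand for previously constructed QUEENS objects, and all paths and support bounds are placeholder values, not shipped defaults:

```python
from queens.models.bmfmc_model import BMFMCModel

# Sketch only: the objects below are assumed to exist already;
# the paths and support bounds are placeholders.
model = BMFMCModel(
    parameters=parameters,
    global_settings=global_settings,
    probabilistic_mapping=probabilistic_mapping,
    features_config="opt_features",
    num_features=1,
    predictive_var=True,
    BMFMC_reference=True,
    y_pdf_support_min=-10.0,
    y_pdf_support_max=10.0,
    path_to_lf_mc_data="/path/to/lf_mc_data.pickle",
    path_to_hf_mc_reference_data="/path/to/hf_mc_reference.pickle",
)
```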
- parameters#
Parameters object
- Type:
obj
- interface#
Interface object.
- Type:
obj
- features_config#
Strategy used to calculate the low-fidelity features \(Z_{\text{LF}}\): opt_features, no_features, or man_features
- Type:
str
- X_cols#
For man_features, the columns of the X-matrix that should be used as informative features
- Type:
list
- num_features#
For opt_features, the number of features to be used
- Type:
int
- high_fidelity_model#
HF (simulation) model to run simulations that yield the HF training set \(\mathcal{D}_{HF}=\{Z, Y_{HF}\}\).
- Type:
obj
- X_train#
Matrix of simulation inputs corresponding to the training data-set of the multi-fidelity mapping.
- Type:
np.array
- Y_HF_train#
Vector or matrix of HF outputs that correspond to the training input according to \(Y_{HF} = y_{HF}(X)\).
- Type:
np.array
- Y_LFs_train#
Output vector/matrix of one or multiple LF models corresponding to the training input according to \(Y_{LF,i}=y_{LF,i}(X)\).
- Type:
np.array
- X_mc#
Matrix of simulation inputs that were used in the Monte-Carlo sampling of the LF models. Each row is one input set for a simulation. Columns refer to different field_realizations of the same variable.
- Type:
np.array
- Y_LFs_mc#
Output vector/matrix for the LF models that corresponds to X_mc according to \(Y_{LF,i}^*=y_{LF,i}(X^*)\). At the moment, each row of Y_LFs_mc contains scalar results for the different LF models. (In the future we will change the format to pandas dataframes to handle vectorized/functional outputs of different models more elegantly.)
- Type:
np.array
- Y_HF_mc#
(optional for benchmarking) Output vector/matrix for the HF model that corresponds to the X_mc according to \(Y_{HF}^*=y_{HF}(X^*)\).
- Type:
np.array
- gammas_ext_mc#
Matrix of extended low-fidelity informative features \(\boldsymbol{\Gamma}^*\) corresponding to Monte-Carlo input \(X^*\).
- Type:
np.array
- gammas_ext_train#
Matrix of extended low-fidelity informative features \(\boldsymbol{\Gamma}\) corresponding to the training input \(X\).
- Type:
np.array
- Z_train#
Training matrix of low-fidelity features according to \(Z=\left[y_{LF,i}(X),\Gamma\right]\).
- Type:
np.array
- Z_mc#
Monte-Carlo matrix of low-fidelity features according to \(Z^*=\left[y_{LF,i}(X^*),\Gamma^*\right]\).
- Type:
np.array
- m_f_mc#
Vector of posterior mean values of multi-fidelity mapping corresponding to the Monte-Carlo input Z_mc according to \(\mathrm{m}_{f^*}(Z^*)\).
- Type:
np.array
- var_y_mc#
Vector of posterior variances of the multi-fidelity mapping corresponding to the Monte-Carlo input Z_mc according to \(\mathrm{v}_{f^*}(Z^*)\).
- Type:
np.array
- p_yhf_mean#
Vector that contains the mean approximation of the HF output density defined on y_hf_support. The vector p_yhf_mean is defined as: \(\mathbb{E}_{f^*}\left[p(y_{HF}^*|f^*,D_f)\right]\) according to eq. (14) in [1].
- Type:
np.array
- p_yhf_var#
Vector that contains the variance approximation of the HF output density defined on y_hf_support. The vector p_yhf_var is defined as: \(\mathbb{V}_{f^*}\left[p(y_{HF}^*|f^*,D_f)\right]\) according to eq. (15) in [1].
- Type:
np.array
- predictive_var_bool#
Flag that determines whether p_yhf_var should be computed.
- Type:
bool
- p_yhf_mc#
(optional) Monte-Carlo based kernel-density estimate of the HF output.
- Type:
np.array
- p_ylf_mc#
(optional) Kernel density estimate of the LF model output. Note: for BMFMC the explicit density is never required; only \(\mathcal{D}_{LF}\) is used in the algorithm.
- Type:
np.array
- no_features_comparison_bool#
If flag is True, the result will be compared to a prediction that used no LF input features.
- Type:
bool
- eigenfunc_random_fields#
Matrix containing the discretized eigenfunctions of an underlying random field. Note: this is an intermediate solution and should be moved to the variables module! The current solution only works for one random field so far!
- Type:
np.array
- eigenvals#
Eigenvalues corresponding to the eigenfunctions.
- f_mean_train#
Vector of predicted mean values of multi-fidelity mapping corresponding to the training input Z_train according to \(\mathrm{m}_{f^*}(Z)\).
- Type:
np.array
- y_pdf_support#
Support grid for HF output density \(p(y_{HF})\).
- Type:
np.array
- lf_data_iterators#
Data iterators to load sampling data of low-fidelity models from a file.
- Type:
obj
- hf_data_iterator#
Data iterator to load the benchmark sampling data from an HF model from a file (optional and only for scientific benchmarking).
- Type:
obj
- training_indices#
Vector with indices to select the training data subset from the larger data set of Monte-Carlo data.
- Type:
np.array
- uncertain_parameters#
UncertainParameters object.
- Type:
obj
- visualization#
BMFMC visualization object.
- Type:
obj
- Returns:
Instance of BMFMCModel
References
- [1] Nitzler, J., Biehler, J., Fehn, N., Koutsourelakis, P.-S. and Wall, W.A. (2020),
“A Generalized Probabilistic Learning Approach for Multi-Fidelity Uncertainty Propagation in Complex Physical Simulations”, arXiv:2001.02892
- build_approximation(approx_case=True)[source]#
Train surrogate model.
Construct the probabilistic surrogate/mapping based on the provided training-data and optimize the hyper-parameters by maximizing the data’s evidence or its lower bound (ELBO).
- Parameters:
approx_case (bool) – Boolean that switches the input features \(\boldsymbol{\gamma}\) off if set to False. If not specified or set to True, informative input features are used in the BMFMC framework.
- calculate_extended_gammas()[source]#
Calculate extended input features.
Given the low-fidelity sampling data, calculate the extended input features \(\boldsymbol{\gamma_{\text{LF,ext}}}\). The informative input features \(\boldsymbol{\gamma}\) are calculated such that they maximize the Pearson correlation coefficient between \(\boldsymbol{\gamma_i^*}\) and \(Y_{\text{LF}}^*\) (see the sketch below). Afterwards, \(z_{\text{LF}}\) is composed of \(y_{\text{LF}}\) and \(\boldsymbol{\gamma_{LF}}\).
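The correlation criterion itself is simple; a minimal NumPy sketch (hypothetical helper, not part of the module):

```python
import numpy as np

def pearson_correlation(gamma_candidate, y_lf):
    """Pearson correlation between a candidate feature and the LF output."""
    g = (gamma_candidate - gamma_candidate.mean()) / gamma_candidate.std()
    y = (y_lf - y_lf.mean()) / y_lf.std()
    return float(np.mean(g * y))
```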
- compute_pyhf_statistics()[source]#
Compute high-fidelity output density prediction.
Calculate the high-fidelity output density prediction p_yhf_mean and its credible bounds p_yhf_var on the support y_pdf_support according to equations (14) and (15) in [1].
- compute_pymc_reference()[source]#
Compute reference kernel density estimate.
Given a high-fidelity Monte-Carlo benchmark dataset, compute the reference kernel density estimate for the quantity of interest and optimize the bandwidth of the KDE.
- evaluate(samples)[source]#
Evaluate the Bayesian Multi-Fidelity Monte-Carlo (BMFMC) model.
This method evaluates the BMFMC routine by performing two main tasks:
- Probabilistic Mapping Evaluation: Evaluates the probabilistic mapping for both the LF Monte-Carlo points and the LF training points.
- BMFMC Posterior Statistics Evaluation: Uses the results of the probabilistic mapping to compute the BMFMC posterior statistics.
- Parameters:
samples (any type) – Currently not used.
- Returns:
output (dict) – A dictionary containing the results of the evaluation, including:
- Z_mc: (np.array) Low-fidelity features Monte-Carlo data used in the evaluation.
- m_f_mc: (np.array) Posterior mean values of the probabilistic mapping (denoted f) for LF Monte-Carlo inputs. These values represent the mean predictions based on the Monte-Carlo simulations.
- var_y_mc: (np.array) Posterior variance of the probabilistic mapping (denoted f) for LF Monte-Carlo inputs. These values indicate the uncertainty associated with the mean predictions.
- y_pdf_support: (np.array) Support grid for the probability density function (PDF) of the HF output used in this analysis. This provides the range over which the PDF is evaluated.
- p_yhf_mean: (np.array) Posterior mean prediction of the HF output PDF, representing the expected values of the HF output given the model and data.
- p_yhf_var: (np.array) Posterior variance prediction of the HF output PDF, indicating the uncertainty in the HF output predictions.
- p_yhf_mean_BMFMC: (np.array or None) Reference posterior mean prediction of the HF output PDF without using informative features, for comparison purposes. This value is calculated if self.no_features_comparison_bool is True.
- p_yhf_var_BMFMC: (np.array or None) Reference posterior variance prediction of the HF output PDF without using informative features, for comparison purposes. This value is calculated if self.no_features_comparison_bool is True.
- p_ylf_mc: (np.array) Kernel density estimate of the LF model output PDF, used for illustration purposes.
- p_yhf_mc: (np.array) Kernel density estimate of the HF model output PDF based on a full Monte-Carlo simulation. Used for benchmarking and comparison.
- Z_train: (np.array) LF feature vector used for training the probabilistic mapping. This contains the features that were used to fit the model.
- X_train: (np.array) Input matrix for simulations used to train the probabilistic mapping, corresponding to the training data.
- Y_HF_train: (np.array) Outputs of the high-fidelity model corresponding to the training inputs X_train, representing the HF output values used for model fitting.
- get_hf_training_data()[source]#
Get high-fidelity training data.
Given the low-fidelity sampling data and the optimal training input \(X\), either simulate the high-fidelity response for \(X\) or load the corresponding high-fidelity response from the high-fidelity benchmark data provided by a pickle file.
- get_random_fields_and_truncated_basis(explained_var=95.0)[source]#
Get random fields and their truncated basis.
Get the random fields and their description from the data files (pickle-files) and return their truncated basis. The truncation is determined based on the explained variance threshold (explained_var).
- Parameters:
explained_var (float) – Threshold for truncation in percent.
- Returns:
random_fields_trunc_dict (dict) – Dictionary containing samples of the random fields as well as their truncated basis.
x_uncorr (np.array) – Samples of remaining uncorrelated random variables.
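The truncation step can be sketched as follows, assuming the eigenvalues are sorted in descending order and the eigenfunctions are stored column-wise (hypothetical helper):

```python
import numpy as np

def truncate_basis(eigenvals, eigenfuncs, explained_var=95.0):
    """Keep the leading modes that explain `explained_var` percent of variance."""
    # eigenvals: (num_modes,) descending; eigenfuncs: (num_points, num_modes)
    cumulative = np.cumsum(eigenvals) / np.sum(eigenvals) * 100.0
    num_modes = int(np.searchsorted(cumulative, explained_var)) + 1
    return eigenvals[:num_modes], eigenfuncs[:, :num_modes]
```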
- grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
- input_dim_red()[source]#
Reduce dimensionality of the input space.
Unsupervised dimensionality reduction of the input space. The random fields are first expressed via a truncated Karhunen-Loeve expansion that still contains, e.g., 95% of the field's variance. Afterwards, the input samples of the random fields are projected onto the reduced basis, and the projection coefficients serve as the new, reduced encoding for the fields. Eventually, the uncorrelated input samples and the reduced representation of the random field samples are assembled into a new, reduced input vector, which is also standardized along each of the remaining dimensions.
- Returns:
X_red_test_stdizd (np.array) – Dimensionality-reduced input matrix corresponding to testing/sampling data for the probabilistic mapping.
- load_sampling_data()[source]#
Load sampling data.
Load the low-fidelity sampling data from a pickle file into QUEENS. Check if high-fidelity benchmark data is available and load this as well.
- run_BMFMC_without_features()[source]#
Run BMFMC without further informative LF features.
- Returns:
p_yhf_mean_BMFMC (np.array) – Posterior mean function of the HF output density
p_yhf_var_BMFMC (np.array) – Posterior variance function of the HF output density
- set_feature_strategy()[source]#
Set feature strategy.
Depending on the method specified in the input file, set the strategy that will be used to calculate the low-fidelity features \(Z_{\text{LF}}\).
- update_probabilistic_mapping_with_features()[source]#
Update probabilistic mapping.
Given the number of additional informative features of the input and the extended feature matrix \(\Gamma_{LF,ext}\), first assemble the LF feature matrix \(Z_{LF}\). In the next step, update the probabilistic mapping with the LF features; this update includes a training and a prediction step. The determination of optimal training points is outsourced to the BMFMC iterator, and its results are only retrieved here.
- class BmfmcInterface(probabilistic_mapping)[source]#
Bases:
object
Interface for grouping outputs with inputs.
Interface for grouping the outputs of several simulations with identical input to one data point. The BmfmcInterface is basically a version of the approximation_interface class that allows vectorized mapping and implicit function relationships.
- probabilistic_mapping#
Instance of the probabilistic mapping, which models the probabilistic dependency between high-fidelity model, low-fidelity models and informative input features.
- Type:
obj
- Returns:
BmfmcInterface (obj) – Instance of the BmfmcInterface class
- build_approximation(Z_LF_train, Y_HF_train)[source]#
Build and train the probabilistic mapping.
Based on the training inputs \(\mathcal{D}_f=\{Y_{HF},Z_{LF}\}\).
- Parameters:
Z_LF_train (np.array) – Training inputs for probabilistic mapping
Y_HF_train (np.array) – Training outputs for probabilistic mapping
- evaluate(samples, support='y', full_cov=False, gradient_bool=False)[source]#
Predict on probabilistic mapping.
Call the probabilistic mapping and predict the mean and variance for the high-fidelity model, given the inputs z_lf (called samples here).
- Parameters:
samples (np.array) – Low-fidelity feature vector z_lf that contains the corresponding Monte-Carlo points on which the probabilistic mapping should be evaluated
support (str) – Support/variable for which we predict the mean and (co)variance. For support=f the Gaussian process predicts w.r.t. the latent function f. For the choice of support=y we predict w.r.t. the simulation/experimental output y
full_cov (bool) – Boolean that specifies whether the entire posterior covariance matrix should be returned or only the posterior variance
gradient_bool (bool) – Flag to determine whether the gradient of the function at the evaluation point is expected (True) or not (False)
- Returns:
mean_Y_HF_given_Z_LF (np.array) – Vector of mean predictions \(\mathbb{E}_{f^*}[p(y_{HF}^*|f^*,z_{LF}^*, \mathcal{D}_{f})]\) for the HF model given the low-fidelity feature input
var_Y_HF_given_Z_LF (np.array) – Vector of variance predictions \(\mathbb{V}_{f^*}[p(y_{HF}^*|f^*,z_{LF}^*,\mathcal{D}_{f})]\) for the HF model given the low-fidelity feature input
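A hedged usage sketch, where my_gp, Z_LF_train, Y_HF_train, and Z_mc are placeholders for a probabilistic mapping object and the corresponding training/Monte-Carlo arrays:

```python
# `my_gp` is a placeholder for a trainable probabilistic mapping object.
interface = BmfmcInterface(probabilistic_mapping=my_gp)
interface.build_approximation(Z_LF_train, Y_HF_train)

# posterior mean and variance of the HF output given the LF features
mean_y_hf, var_y_hf = interface.evaluate(Z_mc, support="y")
```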
- assemble_x_red_stdizd(x_uncorr, coef_mat)[source]#
Assemble and standardize the dimension-reduced input x_red.
- Parameters:
x_uncorr (np.array) – Samples of remaining uncorrelated random variables.
coef_mat (np.array) – Optional coefficient matrix to concatenate.
- Returns:
X_red_test_stdizd (np.array) – Standardized dimension-reduced input.
- linear_scale_a_to_b(data_a, data_b)[source]#
Scale linearly.
Scale a data vector ‘data_a’ linearly to the range of data vector ‘data_b’.
- Parameters:
data_a (np.array) – Data vector that should be scaled.
data_b (np.array) – Reference data vector that provides the range for scaling.
- Returns:
scaled_a (np.array) – Scaled data_a vector.
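One plausible implementation of this helper, as a sketch rather than the module's exact code:

```python
import numpy as np

def linear_scale_a_to_b(data_a, data_b):
    """Map data_a linearly onto the [min, max] range of data_b."""
    a_min, a_max = np.min(data_a), np.max(data_a)
    b_min, b_max = np.min(data_b), np.max(data_b)
    return (data_a - a_min) / (a_max - a_min) * (b_max - b_min) + b_min
```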
- project_samples_on_truncated_basis(truncated_basis_dict, num_samples)[source]#
Project samples on truncated basis.
Project the high-dimensional samples of the random fields onto the truncated bases to yield the projection coefficients of the series expansion, which serve as a new, reduced representation of the random fields.
- Parameters:
truncated_basis_dict (dict) – Dictionary containing random field samples and truncated bases.
num_samples (int) – Number of Monte-Carlo samples.
- Returns:
coefs_mat (np.array) – Matrix containing the reduced representation of all random fields stacked together along the columns.
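Assuming an orthonormal truncated basis stored row-wise, the projection reduces to a matrix product (sketch with hypothetical shapes):

```python
import numpy as np

def project_on_truncated_basis(field_samples, truncated_basis):
    """Projection coefficients of each field sample on the truncated basis.

    field_samples: (num_samples, num_points)
    truncated_basis: (num_modes, num_points), rows assumed orthonormal
    """
    return field_samples @ truncated_basis.T  # -> (num_samples, num_modes)
```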
queens.models.differentiable_simulation_model_adjoint module#
Adjoint model.
- class DifferentiableSimulationModelAdjoint(scheduler, driver, gradient_driver, adjoint_file='adjoint_grad_objective.csv')[source]#
Bases:
SimulationModel
Adjoint model.
- adjoint_file#
Name of the adjoint file that contains the evaluated derivative of the functional w.r.t. the simulation output.
- Type:
str
- grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
queens.models.differentiable_simulation_model_fd module#
Finite difference model.
- class DifferentiableSimulationModelFD(scheduler, driver, finite_difference_method, step_size=1e-05, bounds=None)[source]#
Bases:
SimulationModel
Finite difference model.
- finite_difference_method#
Method to calculate a finite-difference-based approximation of the Jacobian matrix:
- '2-point': a one-sided scheme by definition
- '3-point': more exact but needs twice as many function evaluations
- Type:
str
- step_size#
Step size for the finite difference approximation
- Type:
float
- bounds#
Lower and upper bounds on independent variables. Defaults to no bounds, meaning [-inf, inf]. Each bound must match the size of x0 or be a scalar; in the latter case the bound is the same for all variables. Use it to limit the range of function evaluation.
- Type:
np.array
- evaluate(samples)[source]#
Evaluate model with current set of input samples.
- Parameters:
samples (np.ndarray) – Input samples
- Returns:
response (dict) – Response of the underlying model at input samples
- evaluate_finite_differences(samples)[source]#
Evaluate model gradient based on FDs.
- Parameters:
samples (np.array) – Current samples at which model should be evaluated.
- Returns:
response (np.array) – Array with model response for given input samples
gradient_response (np.array) – Array with row-wise model/objective fun gradients for given samples.
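Both schemes can be sketched in plain NumPy (hypothetical helper; the actual model delegates the function evaluations to its scheduler and driver):

```python
import numpy as np

def fd_jacobian(fun, x0, step_size=1e-5, method="2-point"):
    """Finite-difference approximation of the Jacobian of fun at x0 (sketch)."""
    f0 = np.atleast_1d(fun(x0))
    jac = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = step_size
        if method == "2-point":  # one-sided scheme
            jac[:, i] = (fun(x0 + step) - f0) / step_size
        else:  # "3-point": central scheme, twice as many evaluations
            jac[:, i] = (fun(x0 + step) - fun(x0 - step)) / (2.0 * step_size)
    return jac
```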
- grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
queens.models.logpdf_gp_model module#
GPLogpdf model.
- class LogpdfGPModel(approx_type, num_hyper=100, num_optimizations=3, hmc_burn_in=1000, hmc_steps=2000, prior_rate=None, prior_gp_mean=-1.0, upper_bound=None, quantile=0.9, jitter=1e-16)[source]#
Bases:
Model
LogpdfGPModel Class.
- approx_type#
Approximation type ('GPMAP-I', 'CGPMAP-II' or 'CFBGP')
- Type:
str
- num_hyper#
Number of hyperparameter samples (if CFBGP)
- Type:
int
- num_optimizations#
Number of hyperparameter optimization restarts
- Type:
int
- hmc_burn_in#
Number of HMC burn-in steps (if CFBGP)
- Type:
int
- hmc_steps#
Number of HMC steps (if CFBGP)
- Type:
int
- prior_rate#
Rates of exponential priors for hyperparameters [lengthscales, signal_std, noise_var]
- Type:
np.ndarray
- prior_gp_mean#
Transformed prior GP mean. Range: [-1, 0]
- Type:
float
- upper_bound#
Transformed upper bound for the Gaussian process. Range: [-1, 0]. If None is provided, it is derived from the number of observations.
- Type:
float
- quantile#
Confidence quantile
- Type:
float
- jitter#
Nugget term for numerical stability of Cholesky decomposition
- Type:
float
- x_train#
Training input samples
- Type:
np.ndarray
- y_train#
Training likelihood output samples
- Type:
np.ndarray
- scaler_x#
Scaler for training input samples
- Type:
float
- scaler_y#
Scaler for training likelihood output samples
- Type:
float
- num_dim#
Number of dimensions
- Type:
int
- hyperparameters#
Hyperparameter (samples)
- Type:
np.ndarray
- chol_k_train_train#
Cholesky decomposition of Gram matrix evaluated at the training samples
- Type:
np.ndarray
- v_train#
Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
- Type:
np.ndarray
- jit_func_generate_output#
Jitted partial ‘generate_output_*’ function
- Type:
obj
- partial_hyperparameter_log_prob#
Jitted partial function of hyperparameter log posterior probability
- Type:
obj
- batch_size#
Batch size for concurrent prediction evaluations
- Type:
int
- calc_train_factor(hyperparameters)[source]#
Calculate training factors.
- Parameters:
hyperparameters (np.ndarray) – Hyperparameters
- Returns:
chol_k_train_train (np.ndarray) – Cholesky decomposition of Gram matrix evaluated at the training samples
v_train (np.ndarray) – Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
- evaluate(samples)[source]#
Evaluate model with current set of input samples.
- Parameters:
samples (np.ndarray) – Input samples
- Returns:
log_likelihood (np.ndarray) – Approximation of log-likelihood values at input samples
- static evaluate_mean_and_std(dists_test_train, hyperparameters, v_train, chol_k_train_train, prior_gp_mean, scaler_y)[source]#
Mean and standard deviation of unconstrained GP at test samples.
- Parameters:
dists_test_train (np.ndarray) – Distance Matrix between test and train samples
hyperparameters (np.ndarray) – Hyperparameters
v_train (np.ndarray) – Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
chol_k_train_train (np.ndarray) – Cholesky decomposition of Gram matrix evaluated at the training samples
prior_gp_mean (float, opt) – Transformed prior GP mean. Range: [-1, 0]
scaler_y (float) – Scaler for training likelihood output samples
- Returns:
mean (np.ndarray) – Mean of unconstrained GP at test samples
std (np.ndarray) – Standard deviation of unconstrained GP at test samples.
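These are the standard GP posterior formulas; a sketch using precomputed kernel quantities (the argument names below are hypothetical and do not match the method's actual signature):

```python
import numpy as np
from scipy.linalg import solve_triangular

def gp_mean_and_std(k_test_train, k_test_diag, chol_k_train_train, v_train,
                    prior_gp_mean):
    """Posterior mean and standard deviation of an unconstrained GP (sketch)."""
    # mean: m + k_*^T K^{-1}(y - m); the K^{-1} product is cached in v_train
    mean = prior_gp_mean + k_test_train @ v_train
    # variance: k(x*, x*) - k_*^T K^{-1} k_*, via the Cholesky factor of K
    tmp = solve_triangular(chol_k_train_train, k_test_train.T, lower=True)
    var = k_test_diag - np.sum(tmp**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))
```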
- static generate_output_cfbgp(x_test, x_train, hyperparameters, v_train, chol_k_train_train, eval_mean_and_std, upper_bound, quantile)[source]#
Approximation of log-likelihood using the CFBGP approach.
- Parameters:
x_test (np.ndarray) – Input test samples
x_train (np.ndarray) – Training input samples
hyperparameters (np.ndarray) – Hyperparameters
v_train (np.ndarray) – Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
chol_k_train_train (np.ndarray) – Cholesky decomposition of Gram matrix evaluated at the training samples
eval_mean_and_std (obj) – eval_mean_and_std function
upper_bound (float) – Transformed upper bound for Gaussian process
quantile (float) – Confidence quantile
- Returns:
log_likelihood (np.ndarray) – Approximation of log-likelihood values at x_test
- static generate_output_cgpmap_2(x_test, x_train, hyperparameters, v_train, chol_k_train_train, eval_mean_and_std, upper_bound, quantile)[source]#
Approximation of log-likelihood using the CGPMAP-II approach.
- Parameters:
x_test (np.ndarray) – Input test samples
x_train (np.ndarray) – Training input samples
hyperparameters (np.ndarray) – Hyperparameters
v_train (np.ndarray) – Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
chol_k_train_train (np.ndarray) – Cholesky decomposition of Gram matrix evaluated at the training samples
eval_mean_and_std (obj) – eval_mean_and_std function
upper_bound (float) – Transformed upper bound for Gaussian process
quantile (float) – Confidence quantile
- Returns:
log_likelihood (np.ndarray) – Approximation of log-likelihood values at x_test
- static generate_output_gpmap_1(x_test, x_train, hyperparameters, v_train, prior_gp_mean, scaler_y)[source]#
Approximation of log-likelihood using the GPMAP-I approach.
- Parameters:
x_test (np.ndarray) – Input test samples
x_train (np.ndarray) – Training input samples
hyperparameters (np.ndarray) – Hyperparameters
v_train (np.ndarray) – Matrix product of inverse of Gram matrix evaluated at training samples and training output samples
prior_gp_mean (float, opt) – Transformed prior GP mean
scaler_y (float) – Scaler for training likelihood output samples
- Returns:
log_likelihood (np.ndarray) – Approximation of log-likelihood values at x_test
- grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
- static hyperparameter_log_likelihood(hyperparameters, x_train, y_train, jitter)[source]#
Log likelihood function for hyperparameters.
- Parameters:
hyperparameters (np.ndarray) – Hyperparameters
x_train (np.ndarray) – Training input samples
y_train (np.ndarray) – Training output samples
jitter (float) – Nugget term for numerical stability of Cholesky decomposition
- Returns:
log_likelihood (np.ndarray) – Log likelihood of data given hyperparameters
- static hyperparameter_log_prior(hyperparameters, prior_rate)[source]#
Log prior function for hyperparameters.
- Parameters:
hyperparameters (np.ndarray) – Hyperparameters
prior_rate (np.ndarray) – Rates of exponential priors for hyperparameters
- Returns:
joint_log_prior (np.ndarray) – Joint log prior of hyperparameters
- static hyperparameter_log_prob(transformed_hyperparameters, x_train, y_train, jitter, log_likelihood_func, log_prior_func, prior_rate)[source]#
Log joint probability function for hyperparameters.
- Parameters:
transformed_hyperparameters (np.ndarray) – Transformed hyperparameters
x_train (np.ndarray) – Training input samples
y_train (np.ndarray) – Training output samples
jitter (float) – Nugget term for numerical stability of Cholesky decomposition
log_likelihood_func (obj) – Log likelihood function of hyperparameters
log_prior_func (obj) – log prior function of hyperparameters
prior_rate (np.ndarray) – Rates of exponential priors for hyperparameters
- Returns:
np.ndarray – Log joint probability of hyperparameters
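The joint log probability is the sum of the two terms above; with exponential priors the log prior is available in closed form (sketch with hypothetical helper names):

```python
import numpy as np

def exponential_log_prior(hyperparameters, prior_rate):
    """Sum of log pdfs of independent Exponential(rate) priors."""
    return float(np.sum(np.log(prior_rate) - prior_rate * hyperparameters))

def log_joint(log_likelihood_value, hyperparameters, prior_rate):
    """Unnormalized log posterior = log likelihood + log prior."""
    return log_likelihood_value + exponential_log_prior(hyperparameters, prior_rate)
```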
- initialize(x_train, y_train, num_observations)[source]#
Initialize Gaussian process model.
- Parameters:
x_train (np.ndarray) – Training input samples
y_train (np.ndarray) – Training likelihood output samples
num_observations (int) – Number of observations
- distances(x1, x2)[source]#
Distance matrix between two sample sets.
- Parameters:
x1 (np.ndarray) – Input samples
x2 (np.ndarray) – Input samples
- Returns:
dists (np.ndarray) – Distance matrix
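A broadcast-based sketch of such a pairwise Euclidean distance matrix:

```python
import numpy as np

def pairwise_distances(x1, x2):
    """Euclidean distance matrix of shape (len(x1), len(x2))."""
    diff = x1[:, None, :] - x2[None, :, :]
    return np.sqrt(np.sum(diff**2, axis=-1))
```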
queens.models.model module#
Model class.
- class Model[source]#
Bases:
object
Base Model class.
As with the Iterator hierarchy, the purpose of this base class is twofold. First, it defines a unified interface for all derived classes. Second, it acts as a factory for the instantiation of model objects.
- response#
Response of the underlying model at input samples.
- Type:
dict
- abstract evaluate(samples)[source]#
Evaluate model with current set of input samples.
- Parameters:
samples (np.ndarray) – Input samples
- final evaluate_and_gradient(samples, upstream_gradient=None)[source]#
Evaluate model output and gradient.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns the model output f(x) and the gradient \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array, opt) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
model_output (np.array) – Model output
model_gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
- evaluate_and_gradient_bool = False#
- abstract grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
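A minimal sketch of a concrete subclass (the QuadraticModel below is hypothetical and only illustrates the interface):

```python
import numpy as np
from queens.models.model import Model

class QuadraticModel(Model):
    """Hypothetical model f(x) = sum(x**2) illustrating the Model interface."""

    def evaluate(self, samples):
        self.response = {"result": np.sum(samples**2, axis=1, keepdims=True)}
        return self.response

    def grad(self, samples, upstream_gradient):
        # chain rule: (dg/df) * (df/dx) with df/dx = 2 * x
        return upstream_gradient * 2.0 * samples

# usage: output, gradient = QuadraticModel().evaluate_and_gradient(samples)
```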
queens.models.simulation_model module#
Simulation model class.
- class SimulationModel(scheduler, driver)[source]#
Bases:
Model
Simulation model class.
- evaluate(samples)[source]#
Evaluate model with current set of input samples.
- Parameters:
samples (np.ndarray) – Input samples
- Returns:
response (dict) – Response of the underlying model at input samples
- grad(samples, upstream_gradient)[source]#
Evaluate gradient of model w.r.t. current set of input samples.
Consider current model f(x) with input samples x, and upstream function g(f). The provided upstream gradient is \(\frac{\partial g}{\partial f}\) and the method returns \(\frac{\partial g}{\partial f} \frac{df}{dx}\).
- Parameters:
samples (np.array) – Input samples
upstream_gradient (np.array) – Upstream gradient function evaluated at input samples \(\frac{\partial g}{\partial f}\)
- Returns:
gradient (np.array) – Gradient w.r.t. current set of input samples \(\frac{\partial g}{\partial f} \frac{df}{dx}\)
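A hedged usage sketch; my_scheduler and my_driver are placeholders for objects from the queens.schedulers and queens.drivers packages:

```python
import numpy as np
from queens.models.simulation_model import SimulationModel

model = SimulationModel(scheduler=my_scheduler, driver=my_driver)
samples = np.array([[0.1, 0.2], [0.3, 0.4]])
response = model.evaluate(samples)  # dict with the underlying model's response
```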