queens.utils package#
Utils.
Modules containing utilities used throughout QUEENS.
Submodules#
queens.utils.ascii_art module#
ASCII art module.
- print_banner(output_width=101)[source]#
Print banner.
- Parameters:
output_width (int) – Terminal output width
- Return type:
None
- print_banner_and_description(output_width=101)[source]#
Print banner and the description.
- Parameters:
output_width (int) – Terminal output width
- Return type:
None
- print_bmfia_acceleration(output_width=101)[source]#
Print BMFIA rocket.
- Parameters:
output_width (int) – Terminal output width
- Return type:
None
- print_centered_multiline(string, output_width=101)[source]#
Center every line of a multiline text.
- Parameters:
string (str) – String to be printed
output_width (int) – Terminal output width
- Return type:
None
- print_centered_multiline_block(string, output_width=101)[source]#
Print a multiline text in the center as a block.
- Parameters:
string (str) – String to be printed
output_width (int) – Terminal output width
- Return type:
None
- print_classification()[source]#
Print the "like a sir" ASCII art used for the classification iterator.
- Return type:
None
queens.utils.classifier module#
Classifiers for use in convergence classification.
- class ActiveLearningClassifier[source]#
Bases: Classifier
Active learning classifier wrapper.
- n_params#
number of parameters of the solver
- classifier_obj#
classifier, e.g. sklearn.svm.SVR
- active_sampler_obj#
query strategy from skactiveml.pool, e.g. UncertaintySampling
- __init__(n_params, classifier_obj, batch_size, active_sampler_obj=None)[source]#
Initialise active learning classifier.
- Parameters:
n_params (int) – number of parameters of the solver
classifier_obj (MLPClassifier) – classifier, e.g. sklearn.svm.SVR
active_sampler_obj (UncertaintySampling | None) – query strategy from skactiveml.pool, e.g. UncertaintySampling
batch_size (int) – Batch size to query the next samples.
- Return type:
None
- is_active = True#
- train(x_train, y_train)[source]#
Train the underlying _clf classifier.
- Parameters:
x_train (ndarray) – array with training samples, size: (n_samples, n_params)
y_train (ndarray) – vector with corresponding training labels, size: (n_samples)
- Returns:
sample indices in x_train to query next
- Return type:
ndarray
- class Classifier[source]#
Bases: object
Classifier wrapper.
- n_params#
number of parameters of the solver
- classifier_obj#
classifier, e.g. sklearn.svm.SVR
- __init__(n_params, classifier_obj)[source]#
Initialise the classifier.
- Parameters:
n_params (int) – number of parameters
classifier_obj (SklearnClassifier) – classifier, e.g. sklearn.svm.SVR
- Return type:
None
- is_active = False#
- load(path, file_name)[source]#
Load pickled classifier.
- Parameters:
path (str) – Path to the stored classifier
file_name (str) – File name without suffix
- Return type:
None
queens.utils.cli module#
Command Line Interface utils collection.
- cli_logging(func)[source]#
Decorator to create logger for CLI function.
- Parameters:
func (Callable) – Function that is to be decorated
- Return type:
Callable
- gather_metadata_and_write_to_csv(*args, **kwargs)[source]#
- Parameters:
args (Any)
kwargs (Any)
- Return type:
Any
- get_cli_options(args)[source]#
Get input file path, output directory and debug from args.
- Parameters:
args (Sequence[str]) – cli arguments
- Returns:
Path object to input file
Path object to the output directory
True if debug mode is to be used
- Return type:
tuple[Path, Path, bool]
- print_greeting_message(*args, **kwargs)[source]#
- Parameters:
args (Any)
kwargs (Any)
- Return type:
Any
queens.utils.collection module#
Utils to collect data during iterative processes.
- class CollectionObject[source]#
Bases: object
Collection object which stores data.
This object can be indexed by iteration, i.e. collection_object[i], but also by the collected fields, e.g. collection_object.field1.
- __init__(*field_names)[source]#
Initialize the collection item.
- Parameters:
field_names (str) – Names of fields to be stored
- Return type:
None
- add(**field_names_and_values)[source]#
Add data to the object.
This function can be called with one or multiple fields, i.e.: collection_object.add(field1=value1) or collection_object.add(field1=value1, field2=value2). An error is raised if one tries to add data to a field for a new iteration before all fields are filled for the current iteration.
- Parameters:
field_names_and_values (dict)
- Return type:
None
- classmethod create_collection_object_from_dict(data_dict)[source]#
Create collection item from dict.
- Parameters:
data_dict (dict) – Dictionary with values to be stored in this object
- Returns:
Collection object created from dict
- Return type:
CollectionObject
- items()[source]#
Items of the current object.
This allows using the object like a dict.
- Returns:
Items of the collection object
- Return type:
Iterable
- keys()[source]#
Keys, i.e. field names of the current object.
This allows using the object like a dict.
- Returns:
Keys of the collection object
- Return type:
Iterable
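A minimal usage sketch (assuming the import path queens.utils.collection; the field names are illustrative):

```python
from queens.utils.collection import CollectionObject

# Collect two fields over two iterations.
collection = CollectionObject("loss", "step_size")
collection.add(loss=0.5, step_size=0.10)
collection.add(loss=0.4, step_size=0.05)

print(collection.loss)  # access by field name
print(collection[1])    # access by iteration index
```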
queens.utils.config_directories module#
Configuration of folder structure of QUEENS experiments.
- base_directory()[source]#
Holds all queens experiments.
The base directory holds individual folders for each queens experiment on the compute machine. By default, it is located and structured as follows:
$HOME/queens-experiments
├── experiment_name_1
├── experiment_name_2
For remote cluster test runs, a separate base directory structure is used:
$HOME/queens-tests
├── pytest-0
│   ├── test_name_1
│   └── test_name_2
├── pytest-1
│   ├── test_name_1
│   └── test_name_2
- Return type:
Path
- create_directory(dir_path)[source]#
Create a directory either local or remote.
- Parameters:
dir_path (str | Path) – Directory to create
- Return type:
None
- current_job_directory(experiment_dir, job_id)[source]#
Directory of the latest submitted job.
- Parameters:
experiment_dir (Path) – Experiment directory
job_id (int) – Job ID of the current job
- Returns:
Path to the current job directory.
- Return type:
Path
- experiment_directory(experiment_name, experiment_base_directory=None)[source]#
Directory for data of a specific experiment on the computing machine.
If no experiment_base_directory is provided, base_directory() is used as default.
- Parameters:
experiment_name (str) – Experiment name
experiment_base_directory (str | Path | None) – Base directory for the experiment directory
- Returns:
Experiment directory
Whether experiment directory already exists
- Return type:
tuple[Path, bool]
queens.utils.configure_tensorflow module#
Utils related to tensorflow and friends.
queens.utils.exceptions module#
Custom exceptions.
- exception CLIError[source]#
Bases: QueensException
QUEENS exception for CLI input.
- exception FileTypeError[source]#
Bases: QueensException
Exception for wrong file types.
- exception InvalidOptionError[source]#
Bases: QueensException
Custom error class for invalid options during QUEENS runs.
- classmethod construct_error_from_options(valid_options, desired_option, additional_message='')[source]#
Construct invalid option error from the valid and desired options.
- Parameters:
valid_options (dict | list) – List of valid option keys
desired_option (str) – Key of desired option
additional_message (str) – Additional message to pass (default is an empty string)
- Returns:
InvalidOptionError
- Return type:
InvalidOptionError
- exception SubprocessError[source]#
Bases: QueensException
Custom error class for the QUEENS subprocess wrapper.
- classmethod construct_error_from_command(command, command_output, error_message, additional_message='')[source]#
Construct a Subprocess error from a command and its outputs.
- Parameters:
command (str) – Command used that raised the error
command_output (str) – Command output
error_message (str) – Error message of the command
additional_message (str | None) – Additional message to pass
- Returns:
SubprocessError
- Return type:
SubprocessError
queens.utils.experimental_data_reader module#
Module to read experimental data.
- class ExperimentalDataReader[source]#
Bases: object
Reader for experimental data.
- output_label#
Label that marks the output quantity in the csv file
- coordinate_labels#
List of column-wise coordinate labels in csv files
- time_label#
Name of the time variable in csv file
- file_name#
File name of experimental data
- Type:
str
- base_dir#
Path to base directory containing experimental data
- Type:
Path
- data_processor#
data processor for experimental data
- __init__(data_processor=None, output_label=None, coordinate_labels=None, time_label=None, file_name_identifier=None, csv_data_base_dir='')[source]#
Initialize ExperimentalDataReader.
- Parameters:
data_processor (DataProcessor | None) – data processor for experimental data
output_label (str | None) – Label that marks the output quantity in the csv file
coordinate_labels (list[str] | None) – List of column-wise coordinate labels in csv files
time_label (str | None) – Name of the time variable in csv file
file_name_identifier (str | None) – File name of experimental data
csv_data_base_dir (str | Path) – Path to base directory containing experimental data
- Return type:
None
- get_experimental_data()[source]#
Load experimental data.
- Returns:
Column-vector of model outputs which correspond row-wise to observation coordinates
Matrix with observation coordinates. One row corresponds to one coordinate point
Unique vector of observation times
Dictionary containing the experimental data
Name of the time variable in csv file
List of column-wise coordinate labels in csv files
Label that marks the output quantity in the csv file
- Return type:
tuple[ndarray, ndarray | None, ndarray | None, dict[str, Any], str | None, list[str] | None, str | None]
queens.utils.fd_jacobian module#
Calculate finite-difference-based approximation of Jacobian.
Note
Implementation is heavily based on the scipy.optimize._numdiff module. We do NOT support the complex scheme ‘cs’ or sparsity.
The motivation behind this reimplementation is to enable the parallel computation of all function values required for the finite difference scheme.
In theory, when computing the Jacobian of a function at a specific position via a specific finite difference scheme, all positions where the function needs to be evaluated (the perturbed positions) are known at once, because they do not depend on each other. The evaluation of the function at these perturbed positions may consequently be done in a perfectly (embarrassingly) parallel fashion.
Most implementations of finite-difference-based approximations do not exploit this inherent potential for parallel evaluation because, for cheap functions, the communication overhead is too high. For expensive functions, however, exploiting it yields a significant speed-up.
- compute_step_with_bounds(x0, method, rel_step, bounds)[source]#
Compute step sizes of finite difference scheme adjusted to bounds.
- Parameters:
x0 (ndarray) – Point at which the derivative shall be evaluated
method (Literal['2-point', '3-point']) –
Finite difference method to use:
- 2-point:
use the first order accuracy forward or backward difference
- 3-point:
use central difference in interior points and the second order accuracy forward or backward difference near the boundary
rel_step (float | ndarray | None) – Relative step size to use. The absolute step size is computed as h = rel_step * sign(x0) * max(1, abs(x0)), possibly adjusted to fit into the bounds. For method=’3-point’ the sign of h is ignored. If None (default) then step is selected automatically, see Notes.
bounds (tuple | ndarray | None) – Lower and upper bounds on independent variables. Defaults to no bounds. Each bound must match the size of x0 or be a scalar, in the latter case the bound will be the same for all variables. Use it to limit the range of function evaluation.
- Returns:
Adjusted step sizes
Whether to switch to a one-sided scheme due to closeness to bounds (informative only for the 3-point method)
- Return type:
tuple[ndarray, ndarray]
- fd_jacobian(f0, f_perturbed, dx, use_one_sided, method)[source]#
Calculate finite difference approximation of Jacobian of f at x0.
The necessary function evaluations have been pre-calculated and are supplied via f0 and the f_perturbed vector. Each row in f_perturbed corresponds to a function evaluation. The shape of f_perturbed depends heavily on the chosen finite difference scheme (method); the pre-calculation of f_perturbed and dx therefore has to be consistent with the requested method.
Supported methods:
- 2-point:
a one-sided scheme by definition
- 3-point:
more exact but needs twice as many function evaluations
Note: The implementation is supposed to remain very close to scipy._numdiff.approx_derivative.
- Parameters:
f0 (ndarray) – Function value at x0, f0=f(x0)
f_perturbed (ndarray) – Perturbed function values
dx (ndarray) – Deltas of the input variables
use_one_sided (ndarray) – Whether to switch to one-sided scheme due to closeness to bounds; informative only for 3-point method
method (Literal['2-point', '3-point']) – Which scheme was used to calculate the perturbed function values and deltas
- Returns:
Jacobian of the underlying model at x0.
- Return type:
ndarray
- get_positions(x0, method, rel_step, bounds)[source]#
Compute all positions needed for the finite difference approximation.
The Jacobian is defined for a vector-valued function at a given position.
Note: The implementation is supposed to remain very close to scipy._numdiff.approx_derivative.
- Parameters:
x0 (ndarray) – Position or sample at which the Jacobian shall be computed.
method (Literal['2-point', '3-point']) – Finite difference method that is used to compute the Jacobian.
rel_step (float | ndarray | None) – Finite difference step size.
bounds (tuple | ndarray | None) – Lower and upper bounds on independent variables. Defaults to no bounds. Each bound must match the size of x0 or be a scalar; in the latter case the bound will be the same for all variables. Use it to limit the range of function evaluation.
- Returns:
List with additional stencil positions that are necessary to calculate the finite difference approximation to the gradient
Delta between positions used to approximate Jacobian
- Return type:
tuple[ndarray, ndarray, ndarray]
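A sketch of the intended parallel workflow, assuming get_positions returns the stencil positions, the deltas, and the use_one_sided flags in that order:

```python
import numpy as np

from queens.utils.fd_jacobian import fd_jacobian, get_positions

def f(x):
    # Example vector-valued function from R^2 to R^2.
    return np.array([x[0] ** 2, x[0] * x[1]])

x0 = np.array([1.0, 2.0])

# All perturbed positions are known at once and are mutually independent,
# so f could be evaluated at them in parallel.
positions, dx, use_one_sided = get_positions(
    x0, method="2-point", rel_step=None, bounds=(-np.inf, np.inf)
)

f0 = f(x0)
f_perturbed = np.array([f(position) for position in positions])

jacobian = fd_jacobian(f0, f_perturbed, dx, use_one_sided, method="2-point")
```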
queens.utils.gpflow_transformations module#
Utils for gpflow.
- init_scaler(unscaled_data)[source]#
Initialize StandardScaler and scale data.
Standardize features by removing the mean and scaling to unit variance: \(scaled\_data = \frac{unscaled\_data - mean}{std}\)
- Parameters:
unscaled_data (ndarray) – Unscaled data
- Returns:
Standard scaler
Scaled data
- Return type:
tuple[StandardScaler, ndarray]
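A minimal usage sketch:

```python
import numpy as np

from queens.utils.gpflow_transformations import init_scaler

unscaled_data = np.array([[1.0], [2.0], [3.0]])
scaler, scaled_data = init_scaler(unscaled_data)

# Standardized data has (approximately) zero mean and unit variance.
print(scaled_data.mean(), scaled_data.std())
```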
queens.utils.imports module#
Import utils.
- class LazyLoader[source]#
Bases: object
Lazy loader for modules that take long to load.
Inspired by https://stackoverflow.com/a/78312617
- extract_type_checking_imports(file_path)[source]#
Extract imports inside TYPE_CHECKING blocks from file.
- Parameters:
file_path (str) – Path to the file
- Returns:
A dict mapping class names to their source modules.
- Return type:
dict
- get_module_attribute(path_to_module, function_or_class_name)[source]#
Load function from python file by path.
- Parameters:
path_to_module (str | Path) – Path to file
function_or_class_name (str) – Name of the function or class
- Returns:
Function or class from the module
- Return type:
Callable
- get_module_class(module_options, valid_types, module_type_specifier='type')[source]#
Return module class defined in config file.
- Parameters:
module_options (dict) – Module options
valid_types (dict) – Dict of valid types with corresponding module paths and class names
module_type_specifier (str) – Specifier for the module type
- Returns:
Class from the module
- Return type:
Any
- import_class_from_class_module_map(name, class_module_map, package=None)[source]#
Import class from class_module_map.
- Parameters:
name (str) – Name of the class.
class_module_map (dict) – Class to module mapping.
package (str | None) – Package name (only necessary if import path is relative).
- Returns:
Class object.
- Return type:
Any
queens.utils.injector module#
Injector module.
The module supplies functions to inject parameter values into a template text file.
- inject(params, template_path, output_file, strict=True)[source]#
Function to insert parameters into file template and write to file.
- Parameters:
params (dict) – Dict with parameters to inject
template_path (str | Path) – Path to template
output_file (str | Path) – Name of output file with injected parameters
strict (bool) – Raises exception if mismatch between provided and required parameters
- Return type:
None
- inject_in_template(params, template, output_file, strict=True)[source]#
Function to insert parameters into file template and write to file.
- Parameters:
params (dict) – Dict with parameters to inject
template (str) – Template string
output_file (str | Path) – Name of output file with injected parameters
strict (bool) – Raises exception if mismatch between provided and required parameters
- Return type:
None
- render_template(params, template, strict=True)[source]#
Function to insert parameters into a template.
- Parameters:
params (dict) – Dict with parameters to inject
template (str) – Template file as string
strict (bool) – Raises exception if required parameters from the template are missing
- Returns:
injected template
- Return type:
str
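A minimal usage sketch of render_template; the {{ ... }} placeholder syntax is an assumption, since this reference does not specify the template format:

```python
from queens.utils.injector import render_template

# Hypothetical template with two placeholders.
template = "density: {{ density }}\nviscosity: {{ viscosity }}"
print(render_template({"density": 1000.0, "viscosity": 1e-3}, template))
```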
queens.utils.io module#
Utils for input/output handling.
- load_input_file(input_file_path)[source]#
Load inputs from file by path.
- Parameters:
input_file_path (Path) – Path to the input file
- Returns:
Options in the input file.
- Return type:
dict
- load_pickle(file_path)[source]#
Load a pickle file directly from path.
- Parameters:
file_path (Path) – Path to pickle-file
- Returns:
Data in the pickle file
- Return type:
dict
- load_result(path_to_result_file)[source]#
Load QUEENS results.
- Parameters:
path_to_result_file (Path) – Path to results
- Returns:
Results
- Return type:
Any
- print_pickled_data(file_path)[source]#
Print a table of the data within a pickle file.
Only goes one layer deep for dicts. This is similar to python -m pickle file_path but makes it a single command with fancier printing.
- Parameters:
file_path (Path) – Path to pickle-file
- Return type:
None
- read_file(file_path)[source]#
Function to read in a file.
- Parameters:
file_path (Path | str) – Path to file
- Returns:
Read-in file
- Return type:
str
- to_dict_with_standard_types(obj)[source]#
Convert dictionaries to dictionaries with python standard types only.
- Parameters:
obj (Any) – Dictionary to convert
- Returns:
Dictionary with standard types
- Return type:
Any
- write_to_csv(output_file_path, data, delimiter=',')[source]#
Write a simple csv file.
Write data out to a csv file. Nothing fancy at the moment: no header line or index column is supported, just pure data.
- Parameters:
output_file_path (Path) – Path to the file the data should be written to
data (ndarray) – Data that should be written to the csv file.
delimiter (str) – Delimiter to separate individual data. Defaults to comma delimiter.
- Return type:
None
queens.utils.iterative_averaging module#
Iterative averaging utils.
- class ExponentialAveraging[source]#
Bases: IterativeAveraging
Exponential averaging.
\(x^{(0)}_{avg}=x^{(0)}\)
\(x^{(j)}_{avg}= \alpha x^{(j-1)}_{avg}+(1-\alpha)x^{(j)}\)
It is also sometimes referred to as exponential smoothing.
- coefficient#
Coefficient in (0,1) for the average.
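A plain-Python sketch of the recursion above, with an illustrative coefficient of 0.9:

```python
alpha = 0.9
values = [1.0, 2.0, 3.0]

average = values[0]  # x_avg^(0) = x^(0)
for value in values[1:]:
    average = alpha * average + (1 - alpha) * value

print(average)  # 1.29
```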
- class IterativeAveraging[source]#
Bases: object
Base class for iterative averaging schemes.
- current_average#
Current average value.
- new_value#
New value for the averaging process.
- rel_l1_change#
Relative change in L1 norm of the average value.
- rel_l2_change#
Relative change in L2 norm of the average value.
- class MovingAveraging[source]#
Bases: IterativeAveraging
Moving averages.
\(x^{(j)}_{avg}=\frac{1}{k}\sum_{i=0}^{k-1}x^{(j-i)}\)
where \(k-1\) is the number of values from previous iterations that are used
- num_iter_for_avg#
Number of samples in the averaging window
- data#
Data used to compute the average
- class PolyakAveraging[source]#
Bases: IterativeAveraging
Polyak averaging.
\(x^{(j)}_{avg}=\frac{1}{j+1}\sum_{i=0}^{j}x^{(i)}\)
- iteration_counter#
Number of samples.
- Type:
float
- sum_over_iter#
Sum over all samples.
- Type:
np.array
- l1_norm(vector, averaged=False)[source]#
Compute the L1 norm of the vector.
- Parameters:
vector (ndarray | floating | int | float) – Vector
averaged (bool) – If enabled, the norm is divided by the number of components
- Returns:
L1 norm of the vector
- Return type:
float | floating
- l2_norm(vector, averaged=False)[source]#
Compute the L2 norm of the vector.
- Parameters:
vector (ndarray | floating | int | float) – Vector
averaged (bool) – If enabled the norm is divided by the square root of the number of components
- Returns:
L2 norm of the vector
- Return type:
float | floating
- relative_change(old_value, new_value, norm)[source]#
Compute the relative change of the old and new value for a given norm.
- Parameters:
old_value (ndarray | floating | int | float) – Old values
new_value (ndarray | floating | int | float) – New values
norm (Callable) – Function to compute a norm
- Returns:
Relative change
- Return type:
float | floating
queens.utils.jax_minimize_wrapper module#
A collection of helper functions for optimization with JAX.
Taken from https://gist.github.com/slinderman/24552af1bdbb6cb033bfea9b2dc4ecfd
- minimize(fun, x0, method=None, args=(), bounds=None, constraints=(), tol=None, callback=None, options=None)[source]#
A simple wrapper for scipy.optimize.minimize using JAX.
- Parameters:
fun (Callable) – The objective function to be minimized, written in JAX code so that it is automatically differentiable. It is of type `fun: x, *args -> float`, where x is a PyTree and args is a tuple of the fixed parameters needed to completely specify the function.
x0 (Any) – Initial guess represented as a JAX PyTree.
args (tuple) – Extra arguments passed to the objective function and its derivative. Must consist of valid JAX types; e.g. the leaves of the PyTree must be floats. The remainder of the keyword arguments are inherited from scipy.optimize.minimize, and their descriptions are copied here for convenience.
method (str | None) –
Type of solver. Should be one of
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
‘L-BFGS-B’
‘TNC’
‘COBYLA’
‘SLSQP’
‘trust-constr’
‘dogleg’
‘trust-ncg’
‘trust-exact’
‘trust-krylov’
custom - a callable object (added in version 0.14.0), see below for description.
If not given, chosen to be one of ‘BFGS’, ‘L-BFGS-B’, ‘SLSQP’, depending on whether the problem has constraints or bounds.
bounds (Sequence | Bounds | None) –
Bounds on variables for L-BFGS-B, TNC, SLSQP, Powell, and trust-constr methods. There are two ways to specify the bounds:
Instance of Bounds class.
Sequence of (min, max) pairs for each element in x.
None is used to specify no bounds. Note that in order to use bounds you will need to manually flatten them in the same order as your inputs x0.
constraints (dict | LinearConstraint | NonlinearConstraint | list[dict] | list[LinearConstraint] | list[NonlinearConstraint]) –
Constraints definition (only for COBYLA, SLSQP and trust-constr). Constraints for ‘trust-constr’ are defined as a single object or a list of objects specifying constraints to the optimization problem. Constraints for COBYLA, SLSQP are defined as a list of dictionaries. Each dictionary has the fields:
- type: str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
- fun: callable
The function defining the constraint.
- jac: callable, optional
The Jacobian of fun (only for SLSQP).
- args: sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints. Note that in order to use constraints you will need to manually flatten them in the same order as your inputs x0.
tol (float | None) – Tolerance for termination. For detailed control, use solver-specific options.
options (dict | None) –
A dictionary of solver options. All methods accept the following generic options:
- maxiter: int
Maximum number of iterations to perform. Depending on the method each iteration may use several function evaluations.
- disp: bool
Set to True to print convergence messages.
For method-specific options, see show_options().
callback (Callable | None) – Called after each iteration. For ‘trust-constr’ it is a callable with the signature callback(xk, OptimizeResult state) -> bool, where xk is the current parameter vector represented as a PyTree and state is an OptimizeResult object with the same fields as the ones from the return. If callback returns True, the algorithm execution is terminated. For all the other methods, the signature is callback(xk), where xk is the current parameter vector, represented as a PyTree.
- Returns:
The optimization result represented as an OptimizeResult object. Important attributes are:
x: the solution array, represented as a JAX PyTree
success: a Boolean flag indicating if the optimizer exited successfully
message: describes the cause of the termination
See scipy.optimize.OptimizeResult for a description of other attributes.
- Return type:
OptimizeResult
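A minimal usage sketch:

```python
import jax.numpy as jnp

from queens.utils.jax_minimize_wrapper import minimize

def rosenbrock(x):
    # Written in JAX, so the gradient is obtained automatically.
    return jnp.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

result = minimize(rosenbrock, jnp.zeros(2), method="BFGS")
print(result.x, result.success)
```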
queens.utils.logger_settings module#
Logging in QUEENS.
- class LogFilter[source]#
Bases: Filter
Filters (lets through) all messages with level <= LEVEL.
- level#
Logging level
- class NewLineFormatter[source]#
Bases: Formatter
Formatter splitting multiline messages into single-line messages.
A logged message that consists of more than one line, i.e. one that contains a newline character, is split into multiple single-line messages that all have the same format. Without this, the overall logging format is broken for multiline messages.
- log_init_args(method)[source]#
Log arguments of __init__ method.
- Parameters:
method (Callable[[~P], None]) – __init__ method
- Returns:
Decorated __init__ method
- Return type:
Callable[[~P], None]
- reset_logging()[source]#
Reset loggers.
This is only needed during testing, as otherwise the loggers are not destroyed, resulting in the same output multiple times. This is taken from:
https://stackoverflow.com/a/56810619
- Return type:
None
- setup_basic_logging(log_file_path, logger=<Logger queens (DEBUG)>, debug=False)[source]#
Setup basic logging.
- Parameters:
log_file_path (Path) – Path to the log-file
logger (Logger) – Logger instance that should be set up
debug (bool) – Indicates debug mode and controls level of logging
- Return type:
None
- setup_cli_logging(debug=False)[source]#
Set up logging for CLI utils.
- Parameters:
debug (bool) – Indicates debug mode and controls level of logging
- Return type:
None
- setup_file_handler(logger, log_file_path)[source]#
Set up a file handler.
- Parameters:
logger (Logger) – Logger object to add the file handler to
log_file_path (Path) – Path of the logging file
- Return type:
None
queens.utils.mcmc module#
Collection of utils for Markov Chain Monte Carlo algorithms.
- mh_select(log_acceptance_probability, current_sample, proposed_sample)[source]#
Perform Metropolis-Hastings selection.
The Metropolis-Hastings algorithm is used in Markov Chain Monte Carlo (MCMC) methods to accept or reject a proposed sample based on the log of the acceptance probability. This function compares the acceptance probability with a random number between 0 and 1 to decide if each proposed sample should replace the current sample. If the random number is smaller than the acceptance probability, the proposed sample is accepted. The function further checks whether the log_acceptance_probability is finite. If it is infinite or NaN, the function will not accept the respective proposed sample.
- Parameters:
log_acceptance_probability (ndarray) – Logarithm of the acceptance probability for each sample. This represents the log of the ratio of the probability densities of the proposed sample to the current sample.
current_sample (ndarray) – The current sample values from the MCMC chain.
proposed_sample (ndarray) – The proposed sample values to be considered for acceptance.
- Returns:
The sample values selected after the Metropolis-Hastings step. If the proposed sample is accepted, it will be returned; otherwise, the current sample is returned.
A bool array indicating whether each proposed sample was accepted.
- Return type:
tuple[ndarray, ndarray]
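A minimal usage sketch (the sample shapes are illustrative):

```python
import numpy as np

from queens.utils.mcmc import mh_select

current_sample = np.array([[0.0], [1.0]])
proposed_sample = np.array([[0.5], [2.0]])
# A log-acceptance-probability of 0.0 means certain acceptance;
# -inf means the proposal is never accepted.
log_acceptance_probability = np.array([[0.0], [-np.inf]])

selected, accepted = mh_select(log_acceptance_probability, current_sample, proposed_sample)
print(selected, accepted)
```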
- tune_scale_covariance(scale_covariance, accept_rate)[source]#
Adjust the covariance scaling factor based on the acceptance rate.
This function tunes the covariance scaling factor used in Metropolis-Hastings or similar MCMC algorithms based on the observed acceptance rate of proposed samples. The goal is to maintain an acceptance rate within the range of 20% to 50%, which is considered optimal for many MCMC algorithms. The covariance scaling factor is adjusted according to the following rules:
Acceptance rate    Variance adaptation factor
< 0.001            x 0.1
< 0.05             x 0.5
< 0.2              x 0.9
> 0.5              x 1.1
> 0.75             x 2
> 0.95             x 10
Reference: [1]: pymc-devs/pymc
- Parameters:
scale_covariance (ndarray | float) – The current covariance scaling factor for the proposal distribution.
accept_rate (ndarray | float) – The observed acceptance rate of the proposed samples. This value should be between 0 and 1.
- Returns:
The updated covariance scaling factor adjusted according to the acceptance rate.
- Return type:
ndarray
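A scalar sketch of the tuning rules from the table above (not the library implementation, which also accepts arrays):

```python
def tune_scale_sketch(scale: float, accept_rate: float) -> float:
    # Rules from the table, checked from the most extreme cases inward.
    if accept_rate < 0.001:
        return scale * 0.1
    if accept_rate > 0.95:
        return scale * 10.0
    if accept_rate < 0.05:
        return scale * 0.5
    if accept_rate > 0.75:
        return scale * 2.0
    if accept_rate < 0.2:
        return scale * 0.9
    if accept_rate > 0.5:
        return scale * 1.1
    return scale

print(tune_scale_sketch(1.0, 0.3))  # 1.0: rate lies in the optimal 20-50% window
```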
queens.utils.metadata module#
Metadata objects.
- class SimulationMetadata[source]#
Bases: object
Simulation metadata object.
This object holds metadata, times code sections, and exports them to yaml.
- job_id#
Id of the job
- inputs#
Parameters for this job
- file_path#
Path to export the metadata
- Type:
pathlib.Path
- timestamp#
Timestamp of the object creation
- Type:
str
- outputs#
Results obtained by the simulation
- Type:
tuple
- times#
Wall times of code sections
- Type:
dict
- __init__(job_id, inputs, job_dir)[source]#
Init simulation metadata object.
- Parameters:
job_id (int) – Id of the job
inputs (dict) – Parameters for this job
job_dir (Path) – Directory in which to write the metadata
- Return type:
None
queens.utils.numpy_array module#
Numpy array utils.
- at_least_2d(array)[source]#
View input array as array with at least two dimensions.
- Parameters:
array (_SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes]) – Input array
- Returns:
Input array with at least two dimensions
- Return type:
ndarray
queens.utils.numpy_linalg module#
Numpy linear algebra utils.
- add_nugget_to_diagonal(matrix, nugget_value)[source]#
Add a small value to diagonal of matrix.
The nugget value is only added to diagonal entries that are smaller than the nugget value.
- Parameters:
matrix (ndarray) – Matrix
nugget_value (generic | float) – Small nugget value to be added
- Returns:
Manipulated matrix
- Return type:
ndarray
- safe_cholesky(matrix, jitter_start_value=1e-10)[source]#
Numerically stable Cholesky decomposition.
Compute the Cholesky decomposition of a matrix. Numeric stability is increased by sequentially adding a small term to the diagonal of the matrix.
- Parameters:
matrix (ndarray) – Matrix to be decomposed
jitter_start_value (generic | float) – Starting value to be added to the diagonal
- Returns:
Lower-triangular Cholesky factor of matrix
- Return type:
ndarray
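A minimal usage sketch with a singular matrix that plain np.linalg.cholesky would reject:

```python
import numpy as np

from queens.utils.numpy_linalg import safe_cholesky

matrix = np.array([[1.0, 1.0], [1.0, 1.0]])  # singular
lower = safe_cholesky(matrix)

# The factor reproduces the (slightly jittered) matrix.
print(np.allclose(lower @ lower.T, matrix, atol=1e-6))
```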
queens.utils.path module#
Path utilities for QUEENS.
- check_if_path_exists(path, error_message='')[source]#
Check if a path exists.
- Parameters:
path (Path) – Path to be checked
error_message (str) – Additional error message, if desired
- Returns:
`True` if the path exists.
- Raises:
FileNotFoundError – If the path does not exist.
- Return type:
bool
- create_folder_if_not_existent(path)[source]#
Create folder if not existent.
- Parameters:
path (Path | str) – Path to be created
- Returns:
Path object
- Return type:
Path
- is_empty(paths)[source]#
Check whether paths is empty.
- Parameters:
paths (str | Path | Sequence) – (List of) path-like objects
- Return type:
bool
- relative_path_from_queens_source(relative_path)[source]#
Create relative path from src/queens/.
For example, to create src/queens/folder/file.A, call relative_path_from_queens_source(“folder/file.A”).
- Parameters:
relative_path (str) – Path starting from src/queens/
- Returns:
Absolute path to the file
- Return type:
Path
- relative_path_from_root(relative_path)[source]#
Create relative path from root directory.
For example, to create src/queens/folder/file.A, call relative_path_from_root(“src/queens/folder/file.A”).
- Parameters:
relative_path (str) – Path starting from the root directory
- Returns:
Absolute path to the file
- Return type:
Path
queens.utils.pdf_estimation module#
Kernel density estimation (KDE).
Estimation of the probability density function based on samples from the distribution.
- estimate_bandwidth_for_kde(samples, min_samples, max_samples, kernel='gaussian')[source]#
Estimate optimal bandwidth for kde of pdf.
- Parameters:
samples (ndarray) – Samples for which to estimate pdf
min_samples (float) – Smallest value
max_samples (float) – Largest value
kernel (str) – Kernel type
- Returns:
Estimate for optimal kernel bandwidth
- Return type:
generic
- estimate_pdf(samples, kernel_bandwidth, support_points=None, kernel='gaussian')[source]#
Estimate pdf using kernel density estimation.
- Parameters:
samples (ndarray) – Samples for which to estimate pdf
kernel_bandwidth (float) – Kernel width to use in kde
support_points (ndarray | None) – Points where to evaluate pdf
kernel (str) – Kernel type
- Returns:
PDF estimate at support points
- Return type:
tuple[ndarray, ndarray]
queens.utils.plot_outputs module#
Collection of plotting capabilities for probability distributions.
- plot_cdf(cdf_estimate, support_points, bayes=False)[source]#
Create cdf plot based on passed data.
- Parameters:
cdf_estimate (dict) – Estimate of cdf at supporting points
support_points (ndarray) – Supporting points
bayes (bool) – Whether to plot confidence intervals
- Return type:
None
queens.utils.pool module#
Pool utils.
queens.utils.printing module#
Print utils.
- get_str_table(name, print_dict, use_repr=False)[source]#
Function to get table to be used in __str__ methods.
- Parameters:
name (str) – Object name
print_dict (dict) – Dict containing labels and values to print
use_repr (bool) – If true, use repr() function to obtain string representations of objects
- Returns:
Table to print
- Return type:
str
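A minimal usage sketch:

```python
from queens.utils.printing import get_str_table

print(get_str_table("MyIterator", {"samples": 100, "method": "monte_carlo"}))
```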
queens.utils.process_outputs module#
Collection of utility functions for post-processing.
- do_processing(output_data, output_description)[source]#
Do actual processing of output.
- Parameters:
output_data (dict) – Dictionary containing model output
output_description (dict) – Dictionary describing desired output quantities
- Returns:
Dictionary with processed results
- Return type:
dict
- estimate_bandwidth_for_kde(samples, min_samples, max_samples)[source]#
Estimate optimal bandwidth for kde of pdf.
- Parameters:
samples (ndarray) – Samples for which to estimate pdf
min_samples (float) – Smallest value
max_samples (float) – Largest value
- Returns:
Estimate for optimal kernel bandwidth
- Return type:
float
- estimate_cdf(output_data, support_points, bayesian)[source]#
Compute estimate of CDF based on provided sampling data.
- Parameters:
output_data (dict) – Dictionary with output data
support_points (ndarray) – Points where to evaluate cdf
bayesian (bool) – Compute confidence intervals etc.
- Returns:
Dictionary with cdf estimates
- Return type:
dict
- estimate_cov(output_data)[source]#
Estimate covariance based on standard unbiased estimator.
- Parameters:
output_data (dict) – Dictionary with output data
- Returns:
Unbiased covariance estimate
- Return type:
ndarray
- estimate_icdf(output_data, bayesian)[source]#
Compute estimate of inverse CDF based on provided sampling data.
- Parameters:
output_data (dict) – Dictionary with output data
bayesian (bool) – Compute confidence intervals etc.
- Returns:
Dictionary with icdf estimates
- Return type:
dict
- estimate_mean(output_data)[source]#
Estimate mean based on standard unbiased estimator.
- Parameters:
output_data (dict) – Dictionary with output data
- Returns:
Unbiased mean estimate
- Return type:
ndarray
- estimate_pdf(output_data, support_points, bayesian)[source]#
Compute estimate of PDF based on provided sampling data.
- Parameters:
output_data (dict) – Dictionary with output data
support_points (ndarray) – Points where to evaluate pdf
bayesian (bool) – Compute confidence intervals etc.
- Returns:
Dictionary with pdf estimates
- Return type:
dict
- estimate_result_interval(output_data)[source]#
Estimate interval of output data.
Estimate interval of output data and add small margins.
- Parameters:
output_data (dict) – Dictionary with output data
- Returns:
Output interval
- Return type:
list
- estimate_var(output_data)[source]#
Estimate variance based on standard unbiased estimator.
- Parameters:
output_data (dict) – Dictionary with output data
- Returns:
Unbiased variance estimate
- Return type:
ndarray
- perform_kde(samples, kernel_bandwidth, support_points)[source]#
Estimate pdf using kernel density estimation.
- Parameters:
samples (ndarray) – Samples for which to estimate pdf
kernel_bandwidth (float) – Kernel width to use in kde
support_points (ndarray) – Points where to evaluate pdf
- Returns:
PDF estimate at support points
- Return type:
ndarray
- process_outputs(output_data, output_description, input_data=None)[source]#
Process output from QUEENS models.
- Parameters:
output_data (dict) – Dictionary containing model output
output_description (dict) – Dictionary describing desired output quantities
input_data (ndarray | None) – Array containing model input
- Returns:
Dictionary with processed results
- Return type:
dict
queens.utils.remote_build module#
Utils to build queens on remote resource.
queens.utils.remote_operations module#
Module supplying functions to conduct operations on a remote resource.
- class RemoteConnection[source]#
Bases: Connection
Class wrapper around the Connection class of fabric.
- remote_python#
Path to Python with installed (editable) QUEENS (see remote_queens_repository)
- remote_queens_repository#
Path to the QUEENS source code on the remote host
- __init__(host, remote_python, remote_queens_repository, user=None, gateway=None)[source]#
Initialize RemoteConnection object.
- Parameters:
host (str) – address of remote host
remote_python (str | Path) – Path to Python with installed (editable) QUEENS (see remote_queens_repository)
remote_queens_repository (str | Path) – Path to the QUEENS source code on the remote host
user (str | None) – Username on remote machine
gateway (dict | Connection | None) – An object to use as a proxy or gateway for this connection. See docs of Fabric’s Connection object for details.
- build_remote_environment(package_manager='mamba')[source]#
Build remote QUEENS environment.
- Parameters:
package_manager (str) – Package manager used for the creation of the environment (“mamba” or “conda”)
- Return type:
None
- copy_to_remote(source, destination, verbose=True, exclude=None, filters=None)[source]#
Copy files or folders to remote.
- Parameters:
source (str | Path | Sequence) – Paths to copy
destination (Path | str) – Destination relative to host
verbose (bool) – True for verbose
exclude (str | Sequence | None) – Options to exclude
filters (str | None) – Filters for rsync
- Return type:
None
- create_remote_directory(remote_directory)[source]#
Make a directory (including parents) on the remote host.
- Parameters:
remote_directory (str | Path) – Path of the directory that will be created
- Return type:
None
- open_port_forwarding(local_port=None, remote_port=None)[source]#
Open port forwarding.
- Parameters:
local_port (int | None) – Free local port
remote_port (int | None) – Free remote port
- Returns:
Used local port
Used remote port
- Return type:
tuple[int, Any]
- run_function(func, *func_args, wait=True, **func_kwargs)[source]#
Run a python function remotely using an ssh connection.
- Parameters:
func (Callable) – Function that is executed
func_args (Any) – Additional arguments for the functools.partial function
wait (bool) – Flag to decide whether to wait for result of function
func_kwargs (Any) – Additional keyword arguments for the functools.partial function
- Returns:
Return value of function
- Return type:
Any
- start_cluster(workload_manager, dask_cluster_kwargs, dask_cluster_adapt_kwargs, experiment_dir)[source]#
Start a Dask Cluster remotely using an ssh connection.
- Parameters:
workload_manager (str) – Workload manager (“pbs” or “slurm”) on cluster
dask_cluster_kwargs (dict) – Collection of keyword arguments to be forwarded to DASK Cluster
dask_cluster_adapt_kwargs (dict) – Collection of keyword arguments to be forwarded to DASK Cluster adapt method
experiment_dir (str) – Directory holding all data of QUEENS experiment on remote
- Returns:
Return value of function
- Return type:
tuple[Any, Any]
queens.utils.rsync module#
Rsync utils.
- assemble_rsync_command(source, destination, archive=False, exclude=None, filters=None, verbose=True, rsh=None, host=None, rsync_options=None)[source]#
Assemble rsync command.
- Parameters:
source (str | Path | Sequence) – Paths to copy
destination (Path | str) – Destination relative to host
archive (bool) – Use the archive option
exclude (str | Sequence | None) – Options to exclude
filters (str | None) – Filters for rsync
verbose (bool) – True for verbose
rsh (str | None) – Remote ssh command
host (str | None) – Host to which to copy the files
rsync_options (Sequence | None) – Additional rsync options
- Returns:
Command to run rsync
- Return type:
str
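A minimal usage sketch; the assembled string can be inspected before running it:

```python
from queens.utils.rsync import assemble_rsync_command

command = assemble_rsync_command(
    source="results/", destination="backup/", archive=True, exclude="*.tmp"
)
print(command)  # e.g. an "rsync ..." command string with the chosen options
```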
- rsync(source, destination, archive=True, exclude=None, filters=None, verbose=True, rsh=None, host=None, rsync_options=None)[source]#
Run rsync command.
- Parameters:
source (str | Path | Sequence) – Paths to copy
destination (str | Path) – Destination relative to host
archive (bool) – Use the archive option
exclude (str | Sequence | None) – Options to exclude
filters (str | None) – Filters for rsync
verbose (bool) – True for verbose
rsh (str | None) – Remote ssh command
host (str | None) – Host to which to copy the files
rsync_options (Sequence | None) – Additional rsync options
- Return type:
None
queens.utils.run_subprocess module#
Wrapped functions of subprocess stdlib module.
- run_subprocess(command, raise_error_on_subprocess_failure=True, additional_error_message=None, allowed_errors=None)[source]#
Run a system command outside of the Python script.
- Parameters:
command (str) – Command that will be run in subprocess
raise_error_on_subprocess_failure (bool) – Whether to raise an error (instead of warning) on subprocess failure; defaults to True
additional_error_message (str | None) – Additional error message to be displayed
allowed_errors (list[str] | None) – List of strings to be removed from the error message
- Returns:
Code for success of subprocess
Unique process ID that was assigned to the subprocess on the computing machine
Standard output content
Standard error content
- Return type:
tuple[int, int, str, str]
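A minimal usage sketch:

```python
from queens.utils.run_subprocess import run_subprocess

return_code, process_id, stdout, stderr = run_subprocess("echo hello")
print(return_code, stdout.strip())
```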
queens.utils.scaling module#
Utils for data scaling.
- class IdentityScaler[source]#
Bases: Scaler
The identity scaler.
- fit(x_mat)[source]#
Fit/calculate the scaling based on the input samples.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Return type:
None
- inverse_transform_grad_mean(grad_mean, *_args)[source]#
Conduct the inverse scaling of the mean gradient.
- Parameters:
grad_mean (ndarray) – Gradient of the transformed mean function
_args (Any)
- Returns:
Inversely transformed gradient of the mean function
- Return type:
ndarray
- inverse_transform_grad_var(grad_var, *_args)[source]#
Conduct the inverse scaling of the variance gradient.
- Parameters:
grad_var (ndarray) – Gradient of the transformed variance function
_args (Any)
- Returns:
Inversely transformed gradient of the variance function
- Return type:
ndarray
- inverse_transform_mean(x_mat)[source]#
Conduct the inverse scaling transformation on the data matrix.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Returns:
Transformed data-array
- Return type:
ndarray
- class Scaler[source]#
Bases: object
Base class for general scaling classes.
The purpose of these classes is the scaling of data.
- abstract fit(x_mat)[source]#
Fit/calculate the scaling based on the input samples.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Return type:
None
- abstract inverse_transform_mean(x_mat)[source]#
Conduct the inverse transformation for the mean.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Return type:
ndarray
- class StandardScaler[source]#
Bases: Scaler
Scaler for standardization of data.
In case a stochastic process is trained on the scaled data, inverse rescaling is implemented to recover the correct mean and standard deviation prediction for the posterior process.
- mean#
Mean-values of the data-matrix (column-wise).
- standard_deviation#
Standard deviation of the data-matrix (per column).
- fit(x_mat)[source]#
Fit/calculate the scaling based on the input samples.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Return type:
None
- inverse_transform_grad_mean(grad_mean, standard_deviation_input)[source]#
Conduct the inverse scaling of the mean gradient.
- Parameters:
grad_mean (ndarray) – Gradient of the transformed mean function
standard_deviation_input (float) – Standard deviation of the input data
- Returns:
Inversely transformed gradient of the mean function
- Return type:
ndarray
- inverse_transform_grad_var(grad_var, var, trans_var, input_standard_deviation)[source]#
Conduct the inverse scaling of the variance gradient.
- Parameters:
grad_var (ndarray) – Gradient of the transformed variance
var (ndarray) – Variance of the untransformed data
trans_var (ndarray) – Variance of the transformed data
input_standard_deviation (float) – Standard deviation of the input data
- Returns:
Inversely transformed gradient of the variance function
- Return type:
ndarray
- inverse_transform_mean(x_mat)[source]#
Conduct the inverse scaling transformation on the data matrix.
- Parameters:
x_mat (ndarray) – Data matrix that should be standardized
- Returns:
Transformed data-array
- Return type:
ndarray
queens.utils.sequential_monte_carlo module#
Collection of utility functions and classes for SMC algorithms.
- class StaticStateSpaceModel[source]#
Bases: StaticModel
Model needed for the particles library implementation of SMC.
- likelihood_model#
Log-likelihood function.
- __init__(likelihood_model, data=None, prior=None)[source]#
Initialize Static State Space model.
- Parameters:
likelihood_model (Callable) – Model for the log-likelihood function.
data (None) – Optional data to define state space model.
prior (StructDist | None) – Model for the prior distribution.
- Return type:
None
- loglik(theta, t=None)[source]#
Log-likelihood function for particles SMC implementation.
- Parameters:
theta (ndarray) – Samples at which to evaluate the likelihood
t (None) – Time (if set to None, the full log-likelihood is returned)
- Returns:
The log likelihood
- Return type:
ndarray
- logpyt(theta, t)[source]#
Log-likelihood of Y_t, given parameter and previous datapoints.
- Parameters:
theta (Any) – theta[‘par’] is a ndarray containing the N values for parameter par
t (Any) – Time
- Return type:
None
- numpy_to_particles_array(samples)[source]#
Convert numpy arrays to particles objects.
The particles library uses np.ndarrays with homemade variable dtypes. This method converts it back to the particles library type.
- Parameters:
samples (ndarray) – Samples
- Returns:
*Particle* variables object
- Return type:
ndarray
- particles_array_to_numpy(theta)[source]#
Convert particles objects to numpy arrays.
The particles library uses np.ndarrays with homemade variable dtypes. We need to convert this into numpy arrays to work with queens.
- Parameters:
theta (ndarray) – Particle variables object
- Returns:
Numpy array of the particles
- Return type:
ndarray
- calc_ess(weights)[source]#
Calculate the Effective Sample Size (ESS) from the given weights.
The Effective Sample Size (ESS) is a measure used to assess the quality of a set of weights by indicating how many independent samples would be required to achieve the same level of information as the current weighted samples. This is computed using the exp-log trick to improve numerical stability.
- Parameters:
weights (ndarray) – An array of weights, typically representing the importance weights of samples in a weighted sampling scheme.
- Returns:
The Effective Sample Size (ESS)
- Return type:
generic
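A sketch of the common ESS definition \(ESS = (\sum_i w_i)^2 / \sum_i w_i^2\), evaluated with the exp-log trick; that calc_ess uses exactly this definition is an assumption, since the reference does not spell out the formula:

```python
import numpy as np
from scipy.special import logsumexp

def calc_ess_sketch(weights):
    # ESS = (sum w)^2 / sum(w^2), computed in log space for stability.
    log_weights = np.log(weights)
    return np.exp(2.0 * logsumexp(log_weights) - logsumexp(2.0 * log_weights))

print(calc_ess_sketch(np.array([0.25, 0.25, 0.25, 0.25])))  # 4.0 for equal weights
```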
- temper_factory(temper_type)[source]#
Return the appropriate tempering function based on the specified type.
The tempering function can be used for transitioning between different log-probability density functions in various probabilistic models.
- Parameters:
temper_type (Literal['bayes', 'generic']) –
Type of the tempering function to return. Valid options are:
bayes: Returns the Bayes tempering function.
generic: Returns the generic tempering function.
- Returns:
The corresponding tempering function based on `temper_type`.
- Raises:
ValueError – If temper_type is not one of the valid options (“bayes”, “generic”).
- Return type:
Callable
- temper_logpdf_bayes(log_prior, log_like, tempering_parameter=1.0)[source]#
Bayesian tempering function.
It phases from the prior to the posterior = like * prior. Special cases are:
- tempering parameter = 0.0:
We interpret this as “disregard contribution of the likelihood”. Therefore, return just log_prior.
- log_prior or log_like = +inf:
Prohibit this case. The reasoning is that (+inf + -inf) is ambiguous. We know that -inf is likely to occur, e.g. in uniform priors. On the other hand, +inf is rather unlikely to be a reasonable value. Therefore, we chose to exclude it here.
- Parameters:
log_prior (ndarray) – Array containing the values of the log-prior distribution at sample points
log_like (ndarray) – Array containing the values of the log-likelihood at sample points
tempering_parameter (float) – Tempering parameter for resampling
- Return type:
ndarray
- temper_logpdf_generic(logpdf0, logpdf1, tempering_parameter=1.0)[source]#
Perform generic tempering between two log-probability density functions.
This function performs a linear interpolation between two log-probability density functions based on a tempering parameter. The tempering parameter determines the weight given to each log-probability density function in the transition from the initial distribution (logpdf0) to the goal distribution (logpdf1).
The function handles the following scenarios:
- tempering parameter = 0.0:
We interpret this as “disregard contribution of the goal pdf”. Therefore, return logpdf0.
- tempering parameter = 1.0:
We interpret this as “we are fully transitioned.” Therefore, ignore the contribution of the initial distribution. Therefore, return logpdf1.
- logpdf0 or logpdf1 = +inf:
Prohibit this case. The reasoning is that (+inf + -inf) is ambiguous. We know that -inf is likely to occur, e.g., in uniform distributions. On the other hand, +inf is rather unlikely to be a reasonable value. Therefore, we chose to exclude it here.
- Parameters:
logpdf0 (ndarray) – Logarithm of the probability density function of the initial distribution.
logpdf1 (ndarray) – Logarithm of the probability density function of the goal distribution.
tempering_parameter (float) – Parameter between 0 and 1 that controls the interpolation between logpdf0 and logpdf1. A value of 0.0 corresponds to logpdf0, while a value of 1.0 corresponds to logpdf1.
- Returns:
The tempered log-probability density function based on the `tempering_parameter`.
- Raises:
ValueError – If either logpdf0 or logpdf1 is positive infinity (+inf).
- Return type:
ndarray
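The core interpolation described above, as a short sketch (the library function additionally enforces the edge cases listed):

```python
import numpy as np

def temper_generic_sketch(logpdf0, logpdf1, tempering_parameter):
    # Linear interpolation between the initial and goal log-densities.
    return (1.0 - tempering_parameter) * np.asarray(logpdf0) + tempering_parameter * np.asarray(logpdf1)

print(temper_generic_sketch(-1.0, -3.0, 0.5))  # -2.0
```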
queens.utils.sobol_sequence module#
Collection of utility functions and classes for Sobol sequences.
- sample_sobol_sequence(dimension, number_of_samples, parameters, randomize=False, seed=None)[source]#
Generate samples from Sobol sequence.
- Parameters:
dimension (int) – Dimensionality of the sequence. Max dimensionality is 21201.
number_of_samples (int) – Number of samples to generate in the parameter space
parameters (Parameters) – Parameters object defining the true distribution of the samples
randomize (bool) – If True, use LMS+shift scrambling, i.e. randomize the sequence. Otherwise, no scrambling is done.
seed (int | Generator | None) – If seed is an int or None, a new numpy.random.Generator is created using np.random.default_rng(seed). If seed is already a Generator instance, then the provided instance is used.
- Returns:
Sobol sequence quasi Monte Carlo samples for the parameter distribution
- Return type:
ndarray
queens.utils.start_dask_cluster module#
Main module to start a dask jobqueue cluster.
queens.utils.valid_options module#
Helper functions for valid options and switch analogy.
- check_if_valid_options(valid_options, desired_options, error_message='')[source]#
Check if the desired option(s) is/are in valid_options.
- Parameters:
valid_options (list | dict) – List of valid option keys or dict with valid options as keys
desired_options (str | dict[str, int] | list[str]) – Key(s) of desired options
error_message (str) – Error message in case the desired option can not be found
- Raises:
InvalidOptionError – If any of the desired options is not among the valid options
- Return type:
None
- get_option(options_dict, desired_option, error_message='')[source]#
Get option desired_option from options_dict.
The options_dict consists of the keys and their values; note that the values can also be functions. In case the option is not found, an error is raised.
- Parameters:
options_dict (dict[str, Any]) – Dictionary with valid options and their value
desired_option (str) – Desired method key
error_message (str) – Custom error message to be used if the desired_option is not found.
- Returns:
Value of the desired option
- Return type:
Any
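A minimal usage sketch; note that the stored values may be callables:

```python
from queens.utils.valid_options import get_option

norms = {
    "l1": lambda v: sum(abs(x) for x in v),
    "l2": lambda v: sum(x**2 for x in v) ** 0.5,
}
norm = get_option(norms, "l2", error_message="Unknown norm type.")
print(norm([3.0, 4.0]))  # 5.0
```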