{ "cells": [ { "cell_type": "markdown", "id": "0", "metadata": {}, "source": [ "# 1. Parameter Study Using the Grid Iterator\n", "## Introduction to QUEENS\n", "QUEENS is a versatile software framework offering a wide range of cutting-edge algorithms for deterministic and probabilistic analyses, including parameter studies, sensitivity analysis, surrogate modeling, uncertainty quantification, and Bayesian inverse analysis. Built with a modular architecture, QUEENS enables efficient parallel queries of large-scale computational models, robust handling of data and resources, seamless switching between analysis types, and smooth scalability from laptops to high-performance computing clusters. To learn more, visit the QUEENS [website](https://www.queens-py.org), explore the source code on [GitHub](https://github.com/queens-py), or check out the [documentation](https://queens-py.github.io/queens).\n", "\n", "### Overview\n", "Over the course of the four tutorials you will learn\n", "* how to use QUEENS: the general structure of a QUEENS script, a short Python script used to define and run experiments with QUEENS\n", "* how to conduct various analysis types with QUEENS, including parameter studies, optimization, and uncertainty quantification\n", "* how to run analyses with different models from analytical test functions to advanced numerical solvers like the 4C multiphysics framework" ] }, { "cell_type": "markdown", "id": "1", "metadata": {}, "source": [ "## Tutorial: analysis of the Rosenbrock function\n", "\n", "In this tutorial, you will learn how to translate your planned multi-query analysis into a QUEENS experiment in the form of a Python script.\n", "The structure of these scripts is always similar, where the common feature of a multi-query analysis is that you want to evaluate a single computational model\n", "at many different input locations.\n", "Thus, the main ingredients you need to define for your experiment are:\n", "* the model\n", "* the analysis method (defines the input points where to evaluate the model)\n", "* the compute resource to evaluate the model\n", "\n", "### Content\n", "The example analyses we want to conduct in this tutorial is the following:\n", "1. visualise the Rosenbrock function.\n", "2. find the minimum of the Rosenbrock function.\n", "\n", "___\n", "#### **Task:** Run the following code cell.\n", "\n", "You don't need to understand the code but you should run it once in order to setup the logging feature of QUEENS correctly for Jupyter notebooks." 
] }, { "cell_type": "code", "execution_count": null, "id": "2", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [], "source": [ "# Suppress excessive logging output\n", "import logging\n", "import os\n", "import numpy as np\n", "import plotly.graph_objects as go\n", "from plotly.subplots import make_subplots\n", "\n", "# logging.basicConfig(level=logging.INFO, format='%(message)s')\n", "os.environ[\"DASK_DISTRIBUTED__LOGGING__DISTRIBUTED\"] = \"CRITICAL\"\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n", "logging.getLogger(\"distributed\").setLevel(level=logging.CRITICAL)\n", "\n", "\n", "def visualize_grid_and_surface(X1, X2, Z, min_point=None):\n", " \"\"\"Visualize grid and surface with optional minimum point.\n", "\n", " Parameters\n", " _________\n", " X1, X2 : 2D arrays\n", " Meshgrid arrays for x and y.\n", " Z : 2D array\n", " Function values f(X1, X2).\n", " min_point : tuple (x, y, z) or None\n", " If provided, highlights this point with a big red cross\n", " in both plots.\n", " \"\"\"\n", " fig = make_subplots(\n", " rows=1,\n", " cols=2,\n", " specs=[[{\"type\": \"scene\"}, {\"type\": \"scene\"}]],\n", " subplot_titles=(\"3D Scatter colored by z\", \"Surface + grid points (z=0 plane)\"),\n", " )\n", "\n", " # ___ Left: 3D scatter (share colorscale, no colorbar) ___\n", " fig.add_trace(\n", " go.Scatter3d(\n", " x=X1.flatten(),\n", " y=X2.flatten(),\n", " z=Z.flatten(),\n", " mode=\"markers\",\n", " marker=dict(\n", " size=3,\n", " color=Z.flatten(),\n", " colorscale=\"Viridis\",\n", " showscale=False, # no duplicate colorbar\n", " ),\n", " showlegend=False,\n", " ),\n", " row=1,\n", " col=1,\n", " )\n", "\n", " # ___ Right: Surface (with single colorbar) ___\n", " fig.add_trace(\n", " go.Surface(\n", " x=X1,\n", " y=X2,\n", " z=Z,\n", " colorscale=\"Viridis\",\n", " colorbar=dict(\n", " title=\"f(x1,x2)\",\n", " x=1.05, # push colorbar outside plot area\n", " len=0.75, # shorten to avoid overlapping legend\n", " ),\n", " ),\n", " row=1,\n", " col=2,\n", " )\n", "\n", " # ___ Right: black grid points on x-y plane (z=0) ___\n", " fig.add_trace(\n", " go.Scatter3d(\n", " x=X1.flatten(),\n", " y=X2.flatten(),\n", " z=np.zeros_like(Z).flatten(),\n", " mode=\"markers\",\n", " marker=dict(size=2, color=\"black\"),\n", " showlegend=False,\n", " ),\n", " row=1,\n", " col=2,\n", " )\n", "\n", " # ___ Highlight user-provided minimum point ___\n", " if min_point is not None:\n", " min_x, min_y, min_z = min_point\n", " for col in [1, 2]:\n", " fig.add_trace(\n", " go.Scatter3d(\n", " x=[min_x],\n", " y=[min_y],\n", " z=[min_z],\n", " mode=\"markers\",\n", " marker=dict(size=5, color=\"red\"),\n", " name=\"Minimum\",\n", " ),\n", " row=1,\n", " col=col,\n", " )\n", "\n", " # Layout / axes\n", " fig.update_layout(\n", " title=\"3D Scatter vs. 
Surface with Grid Projection\",\n", "        height=500,\n", "        legend=dict(\n", "            x=0.95, y=0.9, bgcolor=\"rgba(255,255,255,0.6)\"  # move legend away from colorbar\n", "        ),\n", "        scene=dict(xaxis_title=\"x1\", yaxis_title=\"x2\", zaxis_title=\"f(x1,x2)\", aspectmode=\"cube\"),\n", "        scene2=dict(xaxis_title=\"x1\", yaxis_title=\"x2\", zaxis_title=\"f(x1,x2)\", aspectmode=\"cube\"),\n", "    )\n", "\n", "    fig.show()" ] }, { "cell_type": "markdown", "id": "3", "metadata": {}, "source": [ "## Model Setup\n", "\n", "We begin by setting up our model.\n", "Here, we want to implement the (two-dimensional) Rosenbrock function.\n", "\n", "The **Rosenbrock function** is a classic test problem in optimization, defined as\n", "\n", "$\n", "f(x_1, x_2) = (a - x_1)^2 + b \, (x_2 - x_1^2)^2 ,\n", "$\n", "\n", "where its parameters are typically chosen as\n", "$\n", "a = 1, \quad b = 100 .\n", "$\n", "\n", "As we will see, this function is non-convex and features a narrow, curved valley, making it a challenging benchmark for optimization algorithms.\n", "\n", "### **Task:** Implement the Rosenbrock function as a Python function that takes two arguments $x_1$ and $x_2$." ] }, { "cell_type": "code", "execution_count": null, "id": "4", "metadata": {}, "outputs": [], "source": [ "def rosenbrock(x1, x2):\n", "    a = 1\n", "    b = 100\n", "    f = (a - x1) ** 2 + b * (x2 - x1 ** 2) ** 2\n", "    return f" ] }, { "cell_type": "markdown", "id": "5", "metadata": {}, "source": [ "## Visualising is a multi-query analysis task\n", "\n", "To visualise the Rosenbrock function as a 3D surface, we need to evaluate it on a grid (or mesh) of points.\n", "This is a *multi-query* scenario because creating a surface plot requires computing the function value at many different input locations across the domain.\n", "With such an analytical model, this is straightforward—and you may already be familiar with doing this.\n", "Nevertheless, let’s break it down step by step to see clearly how the process works.\n", "\n", "### Generate mesh points with NumPy\n", "\n", "To generate a regular mesh in NumPy,\n", " you can use `numpy.linspace` to create evenly spaced points along each axis and then pass these arrays to `numpy.meshgrid`,\n", " which combines them into two 2D arrays representing all coordinate pairs.\n", "This allows you to build a full rectangular grid of points, which can be used to evaluate functions across the domain.\n", "\n", "### **Task:** Create evenly spaced points\n", "Create $10$ points per parameter on the following intervals: $x_1 \in [-2.0, 2.0]$ and $x_2 \in [-3.0, 3.0]$.\n", "\n", "> For example, `np.linspace(-2, 2, 5)` gives five points between −2 and 2." ] }, { "cell_type": "code", "execution_count": null, "id": "6", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Create grid\n", "x1 = np.linspace(-2.0, 2.0, 10)\n", "x2 = np.linspace(-3.0, 3.0, 10)\n", "X1, X2 = np.meshgrid(x1, x2)" ] }, { "cell_type": "markdown", "id": "7", "metadata": {}, "source": [ "## Evaluate the Model\n", "\n", "So far, we have:\n", "1. Defined our model (the Rosenbrock function).\n", "2. Built a grid of input points $(x_1, x_2)$ using `numpy.linspace` and `numpy.meshgrid`.\n",
"\n", "The next step is to evaluate the model at every point of the grid.\n", "This means: for each coordinate pair $(x_1, x_2)$, we want to compute the function value $f(x_1, x_2)$.\n", "\n", "For our function, we can directly exploit the **vectorization feature of NumPy** and run the following code.\n", "In short: this line evaluates the Rosenbrock function at all grid points in one shot, turning our mesh of input coordinates into a mesh of output values.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8", "metadata": {}, "outputs": [], "source": [ "# Evaluate function on grid\n", "Z = rosenbrock(X1, X2)" ] }, { "cell_type": "markdown", "id": "9", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Visualise the Rosenbrock function\n", "The next code cell calls the `visualize_grid_and_surface` helper, which was defined in the setup cell at the top of this notebook, to visualise the grid points as well as the function values of the Rosenbrock function.\n", "We will reuse this visualisation function several times, but you don't need to go through its lines in detail.\n", "\n", "Running the next cell, you will see:\n", "* On the **left**, a scatter plot of all grid points in 3D space, colored by their function values.\n", "* On the **right**, the characteristic *banana-shaped* valley of the Rosenbrock function shown as a smooth surface, with the grid points projected onto the base plane for reference." ] }, { "cell_type": "code", "execution_count": null, "id": "10", "metadata": {}, "outputs": [], "source": [ "visualize_grid_and_surface(X1, X2, Z)" ] }, { "cell_type": "markdown", "id": "11", "metadata": {}, "source": [ "## Congratulations, you have finished your first multi-query analysis!\n", "\n", "The direct NumPy approach works perfectly for simple analytical functions, but it quickly becomes impractical for complex computational models—such as those involving Finite-Element solvers—where each evaluation is computationally expensive. In these cases, vectorized evaluation is no longer possible. Instead, we view the task as an embarrassingly parallel problem: each input point can be evaluated independently of the others.\n", "\n", "This is exactly where QUEENS comes into play. It provides the parallelization and workflow management needed to move from simple Python functions to large-scale simulations. Under the hood, QUEENS applies the same core principle you just saw with NumPy—evaluating the model at many input points—but in a way that is scalable, robust, and designed for demanding computational models."
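, "\n", "To make the *embarrassingly parallel* view concrete, here is a minimal sketch in plain Python (no QUEENS involved yet) that evaluates the same grid point by point instead of relying on vectorization. It reuses `X1`, `X2`, `Z`, and `rosenbrock` from the cells above; the name `Z_pointwise` is introduced here purely for illustration. This per-point loop is exactly the kind of independent work that QUEENS schedules and, if desired, parallelizes for you.\n", "```python\n", "# Point-by-point evaluation of the grid (the work that vectorization hides from us):\n", "points = np.column_stack([X1.ravel(), X2.ravel()])  # one row per input point\n", "outputs = np.array([rosenbrock(x1_i, x2_i) for x1_i, x2_i in points])\n", "Z_pointwise = outputs.reshape(X1.shape)  # same values as the vectorized Z above\n", "assert np.allclose(Z_pointwise, Z)\n", "```"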
] }, { "cell_type": "markdown", "id": "12", "metadata": {}, "source": [ "## Visualise the Rosenbrock function with QUEENS\n", "We will now redo the multi-query task of visualising the Rosenbrock function using **QUEENS**.\n", "This helps you learn the structure of a QUEENS experiment on a very simple setup.\n", "The structure will remain the same later on, e.g., when we find the minimum of the Rosenbrock function with QUEENS.\n", "For a simple analytic model, this may feel like overhead,\n", "but it pays off as soon as you switch to another analysis method with the same model (a key feature of QUEENS) or when the model becomes more complex and computationally demanding.\n", "\n", "Note that the approach shown here works with any Python function that encodes your model of interest.\n", "If you already have functions from your research or application, it’s straightforward to make them work with QUEENS: write a thin wrapper that exposes exactly the input parameters you want to vary. For example, you might keep a general Rosenbrock implementation with parameters `a` and `b`, and provide a small wrapper that fixes `a` and `b` while varying only `x1` and `x2`. This pattern generalizes to more sophisticated models, enabling you to reuse the same QUEENS experiment structure across different analyses.\n", "\n", "### Global settings for a QUEENS experiment: name and output directory\n", "\n", "Every QUEENS experiment starts by defining the **name of the experiment** (which automatically created files and directories will be named after) and **output directory** (where the QUEENS output file will be placed).\n", "\n", "#### **Task**: define a variable `experiment_name` to label your QUEENS run.\n", "This name will be used automatically for generated output files and folders, so choosing something sensible makes it much easier to identify your results later.\n", "\n", "> Important general note: if another experiment with the same name already exists, existing data will potentially be overwritten. 
To avoid this, you can change the experiment name.\n", "\n", "Your experiment name should follow a few simple rules (similar to naming files or directories in Linux):\n", "* Use a short, descriptive string that helps you remember what the experiment did.\n", "* Avoid spaces — use underscores _ instead.\n", "* Do not begin with a number.\n", "* Do not use special characters.\n", "* Keep it concise but meaningful.\n", "\n", "##### Examples\n", "\n", "##### Good names\n", "* grid_iterator_rosenbrock\n", "* rosenbrock_grid\n", "* optimize_rosenbrock\n", "* grid_test_x1x2\n", "* sensitivity_analysis_demo\n", "\n", "##### Bad names\n", "* 1st_experiment → starts with a number\n", "* grid iterator rosenbrock → contains spaces\n", "* this_is_a_very_long_and_confusing_name_for_experiment → too long\n", "* test!rosenbrock → contains special characters\n", "\n", "#### Output directory\n", "You don’t need to change the `output_dir` variable here — by default, we will write results into the current directory of this notebook.\n", "\n", "If you do decide to set a different path, make sure the directory already exists, as QUEENS requires the output directory to be created beforehand.\n", "\n", "#### Global Settings\n", "In the code block, we also create a `GlobalSettings` object, which gathers the general information about the QUEENS experiment.\n", "This ensures that the correct values (such as experiment name, output directory, and debug settings) are consistently used throughout the workflow.\n", "Later on, `GlobalSettings` will also act as a Python context manager, making sure that everything is properly set up and cleaned up when running the experiment.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "13", "metadata": {}, "outputs": [], "source": [ "from queens.global_settings import GlobalSettings\n", "\n", "# Define name of QUEENS experiment and directory for output\n", "experiment_name = \"grid_iterator_rosenbrock\"\n", "output_dir = \"./\"\n", "\n", "# Global settings\n", "global_settings = GlobalSettings(\n", " experiment_name=experiment_name, output_dir=output_dir, debug=False\n", ")" ] }, { "cell_type": "markdown", "id": "14", "metadata": {}, "source": [ "## QUEENS Model Setup\n", "\n", "We can reuse our implementation of the Rosenbrock function from above.\n", "However, QUEENS needs additional information about the input parameters $x_1, x_2$.\n", "In particular, QUEENS must know:\n", "\n", "- **The name** of the parameter (e.g., `x1`, `x2`)\n", "- **The type** of the parameter (deterministic variable, random variable, or random field)\n", "- **Its dimensionality** (scalar or vector-valued)\n", "- **Its distribution** (e.g., uniform, normal) and associated properties (bounds, mean, variance, etc.)\n", "- ...\n", "\n", "Only with this information can QUEENS properly treat the parameters, generate samples, and propagate them through the Rosenbrock function.\n", "\n", "### Model Parameters\n", "\n", "Again, we restrict the two parameters to certain regions:\n", "\n", "- $x_1 \\in [-2.0, 2.0]$\n", "- $x_2 \\in [-3.0, 3.0]$\n", "\n", "Such restrictions can be expressed in **QUEENS** by using the `Uniform` parameter type, which has a lower and an upper bound.\n", "\n", "#### **Task**: Create a Uniform object for the input $ x_2 \\in [-3.0, 3.0]$\n", "> Hint: the first parameter is already defined, you only need to add the second one.\n", "\n", "Finally, all parameter definitions are collected into a `Parameters` object. 
This container keeps track of all variable names and their properties.\n", "Most importantly, it allows creating samples from the input space according to its properties.\n", "\n", "#### **Task**: Add the second input $x_2$ as a keyword argument to the `Parameters` object.\n", "\n", "The first parameter `x1` has already been defined for you.\n", "Now we add `x2` to the `Parameters` object.\n", "\n", "> Important: Make sure to use the correct variable name as the keyword, in this case `x2` (see the definition of the Rosenbrock Python function).\n", "> Specifically, if your function has the following signature:\n", "```python\n", "def f(x1, x2, my_parameter, ...):\n", "    ...\n", "```\n", "> then you must define the parameters object like this:\n", "```python\n", "Parameters(x1=..., x2=..., my_parameter=..., ...)\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "id": "15", "metadata": {}, "outputs": [], "source": [ "from queens.distributions import Uniform\n", "from queens.parameters import Parameters\n", "\n", "# Model parameters\n", "x1 = Uniform(lower_bound=-2.0, upper_bound=2.0)\n", "x2 = Uniform(lower_bound=-3.0, upper_bound=3.0)\n", "parameters = Parameters(x1=x1, x2=x2)" ] }, { "cell_type": "markdown", "id": "16", "metadata": {}, "source": [ "At this point, the `parameters` object contains both $x_1$ and $x_2$,\n", "each with their respective domains, and is ready to be used for building the QUEENS model.\n", "\n", "To finalize the QUEENS model, we still require several things:\n", "1. A `scheduler`. It requests the compute resource and manages the execution of your model on that resource.\n", "\n", "    For this tutorial, we choose a `Local` Dask scheduler with a single worker, so your model is evaluated on the local machine. With the `num_jobs` parameter, you can choose the number of parallel model evaluations.\n", "\n", "2. A `driver`. This object coordinates the evaluation of the forward model itself.\n", "\n", "    The `Function` driver can be used to evaluate any Python function with QUEENS.\n", "    Note that it takes both the `parameters` object as well as the callable Python function `rosenbrock` as inputs.\n", "    When setting up `parameters` and `function`, it is important that the argument names match.\n", "    In our case, `parameters` was created with the keyword arguments `x1` and `x2`, and the `rosenbrock` function expects exactly two arguments with the same names.\n", "    > Background: we are exploiting the fact that positional arguments can always be passed as keyword arguments in Python.\n", "\n", "3. And finally, a QUEENS `model`, which takes both a `driver` (model evaluation routine) and a `scheduler` (compute resource).\n", "\n", "    The `Simulation` model is the standard type of model in QUEENS.\n", "    The fact that it takes both a `scheduler` and a `driver` reflects that, in QUEENS, a model always combines the instructions for how to evaluate (`driver`) with the compute resources that actually execute and manage the evaluation (`scheduler`).\n", "\n", "### **Task**: Run the following code cell to define a QUEENS model for the subsequent analyses."
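, "\n", "> Side note (a small sketch of the wrapper pattern mentioned earlier): if your own model function takes additional arguments that you do not want to vary, a thin wrapper keeps the exposed argument names aligned with the keywords of the `Parameters` object. The helpers `rosenbrock_general` and `rosenbrock_wrapped` below are hypothetical and only illustrate the idea.\n", "```python\n", "def rosenbrock_general(x1, x2, a, b):\n", "    \"\"\"A more general Rosenbrock implementation with free parameters a and b.\"\"\"\n", "    return (a - x1) ** 2 + b * (x2 - x1 ** 2) ** 2\n", "\n", "\n", "def rosenbrock_wrapped(x1, x2):\n", "    \"\"\"Thin wrapper exposing only x1 and x2; a and b are fixed here.\"\"\"\n", "    return rosenbrock_general(x1, x2, a=1.0, b=100.0)\n", "\n", "\n", "# The wrapper could then be passed to the Function driver instead of `rosenbrock`:\n", "# driver = Function(parameters=parameters, function=rosenbrock_wrapped)\n", "```"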
] }, { "cell_type": "code", "execution_count": null, "id": "17", "metadata": {}, "outputs": [], "source": [ "from queens.drivers import Function\n", "from queens.models.simulation import Simulation\n", "from queens.schedulers import Local\n", "\n", "#### Model setup ####\n", "scheduler = Local(global_settings.experiment_name, num_jobs=1)\n", "driver = Function(parameters=parameters, function=rosenbrock)\n", "model = Simulation(scheduler, driver)" ] }, { "cell_type": "markdown", "id": "18", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "### Analysis method: Grid (visualizing Rosenbrock as a 3D surface)\n", "\n", "To visualize the Rosenbrock function, we will evaluate it on a **grid** of points inside the domain specified by our `Parameters` object.\n", "In **QUEENS**, an **iterator** is responsible for generating the input points for multi-query analyses.\n", "Here we will use the `Grid` iterator, which lays out points on a mesh over the ranges of the parameters.\n", "It can generate rectilinear grids on both linear and logarithmic scales. \n", "\n", "**Key idea:**\n", "- The **iterator** chooses *where* to evaluate the model.\n", "\n", "#### Defining the grid design for the `Grid` iterator\n", "\n", "The `Grid` iterator in QUEENS requires a **grid layout/resolution** for each parameter:\n", "- **`num_grid_points`**: how many grid points to generate in this dimension \n", "- **`axis_type`**: type of axis spacing (e.g., `\"lin\"` for linear, `\"log10\"` for logarithmic) \n", "- **`data_type`**: the parameter data type (e.g., `FLOAT`) \n", "\n", "For example:\n", "```python\n", "{\"x1\": {\"num_grid_points\": 5, \"axis_type\": \"lin\", \"data_type\": \"FLOAT\"}}\n", "```\n", "means: generate 5 grid points, linearly spaced, for a float-valued parameter x1.\n", "\n", "___\n", "#### **Task**: Extend the grid design to include x2\n", "We want to evaluate x2 at 10 points on a linear scale, also as a FLOAT." ] }, { "cell_type": "code", "execution_count": null, "id": "19", "metadata": {}, "outputs": [], "source": [ "grid_design = {\n", " \"x1\": {\"num_grid_points\": 10, \"axis_type\": \"lin\", \"data_type\": \"FLOAT\"},\n", " \"x2\": {\"num_grid_points\": 10, \"axis_type\": \"lin\", \"data_type\": \"FLOAT\"},\n", " }" ] }, { "cell_type": "markdown", "id": "20", "metadata": {}, "source": [ "#### Define the iterator object\n", "\n", "Every iterator in QUEENS requires some **common arguments** regardless of its type:\n", "\n", "- **`model`**: the QUEENS model to work with (here, the Rosenbrock model) \n", "- **`parameters`**: the `Parameters` object defining the variable input parameters \n", "- **`global_settings`**: general settings of the experiment, including the experiment name and output directory \n", "- **`result_description`**: whether to write result files and what details to include \n", "\n", "Each iterator type can also have **unique properties**. 
\n", "For the **`Grid` iterator**, the key unique property is the **`grid_design`** dictionary, which specifies the resolution and layout per parameter as described and defined above.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "21", "metadata": {}, "outputs": [], "source": [ "from queens.iterators.grid import Grid\n", "# The method of the analysis is defined by the iterator type:\n", "grid = Grid(\n", " grid_design=grid_design,\n", " model=model,\n", " parameters=parameters,\n", " global_settings=global_settings,\n", " result_description={\"write_results\": True},\n", ")" ] }, { "cell_type": "markdown", "id": "22", "metadata": { "tags": [ "nbsphinx-hide-output" ] }, "source": [ "### Running the experiment\n", "\n", "Now that we have fully defined our `Grid` iterator, we can finally **run the experiment**. \n", "In QUEENS, we always execute experiments using the **`run_iterator`** function, wrapped inside the **`global_settings`** context manager. \n", "This ensures that all outputs (results, plots, logs) are properly handled according to the experiment configuration.\n", "\n", "___\n", "\n", "#### **Task**: Run your QUEENS experiment\n", "\n", "> Note: You might have to restart the Jupyter kernel to rerun the experiment!" ] }, { "cell_type": "code", "execution_count": null, "id": "23", "metadata": {}, "outputs": [], "source": [ "from queens.main import run_iterator\n", "from queens.utils.io import load_result\n", "\n", "with global_settings:\n", " #### Analysis ####\n", " run_iterator(grid, global_settings=global_settings)" ] }, { "cell_type": "markdown", "id": "24", "metadata": {}, "source": [ "## QUEENS output\n", "\n", "If activated in the result description, QUEENS output is written to a file called `.pickle` in the directory defined by the argument `output_dir` of the global settings object: `/.pickle`.\n", "In the output folder, you also get a log file that contains the console output. It is called `.log`.\n", "\n", "In the case of the grid iterator, you get a dictionary including the raw input-output data and some statistics." ] }, { "cell_type": "code", "execution_count": null, "id": "25", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [], "source": [ "#### Load Results ####\n", "result_file = global_settings.result_file(\".pickle\")\n", "results = load_result(result_file)\n", "results" ] }, { "cell_type": "markdown", "id": "26", "metadata": {}, "source": [ "### Postprocessing\n", "\n", "You can now interact with these results as with any other Python object.\n", "For example, you can plot the results.\n", "\n", "Here, we use the same plotting function as before. 
"\n" ] }, { "cell_type": "code", "execution_count": null, "id": "27", "metadata": {}, "outputs": [], "source": [ "# Extract the raw input/output data and reshape it into the 2D meshgrid layout\n", "input_data = results[\"input_data\"]\n", "output_data = results[\"raw_output_data\"][\"result\"]\n", "\n", "num_x1 = grid_design[\"x1\"][\"num_grid_points\"]\n", "num_x2 = grid_design[\"x2\"][\"num_grid_points\"]\n", "\n", "X1_QUEENS = input_data[:, 0].reshape(num_x2, num_x1)\n", "X2_QUEENS = input_data[:, 1].reshape(num_x2, num_x1)\n", "Z_QUEENS = output_data.reshape(num_x2, num_x1)\n", "\n", "visualize_grid_and_surface(X1_QUEENS, X2_QUEENS, Z_QUEENS)" ] }, { "cell_type": "markdown", "id": "28", "metadata": {}, "source": [ "### Validating QUEENS results against manual mesh generation\n", "\n", "As you can see in the plot above, we obtain **visually indistinguishable** results. \n", "This happens because the **generation of grid points under the hood inside the `Grid` iterator** is implemented in the same way as in our **minimal NumPy example** (i.e., using `numpy.linspace` to generate linearly spaced points before using `numpy.meshgrid`).\n", "\n", "To check that the grid generated by QUEENS matches your own manually constructed mesh (using `numpy.linspace` / `numpy.meshgrid`), you can use `numpy.allclose`. \n", "\n", "This function verifies that two arrays are numerically close to each other, up to floating-point tolerances." ] }, { "cell_type": "code", "execution_count": null, "id": "29", "metadata": {}, "outputs": [], "source": [ "print(f\"X1 and X1_QUEENS are identical: {np.allclose(X1, X1_QUEENS)}\")\n", "print(f\"X2 and X2_QUEENS are identical: {np.allclose(X2, X2_QUEENS)}\")\n", "print(f\"Z and Z_QUEENS are identical: {np.allclose(Z, Z_QUEENS)}\")" ] }, { "cell_type": "markdown", "id": "30", "metadata": {}, "source": [ "Thus, both approaches (`np.meshgrid` vs. `Grid` iterator) yield the same grid coordinates.\n", "\n", "This confirms that the QUEENS `Grid` iterator behaves consistently with the standard NumPy approach, but integrates seamlessly into the QUEENS workflow with result tracking, logging, reproducibility, and generalisability.\n", "\n", "___\n", "### Generalisability of the QUEENS workflow\n", "\n", "One of the main design ideas of QUEENS is: **set up your model once, then reuse it**. \n", "You can then run different multi-query analyses with the same model setup, gradually increasing the analysis complexity.\n", "\n", "___\n", "\n", "#### Example\n", "\n", "- We already visualised the Rosenbrock function and saw its **banana-shaped valley**. \n", "- The valley looks flat in the plot, but the actual **minimum** is hidden inside it. \n", "- Finding this minimum is an **optimisation problem**. \n", "\n", "We can solve it in QUEENS by simply switching from a `Grid` iterator to an **`Optimization` iterator** — (almost) no need to change the model setup.\n", "The only thing we have to change in the model is the scheduler, since we need to request new compute resources; all other aspects of the model can stay the same.\n", "Since we are conducting a new experiment, we also introduce a new `GlobalSettings` object with a new experiment name.\n", "\n", "One of the most important parameters specific to the `Optimization` iterator is the initial guess."
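, "\n", "> For reference: with the standard parameters $a = 1$ and $b = 100$, the Rosenbrock function attains its global minimum at $(x_1, x_2) = (a, a^2) = (1, 1)$ with $f(1, 1) = 0$, since both squared terms vanish there. This gives you a quick sanity check to compare the optimizer output against:\n", "```python\n", "# Analytical minimum of the Rosenbrock function (a=1, b=100): f(1, 1) = 0\n", "print(rosenbrock(1.0, 1.0))  # expected output: 0\n", "```"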
] }, { "cell_type": "code", "execution_count": null, "id": "31", "metadata": {}, "outputs": [], "source": [ "from queens.iterators import Optimization\n", "\n", "global_settings_optimization = GlobalSettings(experiment_name=\"optimization_rosenbrock\", output_dir=\"./\")\n", "\n", "scheduler_optimization = Local(global_settings_optimization.experiment_name, num_jobs=1)\n", "model_optimization = Simulation(scheduler_optimization, driver)\n", "\n", "\n", "optimization = Optimization(\n", " algorithm=\"L-BFGS-B\",\n", " initial_guess=[-2.0, 3.0],\n", " bounds=[float(\"-inf\"), float(\"inf\")],\n", " max_feval=1e4,\n", " objective_and_jacobian=True,\n", " model=model_optimization,\n", " parameters=parameters,\n", " global_settings=global_settings_optimization,\n", " result_description={\"write_results\": True},\n", ")\n", "\n", "with global_settings_optimization:\n", " # Actual analysis\n", " run_iterator(optimization, global_settings=global_settings_optimization)\n", "\n", " # Load results\n", " results = load_result(global_settings_optimization.result_file(\".pickle\"))\n", "\n", " optimal_x = results.x\n", " optimal_fun = results.fun\n" ] }, { "cell_type": "markdown", "id": "32", "metadata": {}, "source": [ "You have successfully identified the minimum of the Rosenbrock function with a gradient based optimisation algorithm:" ] }, { "cell_type": "code", "execution_count": null, "id": "33", "metadata": {}, "outputs": [], "source": [ "print(f\"The minimum of the Rosenbrock function {optimal_fun} is at {optimal_x}.\")" ] }, { "cell_type": "markdown", "id": "34", "metadata": {}, "source": [ "Let's visualise the minimum together with the surface plot." ] }, { "cell_type": "code", "execution_count": null, "id": "35", "metadata": {}, "outputs": [], "source": [ "visualize_grid_and_surface(X1, X2, Z, min_point=(optimal_x[0], optimal_x[0], optimal_fun))" ] }, { "cell_type": "markdown", "id": "36", "metadata": {}, "source": [ "## Optional tasks: time for some individualisation\n", "1. The resolution of the grid is relatively coarse.\n", "Increase the resolution by increasing the `num_grid_points` in the grid design and repeat the QUEENS experiment.\n", "\n", "1. You can adjust the bounds of the grid per variable via the keywords `lower_bound` and `upper_bound` of the Uniform parameter objects. \n", "Go ahead and see what happens if you change the bound.\n", "\n", "1. We are executing the study on our local machine, so we are using the\n", "`Local` scheduler. Nevertheless, you can also run the model evaluations in parallel by increasing `num_jobs`.\n", "See how the time for the calculation of an experiment changes by increasing `num_jobs`. Warning: don't go beyond the maximum number of CPUs your machine has.\n", "\n", "___\n", "\n", "## Advanced optional task: Try another test function for optimisation\n", "\n", "To see how flexible QUEENS is, let’s try a different **test function** from the [list of optimisation test functions](https://en.wikipedia.org/wiki/Test_functions_for_optimization).\n", "\n", "### Steps\n", "\n", "1. **Pick a function** \n", " Choose any test function you like (e.g. Rastrigin, Ackley, Himmelblau, …).\n", "\n", "1. **Write it in Python** \n", " Implement the function with the same variable names (`x1`, `x2`) so that QUEENS can recognise them.\n", "\n", "1. **Define the parameters** \n", " Create a new `Parameters` object with `x1` and `x2` in the correct domain for the chosen function.\n", "\n", "1. 
"   - Create a new `Function` driver from your Python function \n", "   - Wrap it into a `Simulation` model \n", "\n", "1. **Run the analysis** \n", "   - First, use a `Grid` iterator to **visualise the surface** of your new function. \n", "   - Then, switch to an `Optimization` iterator to **search for the minimum**. \n", "\n", "With these steps, you can quickly test different benchmark functions without changing the overall QUEENS workflow." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.0" } }, "nbformat": 4, "nbformat_minor": 5 }