{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "0", "metadata": { "nbsphinx": "hidden" }, "outputs": [], "source": [ "# Suppress detailed logging output, Not to be shown\n", "\n", "import logging\n", "import os\n", "\n", "# logging.basicConfig(level=logging.INFO, format='%(message)s')\n", "os.environ[\"DASK_DISTRIBUTED__LOGGING__DISTRIBUTED\"] = \"CRITICAL\"\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n", "logging.getLogger(\"distributed\").setLevel(level=logging.CRITICAL)" ] }, { "cell_type": "markdown", "id": "1", "metadata": {}, "source": [ "# Grid iterator on Rosenbrock function\n", "\n", "\n", "In this example, we will use the grid iterator on the [Rosenbrock](https://academic.oup.com/comjnl/article/3/3/175/345501) function.\n", "The Rosenbrock function is a well-known test function from optimization. It serves as the \"computational model\" for this example.\n", "The grid iterator evaluates a model based on a grid of the input space.\n", "\n", "You can use QUEENS by writing a small Python script that you use to run QUEENS.\n", "\n", "In this tutorial, we will follow the aforementioned approach and work with the script-based workflow.\n", "This allows you to run and interact with QUEENS directly in Jupyter notebooks." ] }, { "cell_type": "code", "execution_count": null, "id": "2", "metadata": { "tags": [ "nbsphinx-hide-output" ] }, "outputs": [], "source": [ "from queens.distributions.uniform import Uniform\n", "from queens.drivers import Function\n", "from example_simulator_functions.rosenbrock60 import rosenbrock60\n", "from queens.global_settings import GlobalSettings\n", "from queens.iterators.grid import Grid\n", "from queens.main import run_iterator\n", "from queens.models.simulation import Simulation\n", "from queens.parameters.parameters import Parameters\n", "from queens.schedulers import Local\n", "from queens.utils.io import load_result" ] }, { "cell_type": "markdown", "id": "3", "metadata": {}, "source": [ "The following parameters define the experiment name (which output files would be named after) and output directory (where output files will be placed):" ] }, { "cell_type": "code", "execution_count": null, "id": "4", "metadata": {}, "outputs": [], "source": [ "\n", "# Global settings\n", "experiment_name = \"grid_iterator_rosenbrock\"\n", "output_dir = \"./\"" ] }, { "cell_type": "markdown", "id": "5", "metadata": {}, "source": [ "Here we define two parameters for our experiment: $x_1$ and $x_2$, both as a uniform distribution, and then grouped into a QUEENS `Parameter` container:" ] }, { "cell_type": "code", "execution_count": null, "id": "6", "metadata": {}, "outputs": [], "source": [ "#### Model setup ####\n", "# Model parameters\n", "x1 = Uniform(lower_bound=-2.0, upper_bound=2.0)\n", "x2 = Uniform(lower_bound=-2.0, upper_bound=2.0)\n", "parameters = Parameters(x1=x1, x2=x2)" ] }, { "cell_type": "markdown", "id": "7", "metadata": {}, "source": [ "In order to actually construct a model we require multiple things:\n", "- A `scheduler`. It manages the execution of your model on your local machine.\n", "- A `driver`. 
"- A `driver`. This object coordinates the evaluation of the forward model itself.\n", "- Finally, the QUEENS `model` itself, which combines the scheduler and the driver:" ] }, { "cell_type": "code", "execution_count": null, "id": "8", "metadata": { "tags": [ "nbsphinx-hide-output" ] }, "outputs": [], "source": [ "scheduler = Local(experiment_name)\n", "driver = Function(parameters=parameters, function=rosenbrock60)\n", "model = Simulation(scheduler, driver)" ] }, { "cell_type": "markdown", "id": "9", "metadata": {}, "source": [ "With the model set up, we now define the grid design of the analysis. In this case, we use a linearly spaced grid with five points per parameter:" ] }, { "cell_type": "code", "execution_count": null, "id": "10", "metadata": {}, "outputs": [], "source": [ "grid_design = {\n", "    \"x1\": {\"num_grid_points\": 5, \"axis_type\": \"lin\", \"data_type\": \"FLOAT\"},\n", "    \"x2\": {\"num_grid_points\": 5, \"axis_type\": \"lin\", \"data_type\": \"FLOAT\"},\n", "}" ] }, { "cell_type": "markdown", "id": "11", "metadata": { "tags": [ "nbsphinx-hide-output" ] }, "source": [ "Finally, we set up our `method`, run the iterator, and load the results of the run:" ] }, { "cell_type": "code", "execution_count": null, "id": "12", "metadata": {}, "outputs": [], "source": [ "result_description = {\n", "    \"write_results\": True,\n", "    \"plot_results\": False,\n", "}\n", "\n", "with GlobalSettings(\n", "    experiment_name=experiment_name, output_dir=output_dir, debug=False\n", ") as gs:\n", "    # The method of the analysis is defined by the iterator type:\n", "    method = Grid(\n", "        grid_design=grid_design,\n", "        result_description=result_description,\n", "        global_settings=gs,\n", "        model=model,\n", "        parameters=parameters,\n", "    )\n", "\n", "    #### Analysis ####\n", "    run_iterator(method, gs)\n", "\n", "    #### Load Results ####\n", "    result_file = gs.output_dir / f\"{gs.experiment_name}.pickle\"\n", "    results = load_result(result_file)" ] }, { "cell_type": "markdown", "id": "13", "metadata": {}, "source": [ "## QUEENS output\n", "\n", "QUEENS writes its output to a pickle file named `<experiment_name>.pickle` in the directory defined by the `output_dir` argument of the global settings object; the `experiment_name` is set there as well.\n", "The output directory also contains a log file with the console output, named `<experiment_name>.log`.\n", "\n", "For this example, the experiment name is set to `grid_iterator_rosenbrock` and the output directory to the current working directory.\n", "Therefore, the results can be found at `./grid_iterator_rosenbrock.pickle`.\n", "\n", "In the case of the grid iterator, you get a dictionary containing the raw input-output data and some statistics."
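, "\n", "\n", "As a quick check (a minimal sketch; the key names `input_data` and `raw_output_data` are simply the ones also used in the postprocessing cell below), we can inspect the top-level keys of the results dictionary and the shapes of the stored arrays:" ] }, { "cell_type": "code", "execution_count": null, "id": "13b", "metadata": {}, "outputs": [], "source": [ "# Minimal sanity check on the loaded results dictionary;\n", "# the key names match those used in the postprocessing cell below.\n", "print(list(results.keys()))\n", "print(results[\"input_data\"].shape)\n", "print(results[\"raw_output_data\"][\"result\"].shape)"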
] }, { "cell_type": "code", "execution_count": null, "id": "14", "metadata": {}, "outputs": [], "source": [ "results" ] }, { "cell_type": "markdown", "id": "15", "metadata": {}, "source": [ "### Postprocessing\n", "\n", "You can now interact with these results as with any other Python object.\n", "For example, you can plot the results:" ] }, { "cell_type": "code", "execution_count": null, "id": "16", "metadata": {}, "outputs": [], "source": [ "import plotly.graph_objects as go\n", "import numpy as np\n", "\n", "input_data = results[\"input_data\"]\n", "output_data = results[\"raw_output_data\"][\"result\"]\n", "\n", "x = np.unique(input_data[:,0])\n", "y = np.unique(input_data[:,1])\n", "\n", "shape_x = x.shape[0]\n", "shape_y = y.shape[0]\n", "\n", "z = output_data.reshape((shape_x, shape_y))\n", "fig = go.Figure(data=[go.Surface(x=x,y=y, z=z)])\n", "\n", "fig.show()" ] }, { "cell_type": "markdown", "id": "17", "metadata": {}, "source": [ "## Time for some individualization\n", "1. The resolution of the grid is relatively coarse.\n", "Increase the resolution by increasing the `num_grid_points` and repeat the simulation.\n", "\n", "1. You can adjust the bounds of the grid per variable via the keywords: `lower_bound` and `upper_bound`. \n", "Go ahead and see what happens if you change the bound.\n", "\n", "1. Since we are evaluating a Python function we are using the `Function` Driver. \n", "Moreover, we are executing the study on our local machine, so we are using the\n", "`Local` Scheduler. You can also run the model evaluations in parallel by \n", "adjusting its `num_jobs` and `num_procs` parameters." ] } ], "metadata": { "kernelspec": { "display_name": "queens", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.0" } }, "nbformat": 4, "nbformat_minor": 5 }