{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Uncertainty Quantification Using the MonteCarlo Iterator\n", "\n", "In fields such as engineering, physics, and applied mathematics, simulation models serve as crucial tools for predicting real-world phenomena. These models typically rely on precise parameterization, including aspects like constitutive models and boundary conditions. However, in practical applications, these parameters are often unknown due to insufficient experimental data. To ensure accurate predictions despite this uncertainty, it is essential to incorporate these unknowns into the modeling process. This is precisely the focus of the field of uncertainty quantification, which aims to systematically address and manage the uncertainties inherent in simulation models.\n", "\n", "In this tutorial, we'll have a look at forward uncertainty quantification. Here the goal is to model the uncertainty in quantity of interest $y$ by propagating uncertainties in input $\\theta$ through the model $f(\\theta)$:\n", "$$u = f(\\theta)$$\n", "\n", "Here, we assume the input $\\theta$ is uncertain, which we describe with a random variable using a probability distribution $p(\\theta)$. As a consequence, the outputs of the model $m$ are random variables as well, following a distribution $p(u)$. *Random in, random out.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## An example\n", "\n", "Let's look at an example, where the function $f$ is the solution of a partial differential equation.\n", "\n", "### The model\n", "\n", "On a domain $\\Omega=[0,1] \\times [0,1]$, the Poisson equation is given by:\n", "\n", "$$-\\Delta u = s$$\n", "\n", "where $u$ is the solution field and $s$ the heterogenous source term. To make the problem well-posed, we'll apply the boundary conditions:\n", "$$u = 0 \\text{ for } x \\in \\partial \\Omega$$ \n", "\n", "To solve the partial differential equation, we employ a finite element approach with linear elements using [scikit-fem](https://github.com/kinnala/scikit-fem/tree/master) (this tutorial is inspired by their [tutorial](https://scikit-fem.readthedocs.io/en/latest/listofexamples.html#poisson-equation))\n", "\n", "\n", "## The uncertainties\n", "For this system, we assume the source term $s$ is modelled by\n", "$$ s(x,y,x_s,y_s) = \\exp\\left( -\\frac{1}{2}\\frac{(x-x_s)^2 + (y-y_s)^2}{0.1^2}\\right)$$\n", "where the coordinates $x_s$ and $y_s$ are uncertain. The source center is defined via the joint distribution\n", "$$p(x_s,y_s) = \\mathcal{B}(x_s|2,5) \\mathcal{B}(y_s|4,3) $$\n", "where $\\mathcal{B}(\\circ|a,b)$ is a beta distribution with shape parameters $a$ and $b$. 
"Here, we assume the parameters are independent of each other, i.e., $p(x_s,y_s) = p(x_s)p(y_s)$, where $p(x_s)$ and $p(y_s)$ are called marginal distributions.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from skfem import Basis, BilinearForm, ElementTriP1, LinearForm, MeshTri, enforce, solve\n", "from skfem.helpers import dot, grad\n", "\n", "mesh = MeshTri().refined(6)\n", "\n", "\n", "def poisson_pde(source_x, source_y, source_term):\n", "\n", "    # Set discretization\n", "    e = ElementTriP1()\n", "    basis = Basis(mesh, e)\n", "\n", "    @BilinearForm\n", "    def laplace(u, v, _):\n", "        return dot(grad(u), grad(v))\n", "\n", "    @LinearForm\n", "    def rhs(v, w):\n", "        # Source term\n", "        return source_term(w.x[0], w.x[1], source_x, source_y) * v\n", "\n", "    # Stiffness matrix\n", "    A = laplace.assemble(basis)\n", "\n", "    # Right-hand side\n", "    b = rhs.assemble(basis)\n", "\n", "    # Enforce Dirichlet boundary conditions\n", "    A, b = enforce(A, b, D=mesh.boundary_nodes())\n", "\n", "    # Solve\n", "    solution = solve(A, b)\n", "\n", "    return solution\n", "\n", "\n", "def plot_to_axis(field, ax):\n", "    mesh.plot(field, ax=ax)\n", "    ax.axis(\"equal\")\n", "    ax.set_aspect(\"equal\", \"box\")\n", "    ax.set_xlim([0, 1])\n", "    ax.set_ylim([0, 1])\n", "    ax.set_xlabel(\"$x$\")\n", "    ax.set_ylabel(\"$y$\")\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's define the distributions\n", "\n", "from queens.distributions import Beta\n", "\n", "x_s = Beta(0, 1, 2, 5)\n", "y_s = Beta(0, 1, 4, 3)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's plot the probability density functions (PDF)\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "# The source coordinates x_s and y_s live in [0, 1]\n", "\n", "coordinate = np.linspace(0, 1, 100)\n", "\n", "plt.plot(coordinate, x_s.pdf(coordinate), \"r-\", label=\"$p(x_s)$\")\n", "plt.plot(coordinate, y_s.pdf(coordinate), \"b-\", label=\"$p(y_s)$\")\n", "\n", "plt.xlabel(\"$x_s$ or $y_s$\")\n", "plt.ylabel(\"Probability density functions\")\n", "plt.title(\"Marginal distributions\")\n", "plt.legend()\n", "plt.show()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's define the parameters using QUEENS and plot the joint distribution\n", "\n", "from queens.parameters import Parameters\n", "\n", "parameters = Parameters(x_s=x_s, y_s=y_s)\n", "\n", "xx, yy = np.meshgrid(coordinate, coordinate)\n", "\n", "joint = np.exp(\n", "    parameters.joint_logpdf(np.hstack((xx.reshape(-1, 1), yy.reshape(-1, 1))))\n", ").reshape(xx.shape)\n", "\n", "fig, ax = plt.subplots()\n", "contour = ax.contourf(xx, yy, joint, levels=20)\n", "fig.colorbar(contour, ax=ax, label=\"$p(x_s, y_s)$\")\n", "\n", "ax.set_xlabel(\"$x_s$\")\n", "ax.set_ylabel(\"$y_s$\")\n", "ax.set_aspect(\"equal\", \"box\")\n", "ax.set_title(\"Joint distribution\")\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Nice, so we defined a joint distribution for the uncertain source position parameters! Let us visualize some source samples. We can generate source fields from samples of $p(x_s, y_s)$ in two steps:\n", "1. Generate samples $\\left(x_s^{(s)},y_s^{(s)}\\right) \\sim p(x_s,y_s)$\n",
"2. Compute $s(x,y,x_s^{(s)},y_s^{(s)})$" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's define the source term\n", "\n", "\n", "def source_term(x, y, x_s, y_s):\n", "    return np.exp(-0.5 * ((x - x_s) ** 2 + (y - y_s) ** 2) / (0.1) ** 2)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def source_field_on_mesh(source_position):\n", "    # Returns the nodal values of the source term on the domain Omega\n", "    return source_term(mesh.p[0], mesh.p[1], source_position[0], source_position[1])" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fix the random seed\n", "np.random.seed(42)\n", "\n", "\n", "n_samples = 5\n", "source_position_samples = parameters.draw_samples(n_samples)\n", "\n", "fig, ax = plt.subplots(1, n_samples)\n", "\n", "for i, sample in enumerate(source_position_samples):\n", "    # Compute the source on the domain\n", "    source_sample = source_field_on_mesh(sample)\n", "\n", "    # Plot the source field\n", "    plot_to_axis(source_sample, ax[i])\n", "    ax[i].set_title(f\"Source sample ${i+1}$\")\n", "\n", "fig.set_size_inches(15, 4)\n", "fig.suptitle(\"Source field samples\")\n", "fig.tight_layout()\n", "\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "So we are now able to generate samples of the source field! Now let's have a look at the model outputs, i.e., the solution fields, for these samples.\n", "\n", "\n", "To ease notation, we define $u(x_s, y_s)$ to be the solution of the Poisson equation for a given source sample $s\\left(x,y,x_s,y_s\\right)$." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(2, n_samples)\n", "\n", "for i, sample in enumerate(source_position_samples):\n", "    # Compute the source on the domain\n", "    source_sample = source_field_on_mesh(sample)\n", "\n", "    text_sample = \"x_s^{(\" + f\"{i+1}\" + \")}\" + \", y_s^{(\" + f\"{i+1}\" + \")}\"\n", "    # Plot the source field\n", "    plot_to_axis(source_sample, ax[0, i])\n", "    ax[0, i].set_title(f\"Source sample $({text_sample})$\")\n", "\n", "    # Solve the Poisson equation\n", "    solution_field = poisson_pde(sample[0], sample[1], source_term)\n", "    plot_to_axis(solution_field, ax[1, i])\n", "    ax[1, i].set_title(\"Solution $u(\" + text_sample + \")$\")\n", "\n", "\n", "fig.set_size_inches(15, 6)\n", "fig.suptitle(\n", "    \"Source field samples $s(x,y,x_s^{(s)},y_s^{(s)})$ (top row) and their respective solutions $u^{(s)}$ (bottom row)\"\n", ")\n", "fig.tight_layout()\n", "\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the source position strongly dictates the resulting solution field, indicating a strong dependency between inputs and outputs.\n", "\n", "Studying this dependency is a form of uncertainty quantification. Congrats!\n", "\n", "Since it becomes tedious to look at individual samples, we're looking for representative values or descriptions of the sample set. As is common in statistics, we can look at the mean value of the solution field $u$:\n", "\n", "$$\\mu = \\int u \\, p(u) \\, du$$\n", "\n", "Since the distribution $p(u)$ is unknown, we employ the law of the unconscious statistician (LOTUS) to rewrite the integral as\n", "\n", "$$\\mu = \\int u(x_s, y_s) \\, p(x_s, y_s) \\, dx_s \\, dy_s$$\n", "\n", "As we can see, we don't need to know $p(u)$, hence the *unconscious* in LOTUS.\n",
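"More generally, LOTUS states that, for $u = f(\\theta)$, expectations of any function $g$ of the output can be computed directly under the input distribution:\n", "$$\\mathbb{E}_{p(u)}[g(u)] = \\mathbb{E}_{p(\\theta)}[g(f(\\theta))] = \\int g(f(\\theta)) \\, p(\\theta) \\, d\\theta$$\n", "The mean above is simply the special case $g(u) = u$.\n",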
"However, one difficulty remains: evaluating the integral. In particular, the solution field depends nonlinearly on the source position, so an analytical evaluation is out of reach. Instead, we'll have a look at numerical integration, specifically Monte Carlo integration.\n", "\n", "## Monte Carlo integration\n", "*Some theory.*\n", "\n", "Monte Carlo integration approximates integrals of the form\n", "$$ \\mathbb{E}_{p(\\theta)}[h] = \\int h(\\theta) p(\\theta) d\\theta$$\n", "by\n", "$$ \\mathbb{E}_{p(\\theta)}[h] \\approx \\frac{1}{N} \\sum_{i=1}^{N} h(\\theta^{(i)})$$\n", "where $\\theta^{(i)}$ are independent and identically distributed (iid) samples from the probability distribution $p(\\theta)$. Monte Carlo approaches differ from quadrature rules in two major ways:\n", "1. A Monte Carlo estimator is a sum of random variables and, therefore, itself a random variable. Hence, repeating the Monte Carlo estimation with different samples will yield different results!\n", "2. The expected error $\\epsilon_\\text{MC}$ in Monte Carlo estimation is $\\epsilon_\\text{MC} \\propto \\frac{1}{\\sqrt{N}}$. Although a convergence rate of $\\frac{1}{2}$ is not desirable, the expected error is independent of the dimension of the integral! This allows the employment of Monte Carlo approaches for high-dimensional integration.\n", "\n", "*Application to our example.*\n", "\n", "For our example, the Monte Carlo estimation of the mean value is given by\n", "$$\\mu =\\mathbb{E}_{p(x_s, y_s)}[u] \\approx \\frac{1}{N} \\sum_{i=1}^{N} u(x_s^{(i)},y_s^{(i)})$$\n", "\n", "with $x_s^{(i)}, y_s^{(i)} \\sim p(x_s, y_s)$. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Only a wrapper, as the source term function is always the same\n", "def solve_poisson(x_s, y_s):\n", "    return poisson_pde(x_s, y_s, source_term)\n", "\n", "\n", "from queens.global_settings import GlobalSettings\n", "from queens.drivers import Function\n", "from queens.schedulers import Local\n", "from queens.models import Simulation\n", "from queens.iterators import MonteCarlo\n", "from queens.main import run_iterator\n", "from queens.utils.io import load_result\n", "\n", "\n", "def monte_carlo_queens(n_samples, experiment_type, seed=42):\n", "    with GlobalSettings(\n", "        experiment_name=f\"{experiment_type}_{n_samples}_seed_{seed}\",\n", "        output_dir=\"./output/poisson_example\",\n", "    ) as gs:\n", "        # Driver: calls the solve_poisson function\n", "        driver = Function(parameters, solve_poisson)\n", "\n", "        # Scheduler: run up to 4 simulations in parallel\n", "        scheduler = Local(gs.experiment_name, num_jobs=4, verbose=True)\n", "\n", "        # Model: the interface for QUEENS iterators\n", "        model = Simulation(scheduler, driver)\n", "\n", "        # Iterator: i.e., the analysis, in this case Monte Carlo integration\n", "        iterator = MonteCarlo(\n", "            model=model,\n", "            parameters=parameters,\n", "            global_settings=gs,\n", "            result_description={\"write_results\": True, \"plot_results\": False},\n", "            num_samples=n_samples,\n", "            seed=seed,\n", "        )\n", "\n", "        # Run the analysis\n", "        run_iterator(iterator, gs)\n", "\n", "        # Load the results\n", "        results = load_result(gs.result_file(\".pickle\"))\n", "\n", "        # Return the results dict\n", "        return results" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot the mean value estimate with different samples\n", "\n", "fig, ax = plt.subplots(1, 4)\n", "n_samples = 10\n", "\n", "for seed in range(4):\n",
\"monte_carlo_poisson_samples\", seed)\n", " mean = results[\"mean\"]\n", "\n", " plot_to_axis(mean, ax[seed])\n", " ax[seed].set_title(f\"MC run {seed+1}\")\n", "\n", "fig.suptitle(\n", " f\"Mean value for $\\mu$ using Monte Carlo integration with {n_samples} samples\"\n", ")\n", "fig.set_size_inches(15, 4)\n", "fig.tight_layout()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As can be seen, with 10 samples, the mean value estimations vary quite a lot. This is the first major difference mentioned above. As mentioned, this problem can be mitigated by increasing the number of samples. Let's try:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# plot the mean value for different number of samples\n", "\n", "fig, ax = plt.subplots(4, 4)\n", "\n", "for log10_samples in range(4):\n", " for seed in range(4):\n", " results = monte_carlo_queens(int(10**log10_samples), \"monte_carlo_poisson_log_samples\",seed)\n", " mean = results[\"mean\"]\n", "\n", " plot_to_axis(mean, ax[log10_samples, seed])\n", " if log10_samples == 0:\n", " ax[log10_samples, seed].set_title(f\"MC run {seed}\")\n", " ax[log10_samples, seed].text(\n", " 0.5,\n", " 0.08,\n", " f\"{int(10**log10_samples)} samples\",\n", " ha=\"center\",\n", " size=\"large\",\n", " backgroundcolor=\"white\",\n", " )\n", "\n", "fig.suptitle(\"Mean value for $\\mu$ using Monte Carlo integration\")\n", "fig.set_size_inches(16, 16)\n", "fig.tight_layout()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With an increasing number of samples the variance between MC estimate of the mean value is reduced!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Beyond mean value\n", "Depending on the underlying distribution, the mean value might not be a quantifier of the underlying data.\n", "\n", "> Note: As an example, think about a disease that mostly affects babies (up to 4 years) and the elderly (above 80 years); the average age of the patients observed by a hospital will be around middle-aged, which is outside of both groups.\n", "\n", "Using the expectations, we can also obtain the probability distribution of $u_c$ at different locations $x_c, y_c$:\n", "$$p(u_c)= \\mathbb{E}_{p(x_s,y_s)}[\\delta(u_c - u(x_c,y_c,x_s,y_s))]$$\n", "where $\\delta(\\circ)$ is the Dirac mass delta. Numerically, this can be achieved via a kernel density estimation (KDE). 
"An example can be seen here:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import stats\n", "\n", "# Indices of mesh nodes at approximately x = 0.2, 0.5, 0.8 and y = 0.5\n", "index = [1969, 6, 2011]\n", "monte_carlo_sample_outputs = results[\"raw_output_data\"][\"result\"][:, index]\n", "\n", "fig, axes = plt.subplots(1, 2)\n", "plot_to_axis(mean, axes[0])\n", "axes[0].set_title(\"Mean estimate with 1000 samples\")\n", "axes[1].set_title(\"PDF KDE with 1000 samples\")\n", "\n", "axes[1].set_xlabel(\"$u_c$\")\n", "axes[1].set_ylabel(\"$p(u_c)$\")\n", "\n", "min_u = monte_carlo_sample_outputs.min()\n", "max_u = monte_carlo_sample_outputs.max()\n", "u = np.linspace(min_u, max_u, 1000)\n", "\n", "colors = [\"r\", \"b\", \"k\"]\n", "for i, samples_at_location in enumerate(monte_carlo_sample_outputs.T):\n", "    kde = stats.gaussian_kde(samples_at_location)\n", "    pdf = kde.pdf(u)\n", "    pdf[0] = 0\n", "    pdf[-1] = 0\n", "    pdf /= np.trapz(pdf, u)\n", "    axes[0].plot(mesh.p[0, index[i]], mesh.p[1, index[i]], \"ks\")\n", "    c_text = f\"({mesh.p.T[index[i]][0]}, {mesh.p.T[index[i]][1]})\"\n", "    axes[0].text(mesh.p[0, index[i]], mesh.p[1, index[i]] + 0.02, c_text, ha=\"center\")\n", "    axes[1].plot(u, pdf, colors[i], label=f\"$p(u_c)$ for $(x_c,y_c)={c_text}$\")\n", "    axes[1].plot([mean[index[i]]] * 2, [0, max(pdf) + 5], colors[i] + \":\")\n", "    axes[1].text(mean[index[i]], max(pdf) + 10, rf\"$\\mu$ at ${c_text}$\", color=colors[i])\n", "\n", "axes[1].legend()\n", "\n", "fig.suptitle(\"Monte Carlo mean estimate and pointwise densities $p(u_c)$\")\n", "fig.set_size_inches(16, 8)\n", "fig.tight_layout()\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the solution field is generated based on samples of $p(x_s, y_s)$, *random in, random out*, it can be interpreted as a random variable indexed by the spatial location $(x,y)$. This is known as a random field!\n", "\n", "Here, for each location $x_c, y_c$ we can compute a probability distribution. This is what we see in the right plot. The distribution of the solution value $u$ depends on the location $x_c, y_c$. As can be observed, the distribution shows a large variance for $x_c \\approx 0.2$ and $x_c \\approx 0.5$, indicating that the solution at these locations is strongly affected by the source term. In contrast, for $x_c \\approx 0.8$, the probability mass, i.e., where $p(u_c)>0$, tends to concentrate around smaller values! Consequently, the variance of the solution at this location is much smaller." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Why QUEENS?\n", "\n", "- Even though Monte Carlo is a straightforward algorithm, utilizing QUEENS offers several advantages that enhance its functionality and usability. One significant benefit is that QUEENS provides various convenience functions for handling results and logging. For instance, if you check the folder `output/poisson_example`, you will find log files and result pickle files generated from the QUEENS runs. This organized output makes it easier to track and analyze the results of your simulations.\n", "\n", "- Another key advantage of QUEENS is its support for parallelism. The Monte Carlo algorithm is inherently parallelizable, meaning each model evaluation can be conducted independently of the others. In the context of these examples, setting `num_jobs=4` in the scheduler allows for the execution of four simulations simultaneously. ",
"This capability significantly speeds up the computation, making it much more efficient for large-scale simulation studies.\n", "\n", "- Additionally, QUEENS offers model independence, a crucial feature for users working with various modeling frameworks. QUEENS does not require knowledge of the inner workings of the model or the specifics of libraries like scikit-fem. Instead, it seamlessly manages all the evaluation processes, allowing users to integrate their models without needing to modify QUEENS itself." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Let's play around\n", "\n", "Let your creativity flow and try out stuff.\n", "\n", "### Inspirations\n", "- Change the parameters of the marginal distributions\n", "- Change the distributions $p(x_s, y_s)$\n", "- Change the source definition\n", "- Change the number of samples\n", "- Plot the variance of $u$ (hint: `results[\"var\"]`)\n", "\n", "### Some questions\n", "- What are the downsides of Monte Carlo?\n", "- What is the limiting factor?\n", "- What's the variance of the outputs at the boundary?" ] } ], "metadata": { "kernelspec": { "display_name": "queens", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.0" } }, "nbformat": 4, "nbformat_minor": 4 }