{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "%matplotlib notebook\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading the experimental data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_1 = np.load('data_1.npy')\n", "data_x_1, data_y_1, data_z_1 = data_1[0], data_1[1], data_1[2]\n", "data_2 = np.load('data_2.npy')\n", "data_x_2, data_y_2, data_z_2 = data_2[0], data_2[1], data_2[2]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_x, data_y, data_z = data_x_1, data_y_1, data_z_1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Direct Inversion" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "'''\n", "Direct inversion method developed in part 3 of the exercise: each Bloch-vector\n", "component is estimated from the relative frequencies of the measurement outcomes,\n", "e.g. x_d = (Nxu - Nxd) / (Nxu + Nxd). (Sketch assuming outcomes are recorded as 1 for up.)\n", "'''\n", "\n", "x_d = 2 * np.count_nonzero(data_x == 1) / data_x.size - 1\n", "y_d = 2 * np.count_nonzero(data_y == 1) / data_y.size - 1\n", "z_d = 2 * np.count_nonzero(data_z == 1) / data_z.size - 1\n", "\n", "r_d = np.array([x_d, y_d, z_d])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Nx, Ny, Nz = data_x.size, data_y.size, data_z.size\n", "\n", "Nxu = np.count_nonzero(data_x == 1)\n", "Nxd = Nx - Nxu\n", "Nyu = np.count_nonzero(data_y == 1)\n", "Nyd = Ny - Nyu\n", "Nzu = np.count_nonzero(data_z == 1)\n", "Nzd = Nz - Nzu" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def C(r):\n", " ''' \n", " This function implements a homogeneous prior: uniform inside the Bloch sphere, zero outside.\n", " '''\n", " return np.where(np.linalg.norm(r, axis=0) < 1, 1, 0)\n", "\n", "\n", "from scipy.special import comb\n", "def P(x,y,z, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd):\n", " ''' \n", " Probability of measuring the results Nxu, Nxd, Nyu, Nyd, Nzu, Nzd given a density matrix defined\n", " by the Bloch vector r = (x, y, z): a product of three binomial distributions with\n", " up-probabilities (1 + x)/2, (1 + y)/2 and (1 + z)/2.\n", " (For very large counts, switch to log-probabilities to avoid underflow.)\n", " '''\n", " px, py, pz = (1 + x) / 2, (1 + y) / 2, (1 + z) / 2\n", " return (comb(Nxu + Nxd, Nxu) * px**Nxu * (1 - px)**Nxd\n", " * comb(Nyu + Nyd, Nyu) * py**Nyu * (1 - py)**Nyd\n", " * comb(Nzu + Nzd, Nzu) * pz**Nzu * (1 - pz)**Nzd)\n", "\n", "def L(x,y,z, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd):\n", " ''' \n", " Implement here the 
likelihood as defined in the exercise.\n", " Sketch used here: the measurement probability P weighted by the prior C.\n", " '''\n", " return P(x, y, z, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd) * C(np.array([x, y, z]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import plotly.graph_objects as go\n", "import numpy as np\n", "\n", "\n", "'''\n", "Here the likelihood is plotted as a 3D volume rendering. One can see that the cloud is\n", "concentrated around the most likely Bloch vector.\n", "'''\n", "\n", "X, Y, Z = np.mgrid[-1:1:40j, -1:1:40j, -1:1:40j]\n", "Ls = L(X,Y,Z, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd)\n", "\n", "fig = go.Figure(data=go.Volume(\n", " x=X.flatten(),\n", " y=Y.flatten(),\n", " z=Z.flatten(),\n", " value=Ls.flatten(),\n", " isomin=Ls.max()/100.,\n", " opacity=0.1, # needs to be small to see through all surfaces\n", " surface_count=17, # needs to be a large number for good volume rendering\n", " ))\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "'''\n", "Implement here a Metropolis-Hastings algorithm to efficiently evaluate the Bayesian mean integral.\n", "Help can be found here https://people.duke.edu/~ccc14/sta-663/MCMC.html and here\n", "https://en.wikipedia.org/wiki/Monte_Carlo_integration\n", "\n", "You can also look at the following paper:\n", "Blume-Kohout, Robin. \"Optimal, reliable estimation of quantum states.\" New Journal of Physics 12.4 (2010): 043034.\n", "\n", "Make sure that the acceptance rate of the algorithm is about 30%.\n", "A minimal sketch with a Gaussian random-walk proposal is given below; the step size sigma may need tuning.\n", "'''\n", "\n", "rs = []\n", "r = np.zeros(3) # start at the maximally mixed state\n", "L_old = L(*r, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd)\n", "sigma = 0.1 # proposal step size, tune for an acceptance rate of about 30%\n", "for _ in range(20000):\n", " r_new = r + sigma * np.random.randn(3)\n", " L_new = L(*r_new, Nxu, Nxd, Nyu, Nyd, Nzu, Nzd)\n", " if L_new > 0 and np.random.rand() < L_new / L_old: # Metropolis acceptance step\n", " r, L_old = r_new, L_new\n", " rs.append(r)\n", "rs = np.array(rs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "'''\n", "The code below plots the kernel density estimate of the Bloch vector samples rs. 
Unfortunately, this computation is very slow...\n", "'''\n", "\n", "from scipy import stats\n", "\n", "kde = stats.gaussian_kde(rs.T)\n", "\n", "KDEs = kde(np.vstack([X.flatten(), Y.flatten(), Z.flatten()]))\n", "\n", "fig = go.Figure(data=go.Volume(\n", " x=X.flatten(),\n", " y=Y.flatten(),\n", " z=Z.flatten(),\n", " value=KDEs,\n", " isomin=KDEs.max()/100.,\n", " opacity=0.1, # needs to be small to see through all surfaces\n", " surface_count=17, # needs to be a large number for good volume rendering\n", " ))\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r_BME = rs.mean(axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparison of the Two Algorithms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r_d" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r_BME" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 2 }