diff --git a/4.1 Interpolation.ipynb b/4.1 Interpolation.ipynb
deleted file mode 100644
index d115aa4..0000000
--- a/4.1 Interpolation.ipynb
+++ /dev/null
@@ -1,338 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# 1 Interpolation and approximation of data\n",
- "\n",
- "## 1.1 Setting of the problem"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Interpolation of data\n",
- "\n",
- "Let $n \\geq 0$ be an integer. Given $(n+1)$ distinct nodes \n",
- "$t_0$, $t_1$, $\\dots$, $t_n$ and $(n+1)$ values $y_0$,\n",
- "$y_1$, $\\dots$, $y_n$, we look for a polynomial $p$\n",
- " of degree $n$ such that\n",
- "\n",
- "$$p(t_j)=y_j \\qquad \\text{ for } \\, 0\\leq j\\leq n.$$\n",
- "\n",
- "**Example** We look for the polynomial $\\Pi_n$ of degree $n=4$ such that $\\Pi_n(t_j) = y_j$, $j =0,\\dots,4$, with \n",
- "the following data: \n",
- "\n",
- "| $t_k$ | $y_k$ |\n",
- "| --- | --- |\n",
- "| 1 | 3 |\n",
- "| 1.5 | 4 |\n",
- "| 2 | 2 |\n",
- "| 2.5 | 5 |\n",
- "| 3 | 1 |\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "slideshow": {
- "slide_type": "slide"
- }
- },
- "outputs": [],
- "source": [
- "# importing libraries used in this notebook\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "\n",
- "# Some data given: t=1, 1.5, 2, 2.5, 3 and y = 3,4,2,5,1 \n",
- "t = np.linspace(1, 3, 5) # equivalent to np.array([ 1, 1.5, 2, 2.5, 3 ])\n",
- "y = np.array([3, 4, 2, 5, 1])\n",
- "\n",
- "# Plot the points using matplotlib\n",
- "plt.plot(t, y, 'ro')\n",
- "\n",
- "plt.xlabel('t'); plt.ylabel('y'); #plt.title('data')\n",
- "plt.legend(['data'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "If this polynomial exists, we denote it $p=\\Pi_n$ and call $\\Pi_n$ the\n",
- "interpolating polynomial of the values $y_j$ at the nodes $t_j,\\, j=0,\\dots,n$."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the interpolating function\n",
- "\n",
- "# Defining the polynomial function \n",
- "def p(t):\n",
- " # coefficients of the interpolating polynomial\n",
- " a = np.array([-140.0, 343.0, -872./3., 104.0, -40./3.])\n",
- " \n",
- " # value of the polynomial in all the points t\n",
- " return a[0] + a[1]*t + a[2]*(t**2) + a[3]*(t**3) + a[4]*(t**4)\n",
- "\n",
- "\n",
- "# points used to plot the graph \n",
- "z = np.linspace(1, 3, 100)\n",
- "\n",
- "plt.plot(t, y, 'ro', z, p(z))\n",
- "plt.xlabel('t'); plt.ylabel('y'); #plt.title('data')\n",
- "plt.legend(['data','$\\Pi_4(t)$'])\n",
- "plt.show()\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Interpolation of functions\n",
- "\n",
- "\n",
- "Let $f\\in C^0(I)$ and $t_0,\\ldots,t_n\\in I$. \n",
- "If we take $$y_j=f(t_j),\\quad 0\\leq j\\leq n,$$ \n",
- "then the interpolating polynomial\n",
- "$\\Pi_n(t)$ is denoted $\\Pi_n f(t)$ and is called the interpolant of $f$ at the\n",
- "nodes $t_0$, $\\dots$, $t_n$.\n",
- "\n",
- "**Example** Let $$t_0=1,\\ t_1=1.75,\\ t_2=2.5,\\ t_3=3.25,\\ t_4=4$$ be the interpolation points and $$f(t) = t \\sin(2\\pi t).$$ We look for the interpolant $\\Pi_n f$ of degree $n=4$.\n",
- "\n"
- ]
- },
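The exercise in the cell below can be completed, for instance, with `np.polyfit`/`np.polyval` (introduced later in this notebook); a minimal self-contained sketch, not the notebook's own solution:

```python
import numpy as np

def f(t):
    return t * np.sin(2.0 * np.pi * t)

# interpolation nodes t = 1, 1.75, 2.5, 3.25, 4
t = np.linspace(1, 4, 5)

# coefficients of the degree-4 interpolant of f at the nodes
a = np.polyfit(t, f(t), t.size - 1)

# by construction the interpolant matches f at the nodes
print(np.polyval(a, t) - f(t))
```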
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# defining the function that we want to interpolate\n",
- "def f(t):\n",
- " return t*np.sin(t*2.*np.pi)\n",
- "\n",
- "# The interpolation must occur at points t=1, 1.75, 2.5, 3.25, 4\n",
- "t = np.linspace(1, 4, 5)\n",
- "\n",
- "# points used to plot the graph \n",
- "z = np.linspace(1, 4, 100)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the interpolating function\n",
- "\n",
- "# Defining the polynomial function \n",
- "def p(t):\n",
- " # coefficients of the interpolating polynomial\n",
- " a = np.array([0, 7.9012, -13.037, 5.9259, -0.79012])\n",
- " \n",
- " # value of the polynomial in all the points t\n",
- " return a[0] + a[1]*t + a[2]*(t**2) + a[3]*(t**3) + a[4]*(t**4)\n",
- "\n",
- "\n",
- "# points where to evaluate the polynomial\n",
- "z = np.linspace(1, 4, 100)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### A naive method\n",
- "\n",
- "It is possible to write a system of equations and find the coefficients directly.\n",
- "As explained in the MOOC, this is not always the best approach.\n",
- "\n",
- "We look for the coefficients of the polynomial $p(t) = a_0 + a_1 t + ... + a_n t^n$ that satisfies the $(n+1)$ equations $p(t_k) = y_k$, $k=0,...,n$, that is\n",
- "\n",
- "$$a_0 + a_1 t_k + ... + a_n t_k^n = y_k, \\qquad k=0,...,n.$$\n",
- "\n",
- "In matrix form, this system reads\n",
- "\n",
- "$$\\begin{pmatrix}\n",
- "1 & t_0 & t_0^2 & \\cdots & t_0^n \\\\\n",
- "\\vdots & & & & \\vdots \\\\\n",
- "1 & t_n & t_n^2 & \\cdots & t_n^n\n",
- "\\end{pmatrix}\n",
- "\\begin{pmatrix} a_0 \\\\ \\vdots \\\\ a_n \\end{pmatrix}\n",
- "=\\begin{pmatrix} y_0 \\\\ \\vdots \\\\ y_n \\end{pmatrix}$$\n",
- "\n",
- "\n"
- ]
- },
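This direct approach can also be sketched with `np.vander` (with `increasing=True`, so that the columns are $1, t, t^2, \dots$, matching the order above); an illustrative alternative to the hand-built matrix used in the next cells:

```python
import numpy as np

t = np.array([1, 1.5, 2, 2.5, 3])
y = np.array([3, 4, 2, 5, 1])

# Vandermonde matrix with columns 1, t, t^2, ..., t^n
A = np.vander(t, increasing=True)
a = np.linalg.solve(A, y)   # coefficients a_0, ..., a_n

# the interpolating polynomial matches the data at the nodes
p = lambda s: sum(a[k] * s**k for k in range(a.size))
print(p(t) - y)
```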
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Example** We look for the coefficients of the interpolating polynomial of degree $n=4$ \n",
- "for the following values \n",
- "\n",
- "| $t_k$ | $y_k$ |\n",
- "| --- | --- |\n",
- "| 1 | 3 |\n",
- "| 1.5 | 4 |\n",
- "| 2 | 2 |\n",
- "| 2.5 | 5 |\n",
- "| 3 | 1 |\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Some data given: t=1, 1.5, 2, 2.5, 3 and y = 3,4,2,5,1 \n",
- "t = np.linspace(1, 3, 5)\n",
- "y = np.array([3, 4, 2, 5, 1])\n",
- "n = t.size - 1\n",
- "\n",
- "# The following is a trick to build the matrix A; no need to learn it by heart ...\n",
- "# np.tile(t, (n+1, 1)) : repeats the array t n+1 times\n",
- "# .T : transpose\n",
- "# np.linspace(0,n,n+1) : [0,...,n]\n",
- "# np.tile : again, repeats the array [0,...,n] n+1 times\n",
- "# np.power : element-by-element power function\n",
- "A = np.power( np.tile(t, (n+1, 1)).T , np.tile(np.linspace(0,n,n+1), (n+1, 1)))\n",
- "# print(A)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Now we can define the polynomial\n",
- "p = lambda t : a[0] + a[1]*t + a[2]*(t**2) + a[3]*(t**3) + a[4]*(t**4)\n",
- "\n",
- "# points used to plot the graph \n",
- "z = np.linspace(1, 3, 100)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Alternatives: polyfit and polyval\n",
- "\n",
- "The `numpy` functions `polyfit` and `polyval` do essentially the same thing as the paragraphs above. Later we will see more efficient methods.\n",
- "\n",
- "`a = numpy.polyfit(t, y, n, ... )` :\n",
- "\n",
- " * input : $t,y$ the data to interpolate, $n$ the degree of the sought polynomial\n",
- " * output : the coefficients of the polynomial, _in reverse order_ compared to what we have seen!\n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Some data given: t=1, 1.5, 2, 2.5, 3 and y = 3,4,2,5,1 \n",
- "t = np.linspace(1, 3, 5)\n",
- "y = np.array([3, 4, 2, 5, 1])\n",
- "n = t.size - 1\n",
- "\n",
- "a = np.polyfit(t,y,n)\n",
- "\n",
- "# Now we can define the polynomial, with coeffs in the reverse order !\n",
- "p = lambda t : a[4] + a[3]*t + a[2]*(t**2) + a[1]*(t**3) + a[0]*(t**4)\n",
- "\n",
- "# We can also use polyval instead !\n",
- "# np.polyval(a,t)\n",
- "\n",
- "# points used to plot the graph \n",
- "z = np.linspace(1, 3, 100)\n",
- "\n",
- "plt.plot(t, y, 'ro', z, p(z), '.', z, np.polyval(a,z))\n",
- "plt.xlabel('t'); plt.ylabel('y'); #plt.title('data')\n",
- "plt.legend(['data','p(t)','polyval'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.1 Methodes directes.ipynb b/4.1 Methodes directes.ipynb
deleted file mode 100644
index 2720ce7..0000000
--- a/4.1 Methodes directes.ipynb
+++ /dev/null
@@ -1,475 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Direct Methods\n",
- "\n",
- "First, a few examples using the LU and Cholesky factorizations; then we move on to iterative methods"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Exercise \n",
- "\n",
- "Consider the linear system $A\\mathbf{x} = \\mathbf{b}$ where:\n",
- "\n",
- "$$A = \\left(\n",
- " \\begin{array}{ccc}\n",
- " 3 & 6 & 7 \\\\[0.5em]\n",
- " 1 & 1 & 4 \\\\[0.5em]\n",
- " 2 & 4 & 8\n",
- " \\end{array} \\right), \\quad\n",
- " \\mathbf{b} = \\left( \\begin{array}{c} 4 \\\\[0.5em] 5 \\\\[0.5em] 6 \\end{array} \n",
- "\\right) \\, .$$\n",
- " \n",
- "1. Compute the $LU$ factorization of the matrix $A$ with Python.\n",
- "\n",
- "2. Solve the linear system $A \\mathbf{x} = \\mathbf{b}$ using the factorization found in the\n",
- "previous step (normally done by hand without Python, but here we will use it).\n",
- "\n",
- "3. Compute the determinant of the matrix $A$ using its $LU$ factorization."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# importing libraries used in this notebook\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt\n",
- "\n",
- "# scipy.linalg.eigh : eigenvalues of a symmetric matrix\n",
- "from scipy.linalg import eigh\n",
- "# scipy.linalg.eig : eigenvalues of a matrix\n",
- "from scipy.linalg import eig\n",
- "# scipy.linalg.det : determinant of a matrix\n",
- "from scipy.linalg import det\n",
- "# scipy.linalg.lu : LU decomposition\n",
- "from scipy.linalg import lu\n",
- "# scipy.linalg.cholesky : Cholesky decomposition\n",
- "from scipy.linalg import cholesky\n",
- "from scipy.linalg import hilbert\n",
- "\n",
- "import pprint"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 1"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "A = np.array([[3, 6, 7],\n",
- " [1, 1, 4],\n",
- " [2, 4, 8] ])\n",
- "\n",
- "# LU factorisation with pivoting\n",
- "P, L, U = lu(A)\n",
- "\n",
- "print(\"A = P L U\")\n",
- "pprint.pprint(P.dot(L.dot(U)) )\n",
- "\n",
- "print(\"P:\")\n",
- "pprint.pprint(P)\n",
- "\n",
- "print (\"L:\")\n",
- "pprint.pprint(L)\n",
- "\n",
- "print (\"U:\")\n",
- "pprint.pprint(U)\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 2\n",
- "\n",
- "If $P$ is the identity, $A = LU$:\n",
- "\n",
- "1. solve for $\\mathbf y$ such that $L \\mathbf y = \\mathbf{b}$ by forward substitution\n",
- "\n",
- "2. solve for $\\mathbf x$ such that $U \\mathbf x = \\mathbf{y}$ by backward substitution\n",
- "\n",
- "If $P$ is not the identity, $A = PLU$. \n",
- "\n",
- "Note that a permutation matrix is an orthogonal matrix, since its columns are orthonormal.\n",
- "Therefore $P^{-1} = P^T$. Hence $P^TA = LU$ and \n",
- "$$A\\mathbf x = \\mathbf b \\Leftrightarrow P^TA\\mathbf x = P^T \\mathbf b\\Leftrightarrow LU\\mathbf x = P^T \\mathbf b.$$\n",
- "The previous computations must then be modified as follows:\n",
- "\n",
- "1. solve for $\\mathbf y$ such that $L \\mathbf y = P^T\\mathbf{b}$ by forward substitution\n",
- "\n",
- "2. solve for $\\mathbf x$ such that $U \\mathbf x = \\mathbf{y}$ by backward substitution\n"
- ]
- },
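The two triangular solves above can be sketched as follows; `scipy.linalg.solve_triangular` is one convenient choice (an assumption, any forward/backward substitution routine would do):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[3., 6., 7.], [1., 1., 4.], [2., 4., 8.]])
b = np.array([4., 5., 6.])

P, L, U = lu(A)                               # A = P L U
y = solve_triangular(L, P.T @ b, lower=True)  # forward substitution: L y = P^T b
x = solve_triangular(U, y, lower=False)       # backward substitution: U x = y
print(b - A @ x)                              # residual, should be ~0
```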
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# to complete\n",
- "\n",
- "# check the residual of the equation\n",
- "print(b - A.dot(x))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 3\n",
- "\n",
- "$$\\det A = \\det P \\, \\det L \\, \\det U$$"
- ]
- },
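A sketch of this computation: since $L$ has a unit diagonal and $\det P = \pm 1$, the determinant reduces to the product of the diagonal entries of $U$ up to the sign of the permutation:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[3., 6., 7.], [1., 1., 4.], [2., 4., 8.]])
P, L, U = lu(A)

# det L = 1 (unit diagonal), det P = +/- 1
detA = np.linalg.det(P) * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(detA)
```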
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# to complete\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**The leading principal minors** of a matrix $A \\in \\mathbb{R}^{n \\times n}$ \n",
- "are the determinants of the matrices $A_p = (a_{i,j})_{1 \\leq i, j \\leq p}$, $p = 1, ..., n$.\n",
- "\n",
- "**Sylvester's criterion:** a symmetric matrix $A \\in \\mathbb{R}^{n\\times n}$ \n",
- "is positive definite if and only if all the leading principal\n",
- "minors of $A$ are positive."
- ]
- },
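Sylvester's criterion can be checked numerically; a minimal sketch, here for the $\varepsilon = 2$ matrix of the exercise below:

```python
import numpy as np

A = np.array([[2., 1., 2.], [1., 3., 1.], [2., 1., 3.]])  # epsilon = 2

# leading principal minors det(A_p), p = 1, ..., n
minors = [np.linalg.det(A[:p, :p]) for p in range(1, A.shape[0] + 1)]
print(minors)
print(all(m > 0 for m in minors))  # positive definite iff all minors > 0
```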
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Exercise \n",
- "\n",
- "Consider the linear system $A\\mathbf{x} = \\mathbf{b}$\n",
- "where\n",
- "\n",
- "$$A = \\left(\n",
- " \\begin{array}{ccc}\n",
- " \\varepsilon & 1 & 2\\\\\n",
- " 1 & 3 & 1 \\\\\n",
- " 2 & 1 & 3 \\\\\n",
- " \\end{array}\n",
- " \\right).$$\n",
- " \n",
- "1. Determine for which values of the real parameter\n",
- "$\\varepsilon \\in \\mathbb{R}$ the matrix $A$ is symmetric\n",
- "positive definite. \n",
- "\n",
- "2. Now let $\\varepsilon=0$. We want to solve the system\n",
- "$A\\mathbf{x}=\\mathbf{b}$ with a direct method; which\n",
- "factorization of the matrix $A$ would you consider? Justify\n",
- "your answer.\n",
- "\n",
- "3. Taking $\\varepsilon = 2$, verify that in this case the\n",
- "matrix $A$ is positive definite and compute its\n",
- "Cholesky factorization $A = L L^T$.\n",
- "\n",
- "4. Assuming $\\mathbf{b} = (1,1,1)^T$, solve the\n",
- "linear system $A \\mathbf{x} = \\mathbf{b}$ using the\n",
- "Cholesky factorization computed in step 3.\n",
- "\n",
- "Python reference for the Cholesky factorization:\n",
- "[scipy.linalg.cholesky](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.linalg.cholesky.html)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Parts 1 and 2\n",
- "\n",
- "see the exercise solutions"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 3"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "epsilon = 2\n",
- "A = np.array([[epsilon, 1, 2],\n",
- " [1, 3, 1],\n",
- " [2, 1, 3] ])\n",
- "\n",
- "# Is A symmetric ?\n",
- "print(f'max(abs(A-AT)) = {np.max(np.abs(A-A.T))} ')\n",
- "\n",
- "# What are the eigenvalues of A? (in general use eig, but A is symmetric here, so we can use eigh)\n",
- "lk, v = eigh(A)\n",
- "print(f'The eigenvalues of A are {lk}')\n",
- "\n",
- "# Cholesky factorisation: lower : return lower-triangular matrix, A = L L^T\n",
- "L = cholesky(A, lower=True) \n",
- "print(f\"\\n A = L L^T = {L.dot(L.T)}\\n\")\n",
- "\n",
- "print (f\"L = {L}\")\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 4"
- ]
- },
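One possible way to complete step 4, using the factorization $A = LL^T$ (a sketch; `solve_triangular` is an assumption, any substitution routine works):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

A = np.array([[2., 1., 2.], [1., 3., 1.], [2., 1., 3.]])
b = np.array([1., 1., 1.])

L = cholesky(A, lower=True)                # A = L L^T
y = solve_triangular(L, b, lower=True)     # forward:  L y = b
x = solve_triangular(L.T, y, lower=False)  # backward: L^T x = y
print(b - A @ x)                           # residual, should be ~0
```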
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# to complete\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Precision issues\n",
- "\n",
- "Round-off errors can cause large differences\n",
- "between the solution computed by Gaussian elimination\n",
- "and the exact solution. This happens when the\n",
- "*condition number* of the system matrix is very large.\n",
- "\n",
- "The Hilbert matrix of size $n\\times n$ is a symmetric\n",
- "matrix defined by\n",
- "\n",
- "$$a_{ij} = \\frac{1}{i+j-1}, \\qquad i,j=1,\\ldots,n$$\n",
- "\n",
- "A Hilbert matrix of any size $n$ can be built\n",
- "with the command `A = scipy.linalg.hilbert(n)`. For\n",
- "example, for $n = 4$ we have:\n",
- "$$A = \\begin{bmatrix}\n",
- " 1 & \\frac{1}{2} & \\frac{1}{3} & \\frac{1}{4} \\\\\n",
- " \\frac{1}{2} & \\frac{1}{3} & \\frac{1}{4} & \\frac{1}{5} \\\\\n",
- " \\frac{1}{3} & \\frac{1}{4} & \\frac{1}{5} & \\frac{1}{6} \\\\\n",
- " \\frac{1}{4} & \\frac{1}{5} & \\frac{1}{6} & \\frac{1}{7}\n",
- "\\end{bmatrix}$$\n",
- "\n",
- "Consider the linear systems $A_n \\mathbf {x}_n=\\mathbf {b}_n$ where $A_n$ is the Hilbert matrix of size\n",
- "$n$, with $n=4,6,8,10,12,14,\\ldots, 20$, while $\\mathbf {b}_n$ is chosen so that the exact solution is $\\mathbf{x}_n= (1,1,\\cdots, 1)^T$.\n",
- "\n",
- "1. For each $n$, compute the condition number of the matrix.\n",
- "\n",
- "2. Solve the linear system with the $LU$ factorization and denote by $\\mathbf{x}^{LU}_n$ the computed solution.\n",
- "\n",
- "3. Plot the relative error\n",
- " $\\Vert \\mathbf x_n -\\mathbf x^{LU}_n\\Vert/\\Vert \\mathbf x_n\\Vert$ (where $\\Vert\\cdot\\Vert$\n",
- " is the Euclidean norm of a vector, \n",
- " $\\Vert\\mathbf {x}\\Vert = \\sqrt{\\mathbf {x}^T\\cdot\\mathbf {x}}$). Use a logarithmic scale for the $y$ axis.\n",
- " \n",
- "4. On the same plot, report the condition number of the matrix $A$, `np.linalg.cond(A)`.\n",
- "\n",
- "Repeat the same with the Cholesky factorization `L = cholesky(A, lower=True)` for $n=4,6,8,10,12$. What happens for $n=14$?\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "Nrange = range(2,20,2)\n",
- "err = []\n",
- "cond = []\n",
- "\n",
- "for n in Nrange :\n",
- " A = hilbert(n)\n",
- " P, L, U = lu(A)\n",
- "\n",
- " x = np.ones([n,1])\n",
- "\n",
- " b = A.dot(x)\n",
- "\n",
- " # to complete (2-3 lines)\n",
- " \n",
- "\n",
- " err.append( np.linalg.norm(x-xLU) / np.linalg.norm(x) )\n",
- " cond.append( np.linalg.cond(A) )\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "\n",
- "plt.plot(Nrange, err, 'b:.',Nrange, cond, 'g:.')\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('err');\n",
- "# plt.xscale('log') \n",
- "plt.yscale('log')\n",
- "plt.grid(True)\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "Nrange = range(2,13,2)\n",
- "err = []\n",
- "cond = []\n",
- "\n",
- "for n in Nrange :\n",
- " A = hilbert(n)\n",
- " L = cholesky(A, lower=True)\n",
- "\n",
- " x = np.ones([n,1])\n",
- "\n",
- " b = A.dot(x)\n",
- "\n",
- " # to complete (2 lines)\n",
- "\n",
- " err.append( np.linalg.norm(x-xCho) / np.linalg.norm(x) )\n",
- " cond.append( np.linalg.cond(A) )\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "\n",
- "plt.plot(Nrange, err, 'b:.',Nrange, cond, 'g:.')\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('err');\n",
- "# plt.xscale('log') \n",
- "plt.yscale('log')\n",
- "plt.grid(True)\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "n = 14\n",
- "A = hilbert(n)\n",
- "# L = cholesky(A, lower=True)\n",
- "lk, v = eigh(A)\n",
- "\n",
- "print(f\"lk[0] = {lk[0]:0.5e} , the matrix is not positive definite. Let's verify with the first eigenvector\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The first eigenvalue is negative; this means that $(\\mathbf v, A\\mathbf v) = (\\mathbf v, \\lambda\\mathbf v) = \\lambda (\\mathbf v, \\mathbf v) = \\lambda \\|\\mathbf v\\|^2<0$, where $\\mathbf v$ is the eigenvector associated with this negative eigenvalue $\\lambda$.\n",
- "\n",
- "Let us verify:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "print( A.dot(v[:, 0]).T.dot(v[:, 0]) )  # eigenvectors are the columns of v"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Why? In fact, computing the eigenvalues and eigenvectors of $A$ also suffers from round-off errors because of the conditioning. Indeed, one can check that the ratio between the components of $\\mathbf v$ and $A\\mathbf v$ is neither equal to $\\lambda$ nor to any other constant:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "print( v[:, 0] / A.dot(v[:, 0]) )"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.2 Interpolation de Lagrange.ipynb b/4.2 Interpolation de Lagrange.ipynb
deleted file mode 100644
index 3356dad..0000000
--- a/4.2 Interpolation de Lagrange.ipynb
+++ /dev/null
@@ -1,190 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 1.2 Lagrange interpolation\n",
- "\n",
- "### Lagrange basis\n",
- "\n",
- "Consider the polynomials $\\varphi_k$, $k=0,\\dots, n$, of\n",
- "degree $n$ such that\n",
- "\n",
- "$$\\varphi_k(t_j)=\\delta_{jk}, \\qquad k,j=0,\\dots, n,$$\n",
- "\n",
- "where $\\delta_{jk}=1$ if $j=k$ and $\\delta_{jk}=0$ if $j\\neq k$.\n",
- "Explicitly, we have\n",
- "\n",
- "$$\\varphi_k(t)=\\prod_{j=0,j\\ne k}^n\\dfrac{(t-t_j)}{(t_k-t_j)}.$$"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# importing libraries used in this notebook\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Exercise (theoretical)** Verify that\n",
- "\n",
- "1. $B = \\{\\varphi_k, k=0,\\dots, n\\}$ is a basis of $P_n(\\mathbb R)$\n",
- "2. Each polynomial $\\varphi_k$ has degree $n$\n",
- "3. $\\varphi_k(t_j)=\\delta_{jk}, \\qquad k,j=0,\\dots, n$\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Defining the Lagrange basis functions\n",
- "def phi(t,k,z):\n",
- " # the input variables are:\n",
- " # t : the interpolatory points\n",
- " # k : which basis function\n",
- " # z : where to evaluate the function\n",
- " \n",
- "\n",
- " # careful, there are n+1 interpolation points!\n",
- " n = t.size - 1 \n",
- "\n",
- " # init result to one, of same type and size as z\n",
- " result = np.zeros_like(z) + 1\n",
- "\n",
- " # first few checks on k:\n",
- " if (type(k) != int) or (t.size < 1) or (k > n) or (k < 0):\n",
- " raise ValueError('Lagrange basis needs a positive integer k, smaller than the size of t')\n",
- " \n",
- " # loop on n to compute the product\n",
- " for j in range(0,n+1) :\n",
- " if (j == k) :\n",
- " continue\n",
- " if (t[k] == t[j]) :\n",
- " raise ValueError('Lagrange basis: all the interpolation points need to be distinct')\n",
- "\n",
- "# (Complete the code below)\n",
- " result = \n",
- "\n",
- " return result\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Example\n",
- "\n",
- "For $n=2$, $t_0=-1$, $t_1=0$, $t_2=1$, the Lagrange basis polynomials\n",
- "are\n",
- "\n",
- "\\begin{equation*}\n",
- "\\begin{array}{lcl}\n",
- "\\varphi_0(t)&=&\\displaystyle\\dfrac{(t-t_1)(t-t_2)}{(t_0-t_1)(t_0-t_2)}=\n",
- "\\dfrac{1}{2}t(t-1),\\\\[2mm]\n",
- "\\varphi_1(t)&=&\\displaystyle\\dfrac{(t-t_0)(t-t_2)}{(t_1-t_0)(t_1-t_2)}=\n",
- "-(t+1)(t-1),\\\\[2mm]\n",
- "\\varphi_2(t)&=&\\displaystyle\\dfrac{(t-t_0)(t-t_1)}{(t_2-t_0)(t_2-t_1)}=\n",
- "\\dfrac{1}{2}t(t+1).\n",
- "\\end{array}\n",
- "\\end{equation*}\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# plot the Lagrange Basis functions \n",
- "t = np.linspace(-1., 1, 3)\n",
- "z = np.linspace(-1.1, 1.1, 100)\n",
- "\n",
- "plt.plot(z, phi(t,0,z), 'g', z, phi(t,1,z), 'r', z, phi(t,2,z),':')\n",
- "\n",
- "plt.xlabel('t'); plt.ylabel('$\\\\varphi_{k}(t)$'); plt.title('Lagrange basis functions')\n",
- "plt.legend(['$\\\\varphi_{0}$','$\\\\varphi_{1}$','$\\\\varphi_{2}$'])\n",
- "plt.grid(True)\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Exercise\n",
- "\n",
- "Plot the Lagrange basis associated with the points $t_k = 1, 1.5, 2, 2.5, 3$.\n",
- "Read off from the plot the values of $\\varphi_k(t_j)$.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# plot the Lagrange Basis functions \n",
- "t = np.linspace(1., 3, 5)\n",
- "z = np.linspace(0.9, 3.1, 100)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### The interpolating polynomial\n",
- "\n",
- "The interpolating polynomial $\\Pi_n$ of the\n",
- "values $y_j$ at the nodes $t_j$, $j=0,\\dots,n$, can be written\n",
- "\\begin{equation}\\label{e:int_lagr}\n",
- " \\Pi_n(t)=\\sum_{k=0}^n y_k \\varphi_k(t),\n",
- "\\end{equation}\n",
- "since it satisfies $\\Pi_n(t_j)=\\sum_{k=0}^n y_k \\varphi_k(t_j)=y_j$.\n",
- "\n"
- ]
- },
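The formula above can be sketched directly with a small hand-rolled Lagrange basis (self-contained, independent of the `phi` function defined earlier in this notebook), using the data of the first example:

```python
import numpy as np

t = np.array([1, 1.5, 2, 2.5, 3])  # nodes
y = np.array([3, 4, 2, 5, 1])      # values

def Pi_n(s):
    """Evaluate the Lagrange interpolating polynomial sum_k y_k phi_k(s)."""
    total = np.zeros_like(np.asarray(s, dtype=float))
    for k in range(t.size):
        # basis polynomial phi_k evaluated at s
        phi_k = np.ones_like(total)
        for j in range(t.size):
            if j != k:
                phi_k *= (s - t[j]) / (t[k] - t[j])
        total += y[k] * phi_k
    return total

print(Pi_n(t) - y)  # zero at the nodes, by construction
```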
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.2 Methodes iteratives.ipynb b/4.2 Methodes iteratives.ipynb
deleted file mode 100644
index 240af56..0000000
--- a/4.2 Methodes iteratives.ipynb
+++ /dev/null
@@ -1,646 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Iterative methods"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# importing libraries used in this notebook\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt\n",
- "\n",
- "\n",
- "# scipy.linalg.eig : eigenvalues of a matrix\n",
- "from scipy.linalg import eig\n",
- "# scipy.linalg.lu : LU decomposition\n",
- "from scipy.linalg import lu\n",
- "# scipy.linalg.cholesky : Cholesky decomposition\n",
- "from scipy.linalg import cholesky\n",
- "from scipy.linalg import hilbert"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Example\n",
- "\n",
- "$$A=\\begin{pmatrix}\n",
- "1 & 2 & 3 & 4 \\\\\n",
- "5 & 6 & 7 & 8\\\\\n",
- "9 & 10 & 11 & 12\\\\\n",
- "13 & 14 & 15 & 16\n",
- "\\end{pmatrix}.$$\n",
- "\n",
- "Then\n",
- "$$D=\\begin{pmatrix}\n",
- "1 & 0 & 0 & 0 \\\\\n",
- "0 & 6 & 0 & 0\\\\\n",
- "0 & 0 & 11 & 0\\\\\n",
- "0 & 0 & 0 & 16\n",
- "\\end{pmatrix};$$\n",
- "and the iteration matrix of the Jacobi method is\n",
- "$$B_J = D ^{-1}(D - A) = I - D^{-1}A =\n",
- "\\begin{pmatrix}\n",
- "0 & -2 & -3 & -4 \\\\\n",
- "-5/6 & 0 & -7/6 & -4/3\\\\\n",
- "-9/11 & -10/11 & 0 & -12/11\\\\\n",
- "-13/16 & -14/16 & -15/16 & 0\n",
- "\\end{pmatrix}.$$"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "A = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Jacobi\n",
- "# the inner diag extracts the diagonal of A; the outer builds a diagonal matrix from a vector\n",
- "D = np.diag(np.diag(A)) \n",
- "\n",
- "Bj = np.linalg.solve(D,(D-A))\n",
- "# What are the eigenvalues of Bj?\n",
- "lk, v = eig(Bj)\n",
- "print(f'The eigenvalues of Bj are {lk}\\n')\n",
- "print(f'The spectral radius of Bj is {np.max(np.abs(lk)):.4f} > 1, therefore Jacobi does not converge')\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Gauss-Seidel\n",
- "# Return a copy of an array with elements above the k-th diagonal zeroed. \n",
- "Pgs = np.tril(A,k=0)\n",
- "\n",
- "Bgs = np.linalg.solve(Pgs,(Pgs-A))\n",
- "# What are the eigenvalues of Bgs?\n",
- "lk, v = eig(Bgs)\n",
- "print(f'The eigenvalues of Bgs are {lk}\\n')\n",
- "print(f'The spectral radius of Bgs is {np.max(np.abs(lk)):.4f} > 1, therefore Gauss-Seidel does not converge')\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": []
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Exercise \n",
- "redo this example with the matrix \n",
- "```python\n",
- "A = np.array([[3,2,1,0],[3,4,3,2],[3,2,5,2],[1,2,3,4]])\n",
- "```\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Example\n",
- "\n",
- "$$A = \\left(\n",
- " \\begin{array}{ccc}\n",
- " 3 & 2 & 1 \\\\[0.5em]\n",
- " 1 & 4 & 1 \\\\[0.5em]\n",
- " 2 & 4 & 8\n",
- " \\end{array} \\right), \\quad\n",
- " \\mathbf{b} = \\left( \\begin{array}{c} 4 \\\\ 5 \\\\ 6 \\end{array} \n",
- "\\right) \\, .$$\n",
- "\n",
- "Write down (without computing them numerically) $P$, $B$, and $\\rho(B)$ for Jacobi and for Gauss-Seidel.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "A = np.array([[3, 2, 1],\n",
- " [1, 4, 1],\n",
- " [2, 4, 8] ])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Jacobi\n",
- "\n",
- "# To complete\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Gauss-Seidel\n",
- "\n",
- "# To complete\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Richardson's method\n",
- "\n",
- "Given $A$ and $\\mathbf b$, we seek to approximate the solution $\\mathbf x$ of $A\\mathbf x = \\mathbf b$.\n",
- "\n",
- "Given $\\mathbf x^{(0)}$, set $\\mathbf r^{(0)}= \\mathbf b - A\\mathbf x^{(0)}$; for $k=0,1,2,...$ :\n",
- "\n",
- "\\begin{eqnarray*}\n",
- "\\text{find }\\mathbf{z}^{(k)} \\text{ such that } && P \\mathbf{z}^{(k)} = \\mathbf{r}^{(k)}\\\\\n",
- "\\text{choose }\\alpha_k && \\\\\n",
- "&& \\mathbf{x}^{(k+1)} = \\mathbf{x}^{(k)} + \\alpha_k \\mathbf{z}^{(k)} \\\\\n",
- "&& \\mathbf{r}^{(k+1)} = \\mathbf{r}^{(k)} - \\alpha_k A \\mathbf{z}^{(k)}.\n",
- "\\end{eqnarray*}\n",
- "\n"
- ]
- },
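The algorithm above can be sketched as follows for the stationary case (constant $\alpha$); the function name and signature here are illustrative, not the ones requested in the exercise below:

```python
import numpy as np

def richardson_stationary(A, b, x0, P, alpha, max_iter=1000, tol=1e-10):
    """Preconditioned stationary Richardson iteration for A x = b."""
    x = x0.copy()
    r = b - A @ x
    for _ in range(max_iter):
        z = np.linalg.solve(P, r)  # solve P z = r
        x = x + alpha * z
        r = r - alpha * (A @ z)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

A = np.array([[3., 2., 1.], [1., 4., 1.], [2., 4., 8.]])
b = np.array([4., 5., 6.])
P = np.diag(np.diag(A))  # Jacobi preconditioner
x = richardson_stationary(A, b, np.zeros(3), P, alpha=1.0)
print(b - A @ x)         # residual, small after convergence
```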
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Examples: Jacobi and Gauss-Seidel with relaxation\n",
- "\n",
- "We have seen that the Jacobi method applied to the matrix\n",
- "$$A=\\begin{pmatrix}\n",
- "3 &2&1&0\\\\3&4&3&2\\\\3&2&5&2\\\\1&2&3&4\n",
- "\\end{pmatrix}$$\n",
- "does not converge, while Gauss-Seidel converges.\n",
- "\n",
- "We will use the stationary Richardson method with a constant relaxation parameter $\\alpha$ to see whether the convergence can be improved.\n",
- "\n",
- "#### Part 1: iteration matrices\n",
- "\n",
- "The iteration matrix of the stationary Richardson method is $B(\\alpha) = I - \\alpha P^{-1}A$. In particular, for constant $\\alpha$, we can plot the spectral radius $\\rho(B(\\alpha))$ for Jacobi and Gauss-Seidel:\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "A = np.array([[3,2,1,0],[3,4,3,2],[3,2,5,2],[1,2,3,4]])\n",
- "D = np.diag(np.diag(A)) \n",
- "Pgs = np.tril(A,k=0)\n",
- "\n",
- "Alphas = np.linspace(0, 2, 3001)\n",
- "\n",
- "rhoBj = []\n",
- "rhoBgs = []\n",
- "\n",
- "\n",
- "for alpha in Alphas :\n",
- " Bgs = np.linalg.solve(Pgs,(Pgs-alpha*A))\n",
- " Bj = np.linalg.solve(D,(D-alpha*A))\n",
- "\n",
- "    lk, v = np.linalg.eig(Bj)\n",
- " rhoBj.append( np.max(np.abs(lk)) )\n",
- "\n",
- "    lk, v = np.linalg.eig(Bgs)\n",
- " rhoBgs.append( np.max(np.abs(lk)) )\n",
- "\n",
- " \n",
- "plt.plot(Alphas, rhoBj, 'b:', Alphas, rhoBgs, 'g:')\n",
- "\n",
- "plt.xlabel('alpha'); plt.ylabel('rho');\n",
- "plt.title('Spectral radius')\n",
- "plt.grid(True)\n",
- "plt.legend(['rho(B_J(alpha))','rho(B_GS(alpha))'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We observe that:\n",
- "\n",
- "1. With $P=D$, Richardson converges for $\alpha\lessapprox 0.8$, and the optimal value is $\alpha \approx 0.75$.\n",
- "2. With $P$ equal to the lower triangular part of $A$, Richardson converges for $\alpha\lessapprox 2$, and the optimal value is $\alpha \approx 1.5$.\n",
- "\n",
- "#### Part 2: implementing the Richardson method\n",
- "\n",
- "Define a Python function that implements the stationary Richardson method and has the following structure:\n",
- "```python\n",
- "def Richardson(A, b, x0, P, alpha, maxIterations, tolerance) :\n",
- "    # Richardson method to approximate the solution of Ax=b\n",
- "    # x0 : initial guess\n",
- "    # P : preconditioner\n",
- "    # alpha : relaxation parameter\n",
- "    # maxIterations : maximum number of iterations\n",
- "    # tolerance : tolerance for relative residual\n",
- "```\n",
- "\n",
- "Like every iterative method, the Richardson method needs a stopping criterion. Here the simplest choice is:\n",
- "\n",
- "1. a maximum number of iterations is fixed;\n",
- "2. the relative residual is smaller than a tolerance $\epsilon$ : \n",
- "    $$ \\frac{\\| \\mathbf b - A \\mathbf x^{(k)} \\|}{\\|\\mathbf b \\|} < \\epsilon$$\n",
- "The iteration stops as soon as one of these criteria is met.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def Richardson(A, b, x0, P, alpha, maxIterations, tolerance) :\n",
- " # Richardson method to approximate the solution of Ax=b\n",
- " # x0 : initial guess\n",
- "    # P : preconditioner\n",
- "    # alpha : relaxation parameter\n",
- " # maxIterations : maximum number of iterations\n",
- " # tolerance : tolerance for relative residual\n",
- " # Outputs :\n",
- " # xk : approximate solution\n",
- " # vector of relative norm of the residuals\n",
- " \n",
- "\n",
- " # To complete\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 3: using the Richardson method\n",
- "\n",
- "Use the function `Richardson` to approximate the solution of $A\mathbf x = \mathbf b$ for $\mathbf b = (0, 1, -1, 1)^T$, with a tolerance of $10^{-6}$ on the relative residual, the zero vector as initial guess, and a maximum of $200$ iterations:\n",
- "\n",
- "1. with $P=D$ and $\alpha = 0.7, 0.75, 0.79, 0.81, 0.9$;\n",
- "2. with $P$ equal to the lower triangular part of $A$ and $\alpha = 1, 1.5, 1.95, 2.05$.\n",
- "\n",
- "How do you interpret these results?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "b = np.array([0,1,-1,1])\n",
- "# Define the initial guess\n",
- "x0 = 0*b # trick to have the right size for x0\n",
- "\n",
- "tolerance = 1e-6\n",
- "maxIter = 200"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Jacobi-like preconditioner\n",
- "P = np.diag(np.diag(A))\n",
- "legend = []\n",
- "\n",
- "for alpha in [0.7, 0.75, 0.79,0.81, 0.9] :\n",
- " x,relRes = Richardson(A,b,x0,P,alpha,maxIter,tolerance)\n",
- " print(relRes[-1])\n",
- " plt.plot( relRes, ':')\n",
- " legend.append(str(alpha))\n",
- "\n",
- "plt.legend(legend)\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('res');\n",
- "plt.title('Relative residuals')\n",
- "plt.yscale('log')\n",
- "plt.show()\n",
- "\n",
- " \n",
- " "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Gauss-Seidel-like preconditioner\n",
- "P = np.tril(A,k=0)\n",
- "legend = []\n",
- "\n",
- "for alpha in [1, 1.5, 1.95,2.05] :\n",
- " x,relRes = Richardson(A,b,x0,P,alpha,maxIter,tolerance)\n",
- " print(relRes[-1])\n",
- " plt.plot( relRes, ':')\n",
- " legend.append(str(alpha))\n",
- "\n",
- "plt.legend(legend)\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('res');\n",
- "plt.title('Relative residuals')\n",
- "plt.yscale('log')\n",
- "plt.show()\n",
- "\n",
- " \n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 4: implementing the preconditioned gradient method\n",
- "\n",
- "Define a Python function that implements the preconditioned gradient method and has the following structure:\n",
- "```python\n",
- "def PrecGradient(A, b, x0, P, maxIterations, tolerance) :\n",
- "    # Preconditioned Gradient method to approximate the solution of Ax=b\n",
- "    # x0 : initial guess\n",
- "    # P : preconditioner\n",
- "    # maxIterations : maximum number of iterations\n",
- "    # tolerance : tolerance for relative residual\n",
- "```\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def PrecGradient(A, b, x0, P, maxIterations, tolerance) :\n",
- " # Preconditioned Gradient method to approximate the solution of Ax=b\n",
- " # x0 : initial guess\n",
- "    # P : preconditioner\n",
- " # maxIterations : maximum number of iterations\n",
- " # tolerance : tolerance for relative residual\n",
- " # Outputs :\n",
- " # xk : approximate solution\n",
- " # vector of relative norm of the residuals\n",
- " \n",
- " # To complete. Start from Richardson, it is easier\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 5: implementing the preconditioned conjugate gradient method\n",
- "\n",
- "Define a Python function that implements the preconditioned conjugate gradient method and has the following structure:\n",
- "```python\n",
- "def PrecConjugateGradient(A, b, x0, P, maxIterations, tolerance) :\n",
- "    # Preconditioned Conjugate Gradient method to approximate the solution of Ax=b\n",
- "    # x0 : initial guess\n",
- "    # P : preconditioner\n",
- "    # maxIterations : maximum number of iterations\n",
- "    # tolerance : tolerance for relative residual\n",
- "```\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def PrecConjugateGradient(A, b, x0, P, maxIterations, tolerance) :\n",
- "    # Preconditioned Conjugate Gradient method to approximate the solution of Ax=b\n",
- " # x0 : initial guess\n",
- "    # P : preconditioner\n",
- " # maxIterations : maximum number of iterations\n",
- " # tolerance : tolerance for relative residual\n",
- " # Outputs :\n",
- " # xk : approximate solution\n",
- " # vector of relative norm of the residuals\n",
- " \n",
- " # we do not keep track of all the sequence, just the last two entries\n",
- " xk = x0\n",
- " rk = b - A.dot(xk)\n",
- " zk = np.linalg.solve(P,rk)\n",
- " pk = zk\n",
- " \n",
- " # To complete. Start from PrecGradient, it is easier"
- ]
- },
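For reference, the loop started in the cell above can be continued with the standard preconditioned CG recurrence, where $\alpha_k$ and $\beta_k$ are computed from the $P$-preconditioned inner products. This is a hedged sketch under our own naming conventions, not necessarily the exact solution expected by the exercise; it assumes a symmetric positive definite $A$ and $P$, which is why the test uses a diagonal preconditioner:

```python
import numpy as np

def prec_conjugate_gradient(A, b, x0, P, max_iterations, tolerance):
    # preconditioned conjugate gradient for symmetric positive definite A
    xk = x0.astype(float)
    rk = b - A.dot(xk)
    zk = np.linalg.solve(P, rk)
    pk = zk
    rel_res = [np.linalg.norm(rk) / np.linalg.norm(b)]
    for _ in range(max_iterations):
        Apk = A.dot(pk)
        alpha = rk.dot(zk) / pk.dot(Apk)      # step length along pk
        xk = xk + alpha * pk
        r_new = rk - alpha * Apk
        z_new = np.linalg.solve(P, r_new)     # preconditioning step
        beta = r_new.dot(z_new) / rk.dot(zk)  # conjugation coefficient
        pk = z_new + beta * pk
        rk, zk = r_new, z_new
        rel_res.append(np.linalg.norm(rk) / np.linalg.norm(b))
        if rel_res[-1] < tolerance:
            break
    return xk, rel_res

A = np.array([[2., 1], [1, 3]])
b = np.array([1., 0])
x, res = prec_conjugate_gradient(A, b, np.zeros(2), np.diag(np.diag(A)), 50, 1e-10)
```

In exact arithmetic CG terminates in at most $n$ iterations, so this $2\times 2$ system converges essentially to machine precision after two steps.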
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Part 6: using the preconditioned gradient and preconditioned conjugate gradient methods\n",
- "\n",
- "Use the functions `PrecGradient` and `PrecConjugateGradient` to approximate the solution of $A\mathbf x = \mathbf b$ for $\mathbf b = (0, 1, -1, 1)^T$, with a tolerance of $10^{-6}$ on the relative residual, the zero vector as initial guess, and a maximum of $200$ iterations:\n",
- "\n",
- "1. with $P=D$;\n",
- "2. with $P$ equal to the lower triangular part of $A$.\n",
- "\n",
- "How do you interpret these results?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "legend = []\n",
- "\n",
- "# Jacobi-like preconditioner\n",
- "P = np.diag(np.diag(A))\n",
- "\n",
- "x,relRes = PrecGradient(A,b,x0,P,maxIter,tolerance)\n",
- "print(relRes[-1])\n",
- "plt.plot( relRes, ':')\n",
- "legend.append('Gradient, P=D')\n",
- "\n",
- "# Repeat the same for:\n",
- "# - PrecConjugateGradient, then\n",
- "# - the Gauss-Seidel-like preconditioner (both Gradient and Conjugate Gradient)\n",
- "\n",
- "\n",
- "\n",
- "plt.legend(legend)\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('res');\n",
- "plt.title('Relative residuals')\n",
- "plt.yscale('log')\n",
- "plt.show()\n",
- "\n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Further examples\n",
- "\n",
- "Repeat the numerical experiments above to approximate the solutions of\n",
- "\n",
- "$$\\left(\n",
- " \\begin{array}{ccc}\n",
- " 2 & 1 \\\\[0.5em]\n",
- " 1 & 3 \n",
- " \\end{array} \\right)\n",
- " \\mathbf x = \n",
- " \\left( \\begin{array}{c} 1 \\\\ 0 \\end{array} \\right) \\quad,\\quad \n",
- " \\left(\n",
- " \\begin{array}{ccc}\n",
- " 2 & 2 \\\\[0.5em]\n",
- " -1 & 3 \n",
- " \\end{array} \\right)\n",
- " \\mathbf x = \n",
- " \\left( \\begin{array}{c} 1 \\\\ 0 \\end{array} \\right) \\quad\\text{et}\\quad \n",
- " \\left(\n",
- " \\begin{array}{ccc}\n",
- " 5 & 7 \\\\[0.5em]\n",
- " 7 & 10 \n",
- " \\end{array} \\right)\n",
- " \\mathbf x = \n",
- " \\left( \\begin{array}{c} 1 \\\\ 0 \\end{array} \\right) .$$\n",
- " \n",
- "For which matrices and methods do you expect convergence? For the last matrix the result is not the expected one. Why?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "b = np.array([1,0])\n",
- "# Define the initial guess\n",
- "x0 = np.array([1,1])\n",
- "\n",
- "tolerance = 1e-6\n",
- "maxIter = 10\n",
- "A = np.array([[2,1],[1,3]])\n",
- "# A = np.array([[2,2],[-1,3]])\n",
- "# A = np.array([[5,7],[7,10]])\n",
- "\n",
- "legend = []\n",
- "\n",
- "# To complete, as in the previous example:\n",
- "# Jacobi, Gauss-Seidel (simple)\n",
- "# PrecGradient and PrecConjugateGradient, with\n",
- "# both Jacobi-like and Gauss-Seidel-like preconditioners\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "plt.legend(legend)\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('res');\n",
- "plt.title('Relative residuals')\n",
- "plt.yscale('log')\n",
- "plt.show()\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Hilbert matrix\n",
- "\n",
- "Can the Richardson method be used to overcome the ill-conditioning of the Hilbert matrix?"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from scipy.linalg import hilbert\n",
- "\n",
- "n = 10\n",
- "A = hilbert(n)\n",
- "xexact = np.ones(n)\n",
- "b = A.dot(xexact)\n",
- "\n",
- "# Define the initial guess\n",
- "x0 = b\n",
- "\n",
- "tolerance = 1e-6\n",
- "maxIter = 10\n",
- "\n",
- "legend = []\n",
- "\n",
- "# To complete, as in the previous example:\n",
- "# Jacobi, Gauss-Seidel (simple)\n",
- "# PrecGradient and PrecConjugateGradient, with\n",
- "# both Jacobi-like and Gauss-Seidel-like preconditioners\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "plt.legend(legend)\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('res');\n",
- "plt.title('Relative residuals')\n",
- "plt.yscale('log')\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.3 Complements.ipynb b/4.3 Complements.ipynb
deleted file mode 100644
index 3a70a40..0000000
--- a/4.3 Complements.ipynb
+++ /dev/null
@@ -1,186 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Complements"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# importing libraries used in this book\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt\n",
- "\n",
- "\n",
- "# scipy.linalg.lu : LU decomposition\n",
- "from scipy.linalg import lu\n",
- "# scipy.linalg.cholesky : Cholesky decomposition\n",
- "from scipy.linalg import cholesky\n",
- "from scipy.linalg import hilbert\n",
- "\n",
- "from IterativeMethodsLib import *"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For an ill-conditioned matrix, one can try to use the LU or the Cholesky factorization as a preconditioner. But this does not always work ..."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "n = 4\n",
- "A = hilbert(n)\n",
- "\n",
- "D = np.diag(np.diag(A))\n",
- "\n",
- "MaxIterations = 1000\n",
- "\n",
- "x = np.ones([n,1])\n",
- "b = A.dot(x)\n",
- "alpha = 0.5\n",
- "\n",
- "# we do not keep track of all the sequence, just the last two entries\n",
- "xk = b\n",
- "rk = b - A.dot(xk)\n",
- "\n",
- "RelativeError = []\n",
- "residualNorm = []\n",
- "\n",
- "# We rewrite Richardson for the sake of the example: \n",
- "for k in range(MaxIterations) :\n",
- " \n",
- " zk = np.linalg.solve(D,rk)\n",
- "\n",
- " xk = xk + alpha*zk\n",
- " rk = rk - alpha*A.dot(zk)\n",
- " #rk = b - A.dot(xk)\n",
- " \n",
- " \n",
- " RelativeError.append( np.linalg.norm(x-xk) / np.linalg.norm(x) )\n",
- " residualNorm.append( np.linalg.norm(rk) )\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#plt.plot(range(MaxIterations), RelativeError, 'b:.')\n",
- "plt.plot(range(MaxIterations), RelativeError, 'b:.',range(MaxIterations), residualNorm, 'g:.')\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('err');\n",
- "# plt.xscale('log') \n",
- "plt.yscale('log')\n",
- "plt.grid(True)\n",
- "plt.legend(['Rel err','res'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "n = 10\n",
- "A = hilbert(n)\n",
- "\n",
- "# epsilon = 2\n",
- "#A = np.array([[epsilon, 1, 2],\n",
- "# [1, 3, 1],\n",
- "# [2, 1, 3] ])\n",
- "\n",
- "# B = np.diag(np.diag(A, k=1),k=1) + np.diag(np.diag(A, k=0),k=0) + np.diag(np.diag(A, k=-1),k=-1) \n",
- "# L = cholesky(B, lower=True)\n",
- "P, L, U = lu(A)\n",
- "\n",
- "MaxIterations = 30\n",
- "\n",
- "x = np.ones([n,1])\n",
- "b = A.dot(x)\n",
- "alpha = 0.5\n",
- "\n",
- "# we do not keep track of all the sequence, just the last two entries\n",
- "xk = b\n",
- "rk = b - A.dot(xk)\n",
- "\n",
- "RelativeError = []\n",
- "residualNorm = []\n",
- "\n",
- "for k in range(MaxIterations) :\n",
- " \n",
- " # y = np.linalg.solve(L,rk)\n",
- " # zk = np.linalg.solve(L.T,y)\n",
- "\n",
- " y = np.linalg.solve(L,P.T.dot(rk))\n",
- " zk = np.linalg.solve(U,y)\n",
- "\n",
- " xk = xk + alpha*zk\n",
- " rk = rk - alpha*A.dot(zk)\n",
- " #rk = b - A.dot(xk)\n",
- " \n",
- " \n",
- " RelativeError.append( np.linalg.norm(x-xk) / np.linalg.norm(x) )\n",
- " residualNorm.append( np.linalg.norm(rk) )\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "#plt.plot(range(MaxIterations), RelativeError, 'b:.')\n",
- "plt.plot(range(MaxIterations), RelativeError, 'b:.',range(MaxIterations), residualNorm, 'g:.')\n",
- "\n",
- "plt.xlabel('n'); plt.ylabel('err');\n",
- "# plt.xscale('log') \n",
- "plt.yscale('log')\n",
- "plt.grid(True)\n",
- "plt.legend(['Rel err','res'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.3 Interpolation de fonction.ipynb b/4.3 Interpolation de fonction.ipynb
deleted file mode 100644
index cd8a001..0000000
--- a/4.3 Interpolation de fonction.ipynb
+++ /dev/null
@@ -1,401 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 1.3 Interpolation of a continuous function\n",
- "\n",
- "Let $f: [a,b] \rightarrow \mathbb{R}$ be continuous and let $t_0,\ldots,t_n\in [a,b]$ be distinct nodes. The interpolating polynomial\n",
- "$\Pi_n(t)$ is denoted by $\Pi_n f(t)$ and is called the interpolant of $f$ at the\n",
- "nodes $t_0,\dots,t_n$.\n",
- "\n",
- "\n",
- "If we take $$y_k=f(t_k), \quad k= 0,..., n,$$ \n",
- "then\n",
- "\begin{equation*}\n",
- "\Pi_nf(t)=\sum_{k=0}^n\n",
- "f(t_k)\varphi_k(t).\n",
- "\end{equation*}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Exercise\n",
- "\n",
- "Write a Python function with the following definition, using the function `phi` defined above.\n",
- "Also write a small test based on the previous exercise.\n",
- "\n",
- "```python\n",
- "# Lagrange Interpolation of data (t,y), evaluated at ordinate(s) z\n",
- "def LagrangePi(t,y,z):\n",
- "    # the input variables are:\n",
- "    # t : the interpolatory points\n",
- "    # y : the corresponding data at the points t\n",
- "    # z : where to evaluate the function\n",
- "    \n",
- "```\n",
- "\n",
- "Use the fact that $\{\varphi_k, k = 0,...,n\}$ is a basis of the polynomials of degree $\leq n$\n",
- "and that the vector $y$ contains the coordinates of the interpolating polynomial with respect to this basis,\n",
- "that is\n",
- "$$\Pi_n (z) = y_0 \varphi_0 + ... + y_n \varphi_n$$\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# importing libraries used in this book\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt\n",
- "\n",
- "from InterpolationLib import LagrangeBasis as phi"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Lagrange Interpolation of data (t,y), evaluated at ordinate(s) z\n",
- "def LagrangePi(t,y,z):\n",
- " # the input variables are:\n",
- " # t : the interpolatory points\n",
- " # y : the corresponding data at the points t\n",
- " # z : where to evaluate the function\n",
- " \n",
- " # (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- " return result "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# interpolation points\n",
- "t = np.linspace(0, 1, 5)\n",
- "\n",
- "# data values\n",
- "y = np.array([3.38, 3.86, 3.85, 3.59, 3.49])\n",
- "\n",
- "z = np.linspace(-0.1, 1.1, 100)\n",
- "plt.plot(t, y, 'ro', z, LagrangePi(t,y,z))\n",
- "\n",
- "plt.xlabel('t'); plt.ylabel('y');\n",
- "plt.legend(['data','p(t)'])\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Interpolation error\n",
- "\n",
- "Let $t_0$, $t_1$, $\ldots$, $t_n$ be\n",
- "$(n+1)$ *equispaced* nodes in $I=[a,b]$ and let $f\in C^{n+1}(I)$.\n",
- "Then\n",
- "\begin{equation}\n",
- "  \max_{t\in I} |f(t) - \Pi_n f(t)| \leq \dfrac{1}{2(n+1)}\n",
- "\left(\frac{b-a}{n}\right)^{n+1}\max_{t\in I}|f^{(n+1)}(t)|.\n",
- "\end{equation}\n",
- "\n",
- "Note that the interpolation error depends on the $(n+1)$-th\n",
- "derivative of $f$."
- ]
- },
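The bound can be checked numerically. The experiment below is our own choice (f = cos on [0,1], with `np.polyfit` used as a stand-in for the Lagrange interpolant, which gives the same polynomial); since every derivative of cos is bounded by 1, the right-hand side of the bound can be evaluated explicitly:

```python
import numpy as np

f = np.cos                                   # all derivatives bounded by 1
a, b, n = 0.0, 1.0, 4

t = np.linspace(a, b, n + 1)                 # n+1 equispaced nodes
coeffs = np.polyfit(t, f(t), n)              # interpolating polynomial of degree n

z = np.linspace(a, b, 1000)
err = np.max(np.abs(f(z) - np.polyval(coeffs, z)))

# theoretical bound, using max|f^(n+1)| <= 1
bound = 1.0 / (2 * (n + 1)) * ((b - a) / n) ** (n + 1)
```

Here `bound` is about $10^{-4}$ and the measured error stays below it, as the theorem predicts.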
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Exercise\n",
- "\n",
- "Consider the interpolation points $$t_0=1, t_1=1.75, t_2=2.5, t_3=3.25, t_4=4$$ \n",
- "and the function $$f(t) = t \sin(2\pi t).$$\n",
- "\n",
- "1. Compute the Lagrange basis associated with these points, first on paper, then use Python to plot it.\n",
- "\n",
- "2. Compute the interpolating polynomial $\Pi_n$ using the Lagrange basis, first on paper, then with Python.\n",
- "\n",
- "3. What is the theoretical interpolation error? \n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Behaviour for large $n$: e.g., the Runge function\n",
- "\n",
- "The fact that\n",
- "$$\lim_{n\rightarrow\infty} \dfrac{1}{2(n+1)}\left(\dfrac{b-a}{n}\right)^{n+1}=0$$\n",
- "does not necessarily imply that $\max_{t \in I} |E_nf(t)|$ tends to zero as $n\rightarrow\infty$.\n",
- "\n",
- "Let $f(t)=\dfrac{1}{1+t^2}$, $t\in [-5,5]$. When interpolated at\n",
- "equispaced nodes, the interpolant oscillates near the endpoints of the interval.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Runge function\n",
- "f = lambda t : 1./(1+t**2)\n",
- "\n",
- "# Values of N to use\n",
- "Nrange = [3,5,10]\n",
- "# plotting points\n",
- "z = np.linspace(-5, 5, 100)\n",
- "\n",
- "# (Complete the code below)\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "\n"
- ]
- },
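One possible way to fill in the cell above and quantify the phenomenon, using `np.polyfit`/`np.polyval` as an equivalent shortcut for `LagrangePi` (our substitution, not the notebook's intended route), is to record the maximum error for each degree:

```python
import numpy as np

f = lambda t: 1.0 / (1 + t ** 2)   # Runge function
z = np.linspace(-5, 5, 1001)       # fine evaluation grid

max_err = {}
for n in [3, 5, 10]:
    t = np.linspace(-5, 5, n + 1)  # equispaced nodes
    p = np.polyfit(t, f(t), n)     # degree-n interpolant
    max_err[n] = np.max(np.abs(f(z) - np.polyval(p, z)))
```

The maximum error grows with the degree instead of shrinking, which is exactly the endpoint-oscillation behaviour described above.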
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "`sympy` is a Python library for symbolic computation. We use it here to study the behaviour of the interpolation error for the Runge function."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Using symbolic python to compute derivatives\n",
- "import sympy as sp\n",
- "\n",
- "# define x as symbol\n",
- "x = sp.symbols('x')\n",
- "\n",
- "# define Runge function\n",
- "f = 1/(1+x**2)\n",
- "\n",
- "# pretty print the 5th derivative of f\n",
- "f5 = sp.diff(f, x,5)\n",
- "# display f5\n",
- "sp.init_printing(use_unicode=True)\n",
- "display(f5)\n",
- "\n",
- "\n",
- "# evalf can be used to compute the value of a function at a given point\n",
- "print('5th derivative evaluated at 3 :')\n",
- "print( f5.evalf(subs={x: 3}) )\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To define a function that accepts an array of values, we need the lines below.\n",
- "\n",
- "We can then also plot the graph ..."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "\n",
- "# to evaluate a function at many given points, we need the following trick;\n",
- "# the derivative order is read from the global variable k (not very elegant...)\n",
- "k = 4\n",
- "diff_f_func = lambda t: float(sp.diff(f,x,k).evalf(subs={x: t}))\n",
- "diff_f = np.vectorize(diff_f_func)\n",
- "\n",
- "print(diff_f(4.5))\n",
- "\n",
- "# plotting points\n",
- "z = np.linspace(-5, 5, 100)\n",
- "# plot the derivative between -5 and 5\n",
- "\n",
- "plt.plot(z,diff_f(z), 'b')\n",
- "plt.xlabel('t'); plt.ylabel('y'); plt.title('4th derivative of the Runge function')\n",
- "plt.legend(['$f^{(4)}$'])\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "... or evaluate the maximum for several $n$ and study the behaviour of $\max |f^{(n)}|$ as a function of $n$."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot max(abs(fn)) in the range -5,5\n",
- "z = np.linspace(-5, 5, 100)\n",
- "\n",
- "Nmax = 10\n",
- "maxValFn = np.zeros(Nmax)\n",
- "for k in range(Nmax):\n",
- " maxValFn[k] = np.max(np.abs(diff_f(z)))\n",
- " \n",
- "plt.plot(range(Nmax), maxValFn)\n",
- "\n",
- "plt.yscale('log')\n",
- "plt.xlabel('n'); plt.ylabel('$\\max|\\partial f|$');\n",
- "plt.title('Max of $|\\partial^n f|$');\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Chebyshev interpolation\n",
- "\n",
- "\n",
- "For every integer $n\geq 1$ and for $i=0,\dots n$, we call\n",
- "$$\hat{t}_i=-\cos(\pi i/n)\in[-1,1]$$\n",
- "**the Chebyshev points**, and we define\n",
- "\n",
- "$$t_i=\dfrac{a+b}{2}+\dfrac{b-a}{2}\hat{t}_i\in[a,b],$$\n",
- "\n",
- "for an arbitrary interval $[a,b]$. For a function $f\in\n",
- "C^1([a,b])$, the interpolating polynomial $\Pi_n f$ of degree $n$ at the nodes\n",
- "$\{t_i,i=0,\ldots,n\}$ converges uniformly to $f$ as\n",
- "$n\rightarrow \infty$.\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Chebyshev points on the interval [-1,1]\n",
- "\n",
- "z = np.linspace(0,1, 100)\n",
- "plt.plot(np.cos(np.pi*z), np.sin(np.pi*z))\n",
- "\n",
- "n =5\n",
- "z = np.linspace(0,1, n+1)\n",
- "plt.plot(np.cos(np.pi*z), np.sin(np.pi*z), 'o')\n",
- "plt.plot(np.cos(np.pi*z), 0*z, 'x')\n",
- "\n",
- "for k in range(0,n+1) :\n",
- " plt.plot([np.cos(np.pi*z[k]),np.cos(np.pi*z[k])],[0,np.sin(np.pi*z[k])],':')\n",
- "\n",
- "plt.axis('equal')\n",
- "\n",
- "plt.xlabel('t'); \n",
- "plt.title('Chebyshev points')\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Example\n",
- "\n",
- "We take the same example but interpolate the Runge function at the\n",
- "Chebyshev points. The figure shows the Chebyshev interpolants\n",
- "of degree $n=5$ and $n=10$. Note that the oscillations\n",
- "diminish as the degree of the polynomial increases.\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Runge function\n",
- "f = lambda t : 1./(1+t**2)\n",
- "\n",
- "# Values of N to use\n",
- "Nrange = [3,5,10]\n",
- "# plotting points\n",
- "[a,b] = [-5,5]\n",
- "z = np.linspace(a,b, 100)\n",
- "\n",
- "for n in Nrange :\n",
- "\n",
- " # Chebyshev points on [-1,1]\n",
- " ht = -np.cos(np.pi*np.linspace(0,n,n+1)/n)\n",
- " # mapped to [a,b]\n",
- " t =(a+b)/2 + (b-a)/2*ht\n",
- "\n",
- " y = f(t);\n",
- "\n",
- " plt.plot(z, LagrangePi(t,y,z), ':')\n",
- "\n",
- "plt.plot(z,f(z), 'b')\n",
- "plt.xlabel('t'); plt.ylabel('y'); plt.title('Chebyshev interpolation')\n",
- "plt.legend(Nrange)\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/4.4 Interpolation par intervalles.ipynb b/4.4 Interpolation par intervalles.ipynb
deleted file mode 100644
index d5cd136..0000000
--- a/4.4 Interpolation par intervalles.ipynb
+++ /dev/null
@@ -1,292 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 1.4 Piecewise interpolation\n",
- "\n",
- "### Piecewise linear interpolation\n",
- "\n",
- "Let $f: [a,b] \rightarrow \mathbb{R}$ be continuous and let $a=x_0<\ldots<x_N=b$ be a partition of $[a,b]$.\n",
- "\n",
- "Given $(m+1)$ ~~distinct~~ points $t_0$, $t_1$,$\dots$ $t_m$ and $(m+1)$ values $y_0$,\n",
- "$y_1$,$\dots$ $y_m$, we look for a polynomial $p$ of degree $n$, such that\n",
- "\n",
- "$$\sum_{j=0}^m |y_j - p(t_j)|^2 \qquad \text{ is as small as possible}.$$\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The **least-squares polynomial of degree $n$**\n",
- "is the polynomial $\hat{p}_n(t)$ of degree $n$ such that\n",
- "\begin{equation}\n",
- "  \sum_{j=0}^m |y_j - \hat{p}_n(t_j)|^2 \leq \sum_{j=0}^m |y_j - p_n(t_j)|^2\n",
- "  \qquad \forall p_n(t) \in \mathbb{P}_n.\n",
- "\end{equation}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We look for the coefficients of the polynomial $p(t) = a_0 + a_1 t + ... + a_n t^n$ that satisfies *as well as possible* the $(m+1)$ equations $p(t_k) = y_k$, $k=0,...,m$, that is\n",
- "\n",
- "$$a_0 + a_1 t_k + ... + a_n t_k^n = y_k, \quad k=0,...,m$$\n",
- "\n",
- "In matrix form this system reads\n",
- "\n",
- "$$\begin{pmatrix}\n",
- "1 & t_0 & \cdots & t_0^n \\\n",
- "\vdots & & & \vdots \\\n",
- "1 & t_m & \cdots & t_m^n\n",
- "\end{pmatrix}\n",
- "\begin{pmatrix} a_0 \\ \vdots \\ a_n \end{pmatrix}\n",
- "=\begin{pmatrix} y_0 \\ \vdots \\ y_m \end{pmatrix}$$\n",
- "\n",
- "Since $m>n$, there are more equations than unknowns, so in general the system has no solution and we look for the coefficients in the least-squares sense.\n"
- ]
- },
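Numerically, this overdetermined Vandermonde system can be solved in the least-squares sense with `np.linalg.lstsq`. A small sketch on made-up data (the data points, noise, and degree below are illustrative only, not from the notebook):

```python
import numpy as np

# m+1 = 6 data points close to the line y = 1 + 2t
t = np.linspace(0, 1, 6)
y = 1.0 + 2.0 * t + np.array([0.01, -0.02, 0.0, 0.02, -0.01, 0.0])

n = 1                                        # degree of the fitting polynomial
V = np.vander(t, n + 1, increasing=True)     # Vandermonde columns: 1, t
a, *_ = np.linalg.lstsq(V, y, rcond=None)    # least-squares coefficients a0, a1
```

The recovered coefficients are close to the underlying $(a_0, a_1) = (1, 2)$ despite the perturbations, which is the point of solving in the least-squares sense.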
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "\n"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/FiniteDifferenceLib.py b/FiniteDifferenceLib.py
deleted file mode 100644
index db4550a..0000000
--- a/FiniteDifferenceLib.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Jan 30
-
-@author: Simone Deparis
-"""
-
-import numpy as np
-
-# This function just provides a line to compare the convergence in a log-log plot.
-def sampleConvergence(h,order,ref) :
- # computes a pseudo order of convergence on h
- # ref is a reference maximum value
- return h**order * (ref/np.max(np.abs(h))**order)
-
-
-
-
-
diff --git a/IntegrationLib.py b/IntegrationLib.py
deleted file mode 100644
index 2c144ce..0000000
--- a/IntegrationLib.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import numpy as np
-
-# This function provides the midpoint quadrature rule.
-def Midpoint( a, b, N, f ) :
-    # [a,b] : interval
-    # N : number of subintervals
-    # f : function to integrate using the midpoint rule
-
- # size of the subintervals
- H = (b - a) / N
-
- # quadrature nodes
- x = np.linspace(a+H/2, b-H/2, N)
-
- # approximate integral
- return H * np.sum( f(x) );
-
-
-# This function provides the trapezoidal quadrature rule.
-def Trapezoidal( a, b, N, f ) :
-    # [a,b] : interval
-    # N : number of subintervals
-    # f : function to integrate using the trapezoidal rule
-
- # size of the subintervals
- H = (b - a) / N
-
- # quadrature nodes
- x = np.linspace(a, b, N+1)
-
- # approximate integral
- return H/2 * ( f(a) + f(b) ) + H * sum( f(x[1:N]) );
-
-# This function just provides the Simpson quadrature rule.
-def Simpson( a, b, N, f ) :
-    # [a,b] : interval
-    # N : number of subintervals
-    # f : function to integrate using the Simpson rule
-
- # size of the subintervals
- H = (b - a) / N
- # points defining intervals
- x = np.linspace(a, b, N+1)
-
- # left quadrature nodes
- xl = x[0:-1]
- # right quadrature nodes
- xr = x[1:]
- # middle quadrature nodes
- xm = (xl+xr)/2
-
- # approximate integral
- return H/6 * np.sum( f(xl) + 4*f(xm) + f(xr) )
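A quick standalone sanity check of the rules above (Midpoint and Simpson are copied verbatim so the snippet runs without IntegrationLib.py): Simpson has degree of exactness 3, so it integrates a cubic exactly up to round-off, while the midpoint rule is only second-order accurate.

```python
import numpy as np

# Verbatim copies of the two rules above, so the snippet runs standalone.
def Midpoint(a, b, N, f):
    H = (b - a) / N
    x = np.linspace(a + H/2, b - H/2, N)
    return H * np.sum(f(x))

def Simpson(a, b, N, f):
    H = (b - a) / N
    x = np.linspace(a, b, N + 1)
    xl, xr = x[:-1], x[1:]   # left and right endpoints of each subinterval
    xm = (xl + xr) / 2       # midpoints
    return H/6 * np.sum(f(xl) + 4*f(xm) + f(xr))

# Integrate x**3 on [0, 1]; the exact value is 1/4.
I_mid = Midpoint(0, 1, 100, lambda x: x**3)
I_simp = Simpson(0, 1, 10, lambda x: x**3)
```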
diff --git a/InterpolationLib.py b/InterpolationLib.py
deleted file mode 100644
index 88edde5..0000000
--- a/InterpolationLib.py
+++ /dev/null
@@ -1,163 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Jan 30
-
-@author: Simone Deparis
-"""
-
-import numpy as np
-
-# Defining the Lagrange basis functions
-def LagrangeBasis(t,k,z):
- # the input variables are:
- # t : the interpolatory points
- # k : which basis function
- # z : where to evaluate the function
-
-
- # careful, there are n+1 interpolation points!
- n = t.size - 1
-
- # init result to one, of same type and size as z
- result = np.ones_like(z)
-
- # first few checks on k:
- if (type(k) != int) or (t.size < 1) or (k > n) or (k < 0):
- raise ValueError('Lagrange basis needs an integer k with 0 <= k <= n, and a nonempty t')
-
- # loop over j to compute the product
- for j in range(0,n+1) :
- if (j == k) :
- continue
- if (t[k] == t[j]) :
- raise ValueError('Lagrange basis: all the interpolation points need to be distinct')
-
- result = result * (z - t[j]) / (t[k] - t[j])
-
- return result
-
-
-# Lagrange Interpolation of data (t,y), evaluated at ordinate(s) z
-def LagrangeInterpolation(t,y,z):
- # the input variables are:
- # t : the interpolatory points
- # y : the corresponding data at the points t
- # z : where to evaluate the function
-
- # {phi(t,k,.), k=0,...,n} is a basis of the polynomials of degree n
- # y represents the coordinates of the interpolating polynomial with respect to this basis.
- # Therefore LagrangeInterpolation(t,y,.) = y[0] phi(t,0,.) + ... + y[n] phi(t,n,.)
-
- # careful, there are n+1 basis functions!
- n = t.size - 1
-
- # init result to zero, of same type and size as z
- result = np.zeros_like(z)
-
- # loop over k to compute the sum
- for k in range(0,n+1) :
- result = result + y[k] * LagrangeBasis(t,k,z)
-
- return result
-
-
-# Piece-wise linear interpolation, equidistributed points
-def PiecewiseLinearInterpolation(a,b,N,f,z):
- # the input variables are:
- # a,b : interval endpoints, x[0] = a, x[N] = b
- # f : a function, or the data at the points x
- # z : where to evaluate the interpolant
-
- def localLinearInterpolation (a, H, x, y, zk):
- # first find out in which interval we are. Easy here since the points are equidistributed
- i = int( (zk-a)//H )
-
- # if zk coincides with the node x[i], return the nodal value directly
- if x[i] == zk :
- return y[i]
-
- # use formula for local linear interpolation
- return y[i] + (y[i+1] - y[i])/(x[i+1] - x[i])* (zk - x[i])
-
- # careful, there are N+1 points
- H = (b-a)/N
- if np.isclose(H,0) :
- raise ValueError('PiecewiseLinearInterpolation : a and b are too close')
-
- x = np.linspace(a,b,N+1)
-
- # if f is a function, we need to evaluate it at all the locations
- from types import FunctionType
- if isinstance(f, FunctionType) :
- y = f(x)
- else :
- y = f
-
- # if we want to evaluate the interpolating function at a single point
- if np.ndim(z) == 0 :
- return localLinearInterpolation (a, H, x, y, z)
-
- # if we are here, it means that we need to evaluate the interpolation on many points.
- # init result to zero, of same type and size as z
- result = np.zeros_like(z)
-
- # The following part is not optimized
- for k in range(z.size) :
-
- result[k] = localLinearInterpolation (a, H, x, y, z[k])
-
- return result
-
-# Piece-wise quadratic interpolation, equidistributed points
-def PiecewiseQuadraticInterpolation(a,b,N,f,z):
- # the input variables are:
- # a,b : interval endpoints, x[0] = a, x[N] = b
- # f : the function to interpolate
- # only works if f is a function
- # z : where to evaluate the interpolant
-
- def localQuadraticInterpolation (a, H, x, f, zk):
- # first find out in which interval we are. Easy here since the points are equidistributed
- i = int( (zk-a)//H )
-
- # if zk coincides with the node x[i], return the nodal value directly
- if x[i] == zk :
- return f(x[i])
-
- t = np.array([ x[i], (x[i]+x[i+1])/2, x[i+1]] )
-
- # use formula for local quadratic interpolation
- return LagrangeInterpolation(t,f(t),zk)
-
- # careful, there are N+1 points
- H = (b-a)/N
- if np.isclose(H,0) :
- raise ValueError('PiecewiseQuadraticInterpolation : a and b are too close')
-
- x = np.linspace(a,b,N+1)
-
- # f must be a function, which is evaluated at all the required locations
- # from types import FunctionType
- # if not isinstance(f, FunctionType) :
- #     raise ValueError('PiecewiseQuadraticInterpolation : works only with a given function')
-
- # if we want to evaluate the interpolating function at a single point
- if np.ndim(z) == 0 :
- return localQuadraticInterpolation (a, H, x, f, z)
-
- # if we are here, it means that we need to evaluate the interpolation on many points.
- # init result to zero, of same type and size as z
- result = np.zeros_like(z)
-
- # The following part is not optimized
- for k in range(z.size) :
-
- result[k] = localQuadraticInterpolation (a, H, x, f, z[k])
-
- return result
-
-
-
-
-
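A standalone sanity check for the Lagrange routines above (copied in simplified form so the snippet runs without InterpolationLib.py): interpolating a degree-2 polynomial at 3 distinct nodes must reproduce it exactly at every evaluation point.

```python
import numpy as np

# Simplified copies of the routines above, without the input validation.
def LagrangeBasis(t, k, z):
    result = np.ones_like(z, dtype=float)
    for j in range(t.size):
        if j != k:
            result = result * (z - t[j]) / (t[k] - t[j])
    return result

def LagrangeInterpolation(t, y, z):
    result = np.zeros_like(z, dtype=float)
    for k in range(t.size):
        result = result + y[k] * LagrangeBasis(t, k, z)
    return result

# Interpolate q(x) = x**2 - x + 1 at 3 nodes; the interpolant is q itself.
t = np.array([0.0, 1.0, 2.0])
y = t**2 - t + 1
z = np.linspace(0, 2, 7)
p = LagrangeInterpolation(t, y, z)
```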
diff --git a/IterativeMethodsLib.py b/IterativeMethodsLib.py
deleted file mode 100644
index 623b2cf..0000000
--- a/IterativeMethodsLib.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Jan 30
-
-@author: Simone Deparis
-"""
-
-import numpy as np
-
-def Richardson(A, b, x0, P, alpha, maxIterations, tolerance) :
- # Richardson method to approximate the solution of Ax=b
- # x0 : initial guess
- # P : preconditioner
- # alpha : relaxation parameter
- # maxIterations : maximum number of iterations
- # tolerance : tolerance for relative residual
- # Outputs :
- # xk : approximate solution
- # vector of relative norm of the residuals
-
- # we do not keep track of the whole sequence, just the current iterate
- xk = x0
- rk = b - A.dot(xk)
-
- RelativeResidualNorm = []
- for k in range(maxIterations) :
-
- zk = np.linalg.solve(P,rk)
- xk = xk + alpha*zk
-
- rk = b - A.dot(xk)
- # you can verify that this is equivalent to
- # rk = rk - alpha*A.dot(zk)
- RelativeResidualNorm.append(np.linalg.norm(rk)/np.linalg.norm(b))
- if ( RelativeResidualNorm[-1] < tolerance ) :
- print(f'Richardson converged in {k+1} iterations with a relative residual of {RelativeResidualNorm[-1]:1.3e}')
- return xk, RelativeResidualNorm
-
- print(f'Richardson did not converge in {maxIterations} iterations, the relative residual is {np.linalg.norm(rk)/np.linalg.norm(b):1.3e}')
- return xk, RelativeResidualNorm
-
-
-
-def PrecGradient(A, b, x0, P, maxIterations, tolerance) :
- # Preconditioned Gradient method to approximate the solution of Ax=b
- # x0 : initial guess
- # P : preconditioner
- # maxIterations : maximum number of iterations
- # tolerance : tolerance for relative residual
- # Outputs :
- # xk : approximate solution
- # vector of relative norm of the residuals
-
- # we do not keep track of the whole sequence, just the current iterate
- xk = x0
- rk = b - A.dot(xk)
-
- RelativeResidualNorm = []
- for k in range(maxIterations) :
-
- zk = np.linalg.solve(P,rk)
- Azk = A.dot(zk)
-
- alphak = zk.dot(rk) / zk.dot(Azk)
-
- xk = xk + alphak*zk
-
- rk = b - A.dot(xk)
- # you can verify that this is equivalent to
- # rk = rk - alphak*A.dot(zk)
- RelativeResidualNorm.append(np.linalg.norm(rk)/np.linalg.norm(b))
- if ( RelativeResidualNorm[-1] < tolerance ) :
- print(f'Gradient converged in {k+1} iterations with a relative residual of {RelativeResidualNorm[-1]:1.3e}')
- return xk, RelativeResidualNorm
-
- print(f'Gradient did not converge in {maxIterations} iterations, the relative residual is {np.linalg.norm(rk)/np.linalg.norm(b):1.3e}')
- return xk, RelativeResidualNorm
-
-
-def PrecConjugateGradient(A, b, x0, P, maxIterations, tolerance) :
- # Preconditioned Conjugate Gradient method to approximate the solution of Ax=b
- # x0 : initial guess
- # P : preconditioner
- # maxIterations : maximum number of iterations
- # tolerance : tolerance for relative residual
- # Outputs :
- # xk : approximate solution
- # vector of relative norm of the residuals
-
- # we do not keep track of the whole sequence, just the current iterate
- xk = x0
- rk = b - A.dot(xk)
- zk = np.linalg.solve(P,rk)
- pk = zk
-
- RelativeResidualNorm = []
- for k in range(maxIterations) :
-
- Apk = A.dot(pk)
- alphak = pk.dot(rk) / pk.dot(Apk)
-
- xk = xk + alphak*pk
-
- rk = rk - alphak*Apk
- # you can verify that this is equivalent to
- # rk = b - A.dot(xk)
-
- zk = np.linalg.solve(P,rk)
-
- betak = Apk.dot(zk) / pk.dot(Apk)
- pk = zk - betak*pk
-
-
- RelativeResidualNorm.append(np.linalg.norm(rk)/np.linalg.norm(b))
- if ( RelativeResidualNorm[-1] < tolerance ) :
- print(f'Conjugate Gradient converged in {k+1} iterations with a relative residual of {RelativeResidualNorm[-1]:1.3e}')
- return xk, RelativeResidualNorm
-
- print(f'Conjugate Gradient did not converge in {maxIterations} iterations, the relative residual is {np.linalg.norm(rk)/np.linalg.norm(b):1.3e}')
- return xk, RelativeResidualNorm
-
-
-
-
-
-
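The preconditioned conjugate gradient loop above can be exercised on a small SPD system. This sketch keeps only the core iteration (no printing or residual history) and uses a Jacobi preconditioner `P = diag(A)` as an illustrative choice:

```python
import numpy as np

# Core of PrecConjugateGradient above, stripped of logging.
def pcg(A, b, x0, P, maxIterations, tolerance):
    xk = x0
    rk = b - A.dot(xk)
    zk = np.linalg.solve(P, rk)
    pk = zk
    for k in range(maxIterations):
        Apk = A.dot(pk)
        alphak = pk.dot(rk) / pk.dot(Apk)
        xk = xk + alphak * pk
        rk = rk - alphak * Apk
        zk = np.linalg.solve(P, rk)
        # beta chosen so that the new direction is A-conjugate to pk
        betak = Apk.dot(zk) / pk.dot(Apk)
        pk = zk - betak * pk
        if np.linalg.norm(rk) / np.linalg.norm(b) < tolerance:
            break
    return xk

# Small SPD tridiagonal system with a diagonal (Jacobi) preconditioner.
A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([1.0, 2, 3])
x = pcg(A, b, np.zeros(3), np.diag(np.diag(A)), 50, 1e-12)
```

In exact arithmetic CG converges in at most 3 iterations here, one per distinct eigenvalue direction of the 3x3 system.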
diff --git a/NonLinearEquationsLib.py b/NonLinearEquationsLib.py
deleted file mode 100644
index 9d67e5b..0000000
--- a/NonLinearEquationsLib.py
+++ /dev/null
@@ -1,269 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Jan 30
-
-@author: Simone Deparis
-"""
-
-import numpy as np
-
-def Bisection(a,b,fun,tolerance,maxIterations) :
- # [a,b] interval of interest
- # fun function
- # tolerance desired accuracy
- # maxIterations : maximum number of iteration
- # returns:
- # zero, residual, number of iterations
-
- if (a >= b) :
- print(' b must be greater than a (b > a)')
- return 0,0,0
-
- # what we consider as "zero"
- eps = 1e-12
-
- # evaluate f at the endpoints
- fa = fun(a)
- fb = fun(b)
-
- if abs(fa) < eps : # a is the solution
- zero = a
- esterr = fa
- k = 0
- return zero, esterr, k
-
- if abs(fb) < eps : # b is the solution
- zero = b
- esterr = fb
- k = 0
- return zero, esterr, k
-
- if fa*fb > 0 :
- print(' The sign of FUN at the extrema of the interval must be different')
- return 0,0,0
-
-
-
- # We want the final error to be smaller than the tolerance,
- # i.e. k > log( (b-a)/tolerance ) / log(2)
-
- nmax = int(np.ceil(np.log( (b-a)/tolerance ) / np.log(2)))
-
-
- # but nmax must not exceed the maximum number of iterations requested by the user
- if ( maxIterations < nmax ) :
- nmax = int(round(maxIterations))
- print('Warning: nmax is smaller than the minimum number of iterations necessary to reach the desired tolerance')
-
- # vector of intermediate approximations
- x = np.zeros(nmax)
-
- # initial error is the length of the interval.
- esterr = (b - a)
-
- # do not need to store all the a^k and b^k, so I call them with a new variable name:
- ak = a
- bk = b
- # the values of f at those points are fa and fb
-
- for k in range(nmax) :
-
- # approximate solution is midpoint of current interval
- x[k] = (ak + bk) / 2
- fx = fun(x[k]);
- # error estimator is the half of the previous error
- esterr = esterr / 2
-
- # if we found the solution, stop the algorithm
- if np.abs(fx) < eps :
- # error is zero
- zero = x[k]
- esterr = 0;
- return zero, esterr, k
-
- if fx*fa < 0 : # alpha is in (a,x)
- bk = x[k]
- elif fx*fb < 0 : # alpha is in (x,b)
- ak = x[k]
- else :
- raise RuntimeError('Algorithm not operating correctly')
-
-
- zero = x[k];
-
- if esterr > tolerance :
- print('Warning: bisection stopped without converging to the desired tolerance because the maximum number of iterations was reached')
-
- return zero, esterr, k
-
-
-
-
-def FixedPoint( phi, x0, a, b, tol, nmax ) :
- '''
- FixedPoint : find a fixed point of a function by fixed point iterations.
- FixedPoint( PHI, X0, a, b, TOL, NMAX ) tries to find the fixed point X of a
- continuous function PHI nearest to X0 using the fixed point iterations method.
- [a,b] : if the iterations exit the interval, the method stops
- If the search fails an error message is displayed.
-
- Outputs : [x, r, n, x_sequence]
- x : the approximated fixed point of the function
- r : the absolute value of the residual in X : |phi(x) - x|
- n : the number of iterations required for computing X
- x_sequence : the sequence computed by the fixed point iterations
- '''
-
- # Initial values
- n = 0
- xk = x0
-
- # initialisation of loop components
- # in case we wish to plot the sequence
- x = [x0]
-
- # diff : last increment,
- diff = tol + 1 # initially set larger than the tolerance
-
- # Loop until tolerance is reached
- while ( diff >= tol and n <= nmax and (xk >= a) and (xk <= b) ) :
- # call phi
- xk1 = phi(xk)
-
- # increments
- diff = np.abs(xk1 - xk)
-
- # prepare the next loop
- n = n + 1
- xk = xk1
- x.append(xk)
-
- # Final residual
- rk1 = np.abs(xk1 - phi(xk1))
-
- # Warning if not converged
- if n > nmax :
- print('FixedPoint stopped without converging to the desired tolerance ')
- print('because the maximum number of iterations was reached')
-
- return xk1, rk1, n, np.array(x)
-
-
-def plotPhi (a,b,phi,label=r'$\phi$', N = 100) :
- '''
- simple plot of a function together with the bisector of the first quadrant
- useful for preparing graphics of fixed point iterations
- [a,b] : interval, used for both x and y axis
- phi : function to plot
- label : label for the function, usually r'$\phi$'
- N : number of points for plotting
- '''
- import matplotlib.pyplot as plt
-
- z = np.linspace(a,b,N)
- plt.plot(z,phi(z),'k-')
- plt.plot([a,b],[a,b],':', linewidth=0.5)
-
- plt.xlabel('x'); plt.ylabel(label);
- # Plot the x,y-axis
- plt.plot([a,b], [0,0], 'k-',linewidth=0.1)
- plt.plot([0,0], [a,b], 'k-',linewidth=0.1)
- plt.legend([label, 'y=x'])
- plt.title('Graph of the function ' + label)
-
-
-def plotPhiIterations (x) :
- # plot the graphical interpretation of the Fixed Point method
- import matplotlib.pyplot as plt
-
- plt.plot([x[0],x[0]], [0,x[1]], 'g:')
-
- for k in range(x.size-2) :
- plt.plot([x[k],x[k+1]], [x[k+1],x[k+1]], 'g:')
- plt.plot([x[k+1],x[k+1]], [x[k+1],x[k+2]], 'g:')
- k=x.size-2
- plt.plot([x[k],x[k+1]], [x[k+1],x[k+1]], 'g:')
-
- # Putting a sign at the initial point
- deltaAxis = plt.gca().axes.get_ylim()[1] - plt.gca().axes.get_ylim()[0]
- if (x[1] < 0) :
- plt.annotate("$x_0$", (x[0], 0.02*deltaAxis) )
- else :
- plt.annotate("$x_0$", (x[0], -0.05*deltaAxis) )
-
-
-def plotNewtonIterations (a,b,f,x,N=200) :
- # plot the graphical interpretation of the Newton method
- import matplotlib.pyplot as plt
-
- z = np.linspace(a,b,N)
- plt.plot(z,f(z), 'b-', x,f(x), 'rx')
-
- # axis extent, used to place the annotations
- deltaAxis = plt.gca().axes.get_ylim()[1] - plt.gca().axes.get_ylim()[0]
-
- # plot the graphical interpretation of the Newton method
- plt.plot([x[0],x[0]], [0,f(x[0])], 'g:')
- for k in range(x.size-1) :
- plt.plot([x[k],x[k+1]], [f(x[k]),0], 'g-')
- plt.plot([x[k+1],x[k+1]], [0,f(x[k+1])], 'g:')
-
- # put a label at each iterate
- if (f(x[k]) < 0) :
- plt.annotate("$x_"+str(k)+"$", (x[k], 0.02*deltaAxis) )
- else :
- plt.annotate("$x_"+str(k)+"$", (x[k], -0.05*deltaAxis) )
-
-
- plt.ylabel('$f$'); plt.xlabel('$x$');
-
- # Plot the x,y-axis
- plt.plot([a,b], [0,0], 'k-',linewidth=0.1)
- plt.plot([0,0], [np.min(f(z)),np.max(f(z))], 'k-',linewidth=0.1)
-
- plt.legend(['$f$','($x_k$,$f(x_k)$)'])
-
-
-def plotBisectionIterations (a,b,f,x,N=200) :
- # plot the graphical interpretation of the Bisection method
- import matplotlib.pyplot as plt
-
- def putSign(y,f,text) :
- if (f(y) < 0) :
- plt.annotate(text, (y, 0.02*deltaAxis) )
- else :
- plt.annotate(text, (y, -0.05*deltaAxis) )
-
-
- z = np.linspace(a,b,N)
- plt.plot(z,f(z), 'b-')
-
- plt.plot([a,a], [0,f(a)], 'g:')
- plt.plot([b,b], [0,f(b)], 'g:')
-
- # For putting a sign at the initial point
- deltaAxis = plt.gca().axes.get_ylim()[1] - plt.gca().axes.get_ylim()[0]
-
- putSign(a,f,"a")
- putSign(b,f,"b")
-
- for k in range(x.size) :
- plt.plot([x[k],x[k]], [0, f(x[k])], 'g:')
- putSign(x[k],f,"$x_"+str(k)+"$")
-
- # Plot the x,y-axis
- plt.plot([a,b], [0,0], 'k-',linewidth=0.1)
- plt.plot([0,0], [np.min(f(z)),np.max(f(z))], 'k-',linewidth=0.1)
-
- plt.ylabel('$f$'); plt.xlabel('$x$');
-
- return
-
-
-
-
-
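A condensed standalone sketch of the bisection logic above (same stopping idea, without the error-estimator bookkeeping), applied to the classic test equation cos(x) = x on [0, 2]:

```python
import numpy as np

# Condensed bisection: halve the bracketing interval until its width
# guarantees the requested tolerance, as in the Bisection routine above.
def bisection(a, b, fun, tolerance, maxIterations):
    fa, fb = fun(a), fun(b)
    assert fa * fb < 0, 'fun must change sign on [a, b]'
    # number of halvings needed so that (b-a)/2**k < tolerance
    nmax = min(maxIterations, int(np.ceil(np.log2((b - a) / tolerance))))
    for k in range(nmax):
        x = (a + b) / 2
        fx = fun(x)
        if fx == 0:
            return x
        if fx * fa < 0:     # root lies in (a, x)
            b = x
        else:               # root lies in (x, b)
            a, fa = x, fx
    return (a + b) / 2

# Root of cos(x) - x in [0, 2] (approximately 0.739085).
zero = bisection(0.0, 2.0, lambda x: np.cos(x) - x, 1e-10, 100)
```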