diff --git a/rational_interpolation_method.pdf b/rational_interpolation_method.pdf index fd4c7a0..ba6c8d4 100644 Binary files a/rational_interpolation_method.pdf and b/rational_interpolation_method.pdf differ diff --git a/rational_interpolation_method.tex b/rational_interpolation_method.tex index 54b9d01..ebbec89 100644 --- a/rational_interpolation_method.tex +++ b/rational_interpolation_method.tex @@ -1,334 +1,258 @@ \documentclass[10pt,a4paper]{article} \usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{hyperref} \usepackage{xcolor} \setlength{\parindent}{0pt} \newcommand{\code}[1]{{\color{blue}\texttt{#1}}} \newcommand{\footpath}[1]{\footnote{\path{#1}}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\I}{\mathcal{I}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\inner}[2]{\left\langle#1,#2\right\rangle_V} \newcommand{\norm}[1]{\left\|#1\right\|_V} \title{\bf The \code{RROMPy} rational interpolation method} \author{D. Pradovera, CSQI, EPF Lausanne -- \texttt{davide.pradovera@epfl.ch}} \date{} \hypersetup{ pdftitle={The RROMPy rational interpolation method}, pdfauthor={Davide Pradovera} } \begin{document} \maketitle \section*{Introduction} This document provides an explanation for the numerical methods provided by the class \code{Rational Interpolant}\footpath{./rrompy/reduction_methods/standard/rational_interpolant.py} and daughters, e.g. \code{Rational Interpolant Greedy}\footpath{./rrompy/reduction_methods/standard/greedy/rational_interpolant_greedy.py}, as well as most of the pivoted approximants\footpath{./rrompy/reduction_methods/pivoted/{,greedy/}rational_interpolant_*.py}. Most of the focus will be dedicated to the impact of the (\code{rationalMode},\code{functionalSolve}) parameters, whose allowed values are \begin{itemize} \item (\code{MINIMAL},\code{NORM}) (default): see Section~\ref{sec:min:norm}; allows for repeated sample points. \item (\code{MINIMAL},\code{DOMINANT}): see Section~\ref{sec:min:dominant}; allows for repeated sample points. -\item (\code{MINIMAL},\code{BARYCENTRIC\_NORM}): see Section~\ref{sec:min:barycentricnorm}; does not allow for a Least Squares approach; undefined for more than one parameter. -\item (\code{MINIMAL},\code{BARYCENTRIC\_AVERAGE}): see Section~\ref{sec:min:barycentricaverage}; does not allow for a Least Squares approach; undefined for more than one parameter. -\item (\code{STANDARD},\code{NORM}): see Section~\ref{sec:out:dominant}; allows for repeated sample points. +\item (\code{BARYCENTRIC},\code{NORM}): see Section~\ref{sec:min:barycentricnorm}; does not allow for a Least Squares approach; undefined for more than one parameter. +\item (\code{BARYCENTRIC},\code{DOMINANT}): see Section~\ref{sec:min:barycentricaverage}; does not allow for a Least Squares approach; undefined for more than one parameter. +\item (\code{STANDARD},\code{NORM}): see Section~\ref{sec:out:norm}; allows for repeated sample points. \item (\code{STANDARD},\code{DOMINANT}): see Section~\ref{sec:out:dominant}; allows for repeated sample points. -\item (\code{STANDARD},\code{BARYCENTRIC\_NORM}): see Section~\ref{sec:out:barycentricnorm}; undefined for more than one parameter. -\item (\code{STANDARD},\code{BARYCENTRIC\_AVERAGE}): see Section~\ref{sec:out:barycentricaverage}; undefined for more than one parameter. \end{itemize} We restrict the discussion to the single-parameter case. 
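For orientation, the sketch below illustrates how these mode pairs might be requested through the \code{approxParameters} dictionary documented in the approximant docstrings further down in this patch. It is only a hypothetical configuration sketch: the key names follow those docstrings, while the numeric values and variable names are placeholders, and the remaining parameters (sampler, \code{POD}, tolerances, etc.) are omitted.

# Hypothetical approxParameters sketches for the allowed
# (rationalMode, functionalSolve) pairs; values are placeholders.
params_minimal = {"S": 10,                   # number of sample points
                  "N": 6,                    # denominator degree, N <= S - 1
                  "polybasis": "CHEBYSHEV",  # CHEBYSHEV / LEGENDRE / MONOMIAL
                  "rationalMode": "MINIMAL",
                  "functionalSolve": "NORM"}  # or "DOMINANT"

# BARYCENTRIC requires distinct sample points and N = S - 1.
params_barycentric = {"S": 10,
                      "N": 9,
                      "rationalMode": "BARYCENTRIC",
                      "functionalSolve": "DOMINANT"}  # or "NORM"

# STANDARD rational interpolation (SRI, described below).
params_standard = {"S": 10,
                   "N": 5,
                   "polybasis": "LEGENDRE",
                   "rationalMode": "STANDARD",
                   "functionalSolve": "NORM"}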
The main reference throughout the present document is \cite{Pradovera}. \section{Aim of approximation} We seek an approximation of $u:\C\to V$, with $(V,\inner{\cdot}{\cdot})$ a complex\footnote{The inner product is linear (resp. conjugate linear) in the first (resp. second) argument: $\inner{\alpha v}{\beta w}=\alpha\overline{\beta}\inner{v}{w}$.} Hilbert space (with induced norm $\norm{\cdot}$), of the form $\widehat{p}/\widehat{q}$, where $\widehat{p}:\C\to V$ and $\widehat{q}:\C\to\C$. The target $u$ might be high-dimensional (for instance, the solution of a PDE after Finite Element discretization) or low(er)-dimensional (for instance, a functional of the above-mentioned PDE solution). For a given denominator $\widehat{q}$, the numerator $\widehat{p}$ is found by interpolation of $\widehat{q}u$. Hence, here we focus on the computation of the denominator $\widehat{q}$. Other than the choice of target function $u$, the parameters which affect the computation of $\widehat{q}$ are: \begin{itemize} -\item $\code{mus}\subset\C$ ($\{\mu_j\}_{j=1}^S$ below); for (\code{*},\code{BARYCENTRIC\_*}), the $S$ points must be distinct. -\item $\code{N}\in\N$ ($N$ below); for (\code{MINIMAL},\code{BARYCENTRIC\_*}), $N$ must equal $S-1$. -\item $\code{polybasis}\in\{\code{"CHEBYSHEV"}, \code{"LEGENDRE"}, \code{"MONOMIAL"}\}$; only for (\code{*},\code{NORM}) and (\code{*},\code{DOMINANT}). +\item $\code{mus}\subset\C$ ($\{\mu_j\}_{j=1}^S$ below); for \code{BARYCENTRIC}, the $S$ points must be distinct. +\item $\code{N}\in\N$ ($N$ below); for \code{BARYCENTRIC}, $N$ must equal $S-1$. +\item $\code{polybasis}\in\{\code{"CHEBYSHEV"}, \code{"LEGENDRE"}, \code{"MONOMIAL"}\}$. \end{itemize} To simplify the notation, we set $E=S-1$. For simplicity, we will consider only the case of $S$ distinct sample points. One can deal with the case of confluent sample points by extending the standard (Lagrange) interpolation steps to Hermite-Lagrange ones. In the following, we will make use of the polynomial interpolation operator $\I^M:(\C\times W)^S\to\mathbb{P}^M(\C;W)$, where $M$ is an integer (either $N$ or $E$), and $W$ a Banach space (either $\C$ or $V$). We define its action on samples $\left((\mu_j,\psi_j)\right)_{j=1}^S\in(\C\times W)^S$ as \begin{equation*} \I^M\left(\big((\mu_j,\psi_j)\big)_{j=1}^S\right)=\argmin_{p\in\mathbb{P}^M(\C;W)}\sum_{j=1}^S\left\|p(\mu_j)-\psi_j\right\|_W^2, \end{equation*} where \begin{equation*} \mathbb{P}^M(\C;W)=\left\{\mu\mapsto\sum_{i=0}^M\alpha_i\mu^i\ :\ \alpha_0,\ldots,\alpha_M\in W\right\}. \end{equation*} In \code{RROMPy}, we compute interpolants by employing normal equations: given a basis $\{\phi_i\}_{i=0}^M$ of $\mathbb{P}^M(\C;\C)$, we expand \begin{equation*} p(\mu)=\sum_{i=0}^Mc_i\phi_i(\mu) \end{equation*} and observe that, for optimality, the coefficients $\{c_i\}_{i=0}^M\subset W$ must satisfy \begin{equation*} \sum_{j=1}^S\sum_{l=0}^M\overline{\phi_i(\mu_j)}\phi_l(\mu_j)c_l=\sum_{j=1}^S\overline{\phi_i(\mu_j)}\psi_j\quad\forall i=0,\ldots,M, \end{equation*} i.e., in matrix form\footnote{The superscript ${}^H$ denotes conjugate transposition, i.e. 
$A^H=\overline{A}^\top$.}, \begin{equation}\label{eq:normal} \underbrace{[c_i]_{i=0}^M}_{\in W^{M+1}}=\bigg(\underbrace{\Big[\overline{\phi_i(\mu_j)}\Big]_{i=0,j=1}^{M,S}}_{=:\Phi^H\in\C^{(M+1)\times S}}\underbrace{\Big[\phi_i(\mu_j)\Big]_{j=1,i=0}^{S,M}}_{=:\Phi\in\C^{S\times(M+1)}}\bigg)^{-1}\underbrace{\Big[\overline{\phi_i(\mu_j)}\Big]_{i=0,j=1}^{M,S}}_{=:\Phi^H\in\C^{(M+1)\times S}}\underbrace{[\psi_j]_{j=1}^S}_{\in W^{S}}. \end{equation} In practice, the polynomial basis $\{\phi_i\}_{i=0}^M$ is determined by the value of $\code{polybasis}$: \begin{itemize} \item If $\code{polybasis}=\code{"CHEBYSHEV"}$, then $\phi_k(\mu)=\mu^k$ for $k\in\{0,1\}$ and $\phi_k(\mu)=2\mu\phi_{k-1}(\mu)-\phi_{k-2}(\mu)$ for $k\geq2$. \item If $\code{polybasis}=\code{"LEGENDRE"}$, then $\phi_k(\mu)=\mu^k$ for $k\in\{0,1\}$ and $\phi_k(\mu)=(2-1/k)\mu\phi_{k-1}(\mu)-(1-1/k)\phi_{k-2}(\mu)$ for $k\geq2$. \item If $\code{polybasis}=\code{"MONOMIAL"}$, then $\phi_k(\mu)=\mu^k$ for $k\geq0$. \end{itemize} Moreover, it will prove useful to define the ``snapshot rank'' $r=r(\{u(\mu_j)\}_{j=1}^S)=\text{dim}~\text{span}\{u(\mu_j)\}_{j=1}^S$. This quantity is bounded from above by $S$ and by the dimension of $V$ (e.g., $r=1$ if $V=\C$). \section{\code{MINIMAL} Rational Interpolation (MRI)} The main motivation behind the MRI method involves the modified approximation problem \begin{equation}\label{eq:simple} u\approx\I^E\left(\Big(\big(\mu_j,\widehat{q}(\mu_j)u(\mu_j)\big)\Big)_{j=1}^S\right)\Big/\widehat{q}, \end{equation} where $\widehat{q}:\C\to\C$ is a polynomial of degree $\leq N\leq E$. The denominator $\widehat{q}$ is found as \begin{equation}\label{eq:glob1} \widehat{q}=\argmin_{\substack{q\in\mathbb{P}^N(\C;\C)\\(\star)}}\norm{\frac{\textrm{d}^E}{\textrm{d}\mu^E}\I^E\left(\Big(\big(\mu_j,q(\mu_j)u(\mu_j)\big)\Big)_{j=1}^S\right)} \end{equation} where $(\star)$ is a normalization condition (which changes depending on \code{functionalSolve}) to exclude the trivial minimizer $\widehat{q}\equiv0$. The methods described differ in terms of the constraint $(\star)$, as well as of the degrees of freedom which are chosen to represent the denominator $q$. For \eqref{eq:glob1} to be well-defined (unique optimizer up to unit scaling), in addition to the condition $N\leq S-1$, we must have (by balance of degrees of freedom vs. constraints) $N\leq r$ -\subsection{Polynomial coefficients as degrees of freedom} +\subsection{Polynomial coefficients as degrees of freedom}\label{sec:mri:poly} If the polynomial basis $\{\phi_i\}_{i=0}^E$ in \eqref{eq:normal} is hierarchical (as the three ones above), then the $E$-th derivative of $\I^E$ is proportional to the coefficient $c_E$, and we have \begin{equation}\label{eq:poly1} \widehat{q}=\argmin_{\substack{q\in\mathbb{P}^N(\C;\C)\\(\star)}}\norm{[\underbrace{0,\ldots,0}_E,1]\big(\Phi^H\Phi\big)^{-1}\Phi^H[q(\mu_j)u(\mu_j)]_{j=1}^S}. \end{equation} Using the Kronecker delta ($\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ if $i\neq j$), the last term $[q(\mu_j)u(\mu_j)]_{j=1}^S\in V^S$ can be factored into \begin{equation}\label{eq:poly2} \Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}[q(\mu_j)]_{j=1}^S=\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\widetilde{\Phi}\mathbf{q}, \end{equation} where $\widetilde{\Phi}$ is the $S\times(N+1)$ matrix obtained by extracting the first $N+1$ columns of $\Phi$. We remark that we have expanded the polynomial $q$ using the basis\footnote{In theory, nothing prevents us from using different bases for $\I^E$ and $q$, cf. 
Section~\ref{sec:observations}.} $\{\phi_i\}_{i=0}^N$: $q(\mu)=\sum_{i=0}^Nq_i\phi_i(\mu)$, with coefficients $\mathbf{q}=[q_i]_{i=0}^N\in\C^{N+1}$. Combining \eqref{eq:poly1} and \eqref{eq:poly2}, it is useful to consider the $(N+1)\times(N+1)$ Hermitian matrix with entries ($0\leq i,i'\leq N$) \begin{equation}\label{eq:poly3} G_{ii'}=\sum_{j,j'=1}^S\left(\Phi\big(\Phi^H\Phi\big)^{-1}\right)_{j,E+1}\left(\big(\Phi^H\Phi\big)^{-1}\Phi^H\right)_{E+1,j'}\overline{\Phi}_{ji}\Phi_{j'i'}\inner{u(\mu_{j'})}{u(\mu_j)}. \end{equation} If $(\star)$ is quadratic (resp. linear) in $\mathbf{q}$, then we can cast the computation of the denominator as a quadratically (resp. linearly) constrained quadratic program involving $G$. \subsubsection{Quadratic constraint}\label{sec:min:norm} We constrain $\widehat{\mathbf{q}}$ to have unit (Euclidean) norm. The resulting optimization problem can be cast as a minimal (normalized) eigenvector problem for $G$ in \eqref{eq:poly3}. More explicitly, \[\widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\\|\mathbf{q}\|_2=1}}\mathbf{q}^HG\mathbf{q}.\] \subsubsection{Linear constraint}\label{sec:min:dominant} We constrain $\widehat{q}_N=1$, thus forcing $q$ to be monic, with degree exactly $N$. Given $G$ in \eqref{eq:poly3}, the resulting optimization problem can be solved rather easily as: \[\widehat{\mathbf{q}}=\frac{G^{-1}\mathbf{e}_{N+1}}{\mathbf{e}_{N+1}^\top G^{-1}\mathbf{e}_{N+1}},\quad\text{with }\mathbf{e}_{N+1}=[0,\ldots,0,1]^\top\in\C^{N+1}.\] \subsection{Barycentric coefficients as degrees of freedom}\label{sec:bary} Here we assume that the sample points are distinct, and that $N=E=S-1$, so that, in particular, $\Phi=\widetilde{\Phi}$. Considering the constraint $N\leq r$, we can deduce that the approach presented in this section can only be applied if the rank of the snapshots is either full or deficient by 1. -We can choose for convenience a non-hierarchical basis, dependent on the sample points, for $q$ and $\I^E$, taking inspiration from barycentric interpolation: +We can choose for convenience a non-hierarchical rational basis, dependent on the sample points, for $q$ and $\I^E$, taking inspiration from barycentric interpolation: +\begin{equation*} +\varphi_i(\mu)=\frac{1}{\mu-\mu_{i+1}}\quad\text{for }i=0,\ldots,N. +\end{equation*} +Evaluation of such a basis at the sample points leads to unbounded results. To correct this, we symbolically weight each row by the nodal polynomial $\prod_{j=1}^S(\cdot-\mu_j)$, causing cancellation of the unbounded terms. The resulting ``effective'' polynomial basis is \begin{equation}\label{eq:bary1} \phi_i(\mu)=\prod_{\substack{j=1\\j\neq i+1}}^S(\mu-\mu_j). \end{equation} Since all elements of the basis are monic and of degree exactly $N$, the minimization problem can be cast as \begin{equation}\label{eq:bary2} \widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\(\star)}}\norm{[\underbrace{1,\ldots,1}_{S}]\big(\Phi^H\Phi\big)^{-1}\Phi^H\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\Phi\mathbf{q}\,}. \end{equation} At the same time, it is easy to see from \eqref{eq:bary1} that the Vandermonde-like matrix $\Phi$ is diagonal, so that \[\big(\Phi^H\Phi\big)^{-1}\Phi^H\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\Phi=\big(\Phi^H\Phi\big)^{-1}\Phi^H\Phi\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}=\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S},\] and \begin{equation}\label{eq:bary3} \widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\(\star)}}\norm{\sum_{i=0}^Eu(\mu_{i+1})q_i}.
\end{equation} Considering \eqref{eq:bary3}, it is useful to define the $S\times S$ Hermitian ``snapshot Gramian'' matrix with entries ($0\leq i,i'\leq N$) \begin{equation}\label{eq:bary4} G_{ii'}=\inner{u(\mu_{i'+1})}{u(\mu_{i+1})}. \end{equation} So, once again, if $(\star)$ is quadratic (resp. linear) in $\mathbf{q}$, then we can cast the computation of the denominator as a quadratically (resp. linearly) constrained quadratic program involving $G$. Before specifying the kind of normalization enforced, it is important to make a remark on numerical stability. The basis in \eqref{eq:bary1} is actually just a ($i$-dependent) factor away from being the Lagrangian one (for which $\phi_i(\mu_j)$ would equal $\delta_{(i+1)j}$ instead of \[\delta_{(i+1)j}\prod_{k\neq i+1}(\mu_j-\mu_k),\] as it does in our case). As such, it is generally a bad idea to numerically evaluate $q$ starting from its expansion coefficients with respect to $\{\phi_i\}_{i=0}^N$. We get around this by exploiting the following trick, whose foundation is in \cite[Section 2.3.3]{Klein}: the roots of $\widehat{q}=\sum_{i=0}^N\widehat{q}_i\phi_i$ are the $N$ finite eigenvalues $\lambda$ of the generalized $(N+2)\times(N+2)$ eigenproblem \begin{equation}\label{eq:bary5} \textrm{Det}\left(\begin{bmatrix} 0 & \widehat{q}_0 & \cdots & \widehat{q}_N\\ 1 & \mu_1 & & \\ \vdots & & \ddots & \\ 1 & & & \mu_S \end{bmatrix}-\begin{bmatrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{bmatrix}\lambda\right)=0. \end{equation} This computation is numerically more stable than most other manipulations of a polynomial in the basis \eqref{eq:bary1}. Once the roots of $\widehat{q}$ have been computed, one can either convert it to nodal form \begin{equation}\label{eq:pole1} \widehat{q}(\mu)\propto\prod_{i=1}^N(\mu-\widehat{\lambda}_i), \end{equation} or forgo using $\widehat{q}$ completely, in favor of a Heaviside-like approximation involving the newly computed roots $\{\widehat{\lambda}_i\}_{i=1}^N$ as poles: \[ \frac{\widehat{p}(\mu)}{\widehat{q}(\mu)}\quad\rightsquigarrow\quad\widehat{b}_0+\sum_{i=1}^N\frac{\widehat{b}_i}{\mu-\widehat{\lambda}_i}. \] See the final paragraph in Section~\ref{sec:observations} for a slightly more detailed motivation of why the Heaviside form of the approximant might be more useful than the standard rational one $\widehat{p}/\widehat{q}$ in practice. \subsubsection{Quadratic constraint}\label{sec:min:barycentricnorm} We constrain $\widehat{\mathbf{q}}$ to have unit (Euclidean) norm. The resulting optimization problem can be cast as a minimal (normalized) eigenvector problem for $G$ in \eqref{eq:bary4}. More explicitly, \[\widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\\|\mathbf{q}\|_2=1}}\mathbf{q}^HG\mathbf{q}.\] \subsubsection{Linear constraint}\label{sec:min:barycentricaverage} We constrain $\sum_{i=0}^N\widehat{q}_i=1$, so that the polynomial $\widehat{q}$ is monic. Given $G$ in \eqref{eq:bary4}, the resulting optimization problem can be solved rather easily as: \[\widehat{\mathbf{q}}=\frac{G^{-1}\mathbf{1}_{S}}{\mathbf{1}_{S}^\top G^{-1}\mathbf{1}_{S}},\quad\text{with }\mathbf{1}_{S}=[1,\ldots,1]^\top\in\C^{S}.\] \subsection{Minor observations for MRI}\label{sec:observations} \begin{itemize} \item If $N=E$, normal equations are not necessary to compute $\I^E$, since $\Phi$ is square and can be inverted directly. 
However, in practical applications, it may be useful to decrease the degree $E$ of the interpolant (which, in our presentation, we kept fixed to $S-1$ for simplicity) to overcome numerical instabilities which may arise in the (pseudo-)inversion of $\Phi$. If this happens, $\Phi$ becomes non-square, and normal equations are the only option. -\item For \code{BARYCENTRYC\_*}, a specific choice of polynomial basis for $\I^E$ was used to diagonalize the functional. Under the assumptions that the sample points are distinct (and that $E+1=S$), one can employ the quasi-Lagrangian basis \eqref{eq:bary1} to expand $\I^E$ in the other approaches as well, thus simplifying significantly the structure of \eqref{eq:glob1}: +\item For \code{BARYCENTRIC}, a specific choice of polynomial basis for $\I^E$ was used to diagonalize the functional. Under the assumptions that the sample points are distinct (and that $E+1=S$), one can employ the quasi-Lagrangian basis \eqref{eq:bary1} to expand $\I^E$ in the other approaches as well, thus simplifying significantly the structure of \eqref{eq:glob1}: \begin{equation*} \widehat{q}=\argmin_{\substack{q\in\mathbb{P}^N(\C;\C)\\(\star)}}\norm{\sum_{j=1}^Sq(\mu_j)u(\mu_j)\prod_{\substack{j'=1\\j'\neq j}}^S\frac1{\mu_j-\mu_{j'}}}. \end{equation*} This is independent of the basis used to expand $q$ and, numerically, has repercussions only on the computation of the term $\big(\Phi^H\Phi\big)^{-1}\Phi^H$ in \eqref{eq:poly3}. -\item In general, \code{NORM} and \code{BARYCENTRIC\_NORM} can be expected to be more numerically stable than \code{DOMINANT} and \code{BARYCENTRIC\_AVERAGE}, respectively. This is due to the fact that the normalization is enforced in a more numerically robust fashion. +\item In general, \code{NORM} can be expected to be more numerically stable than \code{DOMINANT}. This is due to the fact that the normalization is enforced in a more numerically robust fashion. \item If the snapshots are orthonormalized via \code{POD}\footpath{./rrompy/sampling/engines/sampling_engine_pod.py}, a simple unitary transformation allows one to replace $V$ with $\C^r$. As a consequence, all the $V$-inner products (resp. norms) can be recast as Euclidean inner products (resp. norms) involving the $R$ factor of the generalized ($V$-orthonormal) QR decomposition of the snapshots. -\item If a univariate rational surrogate is built in the scope of multivariate pole-matching-based pivoted approximation\footpath{./rrompy/reduction_methods/pivoted/*}, the rational approximant is converted into a Heaviside/nodal representation when different surrogates are combined. As such, the \code{BARYCENTRIC\_*} approach may be preferable to avoid extra computations, as well as additional round-off artifacts. +\item If a univariate rational surrogate is built in the scope of multivariate pole-matching-based pivoted approximation\footpath{./rrompy/reduction_methods/pivoted/*}, the rational approximant is converted into a Heaviside/nodal representation when different surrogates are combined. As such, the \code{BARYCENTRIC} approach may be preferable to avoid extra computations, as well as additional round-off artifacts.
\end{itemize} -\section{\code{STANDARD} Rational Interpolation (SRI)} -\subsection{SRI based on polynomial interpolation} -The main motivation behind the polynomial SRI method involves the modified approximation problem +\section{\code{STANDARD} Rational Interpolation (SRI)}\label{sec:out:sri} +The main motivation behind the (polynomial) SRI method involves the modified approximation problem \begin{equation*} u\approx\I^M\left(\Big(\big(\mu_j,\widehat{q}(\mu_j)u(\mu_j)\big)\Big)_{j=1}^S\right)\Big/\widehat{q}, \end{equation*} where $\widehat{q}:\C\to\C$ is a polynomial of degree $\leq N\leq E$, and $M\in\N$ is such that $0\leq M\leq S-2$. The denominator $\widehat{q}$ is found as \begin{equation}\label{eq:glob2} \widehat{q}=\argmin_{\substack{q\in\mathbb{P}^N(\C;\C)\\(\star)}}\sum_{\ell=1}^S\norm{q(\mu_\ell)u(\mu_\ell)-\I^M\left(\Big(\big(\mu_j,q(\mu_j)u(\mu_j)\big)\Big)_{j=1}^S\right)(\mu_\ell)}^2 \end{equation} where $(\star)$ is a normalization condition (which changes depending on \code{functionalSolve}) to exclude the trivial minimizer $\widehat{q}\equiv0$. For \eqref{eq:glob2} to be well-defined (unique optimizer up to unit scaling), in addition to the conditions $N\leq S-1$ and $M\leq S-2$, we must have (by balance of degrees of freedom vs. constraints) $M+N/r+1\leq S$. As before, we can expand the target functional by linear algebra \begin{equation}\label{eq:poly51} \widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\(\star)}}\sum_{\ell=1}^S\norm{\mathbf{e}_\ell^\top\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\widetilde{\Phi}\mathbf{q}-\mathbf{e}_\ell^\top\Phi\big(\Phi^H\Phi\big)^{-1}\Phi^H\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\widetilde{\Phi}\mathbf{q}}^2, \end{equation} with $\Phi\in\C^{S\times(M+1)}$, $\widetilde{\Phi}\in\C^{S\times(N+1)}$, and \begin{equation*} \mathbf{e}_\ell=[\underbrace{0,\ldots,0}_{\ell-1},1,\underbrace{0,\ldots,0}_{S-\ell}]^\top\in\C^S. \end{equation*} We observe that $I-\Phi\big(\Phi^H\Phi\big)^{-1}\Phi^H$ (with $I$ the $S\times S$ identity matrix) corresponds to the (orthogonal) projection onto the orthogonal complement of the range of $\Phi$, so that we may express it as $\Psi\Psi^H$, with the $S-M-1$ columns of $\Psi$ being an orthonormal basis for the orthogonal complement of the range of $\Phi$ (which can be found by completion of the QR decomposition of $\Phi$). Hence, we have \begin{equation}\label{eq:poly52} \widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\(\star)}}\sum_{\ell=1}^S\norm{\mathbf{e}_\ell^\top\Psi\Psi^H\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{S,S}\widetilde{\Phi}\mathbf{q}}^2. \end{equation} As above, we can interpret \eqref{eq:poly52} as the minimization of a quadratic form represented by a $(N+1)\times(N+1)$ Hermitian matrix with entries ($0\leq i,i'\leq N$) \begin{equation}\label{eq:poly53} G_{ii'}=\sum_{j,j'=1}^S\inner{u(\mu_{j'})}{u(\mu_j)}\left(\Psi\Psi^H\right)_{jj'}\overline{\Phi}_{ji}\Phi_{j'i'}. \end{equation} -The matrix $G$ above can also be expressed as $\widetilde{\Phi}^H\big(\mathring{G}\bullet\left(\Psi\Psi^H\right)\big)\widetilde{\Phi}$, with $\mathring{G}$ the snapshot Gramian \eqref{eq:bary4} and $\bullet$ the Hadamard product. If a Cholesky decomposition $\mathring{G}=\mathring{R}^H\mathring{R}$ is available, then $G=(\mathring{R}^H\odot\Phi^H)^H(\mathring{R}^H\odot\Phi^H)$, with $\odot$ the Khatri-Rao product. 
+The matrix $G$ above can also be expressed as $\widetilde{\Phi}^H\big(\mathring{G}\bullet\left(\Psi\Psi^H\right)\big)\widetilde{\Phi}$, with $\mathring{G}$ the snapshot Gramian \eqref{eq:bary4} and $\bullet$ the Hadamard product. If a Cholesky decomposition $\mathring{G}=\mathring{R}^H\mathring{R}$ is available, then $G=\widetilde{\Phi}^H(\mathring{R}\odot\Psi^H)^H(\mathring{R}\odot\Psi^H)\widetilde{\Phi}$, with $\odot$ the Khatri-Rao product. If $(\star)$ is quadratic (resp. linear) in $\mathbf{q}$, then we can cast the computation of the denominator as a quadratically (resp. linearly) constrained quadratic program involving $G$. -\subsubsection{Quadratic constraint}\label{sec:out:norm} +\subsection{Quadratic constraint}\label{sec:out:norm} We constrain $\widehat{\mathbf{q}}$ to have unit (Euclidean) norm. The resulting optimization problem can be cast as a minimal (normalized) eigenvector problem for $G$ in \eqref{eq:poly53}. More explicitly, \[\widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\\|\mathbf{q}\|_2=1}}\mathbf{q}^HG\mathbf{q}.\] -\subsubsection{Linear constraint}\label{sec:out:dominant} +\subsection{Linear constraint}\label{sec:out:dominant} We constrain $\widehat{q}_N=1$, thus forcing $q$ to be monic, with degree exactly $N$. Given $G$ in \eqref{eq:poly53}, the resulting optimization problem can be solved rather easily as: \[\widehat{\mathbf{q}}=\frac{G^{-1}\mathbf{e}_{N+1}}{\mathbf{e}_{N+1}^\top G^{-1}\mathbf{e}_{N+1}},\quad\text{with }\mathbf{e}_{N+1}=[0,\ldots,0,1]^\top\in\C^{N+1}.\] - -\subsection{SRI based on barycentric interpolation} -The main motivation behind the barycentric SRI method involves the modified approximation problem -\begin{equation*} -u\approx\widetilde{\I}^{N-1}\left(\Big(\big(\mu_j,\widehat{q}(\mu_j)u(\mu_j)\big)\Big)_{j=1}^S\right)\Big/\widehat{q}, -\end{equation*} -where $\widehat{q}:\C\to\C$ is a polynomial of degree $\leq N\leq E$ and the modified interpolator $\widetilde{\I}^{N-1}$ is the same as $\I^{N-1}$, but only considers the $N$ samples at $\mu_1,\ldots,\mu_N$ (we assume that the sample points are distinct and their order is fixed). - -The denominator $\widehat{q}$ is found as -\begin{equation}\label{eq:glob3} -\widehat{q}=\argmin_{\substack{q\in\mathbb{P}^N(\C;\C)\\(\star)}}\sum_{\ell=N+1}^Sw_\ell\norm{q(\mu_\ell)u(\mu_\ell)-\widetilde{\I}^{N-1}\left(\Big(\big(\mu_j,q(\mu_j)u(\mu_j)\big)\Big)_{j=1}^N\right)(\mu_\ell)}^2 -\end{equation} -where $(\star)$ is a normalization condition (which changes depending on \code{functionalSolve}) to exclude the trivial minimizer $\widehat{q}\equiv0$, and $w_{N+1},\ldots,w_S$ are positive weights that will be specified. - -For \eqref{eq:glob2} to be well-defined (unique optimizer up to unit scaling), in addition to the condition $N\leq S-1$, we must have (by balance of degrees of freedom vs. constraints) $(1+1/r)N\leq S$. - -We can choose a non-hierarchical basis, dependent on the sample points, for $q$ and $\widetilde{\I}^{N-1}$, taking inspiration from barycentric interpolation: for $\widetilde{\I}^{N-1}$, we take the basis -\begin{equation}\label{eq:bary51} -\phi_i(\mu)=\prod_{\substack{j=1\\j\neq i+1}}^N(\mu-\mu_j)=\frac{\prod_{j=1}^N(\mu-\mu_j)}{\mu-\mu_{i+1}}\quad\text{for }i=0,\ldots,N-1, -\end{equation} -whereas, for $q$, we use the basis $\phi_0,\ldots,\phi_{N-1}$, augmented with -\begin{equation}\label{eq:bary52} -\phi_N(\mu)=\prod_{j=1}^N(\mu-\mu_j). 
-\end{equation} - -Given -\begin{equation}\label{eq:bary53} -q(\mu)=\sum_{i=0}^{N-1}q_j\phi_i(\mu)+q_N\phi_N(\mu), -\end{equation} -it is easy to see that -\begin{equation}\label{eq:bary54} -\widetilde{\I}^{N-1}\left(\Big(\big(\mu_j,q(\mu_j)u(\mu_j)\big)\Big)_{j=1}^N\right)=\sum_{i=0}^{N-1}q_iu(\mu_{i+1})\phi_i, -\end{equation} -independently of $q_N$. This allows to express \eqref{eq:glob3} as -\begin{equation}\label{eq:bary55} -\widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\(\star)}}\sum_{\ell=N+1}^Sw_\ell\left|\phi_N(\mu_\ell)\right|^2\norm{u(\mu_\ell)\left(\mathbf{e}_{\ell-N}^H\Phi[q_i]_{i=0}^{N-1}+q_N\right)-\mathbf{e}_{\ell-N}^H\Phi\Big[u(\mu_j)\delta_{jj'}\Big]_{j=1,j'=1}^{N,N}[q_i]_{i=0}^{N-1}}^2, -\end{equation} -where $\Phi_{ij}=1/(\mu_{N+i}-\mu_j)$, for $i=1,\ldots,S-N$ and $j=1,\ldots,N$, and -\begin{equation*} -\mathbf{e}_{\ell-N}=[\underbrace{0,\ldots,0}_{\ell-N-1},1,\underbrace{0,\ldots,0}_{S-\ell}]^\top\in\C^{S-N}. -\end{equation*} -Following usual barycentric interpolation customs \cite{Klein}, we set $w_\ell=\left|\phi_N(\mu_\ell)\right|^{-2}$ for $\ell=N+1,\ldots,S$. - -As above, we can interpret \eqref{eq:bary55} as the minimization of a quadratic form represented by a $(N+1)\times(N+1)$ Hermitian matrix with entries -\begin{equation}\label{eq:bary56} -G_{ii'}=\begin{cases} -\sum_{\ell=N+1}^S\inner{u(\mu_\ell)-u(\mu_{i'+1})}{u(\mu_\ell)-u(\mu_{i+1})}\overline{\Phi}_{\ell-N,i+1}\Phi_{\ell-N,i'+1},&0\leq i,i'\leq N-1,\\ -\sum_{\ell=N+1}^S\inner{u(\mu_\ell)}{u(\mu_\ell)-u(\mu_{i+1})}\overline{\Phi}_{\ell-N,i+1},&0\leq i\leq N-1,i'=N,\\ -\sum_{\ell=N+1}^S\inner{u(\mu_\ell)-u(\mu_{i'+1})}{u(\mu_\ell)}\Phi_{\ell-N,i'+1},&i=N,0\leq i'\leq N-1,\\ -\sum_{\ell=N+1}^S\norm{u(\mu_\ell)}^2,&i=i'=N. -\end{cases} -\end{equation} -So, once again, if $(\star)$ is quadratic (resp. linear) in $\mathbf{q}$, then we can cast the computation of the denominator as a quadratically (resp. linearly) constrained quadratic program involving $G$. - -The observation on stability presented in Section~\ref{sec:bary} apply also here. The only difference is that the generalized eigenproblem \eqref{eq:bary5} must be adjusted to account for the bias: -\begin{equation}\label{eq:bary57} -\textrm{Det}\left(\begin{bmatrix} -\widehat{q}_N & \widehat{q}_0 & \cdots & \widehat{q}_{N-1}\\ -1 & \mu_1 & & \\ -\vdots & & \ddots & \\ -1 & & & \mu_N -\end{bmatrix}-\begin{bmatrix} -0 & & & \\ - & 1 & & \\ - & & \ddots & \\ - & & & 1 -\end{bmatrix}\lambda\right)=0. -\end{equation} - -\subsubsection{Quadratic constraint}\label{sec:out:barycentricnorm} -We constrain $\widehat{\mathbf{q}}$ to have unit (Euclidean) norm. The resulting optimization problem can be cast as a minimal (normalized) eigenvector problem for $G$ in \eqref{eq:bary56}. More explicitly, -\[\widehat{\mathbf{q}}=\argmin_{\substack{\mathbf{q}\in\C^{N+1}\\\|\mathbf{q}\|_2=1}}\mathbf{q}^HG\mathbf{q}.\] - -\subsubsection{Linear constraint}\label{sec:out:barycentricaverage} -We constrain $\widehat{q}_N=1$, so that the polynomial $\widehat{q}$ is monic. Given $G$ in \eqref{eq:bary56}, the resulting optimization problem can be solved rather easily as: -\[\widehat{\mathbf{q}}=\frac{G^{-1}\mathbf{e}_{N+1}}{\mathbf{e}_{N+1}^\top G^{-1}\mathbf{e}_{N+1}},\quad\text{with }\mathbf{e}_{N+1}=[0,\ldots,0,1]^\top\in\C^{N+1}.\] - \begin{thebibliography}{1} \bibitem{Pradovera}% D.~Pradovera, Interpolatory rational model order reduction of parametric problems lacking uniform inf-sup stability, SIAM J. Numer. Anal. 58 (2020) 2265--2293. doi:10.1137/19M1269695. 
\bibitem{Klein}% G.~Klein, Applications of Linear Barycentric Rational Interpolation, PhD Thesis no. 1762, Universit\'e de Fribourg (2012). \end{thebibliography} \end{document} diff --git a/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_greedy_pivoted_greedy.py b/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_greedy_pivoted_greedy.py index 5a188a8..3e42d5f 100644 --- a/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_greedy_pivoted_greedy.py +++ b/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_greedy_pivoted_greedy.py @@ -1,551 +1,559 @@ #Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . # from copy import deepcopy as copy import numpy as np from .generic_pivoted_greedy_approximant import ( GenericPivotedGreedyApproximantBase, GenericPivotedGreedyApproximantPolyMatch, GenericPivotedGreedyApproximantPoleMatch) from rrompy.reduction_methods.standard.greedy import RationalInterpolantGreedy from rrompy.reduction_methods.standard.greedy.generic_greedy_approximant \ import pruneSamples from rrompy.reduction_methods.pivoted import ( RationalInterpolantGreedyPivotedPolyMatch, RationalInterpolantGreedyPivotedPoleMatch) from rrompy.utilities.base.types import paramList from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.exception_manager import RROMPyAssert from rrompy.parameter import emptyParameterList from rrompy.utilities.parallel import poolRank, recv __all__ = ['RationalInterpolantGreedyPivotedGreedyPolyMatch', 'RationalInterpolantGreedyPivotedGreedyPoleMatch'] class RationalInterpolantGreedyPivotedGreedyBase( GenericPivotedGreedyApproximantBase): def _preliminaryTraining(self): """Initialize starting snapshots of solution map.""" RROMPyAssert(self._mode, message = "Cannot start greedy algorithm.") if self.samplingEngine.nsamples > 0: return self.resetSamples() self.samplingEngine.scaleFactor = self.scaleFactorDer musPivot = self.samplerTrainSet.generatePoints(self.S) while len(musPivot) > self.S: musPivot.pop() muTestBasePivot = self.samplerPivot.generatePoints(self.nTestPoints, False) idxPop = pruneSamples(self.mapParameterListPivot(muTestBasePivot), self.mapParameterListPivot(musPivot), 1e-10 * self.scaleFactorPivot[0]) muTestBasePivot.pop(idxPop) self._mus = emptyParameterList() self.mus.reset((self.S - 1, self.HFEngine.npar)) self.muTest = emptyParameterList() self.muTest.reset((len(muTestBasePivot) + 1, self.HFEngine.npar)) self.mus.data[:, self.directionPivot] = musPivot[: -1] self.mus.data[:, self.directionMarginal] = np.repeat(self.muMargLoc, self.S - 1, axis = 0) self.muTest.data[: -1, self.directionPivot] = muTestBasePivot.data self.muTest.data[-1, self.directionPivot] = musPivot[-1] self.muTest.data[:, self.directionMarginal] = np.repeat(self.muMargLoc, len(muTestBasePivot) + 1, axis = 0) vbMng(self, "MAIN", ("Adding first {} sample point{} at {} to training " 
"set.").format(self.S - 1, "" + "s" * (self.S > 2), self.mus), 3) self.samplingEngine.iterSample(self.mus) self._S = len(self.mus) self._approxParameters["S"] = self.S def setupApproxPivoted(self, mus:paramList) -> int: if self.checkComputedApproxPivoted(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") vbMng(self, "INIT", "Setting up pivoted approximant.", 10) if not hasattr(self, "_plotEstPivot"): self._plotEstPivot = "NONE" idx, sizes, emptyCores = self._preSetupApproxPivoted(mus) S0 = copy(self.S) pMat, Ps, Qs, req, musA = None, [], [], [], None if len(idx) == 0: vbMng(self, "MAIN", "Idling.", 45) if self.storeAllSamples: self.storeSamples() pL = recv(source = 0, tag = poolRank()) pMat = np.empty((pL, 0), dtype = np.complex) musA = np.empty((0, self.mu0.shape[1]), dtype = np.complex) else: for i in idx: self.muMargLoc = mus[[i]] vbMng(self, "MAIN", "Building marginal model no. {} at " "{}.".format(i + 1, self.muMargLoc[0]), 25) self.samplingEngine.resetHistory() self.trainedModel = None self.verbosity -= 5 self.samplingEngine.verbosity -= 5 RationalInterpolantGreedy.setupApprox(self, self._plotEstPivot) self.verbosity += 5 self.samplingEngine.verbosity += 5 if self.storeAllSamples: self.storeSamples(i + self._nmusOld) pMat, req, musA = self._localPivotedResult(pMat, req, emptyCores, musA) Ps += [copy(self.trainedModel.data.P)] Qs += [copy(self.trainedModel.data.Q)] if not self.matchState and self.autoCollapse: Ps[-1].postmultiplyTensorize(pMat.T) self._S = S0 del self.muMargLoc for r in req: r.wait() if not self.matchState and self.autoCollapse: pMat = pMat[:, : 0] self._postSetupApproxPivoted(musA, pMat, Ps, Qs, sizes) vbMng(self, "DEL", "Done setting up pivoted approximant.", 10) return 0 def setupApprox(self, plotEst : str = "NONE") -> int: if self.checkComputedApprox(): return -1 if '_' not in plotEst: plotEst = plotEst + "_NONE" plotEstM, self._plotEstPivot = plotEst.split("_") val = super().setupApprox(plotEstM) return val class RationalInterpolantGreedyPivotedGreedyPolyMatch( RationalInterpolantGreedyPivotedGreedyBase, GenericPivotedGreedyApproximantPolyMatch, RationalInterpolantGreedyPivotedPolyMatch): """ ROM greedy pivoted greedy rational interpolant computation for parametric problems (with polynomial matching). Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for matching; defaults to 1; - 'matchingKind': kind of matching; allowed values include 'ROTATE' and 'PROJECT'; defaults to 'ROTATE'; - 'matchingWeightError': weight for matching in error estimation; defaults to 0; - 'matchingErrorRelative': whether error estimation is relative; defaults to False, i.e. absolute error; - 'S': total number of pivot samples current approximant relies upon; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. 
maximum allowed; - 'samplerPivot': pivot sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'SMarginal': number of starting marginal samples; - 'samplerMarginal': marginal sample point generator via sparse grid; - 'errorEstimatorKindMarginal': kind of marginal error estimator; available values include 'LOOK_AHEAD', 'LOOK_AHEAD_DOERFLER', and 'NONE'; defaults to 'NONE'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to samplerPivot; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', and 'NONE'; defaults to 'NONE'; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; defaults to 1e-1; - 'maxIterMarginal': maximum number of marginal greedy steps; defaults to 1e2; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; defaults to False; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. 
parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for matching; - 'matchingKind': kind of matching; - 'matchingWeightError': weight for matching in error estimation; - 'matchingErrorRelative': whether error estimation is relative; - 'N': degree of rational interpolant denominator; + - 'rationalMode': mode of rational approximation; - 'errorEstimatorKindMarginal': kind of marginal error estimator; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; - 'errorEstimatorKind': kind of error estimator; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; - 'maxIterMarginal': maximum number of marginal greedy steps; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator via sparse grid. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for matching. matchingKind: Kind of matching. matchingWeightError: Weight for matching in error estimation. matchingErrorRelative: Whether error estimation is relative. S: Total number of pivot samples current approximant relies upon. N: Denominator degree of approximant. samplerPivot: Pivot sample point generator. + rationalMode: Mode of rational approximation. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator via sparse grid. errorEstimatorKindMarginal: Kind of marginal error estimator. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. 
paramsMarginal: Dictionary of parameters for marginal interpolation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. errorEstimatorKind: kind of error estimator. greedyTolMarginal: Uniform error tolerance for marginal greedy algorithm. maxIterMarginal: Maximum number of marginal greedy steps. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. autoCollapse: Whether to collapse trained reduced model as soon as it is built. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. """ class RationalInterpolantGreedyPivotedGreedyPoleMatch( RationalInterpolantGreedyPivotedGreedyBase, GenericPivotedGreedyApproximantPoleMatch, RationalInterpolantGreedyPivotedPoleMatch): """ ROM greedy pivoted greedy rational interpolant computation for parametric problems (with pole matching). Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for pole matching optimization; defaults to 1; - 'matchingShared': required ratio of marginal points to share resonance; defaults to 1.; - 'badPoleCorrection': strategy for correction of bad poles; available values include 'ERASE', 'RATIONAL', and 'POLYNOMIAL'; defaults to 'ERASE'; - 'matchingWeightError': weight for pole matching optimization in error estimation; defaults to 0; - 'matchingErrorRelative': whether error estimation is relative; defaults to False, i.e. absolute error; - 'S': total number of pivot samples current approximant relies upon; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. 
maximum allowed; - 'samplerPivot': pivot sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'SMarginal': number of starting marginal samples; - 'samplerMarginal': marginal sample point generator via sparse grid; - 'errorEstimatorKindMarginal': kind of marginal error estimator; available values include 'LOOK_AHEAD', 'LOOK_AHEAD_DOERFLER', and 'NONE'; defaults to 'NONE'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to samplerPivot; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', and 'NONE'; defaults to 'NONE'; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; defaults to 1e-1; - 'maxIterMarginal': maximum number of marginal greedy steps; defaults to 1e2; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; defaults to False; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. 
parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for pole matching optimization; - 'matchingShared': required ratio of marginal points to share resonance; - 'badPoleCorrection': strategy for correction of bad poles; - 'matchingWeightError': weight for pole matching optimization in error estimation; - 'matchingErrorRelative': whether error estimation is relative; - 'N': degree of rational interpolant denominator; + - 'rationalMode': mode of rational approximation; - 'errorEstimatorKindMarginal': kind of marginal error estimator; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; - 'errorEstimatorKind': kind of error estimator; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; - 'maxIterMarginal': maximum number of marginal greedy steps; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator via sparse grid. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for pole matching optimization. matchingShared: Required ratio of marginal points to share resonance. badPoleCorrection: Strategy for correction of bad poles. matchingWeightError: Weight for pole matching optimization in error estimation. matchingErrorRelative: Whether error estimation is relative. S: Total number of pivot samples current approximant relies upon. N: Denominator degree of approximant. samplerPivot: Pivot sample point generator. + rationalMode: Mode of rational approximation. SMarginal: Total number of marginal samples current approximant relies upon. 
samplerMarginal: Marginal sample point generator via sparse grid. errorEstimatorKindMarginal: Kind of marginal error estimator. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. errorEstimatorKind: kind of error estimator. greedyTolMarginal: Uniform error tolerance for marginal greedy algorithm. maxIterMarginal: Maximum number of marginal greedy steps. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. autoCollapse: Whether to collapse trained reduced model as soon as it is built. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. """ diff --git a/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_pivoted_greedy.py b/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_pivoted_greedy.py index 3a1664a..f975aac 100644 --- a/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_pivoted_greedy.py +++ b/rrompy/reduction_methods/pivoted/greedy/rational_interpolant_pivoted_greedy.py @@ -1,515 +1,515 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . 
# from copy import deepcopy as copy import numpy as np from .generic_pivoted_greedy_approximant import ( GenericPivotedGreedyApproximantBase, GenericPivotedGreedyApproximantPolyMatch, GenericPivotedGreedyApproximantPoleMatch) from rrompy.reduction_methods.standard import RationalInterpolant from rrompy.reduction_methods.pivoted import ( RationalInterpolantPivotedPolyMatch, RationalInterpolantPivotedPoleMatch) from rrompy.utilities.base.types import paramList from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.exception_manager import RROMPyException, RROMPyAssert from rrompy.parameter import emptyParameterList from rrompy.utilities.parallel import poolRank, recv __all__ = ['RationalInterpolantPivotedGreedyPolyMatch', 'RationalInterpolantPivotedGreedyPoleMatch'] class RationalInterpolantPivotedGreedyBase( GenericPivotedGreedyApproximantBase): def computeSnapshots(self): """Compute snapshots of solution map.""" RROMPyAssert(self._mode, message = "Cannot start snapshot computation.") - vbMng(self, "INIT", "Starting computation of snapshots.", 5) + vbMng(self, "INIT", "Starting computation of snapshots.", 10) self.samplingEngine.scaleFactor = self.scaleFactorDer if not hasattr(self, "musPivot") or len(self.musPivot) != self.S: self.musPivot = self.samplerPivot.generatePoints(self.S) while len(self.musPivot) > self.S: self.musPivot.pop() musLoc = emptyParameterList() musLoc.reset((self.S, self.HFEngine.npar)) self.samplingEngine.resetHistory() musLoc.data[:, self.directionPivot] = self.musPivot.data musLoc.data[:, self.directionMarginal] = np.repeat(self.muMargLoc, self.S, axis = 0) self.samplingEngine.iterSample(musLoc) - vbMng(self, "DEL", "Done computing snapshots.", 5) + vbMng(self, "DEL", "Done computing snapshots.", 10) self._m_selfmus = copy(musLoc) self._mus = self.musPivot self._m_HFEparameterMap = copy(self.HFEngine.parameterMap) self.HFEngine.parameterMap = { "F": [self.HFEngine.parameterMap["F"][self.directionPivot[0]]], "B": [self.HFEngine.parameterMap["B"][self.directionPivot[0]]]} def addMarginalSamplePoints(self, musMarginal:paramList, *args, **kwargs): """Add marginal sample points to reduced model.""" raise RROMPyException(("Cannot add marginal samples to marginal " "greedy reduced model.")) def setupApproxPivoted(self, mus:paramList) -> int: if self.checkComputedApproxPivoted(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") vbMng(self, "INIT", "Setting up pivoted approximant.", 10) idx, sizes, emptyCores = self._preSetupApproxPivoted(mus) pMat, Ps, Qs, req, musA = None, [], [], [], None if len(idx) == 0: vbMng(self, "MAIN", "Idling.", 45) if self.storeAllSamples: self.storeSamples() pL = recv(source = 0, tag = poolRank()) pMat = np.empty((pL, 0), dtype = np.complex) musA = np.empty((0, self.mu0.shape[1]), dtype = np.complex) else: for i in idx: self.muMargLoc = mus[[i]] vbMng(self, "MAIN", "Building marginal model no. 
{} at " "{}.".format(i + 1, self.muMargLoc[0]), 25) self.samplingEngine.resetHistory() self.trainedModel = None self.verbosity -= 5 self.samplingEngine.verbosity -= 5 RationalInterpolant.setupApprox(self) self.verbosity += 5 self.samplingEngine.verbosity += 5 self._mus = self._m_selfmus self.HFEngine.parameterMap = self._m_HFEparameterMap del self._m_selfmus, self._m_HFEparameterMap if self.storeAllSamples: self.storeSamples(i + self._nmusOld) pMat, req, musA = self._localPivotedResult(pMat, req, emptyCores, musA) Ps += [copy(self.trainedModel.data.P)] Qs += [copy(self.trainedModel.data.Q)] if (self.rationalMode[-6:] != "OUTPUT" # collapse by hand and not self.matchState and self.autoCollapse): Ps[-1].postmultiplyTensorize(pMat.T) del self.muMargLoc if self.trainedModel.data._collapsed: pMat = pMat[:, : 0] for r in req: r.wait() self._postSetupApproxPivoted(musA, pMat, Ps, Qs, sizes) vbMng(self, "DEL", "Done setting up pivoted approximant.", 10) return 0 class RationalInterpolantPivotedGreedyPolyMatch( RationalInterpolantPivotedGreedyBase, GenericPivotedGreedyApproximantPolyMatch, RationalInterpolantPivotedPolyMatch): """ ROM pivoted greedy rational interpolant computation for parametric problems (with polynomial matching). Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for matching; defaults to 1; - 'matchingKind': kind of matching; allowed values include 'ROTATE' and 'PROJECT'; defaults to 'ROTATE'; - 'matchingWeightError': weight for matching in error estimation; defaults to 0; - 'matchingErrorRelative': whether error estimation is relative; defaults to False, i.e. absolute error; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': number of starting marginal samples; - 'samplerMarginal': marginal sample point generator via sparse grid; - 'errorEstimatorKindMarginal': kind of marginal error estimator; available values include 'LOOK_AHEAD', 'LOOK_AHEAD_DOERFLER', and 'NONE'; defaults to 'NONE'; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 
'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; defaults to 1e-1; - 'maxIterMarginal': maximum number of marginal greedy steps; defaults to 1e2; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; defaults to False; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musPivot: Array of pivot snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for matching; - 'matchingKind': kind of matching; - 'matchingWeightError': weight for matching in error estimation; - 'matchingErrorRelative': whether error estimation is relative; - 'errorEstimatorKindMarginal': kind of marginal error estimator; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. 
- 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; - 'maxIterMarginal': maximum number of marginal greedy steps; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator via sparse grid. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for matching. matchingKind: Kind of matching. matchingWeightError: Weight for matching in error estimation. matchingErrorRelative: Whether error estimation is relative. S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator via sparse grid. errorEstimatorKindMarginal: Kind of marginal error estimator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. M: Degree of rational interpolant numerator. N: Degree of rational interpolant denominator. greedyTolMarginal: Uniform error tolerance for marginal greedy algorithm. maxIterMarginal: Maximum number of marginal greedy steps. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. autoCollapse: Whether to collapse trained reduced model as soon as it is built. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. 
""" class RationalInterpolantPivotedGreedyPoleMatch( RationalInterpolantPivotedGreedyBase, GenericPivotedGreedyApproximantPoleMatch, RationalInterpolantPivotedPoleMatch): """ ROM pivoted greedy rational interpolant computation for parametric problems (with pole matching). Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for pole matching optimization; defaults to 1; - 'matchingShared': required ratio of marginal points to share resonance; defaults to 1.; - 'badPoleCorrection': strategy for correction of bad poles; available values include 'ERASE', 'RATIONAL', and 'POLYNOMIAL'; defaults to 'ERASE'; - 'matchingWeightError': weight for pole matching optimization in error estimation; defaults to 0; - 'matchingErrorRelative': whether error estimation is relative; defaults to False, i.e. absolute error; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': number of starting marginal samples; - 'samplerMarginal': marginal sample point generator via sparse grid; - 'errorEstimatorKindMarginal': kind of marginal error estimator; available values include 'LOOK_AHEAD', 'LOOK_AHEAD_DOERFLER', and 'NONE'; defaults to 'NONE'; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. 
maximum allowed; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; defaults to 1e-1; - 'maxIterMarginal': maximum number of marginal greedy steps; defaults to 1e2; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; defaults to False; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musPivot: Array of pivot snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for pole matching optimization; - 'matchingShared': required ratio of marginal points to share resonance; - 'badPoleCorrection': strategy for correction of bad poles; - 'matchingWeightError': weight for pole matching optimization in error estimation; - 'matchingErrorRelative': whether error estimation is relative; - 'errorEstimatorKindMarginal': kind of marginal error estimator; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'greedyTolMarginal': uniform error tolerance for marginal greedy algorithm; - 'maxIterMarginal': maximum number of marginal greedy steps; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'autoCollapse': whether to collapse trained reduced model as soon as it is built; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. 
parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator via sparse grid. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for pole matching optimization. matchingShared: Required ratio of marginal points to share resonance. badPoleCorrection: Strategy for correction of bad poles. matchingWeightError: Weight for pole matching optimization in error estimation. matchingErrorRelative: Whether error estimation is relative. S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator via sparse grid. errorEstimatorKindMarginal: Kind of marginal error estimator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. M: Degree of rational interpolant numerator. N: Degree of rational interpolant denominator. greedyTolMarginal: Uniform error tolerance for marginal greedy algorithm. maxIterMarginal: Maximum number of marginal greedy steps. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. autoCollapse: Whether to collapse trained reduced model as soon as it is built. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. """ diff --git a/rrompy/reduction_methods/pivoted/rational_interpolant_greedy_pivoted.py b/rrompy/reduction_methods/pivoted/rational_interpolant_greedy_pivoted.py index c8b8f7e..1e5c0af 100644 --- a/rrompy/reduction_methods/pivoted/rational_interpolant_greedy_pivoted.py +++ b/rrompy/reduction_methods/pivoted/rational_interpolant_greedy_pivoted.py @@ -1,830 +1,842 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
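# The 'rationalMode' values documented above come in *_STATE and *_OUTPUT
# flavours; setupApproxPivoted above tells them apart by suffix
# (rationalMode[-6:] != "OUTPUT"). A stand-alone sketch of that check, with a
# hypothetical name, not part of the RROMPy API:
def _modeTargetsOutput(rationalMode: str) -> bool:
    """Whether a rationalMode value acts on the system output (vs. the state)."""
    return rationalMode.upper().endswith("OUTPUT")

assert _modeTargetsOutput("MINIMAL_OUTPUT")
assert not _modeTargetsOutput("BARYCENTRIC_STATE")
assert not _modeTargetsOutput("STANDARD")  # bare names stand for the _STATE variant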
# # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . # from copy import deepcopy as copy import numpy as np from .generic_pivoted_approximant import (GenericPivotedApproximantBase, GenericPivotedApproximantNoMatch, GenericPivotedApproximantPolyMatch, GenericPivotedApproximantPoleMatch) from .gather_pivoted_approximant import gatherPivotedApproximant from rrompy.reduction_methods.standard.greedy.rational_interpolant_greedy \ import RationalInterpolantGreedy from rrompy.reduction_methods.standard.greedy.generic_greedy_approximant \ import pruneSamples from rrompy.utilities.base.types import Np1D, paramList from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.poly_fitting.polynomial import polyvander as pv from rrompy.utilities.poly_fitting.piecewise_linear import sparsekinds as sk from rrompy.utilities.exception_manager import (RROMPyException, RROMPyAssert, RROMPyWarning) from rrompy.parameter import emptyParameterList from rrompy.utilities.parallel import (masterCore, poolRank, indicesScatter, isend, recv) __all__ = ['RationalInterpolantGreedyPivotedNoMatch', 'RationalInterpolantGreedyPivotedPolyMatch', 'RationalInterpolantGreedyPivotedPoleMatch'] class RationalInterpolantGreedyPivotedBase(GenericPivotedApproximantBase, RationalInterpolantGreedy): def __init__(self, *args, **kwargs): self._preInit() self._addParametersToList(toBeExcluded = ["MAuxiliary", "NAuxiliary"]) super().__init__(*args, **kwargs) if self.nparPivot > 1: self.HFEngine._ignoreResidues = 1 self._postInit() @property def tModelType(self): if hasattr(self, "_temporaryPivot"): return RationalInterpolantGreedy.tModelType.fget(self) return super().tModelType @property def MAuxiliary(self): return 0 @property def NAuxiliary(self): return 0 def _polyvanderAuxiliary(self, mus, deg, *args, **kwargs): degEff = [0] * self.npar degEff[self.directionPivot[0]] = deg[0] return pv(mus, degEff, *args, **kwargs) def _marginalizeMiscellanea(self, forward:bool): if forward: self._m_selfmus = copy(self.mus) self._m_HFEparameterMap = copy(self.HFEngine.parameterMap) self._mus = self.checkParameterListPivot( self.mus(self.directionPivot)) self.HFEngine.parameterMap = { "F": [self.HFEngine.parameterMap["F"][self.directionPivot[0]]], "B": [self.HFEngine.parameterMap["B"][self.directionPivot[0]]]} else: self._mus = self._m_selfmus self.HFEngine.parameterMap = self._m_HFEparameterMap del self._m_selfmus, self._m_HFEparameterMap def _marginalizeTrainedModel(self, forward:bool): if forward: del self._temporaryPivot self.trainedModel.data.mu0 = self.mu0 self.trainedModel.data.scaleFactor = [1.] 
* self.npar self.trainedModel.data.scaleFactor[self.directionPivot[0]] = ( self.scaleFactor[0]) self.trainedModel.data.parameterMap = self.HFEngine.parameterMap self._m_musUniqueCN = copy(self._musUniqueCN) musUniqueCNAux = np.zeros((self.S, self.npar), dtype = np.complex) musUniqueCNAux[:, self.directionPivot[0]] = self._musUniqueCN(0) self._musUniqueCN = self.checkParameterList(musUniqueCNAux) self._m_derIdxs = copy(self._derIdxs) for j in range(len(self._derIdxs)): for l in range(len(self._derIdxs[j])): derjl = self._derIdxs[j][l][0] self._derIdxs[j][l] = [0] * self.npar self._derIdxs[j][l][self.directionPivot[0]] = derjl self.trainedModel.data.Q._dirPivot = self.directionPivot[0] self.trainedModel.data.P._dirPivot = self.directionPivot[0] # tell greedy error estimator that operator / RHS is pivot-affine if hasattr(self.HFEngine.A, "is_affine"): self._A_is_affine = self.HFEngine.A.is_affine else: self._A_is_affine = 0 if hasattr(self.HFEngine.b, "is_affine"): self._b_is_affine = self.HFEngine.b.is_affine else: self._b_is_affine = 0 if self._A_is_affine >= 1 / 2 and self._b_is_affine >= 1 / 2: self._affine_lvl += [1 / 2] else: self._temporaryPivot = 1 self.trainedModel.data.mu0 = self.checkParameterListPivot( self.mu0(self.directionPivot)) self.trainedModel.data.scaleFactor = self.scaleFactor self.trainedModel.data.parameterMap = { "F": [self.HFEngine.parameterMap["F"][self.directionPivot[0]]], "B": [self.HFEngine.parameterMap["B"][self.directionPivot[0]]]} self._musUniqueCN = copy(self._m_musUniqueCN) self._derIdxs = copy(self._m_derIdxs) del self._m_musUniqueCN, self._m_derIdxs del self.trainedModel.data.Q._dirPivot del self.trainedModel.data.P._dirPivot if self._A_is_affine >= 1 / 2 and self._b_is_affine >= 1 / 2: self._affine_lvl.pop() del self._A_is_affine, self._b_is_affine self.trainedModel.data.npar = self.npar def errorEstimator(self, mus:Np1D, return_max : bool = False) -> Np1D: """Standard residual-based error estimator.""" setupOK = self.setupApproxLocal() if setupOK > 0: err = np.empty(len(mus)) err[:] = np.nan if not return_max: return err return err, [- setupOK], np.nan self._marginalizeTrainedModel(True) errRes = super().errorEstimator(mus, return_max) self._marginalizeTrainedModel(False) return errRes def _preliminaryTraining(self): """Initialize starting snapshots of solution map.""" RROMPyAssert(self._mode, message = "Cannot start greedy algorithm.") self.resetSamples() self.samplingEngine.scaleFactor = self.scaleFactorDer musPivot = self.samplerTrainSet.generatePoints(self.S) while len(musPivot) > self.S: musPivot.pop() muTestPivot = self.samplerPivot.generatePoints(self.nTestPoints, False) idxPop = pruneSamples(self.mapParameterListPivot(muTestPivot), self.mapParameterListPivot(musPivot), 1e-10 * self.scaleFactorPivot[0]) muTestPivot.pop(idxPop) self._mus = emptyParameterList() self.mus.reset((self.S - 1, self.HFEngine.npar)) self.muTest = emptyParameterList() self.muTest.reset((len(muTestPivot) + 1, self.HFEngine.npar)) self.mus.data[:, self.directionPivot] = musPivot[: -1] self.mus.data[:, self.directionMarginal] = np.repeat(self.muMargLoc, self.S - 1, axis = 0) self.muTest.data[: -1, self.directionPivot] = muTestPivot.data self.muTest.data[-1, self.directionPivot] = musPivot[-1] self.muTest.data[:, self.directionMarginal] = np.repeat(self.muMargLoc, len(muTestPivot) + 1, axis = 0) vbMng(self, "MAIN", ("Adding first {} sample point{} at {} to training " "set.").format(self.S - 1, "" + "s" * (self.S > 2), self.mus), 3) self.samplingEngine.iterSample(self.mus) 
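# The block above seeds the greedy loop with S - 1 training points at the
# current marginal location and moves the remaining training point into the
# test set; S is realigned below with the number of snapshots actually drawn.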
self._S = len(self.mus) self._approxParameters["S"] = self.S def setupApproxLocal(self) -> int: """Compute rational interpolant.""" self._marginalizeMiscellanea(True) setupOK = super().setupApproxLocal() self._marginalizeMiscellanea(False) return setupOK def addMarginalSamplePoints(self, musMarginal:paramList, *args, **kwargs) -> int: """Add marginal sample points to reduced model.""" RROMPyAssert(self._mode, message = "Cannot add sample points.") musMarginal = self.checkParameterListMarginal(musMarginal) vbMng(self, "INIT", "Adding marginal sample point{} at {}.".format( "s" * (len(musMarginal) > 1), musMarginal), 5) if (self.SMarginal > 0 and hasattr(self, "polybasisMarginal") and self.polybasisMarginal in sk): RROMPyWarning(("Manually adding new samples with piecewise linear " "marginal interpolation is dangerous. Sample depth " "in samplerMarginal must be managed correctly.")) _musOld = self.mus self._musMarginal.append(musMarginal) S0 = copy(self.S) req, is_master = [], masterCore() idx, sizes = indicesScatter(len(musMarginal), return_sizes = True) _trainedModelOld = copy(self.trainedModel) _collapsed = (_trainedModelOld is not None and _trainedModelOld.data._collapsed) pMat, Ps, Qs, mus = None, [], [], None emptyCores = np.where(np.array(sizes) == 0)[0] if is_master else [] if len(idx) == 0: vbMng(self, "MAIN", "Idling.", 25) if self.storeAllSamples: self.storeSamples() pL = recv(source = 0, tag = poolRank()) pMat = np.empty((pL, 0), dtype = np.complex) mus = np.empty((0, self.mu0.shape[1]), dtype = np.complex) else: _scaleFactorOldPivot = copy(self.scaleFactor) self.scaleFactor = self.scaleFactorPivot self._temporaryPivot = 1 for i in idx: self.muMargLoc = self.musMarginal[[i + self.SMarginal]] vbMng(self, "MAIN", "Building marginal model no. 
{} at {}.".format( i + self.SMarginal + 1, self.musMarginal[i + self.SMarginal]), 5) self.samplingEngine.resetHistory() self.trainedModel = None self.verbosity -= 5 self.samplingEngine.verbosity -= 5 RationalInterpolantGreedy.setupApprox(self, *args, **kwargs) self.verbosity += 5 self.samplingEngine.verbosity += 5 musi = self.samplingEngine.mus pMati = self.samplingEngine.projectionMatrix if self.storeAllSamples: self.storeSamples(i + self.SMarginal) if not self.matchState: if self.POD == 1 and not ( hasattr(self.HFEngine.C, "is_mu_independent") and self.HFEngine.C.is_mu_independent in self._output_lvl): raise RROMPyException(("Cannot apply mu-dependent C " "to orthonormalized samples.")) vbMng(self, "INIT", "Extracting system output from state.", 35) pMatiEff = None for j, mu in enumerate(musi): pMij = np.expand_dims(self.HFEngine.applyC( pMati[:, j], mu), -1) if pMatiEff is None: pMatiEff = np.array(pMij) else: pMatiEff = np.append(pMatiEff, pMij, axis = 1) pMati = pMatiEff vbMng(self, "DEL", "Done extracting system output.", 35) if i == 0: mus = copy(musi.data) for dest in emptyCores: req += [isend(len(pMati), dest = dest, tag = dest)] else: mus = np.vstack((mus, musi.data)) if not _collapsed: if pMat is None: pMat = copy(pMati) else: pMat = np.hstack((pMat, pMati)) Ps += [copy(self.trainedModel.data.P)] Qs += [copy(self.trainedModel.data.Q)] if _collapsed: # collapse by hand Ps[-1].postmultiplyTensorize(pMati.T) self._S = S0 del self._temporaryPivot, self.muMargLoc self.scaleFactor = _scaleFactorOldPivot for r in req: r.wait() if _collapsed: pMat = pMati[:, : 0] pMat, Ps, Qs, mus, nsamples = gatherPivotedApproximant(pMat, Ps, Qs, mus, sizes, self.polybasis) self._mus = _musOld self.mus.append(mus) Psupp = np.append(0, np.cumsum(nsamples[: -1]).astype(int)) if _trainedModelOld is None: self._setupTrainedModel(pMat, forceNew = True) self.trainedModel.data.Qs, self.trainedModel.data.Ps = [], [] self.trainedModel.data.Psupp = [] else: self._trainedModel = _trainedModelOld if _collapsed: self._setupTrainedModel(1.) Psupp = [0] * len(musMarginal) else: Psupp = Psupp + self.trainedModel.data.projMat.shape[1] self._setupTrainedModel(pMat, 1) self._SMarginal += len(musMarginal) self.trainedModel.data.Qs += Qs self.trainedModel.data.Ps += Ps self.trainedModel.data.Psupp += list(Psupp) self._preliminaryMarginalFinalization() self._finalizeMarginalization() vbMng(self, "DEL", "Done adding marginal sample points.", 5) return 0 def setupApprox(self, *args, **kwargs) -> int: """Compute rational interpolant.""" if self.checkComputedApprox(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") vbMng(self, "INIT", "Setting up {}.". format(self.name()), 5) self.computeScaleFactor() self._mus = emptyParameterList() self._musMarginal = emptyParameterList() musMarginal = self.samplerMarginal.generatePoints(self.SMarginal) while len(musMarginal) > self.SMarginal: musMarginal.pop() self._SMarginal = 0 val = self.addMarginalSamplePoints(musMarginal, *args, **kwargs) vbMng(self, "DEL", "Done setting up approximant.", 5) return val class RationalInterpolantGreedyPivotedNoMatch( RationalInterpolantGreedyPivotedBase, GenericPivotedApproximantNoMatch): """ ROM pivoted rational interpolant (without matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. 
Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'S': total number of pivot samples current approximant relies upon; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'samplerPivot': pivot sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to samplerPivot; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', and 'NONE'; defaults to 'NONE'; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'N': degree of rational interpolant denominator; + - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; - 'errorEstimatorKind': kind of error estimator; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. 
parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. S: Total number of pivot samples current approximant relies upon. N: Denominator degree of approximant. samplerPivot: Pivot sample point generator. + rationalMode: Mode of rational approximation. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. polybasis: Type of polynomial basis for pivot interpolation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. errorEstimatorKind: kind of error estimator. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ class RationalInterpolantGreedyPivotedPolyMatch( RationalInterpolantGreedyPivotedBase, GenericPivotedApproximantPolyMatch): """ ROM pivoted rational interpolant (with polynomial matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for matching; defaults to 1; - 'matchingKind': kind of matching; allowed values include 'ROTATE' and 'PROJECT'; defaults to 'ROTATE'; - 'S': total number of pivot samples current approximant relies upon; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. 
maximum allowed; - 'samplerPivot': pivot sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to samplerPivot; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', and 'NONE'; defaults to 'NONE'; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. 
parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for matching; - 'matchingKind': kind of matching; - 'N': degree of rational interpolant denominator; + - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; - 'errorEstimatorKind': kind of error estimator; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for matching. matchingKind: Kind of matching. S: Total number of pivot samples current approximant relies upon. N: Denominator degree of approximant. samplerPivot: Pivot sample point generator. + rationalMode: Mode of rational approximation. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. errorEstimatorKind: kind of error estimator. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. 
QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ def setupApprox(self, *args, **kwargs) -> int: if self.checkComputedApprox(): return -1 self.purgeparamsMarginal() setupOK = super().setupApprox(*args, **kwargs) if self.matchState: self._postApplyC() return setupOK class RationalInterpolantGreedyPivotedPoleMatch( RationalInterpolantGreedyPivotedBase, GenericPivotedApproximantPoleMatch): """ ROM pivoted rational interpolant (with pole matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for pole matching optimization; defaults to 1; - 'matchingShared': required ratio of marginal points to share resonance; defaults to 1.; - 'badPoleCorrection': strategy for correction of bad poles; available values include 'ERASE', 'RATIONAL', and 'POLYNOMIAL'; defaults to 'ERASE'; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 
'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to samplerPivot; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', and 'NONE'; defaults to 'NONE'; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for pole matching optimization; - 'matchingShared': required ratio of marginal points to share resonance; - 'badPoleCorrection': strategy for correction of bad poles; + - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. 
- 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; - 'errorEstimatorKind': kind of error estimator; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for pole matching optimization. matchingShared: Required ratio of marginal points to share resonance. badPoleCorrection: Strategy for correction of bad poles. S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. + rationalMode: Mode of rational approximation. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. errorEstimatorKind: kind of error estimator. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. 
""" def setupApprox(self, *args, **kwargs) -> int: if self.checkComputedApprox(): return -1 self.purgeparamsMarginal() setupOK = super().setupApprox(*args, **kwargs) if self.matchState: self._postApplyC() return setupOK diff --git a/rrompy/reduction_methods/pivoted/rational_interpolant_pivoted.py b/rrompy/reduction_methods/pivoted/rational_interpolant_pivoted.py index 42131a8..eee6c96 100755 --- a/rrompy/reduction_methods/pivoted/rational_interpolant_pivoted.py +++ b/rrompy/reduction_methods/pivoted/rational_interpolant_pivoted.py @@ -1,724 +1,724 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . # import numpy as np from collections.abc import Iterable from copy import deepcopy as copy from .generic_pivoted_approximant import (GenericPivotedApproximantBase, GenericPivotedApproximantNoMatch, GenericPivotedApproximantPolyMatch, GenericPivotedApproximantPoleMatch) from .gather_pivoted_approximant import gatherPivotedApproximant from rrompy.reduction_methods.standard.rational_interpolant import ( RationalInterpolant) from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.base.types import paramList from rrompy.utilities.numerical.hash_derivative import nextDerivativeIndices from rrompy.utilities.poly_fitting.piecewise_linear import sparsekinds as sk from rrompy.utilities.exception_manager import (RROMPyException, RROMPyAssert, RROMPyWarning) from rrompy.parameter import emptyParameterList from rrompy.utilities.parallel import (masterCore, poolRank, indicesScatter, isend, recv) __all__ = ['RationalInterpolantPivotedNoMatch', 'RationalInterpolantPivotedPolyMatch', 'RationalInterpolantPivotedPoleMatch'] class RationalInterpolantPivotedBase(GenericPivotedApproximantBase, RationalInterpolant): def __init__(self, *args, **kwargs): self._preInit() self._addParametersToList(toBeExcluded = ["polydegreetype", "MAuxiliary", "NAuxiliary"]) super().__init__(*args, **kwargs) if self.nparPivot > 1: self.HFEngine._ignoreResidues = 1 self._postInit() @property def MAuxiliary(self): return 0 @property def NAuxiliary(self): return 0 @property def polydegreetype(self): return "TENSOR_TOTAL" @property def scaleFactorDer(self): """Value of scaleFactorDer.""" if self._scaleFactorDer == "NONE": return 1. 
if self._scaleFactorDer == "AUTO": return self.scaleFactorPivot return self._scaleFactorDer @scaleFactorDer.setter def scaleFactorDer(self, scaleFactorDer): if isinstance(scaleFactorDer, (str,)): scaleFactorDer = scaleFactorDer.upper() elif isinstance(scaleFactorDer, Iterable): scaleFactorDer = list(scaleFactorDer) self._scaleFactorDer = scaleFactorDer self._approxParameters["scaleFactorDer"] = self._scaleFactorDer def _setupInterpolationIndices(self): """Setup parameters for polyvander.""" RROMPyAssert(self._mode, message = "Cannot setup interpolation indices.") if (self._musUniqueCN is None or len(self._reorder) != len(self.musPivot)): try: muPC = self.trainedModel.centerNormalizePivot(self.musPivot) except: muPC = self.trainedModel.centerNormalize(self.musPivot) self._musUniqueCN, musIdxsTo, musIdxs, musCount = (muPC.unique( return_index = True, return_inverse = True, return_counts = True)) self._musUnique = self.musPivot[musIdxsTo] self._derIdxs = [None] * len(self._musUniqueCN) self._reorder = np.empty(len(musIdxs), dtype = int) filled = 0 for j, cnt in enumerate(musCount): self._derIdxs[j] = nextDerivativeIndices([], self.nparPivot, cnt) jIdx = np.nonzero(musIdxs == j)[0] self._reorder[jIdx] = np.arange(filled, filled + cnt) filled += cnt def addMarginalSamplePoints(self, musMarginal:paramList) -> int: """Add marginal sample points to reduced model.""" RROMPyAssert(self._mode, message = "Cannot add sample points.") musMarginal = self.checkParameterListMarginal(musMarginal) vbMng(self, "INIT", "Adding marginal sample point{} at {}.".format( "s" * (len(musMarginal) > 1), musMarginal), 5) if (self.SMarginal > 0 and hasattr(self, "polybasisMarginal") and self.polybasisMarginal in sk): RROMPyWarning(("Manually adding new samples with piecewise linear " "marginal interpolation is dangerous. Sample depth " "in samplerMarginal must be managed correctly.")) mus = np.empty((self.S * len(musMarginal), self.HFEngine.npar), dtype = np.complex) mus[:, self.directionPivot] = np.tile(self.musPivot.data, (len(musMarginal), 1)) mus[:, self.directionMarginal] = np.repeat(musMarginal.data, self.S, axis = 0) self._mus.append(mus) self._musMarginal.append(musMarginal) N0 = copy(self.N) req, is_master = [], masterCore() idx, sizes = indicesScatter(len(musMarginal), return_sizes = True) _collapsed = self.trainedModel.data._collapsed pMat, Ps, Qs = None, [], [] emptyCores = np.where(np.array(sizes) == 0)[0] if is_master else [] if len(idx) == 0: vbMng(self, "MAIN", "Idling.", 30) if self.storeAllSamples: self.storeSamples() pL = recv(source = 0, tag = poolRank()) pMat = np.empty((pL, 0), dtype = np.complex) else: _scaleFactorOldPivot = copy(self.scaleFactor) self.scaleFactor = self.scaleFactorPivot self._temporaryPivot = 1 for i in idx: musi = self.mus[self.S * (i + self.SMarginal) : self.S * (i + self.SMarginal + 1)] vbMng(self, "MAIN", "Building marginal model no. 
{} at {}.".format( i + self.SMarginal + 1, self.musMarginal[i + self.SMarginal]), 5) - vbMng(self, "INIT", "Starting computation of snapshots.", 10) + vbMng(self, "INIT", "Starting computation of snapshots.", 15) self.samplingEngine.resetHistory() self.samplingEngine.iterSample(musi) - vbMng(self, "DEL", "Done computing snapshots.", 10) + vbMng(self, "DEL", "Done computing snapshots.", 15) self.verbosity -= 5 self.samplingEngine.verbosity -= 5 if self.rationalMode[-6:] == "OUTPUT": vbMng(self, "INIT", "Extracting system output from state.", 35) self.samplingEngine.samples_output = self.HFEngine.applyC( self.samplingEngine.samples, self.mus) _POD, self._POD = self.POD, 1 vbMng(self, "DEL", "Done extracting system output.", 35) self._setupRational(self._setupDenominator()) if self.rationalMode[-6:] == "OUTPUT": self._POD = _POD pMati = np.empty((1, 0), dtype = np.complex) else: pMati = self.samplingEngine.projectionMatrix self.verbosity += 5 self.samplingEngine.verbosity += 5 if self.storeAllSamples: self.storeSamples(i + self.SMarginal) if self.rationalMode[-6:] != "OUTPUT" and not self.matchState: if self.POD == 1 and not ( hasattr(self.HFEngine.C, "is_mu_independent") and self.HFEngine.C.is_mu_independent in self._output_lvl): raise RROMPyException(("Cannot apply mu-dependent C " "to orthonormalized samples.")) vbMng(self, "INIT", "Extracting system output from state.", 35) pMatiEff = None for j, mu in enumerate(musi): pMij = np.expand_dims(self.HFEngine.applyC( pMati[:, j], mu), -1) if pMatiEff is None: pMatiEff = np.array(pMij) else: pMatiEff = np.append(pMatiEff, pMij, axis = 1) pMati = pMatiEff vbMng(self, "DEL", "Done extracting system output.", 35) if i == 0: for dest in emptyCores: req += [isend(len(pMati), dest = dest, tag = dest)] if not _collapsed: if pMat is None: pMat = copy(pMati) else: pMat = np.hstack((pMat, pMati)) Ps += [copy(self.trainedModel.data.P)] Qs += [copy(self.trainedModel.data.Q)] if _collapsed and self.rationalMode[-6:] != "OUTPUT": Ps[-1].postmultiplyTensorize(pMati.T) # collapse by hand del self.trainedModel.data.Q, self.trainedModel.data.P self.N = N0 del self._temporaryPivot self.scaleFactor = _scaleFactorOldPivot if _collapsed: pMat = pMati[:, : 0] for r in req: r.wait() pMat, Ps, Qs, _, _ = gatherPivotedApproximant(pMat, Ps, Qs, self.mus.data, sizes, self.polybasis, False) if _collapsed: self._setupTrainedModel(1.) Psupp = [0] * len(musMarginal) else: self._setupTrainedModel(pMat, len(self.trainedModel.data.projMat) > 0) Psupp = (self.SMarginal + np.arange(0, len(musMarginal))) * self.S self._SMarginal += len(musMarginal) self.trainedModel.data.Qs += Qs self.trainedModel.data.Ps += Ps self.trainedModel.data.Psupp += list(Psupp) self._preliminaryMarginalFinalization() self._finalizeMarginalization() vbMng(self, "DEL", "Done adding marginal sample points.", 5) return 0 def setupApprox(self) -> int: """Compute rational interpolant.""" if self.checkComputedApprox(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") vbMng(self, "INIT", "Setting up {}.". 
format(self.name()), 5) self.computeScaleFactor() self.resetSamples() self.samplingEngine.scaleFactor = self.scaleFactorDer self._mus = emptyParameterList() self._musMarginal = emptyParameterList() self.musPivot = self.samplerPivot.generatePoints(self.S) while len(self.musPivot) > self.S: self.musPivot.pop() musMarginal = self.samplerMarginal.generatePoints(self.SMarginal) while len(musMarginal) > self.SMarginal: musMarginal.pop() self._setupTrainedModel(np.zeros((0, 0)), forceNew = True, collapsed = self.rationalMode[-6:] == "OUTPUT") self.trainedModel.data.Qs, self.trainedModel.data.Ps = [], [] self.trainedModel.data.Psupp = [] self._SMarginal = 0 val = self.addMarginalSamplePoints(musMarginal) vbMng(self, "DEL", "Done setting up approximant.", 5) return val class RationalInterpolantPivotedNoMatch(RationalInterpolantPivotedBase, GenericPivotedApproximantNoMatch): """ ROM pivoted rational interpolant (without matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musPivot: Array of pivot snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. 
parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for pivot interpolation. M: Numerator degree of approximant. N: Denominator degree of approximant. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ class RationalInterpolantPivotedPolyMatch(RationalInterpolantPivotedBase, GenericPivotedApproximantPolyMatch): """ ROM pivoted rational interpolant (with polynomial matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. 
POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for matching; defaults to 1; - 'matchingKind': kind of matching; allowed values include 'ROTATE' and 'PROJECT'; defaults to 'ROTATE'; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musPivot: Array of pivot snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. 
parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for matching; - 'matchingKind': kind of matching; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for matching. matchingKind: Kind of matching. S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. M: Numerator degree of approximant. N: Denominator degree of approximant. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. 
lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ def setupApprox(self, *args, **kwargs) -> int: if self.checkComputedApprox(): return -1 if self.rationalMode[-6:] == "OUTPUT": self.matchState = False self.purgeparamsMarginal() setupOK = super().setupApprox(*args, **kwargs) if self.matchState: self._postApplyC() return setupOK class RationalInterpolantPivotedPoleMatch(RationalInterpolantPivotedBase, GenericPivotedApproximantPoleMatch): """ ROM pivoted rational interpolant (with pole matching) computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. directionPivot(optional): Pivot components. Defaults to [0]. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'matchState': whether to match the system state rather than the system output; defaults to False; - 'matchingWeight': weight for pole matching optimization; defaults to 1; - 'matchingShared': required ratio of marginal points to share resonance; defaults to 1.; - 'badPoleCorrection': strategy for correction of bad poles; available values include 'ERASE', 'RATIONAL', and 'POLYNOMIAL'; defaults to 'ERASE'; - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for pivot interpolation; defaults to 'MONOMIAL'; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; allowed values include 'MONOMIAL_*', 'CHEBYSHEV_*', 'LEGENDRE_*', 'NEARESTNEIGHBOR', and 'PIECEWISE_LINEAR_*'; defaults to 'MONOMIAL'; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; defaults to 'AUTO', i.e. maximum allowed; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'nNeighborsMarginal': number of marginal nearest neighbors; defaults to 1; only for 'NEARESTNEIGHBOR'; . 'polydegreetypeMarginal': type of polynomial degree for marginal; defaults to 'TOTAL'; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; defaults to 0; . 'interpTolMarginal': tolerance for marginal interpolation; defaults to None; not for 'NEARESTNEIGHBOR' or 'PIECEWISE_LINEAR_*'; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights; only for radial basis. - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. 
maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'radialDirectionalWeightsMarginal': radial basis weights for marginal interpolant; defaults to 1; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for pivot interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. directionPivot: Pivot components. mus: Array of snapshot parameters. musPivot: Array of pivot snapshot parameters. musMarginal: Array of marginal snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'matchState': whether to match the system state rather than the system output; - 'matchingWeight': weight for pole matching optimization; - 'matchingShared': required ratio of marginal points to share resonance; - 'badPoleCorrection': strategy for correction of bad poles; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for pivot interpolation; - 'polybasisMarginal': type of polynomial basis for marginal interpolation; - 'paramsMarginal': dictionary of parameters for marginal interpolation; include: . 'MMarginal': degree of marginal interpolant; . 'nNeighborsMarginal': number of marginal nearest neighbors; . 'polydegreetypeMarginal': type of polynomial degree for marginal; . 'polyTruncateTolMarginal': tolerance for truncation of marginal interpolator; . 'interpTolMarginal': tolerance for marginal interpolation; . 'radialDirectionalWeightsMarginalAdapt': bounds for adaptive rescaling of marginal radial basis weights. - 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for pivot interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of pivot samples current approximant relies upon; - 'samplerPivot': pivot sample point generator; - 'SMarginal': total number of marginal samples current approximant relies upon; - 'samplerMarginal': marginal sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. matchState: Whether to match the system state rather than the system output. matchingWeight: Weight for pole matching optimization. matchingShared: Required ratio of marginal points to share resonance. badPoleCorrection: Strategy for correction of bad poles. 
S: Total number of pivot samples current approximant relies upon. samplerPivot: Pivot sample point generator. SMarginal: Total number of marginal samples current approximant relies upon. samplerMarginal: Marginal sample point generator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for pivot interpolation. polybasisMarginal: Type of polynomial basis for marginal interpolation. paramsMarginal: Dictionary of parameters for marginal interpolation. M: Numerator degree of approximant. N: Denominator degree of approximant. radialDirectionalWeightsMarginal: Radial basis weights for marginal interpolant. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for pivot interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for pivot parameter values. muBoundsMarginal: list of bounds for marginal parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ def setupApprox(self, *args, **kwargs) -> int: if self.checkComputedApprox(): return -1 self.purgeparamsMarginal() setupOK = super().setupApprox(*args, **kwargs) if self.matchState: self._postApplyC() return setupOK diff --git a/rrompy/reduction_methods/standard/generic_standard_approximant.py b/rrompy/reduction_methods/standard/generic_standard_approximant.py index 5d918f5..7ccd778 100644 --- a/rrompy/reduction_methods/standard/generic_standard_approximant.py +++ b/rrompy/reduction_methods/standard/generic_standard_approximant.py @@ -1,217 +1,217 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . 
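# Editorial sketch (not part of the original module): the standard
# approximants built on top of this base class are typically driven as
#
#     approx = SomeStandardApproximant(HFEngine, mu0 = mu0,
#                                      approxParameters = params,
#                                      verbosity = 10)
#     approx.setupApprox()              # sample and build the reduced model
#     approx.addSamplePoints(extraMus)  # optional later enrichment
#
# where SomeStandardApproximant, HFEngine, mu0, params, and extraMus are
# placeholders; as enforced below, addSamplePoints refuses to run before
# setupApprox has initialized the reduced model.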
# import numpy as np from copy import deepcopy as copy from rrompy.reduction_methods.base.generic_approximant import ( GenericApproximant) from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.base.types import Np2D, paramList from rrompy.utilities.exception_manager import (RROMPyException, RROMPyAssert, RROMPyWarning) __all__ = ['GenericStandardApproximant'] class GenericStandardApproximant(GenericApproximant): """ ROM interpolant computation for parametric problems (ABSTRACT). Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'S': total number of samples current approximant relies upon; - 'sampler': sample point generator. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. mus: Array of snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of samples current approximant relies upon; - 'sampler': sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. S: Number of solution snapshots over which current approximant is based upon. sampler: Sample point generator. muBounds: list of bounds for parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. """ def __init__(self, *args, **kwargs): self._preInit() from rrompy.parameter.parameter_sampling import EmptySampler as ES self._addParametersToList([], [], ["sampler"], [ES()]) super().__init__(*args, **kwargs) self._postInit() @property def mus(self): """Value of mus. 
Its assignment may reset snapshots.""" return self._mus @mus.setter def mus(self, mus): mus = self.checkParameterList(mus) musOld = copy(self.mus) if hasattr(self, '_mus') else None if (musOld is None or len(mus) != len(musOld) or not mus == musOld): self.resetSamples() self._mus = mus @property def muBounds(self): """Value of muBounds.""" return self.sampler.lims @property def sampler(self): """Value of sampler.""" return self._sampler @sampler.setter def sampler(self, sampler): if 'generatePoints' not in dir(sampler): raise RROMPyException("Sampler type not recognized.") if hasattr(self, '_sampler') and self._sampler is not None: samplerOld = self.sampler self._sampler = copy(sampler) self._approxParameters["sampler"] = self.sampler if not 'samplerOld' in locals() or samplerOld != self.sampler: self.resetSamples() def setSamples(self, samplingEngine, merge : bool = False): """Copy samplingEngine and samples.""" vbMng(self, "INIT", "Transfering samples.", 15) if isinstance(samplingEngine, (str, list, tuple,)): self.setupSampling() self.samplingEngine.load(samplingEngine, merge) elif merge: try: selfkeys = self.samplingEngine.feature_keys for key in samplingEngine.feature_keys: if key in selfkeys: self.samplingEngine._mergeFeature(key, samplingEngine.feature_vals[key]) except: RROMPyWarning(("Sample merge failed. Falling back to complete " "sampling engine replacement.")) self.samplingEngine = copy(samplingEngine) else: self.samplingEngine = copy(samplingEngine) if self.POD != 0 and (self.samplingEngine.nsamples != len(self.samplingEngine.samples_normal)): RROMPyWarning(("Assigning non-POD sampling engine to POD " "approximant is unstable. Declassing local " "POD to 0.")) self._POD = 0 self._mus = copy(self.samplingEngine.mus) self.scaleFactor = self.samplingEngine.scaleFactor vbMng(self, "DEL", "Done transfering samples.", 15) def computeSnapshots(self): """Compute snapshots of solution map.""" RROMPyAssert(self._mode, message = "Cannot start snapshot computation.") if self.samplingEngine.nsamples != self.S: self.computeScaleFactor() self.samplingEngine.scaleFactor = self.scaleFactorDer - vbMng(self, "INIT", "Starting computation of snapshots.", 5) + vbMng(self, "INIT", "Starting computation of snapshots.", 10) self.mus = self.sampler.generatePoints(self.S) while len(self.mus) > self.S: self.mus.pop() self.samplingEngine.iterSample(self.mus) - vbMng(self, "DEL", "Done computing snapshots.", 5) + vbMng(self, "DEL", "Done computing snapshots.", 10) def computeScaleFactor(self): """Compute parameter rescaling factor.""" self.scaleFactor = .5 * np.abs(( self.mapParameterList(self.muBounds[0]) - self.mapParameterList(self.muBounds[1]))[0]) def _setupTrainedModel(self, pMat:Np2D, pMatUpdate : bool = False, collapsed : bool = False): if collapsed: pMat, pMatUpdate = 1., False else: if self.POD == 1 and not ( hasattr(self.HFEngine.C, "is_mu_independent") and self.HFEngine.C.is_mu_independent in self._output_lvl): raise RROMPyException(("Cannot apply mu-dependent C to " "orthonormalized samples.")) vbMng(self, "INIT", "Extracting system output from state.", 35) pMat = self.HFEngine.applyC(pMat, self.mus) vbMng(self, "DEL", "Done extracting system output.", 35) if self.trainedModel is None: self.trainedModel = self.tModelType() self.trainedModel.verbosity = self.verbosity self.trainedModel.timestamp = self.timestamp datadict = {"mu0": self.mu0, "mus": copy(self.mus), "projMat": pMat, "scaleFactor": self.scaleFactor, "parameterMap": self.HFEngine.parameterMap} self.trainedModel.data = 
self.initializeModelData(datadict)[0] else: self.trainedModel = self.trainedModel if pMatUpdate: self.trainedModel.data.projMat = np.hstack( (self.trainedModel.data.projMat, pMat)) else: self.trainedModel.data.projMat = copy(pMat) self.trainedModel.data.mus = copy(self.mus) if collapsed: self.trainedModel.data._collapsed = True def addSamplePoints(self, mus:paramList) -> int: """Add sample points to reduced model.""" if not self.checkComputedApprox(): raise RROMPyException(("Cannot add samples before initializing " "reduced model through setupApprox.")) RROMPyAssert(self._mode, message = "Cannot add sample points.") mus = self.checkParameterList(mus) vbMng(self, "INIT", "Adding sample point{} at {}.".format("s" * (len(mus) > 1), mus), 5) for mu in mus: self.mus.append(mu) self.samplingEngine.nextSample(mu) self._S = len(self.mus) val = self.setupApprox() vbMng(self, "DEL", "Done adding sample points.", 5) return val diff --git a/rrompy/reduction_methods/standard/greedy/rational_interpolant_greedy.py b/rrompy/reduction_methods/standard/greedy/rational_interpolant_greedy.py index fdf9cb6..a7a3fcd 100755 --- a/rrompy/reduction_methods/standard/greedy/rational_interpolant_greedy.py +++ b/rrompy/reduction_methods/standard/greedy/rational_interpolant_greedy.py @@ -1,440 +1,465 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . # from copy import deepcopy as copy import numpy as np from rrompy.hfengines.base.linear_affine_engine import checkIfAffine from .generic_greedy_approximant import GenericGreedyApproximant from rrompy.utilities.poly_fitting.polynomial import polyvander from rrompy.utilities.numerical import dot from rrompy.utilities.numerical.degree import degreeToDOFs from rrompy.utilities.expression import expressionEvaluator from rrompy.reduction_methods.standard import RationalInterpolant from rrompy.utilities.base.types import Np1D, Tuple, paramVal, List from rrompy.utilities.base.verbosity_depth import verbosityManager as vbMng from rrompy.utilities.poly_fitting import customFit from rrompy.utilities.exception_manager import (RROMPyWarning, RROMPyException, RROMPyAssert, RROMPy_FRAGILE) from rrompy.sampling import sampleList, emptySampleList __all__ = ['RationalInterpolantGreedy'] class RationalInterpolantGreedy(GenericGreedyApproximant, RationalInterpolant): """ ROM greedy rational interpolant computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. 
POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'S': number of starting training points; - 'polydegreetype': type of polynomial degree; allowed values are '*' or '*_*', with * either 'TENSOR' or 'TOTAL'; defaults to 'TOTAL'; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. maximum allowed; - 'MAuxiliary': degree of rational interpolant numerator with respect to non-pivot parameters; defaults to 0; - 'NAuxiliary': degree of rational interpolant denominator with respect to non-pivot parameters; defaults to 0; - 'sampler': sample point generator; + - 'rationalMode': mode of rational approximation; allowed values + include 'MINIMAL[_STATE]' and 'BARYCENTRIC[_STATE]'; defaults + to 'MINIMAL'; - 'greedyTol': uniform error tolerance for greedy algorithm; defaults to 1e-2; - 'collinearityTol': collinearity tolerance for greedy algorithm; defaults to 0.; - 'maxIter': maximum number of greedy steps; defaults to 1e2; - 'nTestPoints': number of test points; defaults to 5e2; - 'samplerTrainSet': training sample points generator; defaults to sampler; - 'polybasis': type of basis for interpolation; defaults to 'MONOMIAL'; - 'errorEstimatorKind': kind of error estimator; available values include 'AFFINE', 'DISCREPANCY', 'LOOK_AHEAD', 'LOOK_AHEAD_RES', 'LOOK_AHEAD_OUTPUT', and 'NONE'; defaults to 'NONE'; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. mus: Array of snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; + - 'rationalMode': mode of rational approximation; - 'polydegreetype': type of polynomial degree; - 'N': degree of rational interpolant denominator; - 'MAuxiliary': degree of rational interpolant numerator with respect to non-pivot parameters; defaults to 0; - 'NAuxiliary': degree of rational interpolant denominator with respect to non-pivot parameters; defaults to 0; - 'greedyTol': uniform error tolerance for greedy algorithm; - 'collinearityTol': collinearity tolerance for greedy algorithm; - 'maxIter': maximum number of greedy steps; - 'nTestPoints': number of test points; - 'samplerTrainSet': training sample points generator; + - 'polybasis': type of polynomial basis for interpolation; - 'errorEstimatorKind': kind of error estimator; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for interpolation; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. 
parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of samples current approximant relies upon; - 'sampler': sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. S: number of test points. polydegreetype: Type of polynomial degree. N: Denominator degree of approximant. MAuxiliary: Degree of rational interpolant numerator with respect to non-pivot parameters. NAuxiliary: Degree of rational interpolant denominator with respect to non-pivot parameters. sampler: Sample point generator. + rationalMode: Mode of rational approximation. greedyTol: uniform error tolerance for greedy algorithm. collinearityTol: Collinearity tolerance for greedy algorithm. maxIter: maximum number of greedy steps. nTestPoints: number of starting training points. samplerTrainSet: training sample points generator. + polybasis: Type of polynomial basis for interpolation. errorEstimatorKind: kind of error estimator. functionalSolve: Strategy for minimization of denominator functional. interpTol: tolerance for interpolation. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: tolerance for robust rational denominator management. muBounds: list of bounds for parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. """ _allowedEstimatorKinds = ["AFFINE", "DISCREPANCY", "LOOK_AHEAD", "LOOK_AHEAD_RES", "LOOK_AHEAD_OUTPUT", "NONE"] def __init__(self, *args, **kwargs): self._preInit() self._addParametersToList(["errorEstimatorKind"], ["DISCREPANCY"], - toBeExcluded = ["rationalMode", "M"]) + toBeExcluded = ["M"]) super().__init__(*args, **kwargs) self.M = "AUTO" self._postInit() @property - def rationalMode(self): return "MINIMAL_STATE" + def rationalMode(self): + """Value of rationalMode.""" + return self._rationalMode + @rationalMode.setter + def rationalMode(self, rationalMode): + try: + rationalMode = rationalMode.upper().strip().replace(" ","") + if rationalMode in ["MINIMAL", "BARYCENTRIC"]: + rationalMode += "_STATE" + if rationalMode not in ["MINIMAL_STATE", "BARYCENTRIC_STATE"]: + raise RROMPyException("Prescribed mode not recognized.") + except Exception as E: + RROMPyWarning(str(E) + " Overriding to 'MINIMAL'.") + rationalMode = "MINIMAL_STATE" + self._rationalMode = rationalMode + self._approxParameters["rationalMode"] = self.rationalMode @property def errorEstimatorKind(self): """Value of errorEstimatorKind.""" return self._errorEstimatorKind @errorEstimatorKind.setter def errorEstimatorKind(self, errorEstimatorKind): errorEstimatorKind = errorEstimatorKind.upper() if errorEstimatorKind not in self._allowedEstimatorKinds: RROMPyWarning(("Error estimator kind not recognized. 
Overriding " "to 'NONE'.")) errorEstimatorKind = "NONE" self._errorEstimatorKind = errorEstimatorKind self._approxParameters["errorEstimatorKind"] = self.errorEstimatorKind def _polyvanderAuxiliary(self, mus, deg, *args, **kwargs): return polyvander(mus, deg, *args, **kwargs) def getErrorEstimatorDiscrepancy(self, mus:Np1D) -> Np1D: """Discrepancy-based residual estimator.""" checkIfAffine(self.HFEngine, "apply discrepancy-based error estimator", False, self._affine_lvl) mus = self.checkParameterList(mus) muCTest = self.trainedModel.centerNormalize(mus) tMverb, self.trainedModel.verbosity = self.trainedModel.verbosity, 0 QTest = self.trainedModel.getQVal(mus) QTzero = np.where(QTest == 0.)[0] if len(QTzero) > 0: RROMPyWarning(("Adjusting estimator to avoid division by " "numerically zero denominator.")) QTest[QTzero] = (np.finfo(np.complex).eps / len(self.trainedModel.data.Q)) self.HFEngine.buildA() self.HFEngine.buildb() nAs, nbs = self.HFEngine.nAs, self.HFEngine.nbs muTrainEff = self.mapParameterList(self.mus) muTestEff = self.mapParameterList(mus) PTrain = self.trainedModel.getPVal(self.mus).data.T QTrain = self.trainedModel.getQVal(self.mus) QTzero = np.where(QTrain == 0.)[0] if len(QTzero) > 0: RROMPyWarning(("Adjusting estimator to avoid division by " "numerically zero denominator.")) QTrain[QTzero] = (np.finfo(np.complex).eps / len(self.trainedModel.data.Q)) PTest = self.trainedModel.getPVal(mus).data self.trainedModel.verbosity = tMverb radiusAbTrain = np.empty((self.S, nAs * self.S + nbs), dtype = np.complex) radiusA = np.empty((self.S, nAs, len(mus)), dtype = np.complex) radiusb = np.empty((nbs, len(mus)), dtype = np.complex) for j, thA in enumerate(self.HFEngine.thAs): idxs = j * self.S + np.arange(self.S) radiusAbTrain[:, idxs] = expressionEvaluator(thA[0], muTrainEff, (self.S, 1)) * PTrain radiusA[:, j] = PTest * expressionEvaluator(thA[0], muTestEff, (len(mus),)) for j, thb in enumerate(self.HFEngine.thbs): idx = nAs * self.S + j radiusAbTrain[:, idx] = QTrain * expressionEvaluator(thb[0], muTrainEff, (self.S,)) radiusb[j] = QTest * expressionEvaluator(thb[0], muTestEff, (len(mus),)) QRHSNorm2 = self._affineResidualMatricesContraction(radiusb) deg = [self.M] + [self.MAuxiliary] * (self.npar - 1) vanTrain = self._polyvanderAuxiliary(self._musUniqueCN, deg, self.polybasis, self._derIdxs, self._reorder) interpPQ = customFit(vanTrain, radiusAbTrain, rcond = self.interpTol) vanTest = self._polyvanderAuxiliary(muCTest, deg, self.polybasis) DradiusAb = vanTest.dot(interpPQ) radiusA = (radiusA - DradiusAb[:, : - nbs].reshape(len(mus), -1, self.S).T) radiusb = radiusb - DradiusAb[:, - nbs :].T ff, Lf, LL = self._affineResidualMatricesContraction(radiusb, radiusA) err = np.abs((LL - 2. 
* np.real(Lf) + ff) / QRHSNorm2) ** .5 return err def getErrorEstimatorLookAhead(self, mus:Np1D, what : str = "") -> Tuple[Np1D, List[int]]: """Residual estimator based on look-ahead idea.""" err, idxMaxEst = self._EIMStep(mus) mu_muTestS = mus[idxMaxEst] app_muTestSample = self.trainedModel.getApproxReduced(mu_muTestS) if self._mode == RROMPy_FRAGILE: if what == "RES" and not self.HFEngine.isCEye: raise RROMPyException(("Cannot compute LOOK_AHEAD_RES " "estimator in fragile mode for " "non-scalar C.")) app_muTestSample = dot(self.trainedModel.data.projMat[:, : app_muTestSample.shape[0]], app_muTestSample) else: app_muTestSample = dot(self.samplingEngine.projectionMatrix, app_muTestSample) app_muTestSample = sampleList(app_muTestSample) if what == "RES": errmu = self.HFEngine.residual(mu_muTestS, app_muTestSample, post_c = False) solmu = self.HFEngine.residual(mu_muTestS, None, post_c = False) normSol = self.HFEngine.norm(solmu, dual = True) normErr = self.HFEngine.norm(errmu, dual = True) else: for j, mu in enumerate(mu_muTestS): uEx = self.samplingEngine.nextSample(mu) if what == "OUTPUT": uEx = self.HFEngine.applyC(uEx, mu) app_muTS = self.HFEngine.applyC(app_muTestSample[j], mu) if j == 0: app_muTestS = emptySampleList() app_muTestS.reset((len(app_muTS), len(mu_muTestS)), dtype = np.complex) app_muTestS[j] = app_muTS if j == 0: solmu = emptySampleList() solmu.reset((len(uEx), len(mu_muTestS)), dtype = np.complex) solmu[j] = uEx if what == "OUTPUT": app_muTestSample = app_muTestS errmu = solmu - app_muTestSample normSol = self.HFEngine.norm(solmu, is_state = what != "OUTPUT") normErr = self.HFEngine.norm(errmu, is_state = what != "OUTPUT") - errTestMean = np.mean(err[idxMaxEst]) - errMean = np.mean(normErr / normSol) - return err * errMean / errTestMean, idxMaxEst + errTest, errRel = err[idxMaxEst], normErr / normSol + errMean = errTest.conj().dot(errRel) / errTest.conj().dot(errTest) + return errMean * err, idxMaxEst def getErrorEstimatorNone(self, mus:Np1D) -> Np1D: """EIM-based residual estimator.""" err = self._EIMStep(mus, True) err *= self.greedyTol / np.mean(err) return err def _EIMStep(self, mus:Np1D, only_one : bool = False) -> Tuple[Np1D, List[int]]: """EIM step to find next magic point.""" mus = self.checkParameterList(mus) tMverb, self.trainedModel.verbosity = self.trainedModel.verbosity, 0 QTest = self.trainedModel.getQVal(mus) QTzero = np.where(QTest == 0.)[0] if len(QTzero) > 0: RROMPyWarning(("Adjusting estimator to avoid division by " "numerically zero denominator.")) QTest[QTzero] = (np.finfo(np.complex).eps / len(self.trainedModel.data.Q)) muCTest = self.trainedModel.centerNormalize(mus) muCTrain = self.trainedModel.centerNormalize(self.mus) self.trainedModel.verbosity = tMverb deg = [self.M] + [self.MAuxiliary] * (self.npar - 1) - S0 = degreeToDOFs(deg, self.polydegreetypeM) - if S0 != len(muCTrain): - raise RROMPyException(("Cannot evaluate error estimator for LS " - "interpolation.")) - deg[0] = self.M + 1 + S0 = len(muCTrain) + if S0 != degreeToDOFs(deg, self.polydegreetypeM): + if self.npar > 1: + raise RROMPyException(("Evaluating error estimator without " + "interpolation is not allowed.")) + RROMPyWarning(("Evaluating error estimator without " + "interpolation might lead to instabilities.")) + deg[0] = self.S - 1 + deg[0] = deg[0] + 1 vanTest = self._polyvanderAuxiliary(muCTest, deg, self.polybasis, degreetype = self.polydegreetypeM) vanTest, vanTestNext = vanTest[:, : S0], vanTest[:, S0 :] idxsTest = np.arange(vanTestNext.shape[1]) basis = 
np.zeros((len(idxsTest), 0), dtype = float) idxMaxEst = np.empty(len(idxsTest), dtype = int) while len(idxsTest) > 0: vanTrial = self._polyvanderAuxiliary(muCTrain, deg, self.polybasis, degreetype = self.polydegreetypeM) vanTrial, vanTrialNext = vanTrial[:, : S0], vanTrial[:, S0 :] vanTrial = np.hstack((vanTrial, vanTrialNext.dot(basis).reshape( len(vanTrialNext), basis.shape[1]))) valuesTrial = vanTrialNext[:, idxsTest] vanTestEff = np.hstack((vanTest, vanTestNext.dot(basis).reshape( len(vanTestNext), basis.shape[1]))) vanTestNextEff = vanTestNext[:, idxsTest] - coeffTest = np.linalg.solve(vanTrial, valuesTrial) + coeffTest = customFit(vanTrial, valuesTrial) errTest = np.abs((vanTestNextEff - vanTestEff.dot(coeffTest)) / np.expand_dims(QTest, 1)) if only_one: return errTest elif basis.shape[1] == 0: errTest0 = errTest[:, 0] idxMaxErr = np.unravel_index(np.argmax(errTest), errTest.shape) idxMaxEst[idxsTest[idxMaxErr[1]]] = idxMaxErr[0] muCTrain.append(muCTest[idxMaxErr[0]]) basis = np.pad(basis, [(0, 0), (0, 1)], "constant") basis[idxsTest[idxMaxErr[1]], -1] = 1. idxsTest = np.delete(idxsTest, idxMaxErr[1]) return errTest0, idxMaxEst def errorEstimator(self, mus:Np1D, return_max : bool = False) -> Np1D: """Standard residual-based error estimator.""" setupOK = self.setupApproxLocal() if setupOK > 0: err = np.empty(len(mus)) err[:] = np.nan if not return_max: return err return err, [- setupOK], np.nan mus = self.checkParameterList(mus) vbMng(self.trainedModel, "INIT", "Evaluating error estimator at mu = {}.".format(mus), 10) if self.errorEstimatorKind == "AFFINE": err = self.getErrorEstimatorAffine(mus) else: self._setupInterpolationIndices() if self.errorEstimatorKind == "DISCREPANCY": err = self.getErrorEstimatorDiscrepancy(mus) elif self.errorEstimatorKind[: 10] == "LOOK_AHEAD": err, idxMaxEst = self.getErrorEstimatorLookAhead(mus, self.errorEstimatorKind[11 :]) else: #if self.errorEstimatorKind == "NONE": err = self.getErrorEstimatorNone(mus) vbMng(self.trainedModel, "DEL", "Done evaluating error estimator.", 10) if not return_max: return err if self.errorEstimatorKind[: 10] != "LOOK_AHEAD": idxMaxEst = [np.argmax(err)] return err, idxMaxEst, err[idxMaxEst] _warnPlottingNormalization = 1 def plotEstimator(self, *args, **kwargs): super().plotEstimator(*args, **kwargs) if (self.errorEstimatorKind == "NONE" and self._warnPlottingNormalization): RROMPyWarning(("Error estimator arbitrarily normalized before " "plotting.")) self._warnPlottingNormalization = 0 def greedyNextSample(self, *args, **kwargs) -> Tuple[Np1D, int, float, paramVal]: """Compute next greedy snapshot of solution map.""" RROMPyAssert(self._mode, message = "Cannot add greedy sample.") err, muidx, maxErr, muNext = super().greedyNextSample(*args, **kwargs) if (self.errorEstimatorKind == "NONE" and not np.isnan(maxErr) and not np.isinf(maxErr)): maxErr = None return err, muidx, maxErr, muNext def _preliminaryTraining(self): """Initialize starting snapshots of solution map.""" RROMPyAssert(self._mode, message = "Cannot start greedy algorithm.") if self.samplingEngine.nsamples > 0: return super()._preliminaryTraining() def setupApproxLocal(self) -> int: """Compute rational interpolant.""" if self.checkComputedApprox(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") self.verbosity -= 10 vbMng(self, "INIT", "Setting up local approximant.", 5) pMat = self.samplingEngine.projectionMatrix firstRun = self.trainedModel is None if not firstRun: pMat = pMat[:, len(self.trainedModel.data.mus) :] 
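# Editorial comment (not in the original source): after the first run only
# the projection-matrix columns belonging to newly added sample points are
# passed on, and _setupTrainedModel appends them to the stored basis; the
# denominator Q is then recomputed from scratch, while _setupRational
# rebuilds the numerator and the subsequent degree check flags a drop in
# numerator degree as an instability.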
self._setupTrainedModel(pMat, not firstRun) Q = self._setupDenominator() unstable = 0 if not firstRun: _M = self.M self._setupRational(Q) if not firstRun and self.M < _M: RROMPyWarning(("Instability in numerator computation. " "Aborting.")) unstable = 1 if not unstable: self.trainedModel.data.approxParameters = copy( self.approxParameters) vbMng(self, "DEL", "Done setting up local approximant.", 5) self.verbosity += 10 return unstable def setupApprox(self, plotEst : str = "NONE") -> int: val = super().setupApprox(plotEst) if val == 0: self._setupRational(self.trainedModel.data.Q, self.trainedModel.data.P) self.trainedModel.data.approxParameters = copy( self.approxParameters) return val diff --git a/rrompy/reduction_methods/standard/rational_interpolant.py b/rrompy/reduction_methods/standard/rational_interpolant.py old mode 100755 new mode 100644 index b1ef6b9..6c5dbd6 --- a/rrompy/reduction_methods/standard/rational_interpolant.py +++ b/rrompy/reduction_methods/standard/rational_interpolant.py @@ -1,1016 +1,1006 @@ # Copyright (C) 2018-2020 by the RROMPy authors # # This file is part of RROMPy. # # RROMPy is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # RROMPy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with RROMPy. If not, see . # from copy import deepcopy as copy import numpy as np from scipy.sparse import coo_matrix from scipy.linalg import eig, eigvals, khatri_rao from .generic_standard_approximant import GenericStandardApproximant from rrompy.utilities.poly_fitting.polynomial import ( polybases, polyfitname, polyvander as pvP, polyTimes, PolynomialInterpolator as PI, PolynomialInterpolatorNodal as PIN) from rrompy.utilities.poly_fitting.heaviside import rational2heaviside from rrompy.utilities.base.types import (Np1D, Np2D, Tuple, List, paramList, interpEng) from rrompy.utilities.base import verbosityManager as vbMng from rrompy.utilities.numerical import pseudoInverse, dot, baseDistanceMatrix from rrompy.utilities.numerical.factorials import multifactorial from rrompy.utilities.numerical.degree import (mapTotalToTensor, degreeToDOFs, reduceDegreeMN, reduceDegreeN) from rrompy.utilities.numerical.hash_derivative import (nextDerivativeIndices, hashDerivativeToIdx as hashD, hashIdxToDerivative as hashI) from rrompy.utilities.exception_manager import (RROMPyException, RROMPyAssert, RROMPyWarning) __all__ = ['RationalInterpolant'] def polyTimesTable(P:interpEng, mus:Np1D, reorder:List[int], derIdxs:List[List[List[int]]], scl : Np1D = None) -> Np2D: """Table of polynomial products.""" if not isinstance(P, PI): raise RROMPyException(("Polynomial to evaluate must be a polynomial " "interpolator.")) Pvals = [[0.] 
* len(derIdx) for derIdx in derIdxs] for j, derIdx in enumerate(derIdxs): nder = len(derIdx) for der in range(nder): derI = hashI(der, P.npar) Pvals[j][der] = P([mus[j]], derI, scl) / multifactorial(derI) return blockDiagDer(Pvals, reorder, derIdxs, format = "dense") def vanderInvTable(vanInv:Np2D, reorder:List[int], derIdxs:List[List[List[int]]]) -> Np2D: """Table of Vandermonde pseudo-inverse.""" S = len(reorder) Ts = [None] * len(vanInv) for k in range(len(vanInv)): invLocs = [None] * len(derIdxs) idxGlob = 0 for j, derIdx in enumerate(derIdxs): nder = len(derIdx) idxGlob += nder idxLoc = np.arange(S)[(reorder >= idxGlob - nder) * (reorder < idxGlob)] invLocs[j] = vanInv[k, idxLoc] Ts[k] = blockDiagDer(invLocs, reorder, derIdxs, [2, 1, 0]) return Ts def blockDiagDer(vals:List[Np1D], reorder:List[int], derIdxs:List[List[List[int]]], permute : List[int] = None, format : str = "sparse") -> Np2D: """Table of derivative values for point confluence.""" S = len(reorder) if format == "sparse": data, idxI, idxJ = [], [], [] else: #format == "dense": T = np.zeros((S, S), dtype = np.complex) if permute is None: permute = [0, 1, 2] idxGlob = 0 for j, derIdx in enumerate(derIdxs): nder = len(derIdx) idxGlob += nder idxLoc = np.arange(S)[(reorder >= idxGlob - nder) * (reorder < idxGlob)] val = vals[j] for derI, derIdxI in enumerate(derIdx): for derJ, derIdxJ in enumerate(derIdx): diffIdx = [x - y for (x, y) in zip(derIdxI, derIdxJ)] if all([x >= 0 for x in diffIdx]): diffj = hashD(diffIdx) i1, i2, i3 = np.array([derI, derJ, diffj])[permute] if format == "sparse": data += [val[i3]] idxI += [idxLoc[i1]] idxJ += [idxLoc[i2]] else: T[idxLoc[i1], idxLoc[i2]] = val[i3] if format == "sparse": T = coo_matrix((data, (idxI, idxJ)), shape = (S, S)).tocsr() return T def barycentricRoots(coeffs:Np1D, supp:Np1D, return_vectors : bool = False) -> Np1D: m = len(supp) if len(coeffs) == m: coeffs = np.append(0., coeffs) arrow = np.diag(np.append(0., supp) + 0.j) arrow[0], arrow[1 :, 0] = coeffs, 1. active = np.diag(np.append(0., np.ones(m))) if return_vectors: roots, vectors = eig(arrow, active) else: roots = eigvals(arrow, active) badRoots = np.isinf(roots) + np.isnan(roots) == False if return_vectors: return roots[badRoots], vectors[:, badRoots] return roots[badRoots] class RationalInterpolant(GenericStandardApproximant): """ ROM rational interpolant computation for parametric problems. Args: HFEngine: HF problem solver. mu0(optional): Default parameter. Defaults to 0. approxParameters(optional): Dictionary containing values for main parameters of approximant. Recognized keys are: - 'POD': kind of snapshots orthogonalization; allowed values include 0, 1/2, and 1; defaults to 1, i.e. POD; - 'scaleFactorDer': scaling factors for derivative computation; defaults to 'AUTO'; - 'S': total number of samples current approximant relies upon; - 'sampler': sample point generator; - 'rationalMode': mode of rational approximation; allowed values include 'MINIMAL[_STATE]', 'MINIMAL_OUTPUT', + 'BARYCENTRIC[_STATE]', 'BARYCENTRIC_OUTPUT', 'STANDARD[_STATE]', and 'STANDARD_OUTPUT'; defaults to 'MINIMAL'; - 'polybasis': type of polynomial basis for interpolation; defaults to 'MONOMIAL'; - 'polydegreetype': type of polynomial degree; allowed values are '*' or '*_*', with * either 'TENSOR' or 'TOTAL'; defaults to 'TOTAL'; - 'M': degree of rational interpolant numerator; defaults to 'AUTO', i.e. maximum allowed; - 'N': degree of rational interpolant denominator; defaults to 'AUTO', i.e. 
maximum allowed; - 'MAuxiliary': degree of rational interpolant numerator with respect to non-pivot parameters; defaults to 0; - 'NAuxiliary': degree of rational interpolant denominator with respect to non-pivot parameters; defaults to 0; - 'functionalSolve': strategy for minimization of denominator - functional; allowed values include 'NORM', 'DOMINANT', - 'BARYCENTRIC[_NORM]', and 'BARYCENTRIC_AVERAGE' (check pdf in - main folder for explanation); defaults to 'NORM'; + functional; allowed values include 'NORM' and 'DOMINANT'; + defaults to 'NORM'; - 'interpTol': tolerance for interpolation; defaults to None; - 'forceQReal': force denominator to have real coefficients; defaults to False; - 'polyTruncateTol': tolerance for truncation of rational terms; defaults to 0; - 'QTol': tolerance for robust rational denominator management; defaults to 0. Defaults to empty dict. verbosity(optional): Verbosity level. Defaults to 10. Attributes: HFEngine: HF problem solver. mu0: Default parameter. mus: Array of snapshot parameters. approxParameters: Dictionary containing values for main parameters of approximant. Recognized keys are in parameterList. parameterListSoft: Recognized keys of soft approximant parameters: - 'POD': kind of snapshots orthogonalization; - 'scaleFactorDer': scaling factors for derivative computation; - 'rationalMode': mode of rational approximation; - 'polybasis': type of polynomial basis for interpolation; - 'polydegreetype': type of polynomial degree; - 'M': degree of rational interpolant numerator; - 'N': degree of rational interpolant denominator; - 'MAuxiliary': degree of rational interpolant numerator with respect to non-pivot parameters; defaults to 0; - 'NAuxiliary': degree of rational interpolant denominator with respect to non-pivot parameters; defaults to 0; - 'functionalSolve': strategy for minimization of denominator functional; - 'interpTol': tolerance for interpolation via numpy.polyfit; - 'forceQReal': force denominator to have real coefficients; - 'polyTruncateTol': tolerance for truncation of rational terms; - 'QTol': tolerance for robust rational denominator management. parameterListCritical: Recognized keys of critical approximant parameters: - 'S': total number of samples current approximant relies upon; - 'sampler': sample point generator. verbosity: Verbosity level. POD: Kind of snapshots orthogonalization. scaleFactorDer: Scaling factors for derivative computation. S: Number of solution snapshots over which current approximant is based upon. sampler: Sample point generator. rationalMode: Mode of rational approximation. polybasis: Type of polynomial basis for interpolation. polydegreetype: Type of polynomial degree. M: Numerator degree of approximant. N: Denominator degree of approximant. MAuxiliary: Degree of rational interpolant numerator with respect to non-pivot parameters. NAuxiliary: Degree of rational interpolant denominator with respect to non-pivot parameters. functionalSolve: Strategy for minimization of denominator functional. interpTol: Tolerance for interpolation via numpy.polyfit. forceQReal: Force denominator to have real coefficients. polyTruncateTol: Tolerance for truncation of rational terms. QTol: Tolerance for robust rational denominator management. muBounds: list of bounds for parameter values. samplingEngine: Sampling engine. uHF: High fidelity solution(s) with parameter(s) lastSolvedHF as sampleList. lastSolvedHF: Parameter(s) corresponding to last computed high fidelity solution(s) as parameterList. 
uApproxReduced: Reduced approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApproxReduced: Parameter(s) corresponding to last computed reduced approximate solution(s) as parameterList. uApprox: Approximate solution(s) with parameter(s) lastSolvedApprox as sampleList. lastSolvedApprox: Parameter(s) corresponding to last computed approximate solution(s) as parameterList. Q: Numpy 1D vector containing complex coefficients of approximant denominator. P: Numpy 2D vector whose columns are FE dofs of coefficients of approximant numerator. """ - - _allowedFunctionalSolveKinds = ["NORM", "DOMINANT", "BARYCENTRIC_NORM", - "BARYCENTRIC_AVERAGE"] def __init__(self, *args, **kwargs): self._preInit() self._addParametersToList(["rationalMode", "polybasis", "M", "N", "polydegreetype", "MAuxiliary", "NAuxiliary", "functionalSolve", "interpTol", "forceQReal", "polyTruncateTol", "QTol"], ["MINIMAL", "MONOMIAL", "AUTO", "AUTO", "TOTAL", 0, 0, "NORM", -1, 0, 0., 0.]) super().__init__(*args, **kwargs) self._postInit() @property def tModelType(self): from .trained_model.trained_model_rational import TrainedModelRational return TrainedModelRational @property def rationalMode(self): """Value of rationalMode.""" return self._rationalMode @rationalMode.setter def rationalMode(self, rationalMode): try: rationalMode = rationalMode.upper().strip().replace(" ","") - if rationalMode in ["MINIMAL", "STANDARD"]: + if rationalMode in ["MINIMAL", "BARYCENTRIC", "STANDARD"]: rationalMode += "_STATE" if rationalMode not in ["MINIMAL_STATE", "MINIMAL_OUTPUT", + "BARYCENTRIC_STATE", "BARYCENTRIC_OUTPUT", "STANDARD_STATE", "STANDARD_OUTPUT"]: raise RROMPyException("Prescribed mode not recognized.") except Exception as E: RROMPyWarning(str(E) + " Overriding to 'MINIMAL'.") rationalMode = "MINIMAL_STATE" self._rationalMode = rationalMode self._approxParameters["rationalMode"] = self.rationalMode @property def polybasis(self): """Value of polybasis.""" return self._polybasis @polybasis.setter def polybasis(self, polybasis): try: polybasis = polybasis.upper().strip().replace(" ","") if polybasis not in polybases: raise RROMPyException("Prescribed polybasis not recognized.") except Exception as E: RROMPyWarning(str(E) + " Overriding to 'MONOMIAL'.") polybasis = "MONOMIAL" self._polybasis = polybasis self._approxParameters["polybasis"] = self.polybasis @property def functionalSolve(self): """Value of functionalSolve.""" return self._functionalSolve @functionalSolve.setter def functionalSolve(self, functionalSolve): try: functionalSolve = functionalSolve.upper().strip().replace(" ","") - if functionalSolve == "BARYCENTRIC": functionalSolve += "_NORM" - if functionalSolve not in self._allowedFunctionalSolveKinds: + if functionalSolve not in ["NORM", "DOMINANT"]: raise RROMPyException(("Prescribed functionalSolve not " "recognized.")) except Exception as E: RROMPyWarning(str(E) + " Overriding to 'NORM'.") functionalSolve = "NORM" self._functionalSolve = functionalSolve self._approxParameters["functionalSolve"] = self.functionalSolve @property def interpTol(self): """Value of interpTol.""" return self._interpTol @interpTol.setter def interpTol(self, interpTol): self._interpTol = interpTol self._approxParameters["interpTol"] = self.interpTol @property def M(self): """Value of M.""" return self._M @M.setter def M(self, M): if isinstance(M, str): M = M.strip().replace(" ","") if "-" not in M: M = M + "-0" self._M_isauto, self._M_shift = True, int(M.split("-")[-1]) M = 0 if M < 0: raise RROMPyException("M must 
be non-negative.") self._M = M self._approxParameters["M"] = self.M def _setMAuto(self): - if (self.rationalMode != "STANDARD_OUTPUT" - or self.functionalSolve[:11] == "BARYCENTRIC"): + if self.rationalMode != "STANDARD_OUTPUT": SEff = self.S else: dim = self.HFEngine.outputdim if hasattr(self, "_N_isauto") and self._N_isauto: M, N = reduceDegreeMN(self.S - 2, self.S - 1, self.S, self.npar, self.polydegreetype, self.MAuxiliary, self.NAuxiliary, dim, 0) self.M = max(0, M - self._M_shift) self.N = max(0, N - self._N_shift) vbMng(self, "MAIN", "Automatically setting M to {} and N to {}.".format( self.M, self.N), 25) return degN = degreeToDOFs([self.N] + [self.NAuxiliary] * (self.npar - 1), self.polydegreetypeM) - 1 SEff = int(np.floor(self.S - degN / dim)) M = reduceDegreeN(SEff - 1, SEff, self.npar, self.polydegreetypeM, self.MAuxiliary) self.M = max(0, M - self._M_shift) vbMng(self, "MAIN", "Automatically setting M to {}.".format(self.M), 25) @property def N(self): """Value of N.""" return self._N @N.setter def N(self, N): if isinstance(N, str): N = N.strip().replace(" ","") if "-" not in N: N = N + "-0" self._N_isauto, self._N_shift = True, int(N.split("-")[-1]) N = 0 if N < 0: raise RROMPyException("N must be non-negative.") self._N = N self._approxParameters["N"] = self.N def _setNAuto(self): if self.rationalMode[-6:] == "OUTPUT": dim = self.HFEngine.outputdim else: dim = self.HFEngine.spacedim if (self.rationalMode[:7] == "MINIMAL" - or self.functionalSolve[:11] == "BARYCENTRIC"): - if self.rationalMode[:7] == "MINIMAL": - SEff = min(self.S, dim + 1) - else: #if self.rationalMode[:7] == "STANDARD": - SEff = int(np.floor(dim * self.S / (dim + 1.))) + 1 + or self.rationalMode[:11] == "BARYCENTRIC"): + SEff = min(self.S, dim + 1) else: if hasattr(self, "_M_isauto") and self._M_isauto: if self.rationalMode == "STANDARD_OUTPUT": return self._setMAuto() M = reduceDegreeN(self.S - 2, self.S - 1, self.npar, self.polydegreetypeM, self.MAuxiliary) else: M = reduceDegreeN(self.M, self.S - 1, self.npar, self.polydegreetypeM, self.MAuxiliary) degM = degreeToDOFs([M] + [self.MAuxiliary] * (self.npar - 1), self.polydegreetypeN) SEff = min(self.S, dim * (self.S - degM) + 1) N = reduceDegreeN(SEff - 1, SEff, self.npar, self.polydegreetypeN, self.NAuxiliary) self.N = max(0, N - self._N_shift) vbMng(self, "MAIN", "Automatically setting N to {}.".format(self.N), 25) @property def MAuxiliary(self): """Value of MAuxiliary.""" return self._MAuxiliary @MAuxiliary.setter def MAuxiliary(self, MAuxiliary): if MAuxiliary < 0: raise RROMPyException("MAuxiliary must be non-negative.") self._MAuxiliary = MAuxiliary self._approxParameters["MAuxiliary"] = self.MAuxiliary @property def NAuxiliary(self): """Value of NAuxiliary.""" return self._NAuxiliary @NAuxiliary.setter def NAuxiliary(self, NAuxiliary): if NAuxiliary < 0: raise RROMPyException("NAuxiliary must be non-negative.") self._NAuxiliary = NAuxiliary self._approxParameters["NAuxiliary"] = self.NAuxiliary @property def polydegreetype(self): """Value of polydegreetype.""" return self._polydegreetype @polydegreetype.setter def polydegreetype(self, polydegreetype): try: polydegreetype = polydegreetype.upper().strip().replace(" ","") if "_" not in polydegreetype: polydegreetype = "{0}_{0}".format(polydegreetype) polydegreetypes = polydegreetype.split("_") if not np.all([pdt in ["TOTAL", "TENSOR"] for pdt in polydegreetypes]): raise RROMPyException(("Prescribed polydegreetype not " "recognized.")) except Exception as E: RROMPyWarning(str(E) + " Overriding to 
'TENSOR_TOTAL'.") polydegreetype = "TENSOR_TOTAL" self._polydegreetype = polydegreetype self._approxParameters["polydegreetype"] = self.polydegreetype @property def polydegreetypeM(self): return self.polydegreetype.split("_")[0] @property def polydegreetypeN(self): return self.polydegreetype.split("_")[1] @property def forceQReal(self): """Value of forceQReal.""" return self._forceQReal @forceQReal.setter def forceQReal(self, forceQReal): self._forceQReal = forceQReal self._approxParameters["forceQReal"] = self.forceQReal - + @property def polyTruncateTol(self): """Value of tolerance for truncation of rational terms.""" return self._polyTruncateTol @polyTruncateTol.setter def polyTruncateTol(self, polyTruncateTol): if polyTruncateTol < 0.: RROMPyWarning(("Overriding prescribed negative numerator " "tolerance to 0.")) polyTruncateTol = 0. self._polyTruncateTol = polyTruncateTol self._approxParameters["polyTruncateTol"] = self.polyTruncateTol @property def QTol(self): """Value of tolerance for robust rational denominator management.""" return self._QTol @QTol.setter def QTol(self, QTol): if QTol < 0.: RROMPyWarning(("Overriding prescribed negative robustness " "tolerance to 0.")) QTol = 0. self._QTol = QTol self._approxParameters["QTol"] = self.QTol def resetSamples(self): """Reset samples.""" super().resetSamples() self._musUniqueCN = None self._derIdxs = None self._reorder = None def _setupInterpolationIndices(self): """Setup parameters for polyvander.""" if self._musUniqueCN is None or len(self._reorder) != len(self.mus): self._musUniqueCN, musIdxsTo, musIdxs, musCount = ( self.trainedModel.centerNormalize(self.mus).unique( return_index = True, return_inverse = True, return_counts = True)) self._musUnique = self.mus[musIdxsTo] self._derIdxs = [None] * len(self._musUniqueCN) self._reorder = np.empty(len(musIdxs), dtype = int) filled = 0 for j, cnt in enumerate(musCount): self._derIdxs[j] = nextDerivativeIndices([], self.mus.shape[1], cnt) jIdx = np.nonzero(musIdxs == j)[0] self._reorder[jIdx] = np.arange(filled, filled + cnt) filled += cnt def _setupDenominator(self): """Compute rational denominator.""" RROMPyAssert(self._mode, message = "Cannot setup denominator.") vbMng(self, "INIT", "Starting computation of denominator.", 7) if self.functionalSolve != "NORM" and self.npar > 1: RROMPyWarning(("Strategy for functional optimization must be " "'NORM' for more than one parameter. Overriding to " "'NORM'.")) self.functionalSolve = "NORM" - if (self.functionalSolve[:11] == "BARYCENTRIC" - and self.rationalMode[:7] == "MINIMAL" + if (self.rationalMode[:11] == "BARYCENTRIC" and self.npar > 1): + RROMPyWarning(("Minimal barycentric strategy cannot be applied " + "with more than one parameter. Overriding to " + "'MINIMAL'.")) + self.rationalMode = "MINIMAL" + self.rationalMode[11:] + if (self.rationalMode[:11] == "BARYCENTRIC" and not hasattr(self, "_N_isauto") and self.N + 1 < self.S): RROMPyWarning(("Minimal barycentric strategy cannot be applied " - "with Least Squares. Overriding to 'NORM'.")) - self.functionalSolve = "NORM" - if self.functionalSolve[:11] == "BARYCENTRIC": + "with Least Squares. Overriding to 'MINIMAL'.")) + self.rationalMode = "MINIMAL" + self.rationalMode[11:] + if self.rationalMode[:11] == "BARYCENTRIC": self._setupInterpolationIndices() if len(self._musUnique) != self.S: RROMPyWarning(("Barycentric functional optimization cannot be " "applied to repeated samples. 
Overriding to " "'NORM'.")) - self.functionalSolve = "NORM" + self.rationalMode = "MINIMAL" + self.rationalMode[11:] if hasattr(self, "_N_isauto"): self._setNAuto() - if (self.rationalMode[:7] != "MINIMAL" and hasattr(self, "_M_isauto") - and self.functionalSolve[:11] != "BARYCENTRIC"): + if self.rationalMode[:8] == "STANDARD" and hasattr(self, "_M_isauto"): self._setMAuto() if self.rationalMode[-6:] == "OUTPUT": dim = self.HFEngine.outputdim else: dim = self.HFEngine.spacedim if (self.rationalMode[:7] == "MINIMAL" - or self.functionalSolve[:11] == "BARYCENTRIC"): - if self.rationalMode[:7] == "MINIMAL": - SEff = min(self.S, dim + 1) - else: #if self.rationalMode[:7] == "STANDARD": - SEff = int(np.floor(dim * self.S / (dim + 1.))) + 1 - N = reduceDegreeN(self.N, SEff, self.npar, self.polydegreetypeN, - self.NAuxiliary) + or self.rationalMode[:11] == "BARYCENTRIC"): + N = reduceDegreeN(self.N, min(self.S, dim + 1), self.npar, + self.polydegreetypeN, self.NAuxiliary) if N < self.N: RROMPyWarning(("N too large compared to S. Reducing N by " "{}.").format(self.N - N)) self.N = N else: N = reduceDegreeMN(self.M, self.N, self.S, self.npar, self.polydegreetype, self.MAuxiliary, self.NAuxiliary, dim)[1] if N < self.N: RROMPyWarning(("N too large compared to S. Reducing N by " "{}.").format(self.N - N)) self.N = N - invD, TN = None, None - if self.functionalSolve[:11] != "BARYCENTRIC": + if self.rationalMode[:11] != "BARYCENTRIC": invD, TN = self._computeInterpolantInverseBlocks() - elif self.rationalMode[:7] != "MINIMAL": - musCN = self._musUniqueCN[self._reorder] - TN = np.empty((0, self.S + 1), dtype = np.complex) Rscaling = None if self.rationalMode[-6:] == "OUTPUT": sampleE = self.samplingEngine.samples_output elif self.POD == 1: sampleE = self.samplingEngine.Rscale elif self.POD == 1/2: sampleE = self.samplingEngine.samples_normal Rscaling = self.samplingEngine.Rscale else: sampleE = self.samplingEngine.samples while self.N > 0: - if self.functionalSolve[:11] != "BARYCENTRIC": + if self.rationalMode[:11] != "BARYCENTRIC": NEff = degreeToDOFs([self.N] + [self.NAuxiliary] * (self.npar - 1), self.polydegreetypeN) TN = TN[:, : NEff] - elif self.rationalMode[:7] != "MINIMAL": - NOld = TN.shape[1] - 1 - dTN = (musCN[self.N : NOld] - - musCN[: self.N].reshape(1, -1)) ** -1 - dTN = np.pad(dTN, [(0, 0), (1, 0)], "constant", - constant_values = 1) - TN = np.vstack((TN[:, : self.N + 1], dTN)) if self.rationalMode[:7] == "MINIMAL": ev, eV = self.findeveVGMinimal(sampleE, invD, TN, Rscaling) - else: #if self.rationalMode[: 8] == "STANDARD": - ev, eV = self.findeveVGStandard(sampleE, invD, TN, Rscaling) - if (self.rationalMode[:7] == "MINIMAL" - and self.functionalSolve[:11] == "BARYCENTRIC"): + elif self.rationalMode[:11] == "BARYCENTRIC": + ev, eV = self.findeveVGBarycentric(sampleE, Rscaling) break + else: #if self.rationalMode[:8] == "STANDARD": + ev, eV = self.findeveVGStandard(sampleE, invD, TN, Rscaling) nevBad = np.sum(np.abs(ev / ev[-1]) < self.QTol) if not nevBad: break dN = int(np.ceil(nevBad / (1 + self.NAuxiliary) ** (self.npar - 1))) vbMng(self, "MAIN", ("Smallest {} eigenvalue{} below tolerance. Reducing N by " "{}.").format(nevBad, "s" * (nevBad > 1), dN), 10) self.N = self.N - dN if self.rationalMode[-5:] == "STATE" and self.POD != 1: del self._gram if self.N <= 0: self.N = 0 eV = np.zeros((1,) + (self.NAuxiliary + 1,) * (self.npar - 1), dtype = np.complex) eV[(0,) * self.npar] = 1. 
- if self.N > 0 and self.functionalSolve[:11] == "BARYCENTRIC": + if self.N > 0 and self.rationalMode[:11] == "BARYCENTRIC": q = PIN() q.polybasis, q.nodes = self.polybasis, eV else: q = PI() q.npar, q.polybasis = self.npar, self.polybasis deg = [self.N + 1] + [self.NAuxiliary + 1] * (self.npar - 1) if self.polydegreetypeN == "TENSOR": q.coeffs = eV.reshape(deg) else: #if self.polydegreetypeN == "TOTAL": q.coeffs = mapTotalToTensor(deg, self.npar, eV) q.truncate(self.polyTruncateTol) vbMng(self, "DEL", "Done computing denominator.", 7) return q def _setupNumerator(self): """Compute rational numerator.""" RROMPyAssert(self._mode, message = "Cannot setup numerator.") vbMng(self, "INIT", "Starting computation of numerator.", 7) self._setupInterpolationIndices() Qevaldiag = polyTimesTable(self.trainedModel.data.Q, self._musUniqueCN, self._reorder, self._derIdxs, self.scaleFactorRel) if self.rationalMode[-6:] == "OUTPUT": Qevaldiag = Qevaldiag.dot(self.samplingEngine.samples_output.T) else: if self.POD == 1: Qevaldiag = Qevaldiag.dot(self.samplingEngine.Rscale.T) elif self.POD == 1/2: Qevaldiag = Qevaldiag * self.samplingEngine.Rscale if hasattr(self, "_M_isauto"): self._setMAuto() M = reduceDegreeN(self.M, self.S, self.npar, self.polydegreetypeM, self.MAuxiliary) if M < self.M: RROMPyWarning(("M too large compared to S. Reducing M by " "{}").format(self.M - M)) self.M = M while self.M >= 0: deg = [self.M] + [self.MAuxiliary] * (self.npar - 1) pParRest = [deg, self.polybasis, self.verbosity >= 5, self.polydegreetypeM, self.polyTruncateTol, {"derIdxs": self._derIdxs, "reorder": self._reorder, "scl": self.scaleFactorRel, "degreetype": self.polydegreetypeM}, {"rcond": self.interpTol}] p = PI() wellCond, msg = p.setupByInterpolation(self._musUniqueCN, Qevaldiag, *pParRest) vbMng(self, "MAIN", msg, 5) if wellCond: break vbMng(self, "MAIN", ("Polyfit is poorly conditioned. Reducing M " "by 1."), 10) self.M = self.M - 1 if self.M < 0: raise RROMPyException(("Instability in computation of numerator. 
" "Aborting.")) vbMng(self, "DEL", "Done computing numerator.", 7) return p def setupApprox(self) -> int: """Compute rational interpolant.""" if self.checkComputedApprox(): return -1 RROMPyAssert(self._mode, message = "Cannot setup approximant.") vbMng(self, "INIT", "Setting up {}.".format(self.name()), 5) self.computeSnapshots() if self.rationalMode[-6:] == "OUTPUT": vbMng(self, "INIT", "Extracting system output from state.", 35) self.samplingEngine.samples_output = self.HFEngine.applyC( self.samplingEngine.samples, self.mus) _POD, self._POD = self.POD, 1 vbMng(self, "DEL", "Done extracting system output.", 35) self._setupTrainedModel(self.samplingEngine.projectionMatrix, collapsed = (self.rationalMode[-6:] == "OUTPUT")) self._setupRational(self._setupDenominator()) self.trainedModel.data.approxParameters = copy(self.approxParameters) if self.rationalMode[-6:] == "OUTPUT": self._POD = _POD vbMng(self, "DEL", "Done setting up approximant.", 5) return 0 def _setupRational(self, Q:interpEng, P : interpEng = None): - vbMng(self, "INIT", "Starting approximant finalization.", 5) + vbMng(self, "INIT", "Starting approximant finalization.", 10) self.trainedModel.data.Q = Q if P is None: P = self._setupNumerator() while self.N > 0 and self.npar == 1: if self.HFEngine._ignoreResidues: pls = Q.roots() cfs, projMat = None, None else: cfs, pls, _ = rational2heaviside(P, Q) cfs = cfs[: len(pls)].T if self.POD != 1: projMat = self.samplingEngine.projectionMatrix else: projMat = None foci = self.sampler.normalFoci() plsA = self.mapParameterList(self.mapParameterList(self.mu0)(0, 0) + self.scaleFactor * pls, "B")(0) idxBad = self.HFEngine.flagBadPolesResiduesAbsolute(plsA, cfs, projMat) if not self.HFEngine._ignoreResidues: cfs[:, idxBad] = 0. idxBad += self.HFEngine.flagBadPolesResiduesRelative(pls, cfs, projMat, foci) idxBad = idxBad > 0 if not np.any(idxBad): break vbMng(self, "MAIN", "Removing {} spurious pole{} out of {}.".format( np.sum(idxBad), "s" * (np.sum(idxBad) > 1), self.N), 10) if isinstance(Q, PIN): Q.nodes = Q.nodes[idxBad == False] else: Q = PI() Q.npar, Q.polybasis = 1, self.polybasis Q.coeffs = np.ones(1, dtype = np.complex) for pl in pls[idxBad == False]: Q.coeffs = polyTimes(Q.coeffs, [- pl, 1.], Pbasis = Q.polybasis, Rbasis = Q.polybasis) Q.coeffs /= np.linalg.norm(Q.coeffs) self.trainedModel.data.Q = Q self.N = Q.deg[0] P = self._setupNumerator() self.trainedModel.data.P = P - vbMng(self, "DEL", "Terminated approximant finalization.", 5) + vbMng(self, "DEL", "Terminated approximant finalization.", 10) def _computeInterpolantInverseBlocks(self) -> Tuple[List[Np2D], Np2D]: """ Compute inverse factors for rational interpolant target functional. 
""" - RROMPyAssert(self._mode, message = "Cannot solve eigenvalue problem.") self._setupInterpolationIndices() pvPPar = [self.polybasis, self._derIdxs, self._reorder, self.scaleFactorRel, self.polydegreetypeN] lagrange = self.npar == 1 and self.S == len(self._musUniqueCN) if self.rationalMode[:7] == "MINIMAL" and not lagrange: E = reduceDegreeN(self.S - 1, self.S, self.npar, self.polydegreetypeN, self.NAuxiliary) TN = None deg = [self.NAuxiliary] * self.npar while self.N >= 0: if TN is None: deg[0] = self.N TE = TN = pvP(self._musUniqueCN, deg, *pvPPar) if self.rationalMode[:7] == "MINIMAL" and not lagrange: deg[0] = E if self.N == E: TE = TN else: TE = pvP(self._musUniqueCN, deg, *pvPPar) fitOut = pseudoInverse(TE, rcond = self.interpTol, full = True) degE = deg[0] if self.npar == 1 else deg vbMng(self, "MAIN", ("Fitting {} samples with degree {} through {}... " "Conditioning of pseudoinverse system:" "{:.4e}.").format(TE.shape[0], degE, polyfitname(self.polybasis), fitOut[1][1][0] / fitOut[1][1][-1]), 5) if fitOut[1][0] == TE.shape[1]: if self.rationalMode[:7] == "MINIMAL" and not lagrange: deg0 = [E - 1] + [self.NAuxiliary] * (self.npar - 1) S0 = degreeToDOFs(deg0, self.polydegreetypeN) fitinv = fitOut[0][S0 :] break if self.rationalMode[:7] == "MINIMAL" and not lagrange: E -= 1 if E >= self.N: continue vbMng(self, "MAIN", ("Polyfit is poorly conditioned. Reducing " "N by 1."), 10) self.N, TN = self.N - 1, None if self.N < 0: raise RROMPyException(("Instability in computation of " "denominator. Aborting.")) if self.rationalMode[:7] == "MINIMAL": if lagrange: mus = self._musUniqueCN[self._reorder] dist = baseDistanceMatrix(mus, magnitude = False)[..., 0] dist[np.arange(self.S), np.arange(self.S)] = 1. fitinvE = np.prod(dist, axis = 1) ** -1 vbMng(self, "MAIN", ("Evaluating quasi-Lagrangian basis of degree {} at {} " "sample points.").format(self.S - 1, self.S), 5) invD = [np.diag(fitinvE)] else: invD = vanderInvTable(fitinv, self._reorder, self._derIdxs) else: #if self.rationalMode[: 8] == "STANDARD": if self.rationalMode[-6:] == "OUTPUT": dim = self.HFEngine.outputdim else: dim = self.HFEngine.spacedim degN = degreeToDOFs([self.N] + [self.NAuxiliary] * (self.npar - 1), self.polydegreetypeN) - 1 SEff = int(np.floor(self.S - degN / dim)) MEff = reduceDegreeN(SEff - 1, SEff, self.npar, - self.polydegreetypeM, self.MAuxiliary) + self.polydegreetypeM, self.MAuxiliary) deg = [self.MAuxiliary] * self.npar while MEff >= 0: deg[0] = MEff if MEff == self.N and self.MAuxiliary == self.NAuxiliary: TM = TN else: TM = pvP(self._musUniqueCN, deg, *pvPPar) Q, R = np.linalg.qr(TM, 'complete') fitOut = pseudoInverse(R[: R.shape[1]], rcond = self.interpTol, full = True) degM = deg[0] if self.npar == 1 else deg vbMng(self, "MAIN", ("Fitting {} samples with degree {} through {}... " "Conditioning of pseudoinverse system:" "{:.4e}.").format(TM.shape[0], degM, polyfitname(self.polybasis), fitOut[1][1][0] / fitOut[1][1][-1]), 15) if fitOut[1][0] == R.shape[1]: break MEff -= 1 Qo = Q[:, TM.shape[1] :].T.conj() vbMng(self, "MAIN", ("Loss of orthogonality for adjoint vandermonde: " "{:.4e}.").format(np.linalg.norm(Qo.dot(TM))), 5) invD = [Qo] return invD, TN def findeveVGMinimal(self, sampleE:Np2D, invD:List[Np2D], TN:Np2D, Rscaling : Np1D = None) -> Tuple[Np1D, Np2D]: """ Compute eigenvalues and eigenvectors of rational denominator matrix, or of its right chol factor if POD. 
""" RROMPyAssert(self._mode, message = "Cannot solve spectral problem.") - if self.POD == 1 or self.rationalMode == "MINIMAL_OUTPUT": - if self.functionalSolve[:11] == "BARYCENTRIC": - Rstack = sampleE - else: - vbMng(self, "INIT", "Building generalized half-gramian.", 10) - S, eWidth = sampleE.shape[0], len(invD) - Rstack = np.zeros((S * eWidth, TN.shape[1]), - dtype = np.complex) - for k in range(eWidth): - Rstack[k * S : (k + 1) * S, :] = dot(sampleE, - dot(invD[k], TN)) - vbMng(self, "DEL", "Done building half-gramian.", 10) + if self.POD == 1 or self.rationalMode[-6:] == "OUTPUT": + vbMng(self, "INIT", "Building generalized half-gramian.", 10) + S, eWidth = sampleE.shape[0], len(invD) + Rstack = np.zeros((S * eWidth, TN.shape[1]), + dtype = np.complex) + for k in range(eWidth): + Rstack[k * S : (k + 1) * S, :] = dot(sampleE, + dot(invD[k], TN)) + vbMng(self, "DEL", "Done building half-gramian.", 10) if self.forceQReal: Rstack = np.vstack((np.real(Rstack), np.imag(Rstack))) _, s, Vh = np.linalg.svd(Rstack, full_matrices = Rstack.shape[0] < Rstack.shape[1]) evG, eVG = s[::-1], Vh[::-1].T.conj() if len(evG) < eVG.shape[1]: evG = np.append(np.zeros(eVG.shape[1] - len(evG)), evG) evExp, probKind = -2., "svd " else: if not hasattr(self, "_gram"): vbMng(self, "INIT", "Building gramian matrix.", 10) self._gram = self.HFEngine.innerProduct(sampleE, sampleE, is_state = True) if Rscaling is not None: self._gram = (self._gram.T * Rscaling.conj()).T * Rscaling vbMng(self, "DEL", "Done building gramian.", 10) - if self.functionalSolve[:11] == "BARYCENTRIC": - G = self._gram - else: - vbMng(self, "INIT", "Building generalized gramian.", 10) - G = np.zeros((TN.shape[1],) * 2, dtype = np.complex) - for k in range(len(invD)): - iDkN = dot(invD[k], TN) - G += dot(dot(self._gram, iDkN).T, iDkN.conj()).T - vbMng(self, "DEL", "Done building gramian.", 10) + vbMng(self, "INIT", "Building generalized gramian.", 10) + G = np.zeros((TN.shape[1],) * 2, dtype = np.complex) + for k in range(len(invD)): + iDkN = dot(invD[k], TN) + G += dot(dot(self._gram, iDkN).T, iDkN.conj()).T + vbMng(self, "DEL", "Done building gramian.", 10) if self.forceQReal: G = np.real(G) evG, eVG = np.linalg.eigh(G) evExp, probKind = -1., "eigen" - if (self.functionalSolve in ["NORM", "BARYCENTRIC_NORM"] + if (self.functionalSolve == "NORM" or evG[0] == 0. or np.sum(np.abs(evG) < np.finfo(float).eps * np.abs(evG[-1]) * len(evG)) == 1): eV = eVG[:, 0] - elif self.functionalSolve == "BARYCENTRIC_AVERAGE": - eV = eVG.dot(evG ** evExp * np.sum(eVG, axis = 0).conj()) else: eV = eVG.dot(evG ** evExp * eVG[0].conj()) - if self.functionalSolve[:11] == "BARYCENTRIC": _evG = np.array(evG) evG[1 :] -= evG[0] eV /= np.linalg.norm(eV) if self.npar == 1: degE = self.N else: degE = [self.N] + [self.NAuxiliary] * (self.npar - 1) cond = evG[-1] / evG[1] if evG[0] > 0 and evG[1] > 0 else np.inf vbMng(self, "MAIN", ("Solved {}problem of size {} (degree {}) with condition number " "{:.4e}.").format(probKind, len(evG) - 1, degE, cond), 5) - if self.functionalSolve[:11] == "BARYCENTRIC": - S, mus = len(eV), self._musUniqueCN[self._reorder].flatten() - poles, qTm1 = barycentricRoots(eV, mus, True) - self.N = len(poles) - if self.QTol > 0: - # compare optimal score with N poles to those obtained by - # removing one of the poles - qTm1 = qTm1[1 :].conj() ** -1. - dists = mus.reshape(-1, 1) - mus - dists[np.arange(S), np.arange(S)] = 1. - dists = np.prod(dists, axis = 1).conj() ** -1. 
- qComp = np.empty((self.N + 1, S), dtype = np.complex) - qComp[0] = dists * np.prod(qTm1, axis = 1) - for j in range(self.N): - qTmj = np.prod(qTm1[:, np.arange(self.N) != j], axis = 1) - qComp[j + 1] = dists * qTmj - Lqs = qComp.dot(eVG) - scores = np.real(np.sum(Lqs * _evG ** -evExp * Lqs.conj(), - axis = 1)) - evBad = scores[1 :] < self.QTol * scores[0] - nevBad = np.sum(evBad) - if nevBad: - vbMng(self, "MAIN", - ("Suboptimal pole{} detected. Reducing N by " - "{}.").format("s" * (nevBad > 1), nevBad), 10) - self.N = self.N - nevBad - poles = poles[evBad == False] - eV = poles return evG[1 :], eV + def findeveVGBarycentric(self, sampleE:Np2D, + Rscaling : Np1D = None) -> Tuple[Np1D, Np2D]: + """ + Compute eigenvalues and eigenvectors of rational denominator matrix, or + of its right chol factor if POD. + """ + RROMPyAssert(self._mode, message = "Cannot solve spectral problem.") + if self.POD == 1 or self.rationalMode[-6:] == "OUTPUT": + Rstack = sampleE + if self.forceQReal: + Rstack = np.vstack((np.real(Rstack), np.imag(Rstack))) + _, s, Vh = np.linalg.svd(Rstack, + full_matrices = Rstack.shape[0] < Rstack.shape[1]) + evG, eVG = s[::-1], Vh[::-1].T.conj() + if len(evG) < eVG.shape[1]: + evG = np.append(np.zeros(eVG.shape[1] - len(evG)), evG) + evExp, probKind = -2., "svd " + else: + if not hasattr(self, "_gram"): + vbMng(self, "INIT", "Building gramian matrix.", 10) + self._gram = self.HFEngine.innerProduct(sampleE, sampleE, + is_state = True) + if Rscaling is not None: + self._gram = (self._gram.T * Rscaling.conj()).T * Rscaling + vbMng(self, "DEL", "Done building gramian.", 10) + G = self._gram + if self.forceQReal: G = np.real(G) + evG, eVG = np.linalg.eigh(G) + evExp, probKind = -1., "eigen" + if (self.functionalSolve == "NORM" or evG[0] == 0. + or np.sum(np.abs(evG) < np.finfo(float).eps * np.abs(evG[-1]) + * len(evG)) == 1): + eV = eVG[:, 0] + else: + eV = eVG.dot(evG ** evExp * np.sum(eVG, axis = 0).conj()) + _evG = np.array(evG) + evG[1 :] -= evG[0] + eV /= np.linalg.norm(eV) + if self.npar == 1: + degE = self.N + else: + degE = [self.N] + [self.NAuxiliary] * (self.npar - 1) + cond = evG[-1] / evG[1] if evG[0] > 0 and evG[1] > 0 else np.inf + vbMng(self, "MAIN", + ("Solved {}problem of size {} (degree {}) with condition number " + "{:.4e}.").format(probKind, len(evG) - 1, degE, cond), 5) + S, mus = len(eV), self._musUniqueCN[self._reorder].flatten() + poles, qTm1 = barycentricRoots(eV, mus, True) + self.N = len(poles) + if self.QTol > 0: + # compare optimal score with N poles to those obtained by + # removing one of the poles + qTm1 = qTm1[1 :].conj() ** -1. + dists = mus.reshape(-1, 1) - mus + dists[np.arange(S), np.arange(S)] = 1. + dists = np.prod(dists, axis = 1).conj() ** -1. + qComp = np.empty((self.N + 1, S), dtype = np.complex) + qComp[0] = dists * np.prod(qTm1, axis = 1) + for j in range(self.N): + qTmj = np.prod(qTm1[:, np.arange(self.N) != j], axis = 1) + qComp[j + 1] = dists * qTmj + Lqs = qComp.dot(eVG) + scores = np.real(np.sum(Lqs * _evG ** -evExp * Lqs.conj(), + axis = 1)) + evBad = scores[1 :] < self.QTol * scores[0] + nevBad = np.sum(evBad) + if nevBad: + vbMng(self, "MAIN", + ("Suboptimal pole{} detected. 
Reducing N by " + "{}.").format("s" * (nevBad > 1), nevBad), 10) + self.N = self.N - nevBad + poles = poles[evBad == False] + return evG[1 :], poles + def findeveVGStandard(self, sampleE:Np2D, invD:List[Np2D], TN:Np2D, Rscaling : Np1D = None) -> Tuple[Np1D, Np2D]: """ Compute eigenvalues and eigenvectors of rational denominator matrix, or of its right chol factor if POD. """ RROMPyAssert(self._mode, message = "Cannot solve spectral problem.") - if self.POD == 1 or self.rationalMode == "STANDARD_OUTPUT": + if self.POD == 1 or self.rationalMode[-6:] == "OUTPUT": vbMng(self, "INIT", "Building generalized half-gramian.", 10) - if self.functionalSolve[:11] == "BARYCENTRIC": - Rold = np.pad(sampleE[: self.N, : self.N], [(0, 0), (1, 0)], - "constant") # adjust for bias - Rnew = sampleE[: self.N, self.N :] - S, eWidth = Rnew.shape - Rstack = np.zeros((S * eWidth, self.N + 1), dtype = np.complex) - for k in range(eWidth): - Gloc = (Rold.T - Rnew[:, k]).T - Rstack[k * S : (k + 1) * S, :] = Gloc * TN[k] - else: - Rstack = khatri_rao(sampleE, invD[0]).dot(TN) + Rstack = khatri_rao(sampleE, invD[0]).dot(TN) vbMng(self, "DEL", "Done building half-gramian.", 10) if self.forceQReal: Rstack = np.vstack((np.real(Rstack), np.imag(Rstack))) _, s, Vh = np.linalg.svd(Rstack, full_matrices = Rstack.shape[0] < Rstack.shape[1]) evG, eVG = s[::-1], Vh[::-1].T.conj() if len(evG) < eVG.shape[1]: evG = np.append(np.zeros(eVG.shape[1] - len(evG)), evG) evExp, probKind = -2., "svd " else: if not hasattr(self, "_gram"): vbMng(self, "INIT", "Building gramian matrix.", 10) self._gram = self.HFEngine.innerProduct(sampleE, sampleE, is_state = True) if Rscaling is not None: self._gram = (self._gram.T * Rscaling.conj()).T * Rscaling vbMng(self, "DEL", "Done building gramian.", 10) vbMng(self, "INIT", "Building generalized gramian.", 10) - if self.functionalSolve[:11] == "BARYCENTRIC": - Idiag = np.arange(self.N, self._gram.shape[0]) - GNN = self._gram[Idiag, Idiag] - GON = np.pad(self._gram[: self.N, self.N :], [(1, 0), (0, 0)], - "constant") # adjust for bias - GNO = GON.T.conj() - GOO = np.pad(self._gram[: self.N, : self.N], (1, 0), - "constant") # adjust for bias - G = ((TN.T.conj() * GNN).dot(TN) #L^H @ diag(GNN) @ L - - (GON * TN.T.conj()).dot(TN) #(L^H * GON) @ L - - TN.T.conj().dot(GNO * TN) #L^H @ (GNO * L) - + GOO * (TN.T.conj().dot(TN))) #GOO * (L^H @ L) - else: - G = self._gram * invD[0].T.conj().dot(invD[0]) - G = TN.T.conj().dot(G.dot(TN)) + G = self._gram * invD[0].T.conj().dot(invD[0]) + G = TN.T.conj().dot(G.dot(TN)) vbMng(self, "DEL", "Done building gramian.", 10) if self.forceQReal: G = np.real(G) evG, eVG = np.linalg.eigh(G) evExp, probKind = -1., "eigen" - if (self.functionalSolve in ["NORM", "BARYCENTRIC_NORM"] + if (self.functionalSolve == "NORM" or evG[0] == 0. 
or np.sum(np.abs(evG) < np.finfo(float).eps * np.abs(evG[-1]) * len(evG)) == 1): eV = eVG[:, 0] else: - eV = eVG.dot(evG ** evExp * eVG[0].conj()) + eV = eVG.dot(evG ** evExp * eVG[-1].conj()) evG[1 :] -= evG[0] eV /= np.linalg.norm(eV) if self.npar == 1: degE = self.N else: degE = [self.N] + [self.NAuxiliary] * (self.npar - 1) cond = evG[-1] / evG[1] if evG[0] > 0 and evG[1] > 0 else np.inf vbMng(self, "MAIN", ("Solved {}problem of size {} (degree {}) with condition number " "{:.4e}.").format(probKind, len(evG) - 1, degE, cond), 5) - if self.functionalSolve[:11] == "BARYCENTRIC": - mus = self._musUniqueCN[self._reorder][: self.N, 0] - eV = barycentricRoots(eV, mus) - self.N = len(eV) return evG[1 :], eV def getResidues(self, *args, **kwargs) -> Tuple[paramList, Np2D]: """ Obtain approximant residues. Returns: Matrix with residues as columns. """ return self.trainedModel.getResidues(*args, **kwargs)
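Note on the barycentric rootfinding used above: barycentricRoots recovers the roots of a denominator given in barycentric form, q(z) = c_0 + sum_j c_j / (z - mu_j), as the finite generalized eigenvalues of an arrowhead pencil, the same formulation used by AAA-type rational approximation. The following self-contained NumPy/SciPy sketch illustrates the construction; the function name and the toy nodes/coefficients are made up for illustration and are not part of RROMPy.

import numpy as np
from scipy.linalg import eigvals

def barycentric_roots_sketch(coeffs, nodes):
    # Roots of q(z) = coeffs[0] + sum_j coeffs[j + 1] / (z - nodes[j]),
    # computed as the finite generalized eigenvalues of the arrowhead
    # pencil (A, B); this mirrors barycentricRoots in the diff above.
    m = len(nodes)
    if len(coeffs) == m:  # only weights given: prepend a zero bias term c_0
        coeffs = np.append(0., coeffs)
    A = np.diag(np.append(0., nodes)).astype(complex)
    A[0, :] = coeffs      # first row: barycentric coefficients
    A[1:, 0] = 1.         # first column: ones
    B = np.diag(np.append(0., np.ones(m)))
    roots = eigvals(A, B)
    return roots[np.isfinite(roots)]  # discard the structural infinite eigenvalues

# Toy data: q(z) = 1/z + 1/(z - 1) + 1/(z - 2) has roots 1 +- 1/sqrt(3).
print(barycentric_roots_sketch(np.array([1., 1., 1.]), np.array([0., 1., 2.])))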
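Note on the look-ahead rescaling introduced above: instead of multiplying the estimator by the ratio of the mean exact error to the mean estimated error, the updated getErrorEstimatorLookAhead rescales it by errTest^H errRel / errTest^H errTest, i.e. the scalar c that minimizes ||c * errTest - errRel||_2 over the look-ahead test points. A minimal sketch with made-up numbers (not RROMPy code):

import numpy as np

def lookahead_rescale_sketch(err, idxMaxEst, normErr, normSol):
    # Rescale a heuristic estimator err so that, at the look-ahead test
    # points, it best matches the measured relative errors in the
    # least-squares sense.
    errTest = np.asarray(err)[idxMaxEst]
    errRel = np.asarray(normErr) / np.asarray(normSol)
    c = errTest.conj().dot(errRel) / errTest.conj().dot(errTest)
    return c * np.asarray(err)

# Toy data: the estimator underestimates the true relative error by ~3x.
err = np.array([1e-3, 5e-4, 2e-3, 1e-4])
print(lookahead_rescale_sketch(err, [0, 2],
                               normErr=np.array([3.1e-3, 5.9e-3]),
                               normSol=np.array([1., 1.])))
# -> roughly [3.0e-3, 1.5e-3, 6.0e-3, 3.0e-4]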