\section{Interpolation}
\label{sec:is:interp}\index{interpolation|(}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% finite-length signal, 5 taps
\def\ta{1 } \def\tb{2 } \def\tc{0.7 } \def\td{2.4 } \def\te{-1.5 }
%% the tap plotting string
\def\taps{-2 \ta -1 \tb 0 \tc 1 \td 2 \te}
\def\plotTaps{\dspTaps[linecolor=ColorDT]{\taps}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Interpolation is the procedure by which we convert a discrete-time sequence $\mathbf{x}$ to a continuous-time function $\mathbf{x}_c$. At the core of the interpolation procedure, as we have mentioned, is the association of a {\em physical}\/ time duration $T_s$ to the intervals between samples in the discrete-time sequence; next, we need to ``fill the gaps'' between sample values. To develop the intuition, let's consider a simple example where we want to interpolate a finite set of values as in Figure~\ref{fig:is:diffint}-(a); Figures~\ref{fig:is:diffint}-(b), (c) and (d) show three possible interpolations of the given dataset to a continuous-time signal $\mathbf{x}_c$ using $T_s = 1$.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=ColorCT](-2, \ta)(0,0)(2, \te)
\end{dspPlot}
\\ (a) & (b) \\
\begin{dspPlot}[sidegap=0,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linecolor=ColorCT](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=ColorCT](-2, \ta)(-1.75, 2)(-1.5, -1.5)(-1.25, 1.5)%
(-1, \tb)(-0.75, 0.5)(-0.5, .6)(-0.25, .5)%
(0, \tc)(0.25, -1)(0.5, 1)(0.75, -1)%
(1, \td)(1.25, 1.5)(1.5, 2.5)(1.75, 1.8)(2, \te)
\end{dspPlot}
\\ (c) & (d) \\
\end{tabular}
\caption{(a): original 5-tap discrete-time signal; (b), (c), (d): different possible interpolations of (a) to continuous time.}\label{fig:is:diffint}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
Although we haven't really formalized the requirements for the interpolation process, it is rather intuitive that none of the three proposed interpolations is entirely satisfactory:
\begin{itemize}
\item in (b) the interpolation does not go through the data points; indeed, it seems reasonable to require that the interpolation include the original data set exactly;
\item in (c), while the function goes through the data points, it does not look smooth; in general, we are right to be suspicious of non-smooth curves, since no natural phenomenon can create discontinuous jumps in amplitude (achievable only via infinite speed) or curvature (achievable only via infinite acceleration);
\item in (d), while the function goes through the data points and is smooth, it seems to ``wander around'' too much, as if interpolating a more complicated dataset; indeed, we want our interpolation strategy to be ``economical'' in using the information contained in the discrete-time signal.
\end{itemize}
In the following sections we will try to make our requirements more precise and see how we can arrive at a universal interpolation scheme that is both intuitively and mathematically sound.
%To formalize the problem, and without loss of generality, we will assume in the following that $T_s = 1$ and that the finite-length discrete-time signal is defined for $n = -N, \ldots, 0, \ldots, N$ for an odd length of $M = 2N+1$ points.
\subsection{Polynomial Interpolation}
\label{sec:is:lagrange}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Lagrange polynomials
\def\lpa{%
dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul }
\def\lpb{%
dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul }
\def\lpc{%
dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul }
\def\lpd{%
dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul }
\def\lpe{%
dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul }
\def\LagPoly#1#2#3{%
\dspFunc[linecolor=#1]{x \csname lp#2\endcsname}%
%\dspText(0,1.3){\color{#1} $L_{#3}^{2}(t)$}
}
\def\lagInterp{%
x \lpa \ta mul %
x \lpb \tb mul %
x \lpc \tc mul %
x \lpd \td mul %
x \lpe \te mul %
add add add add}
\def\interpolant#1#2{%
\begin{dspClip}
\plotTaps%
\dspTaps[linecolor=ColorFour]{#1 \csname t#2\endcsname}%
\dspFunc[linewidth=0.5pt,linecolor=ColorFour]{x \csname lp#2\endcsname \csname t#2\endcsname mul}%
\end{dspClip}}
\def\polyName#1{{$x[#1]L^{(2)}_{#1}(t)$}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Given a discrete-time signal $\mathbf{x}$, once we have chosen an interpolation interval $T_s$, the first requirement for the interpolating function $\mathbf{x}_c$ is that its values at multiples of $T_s$ be equal to the corresponding points of the discrete-time sequence, i.e.
\[
x_c(nT_s) = x[n].
\]
%with our choice of $T_s = 1$ the above condition becomes simply $x(n) = x[n]$. T
The second requirement is that we want the interpolating function to be smooth and, mathematically, the smoothness of a function increases with the number of its continuous derivatives. For maximal smoothness, therefore, we require $\mathbf{x}_c \in C^{\infty}$, where $C^{M}$ is the class of functions for which all derivatives up to order $M$ exist and are continuous.
For now, let's consider the case where $\mathbf{x}$ is a finite-length signal; without loss of generality, assume the signal has odd length $2N+1$ and that it is centered at zero, as in Figure~\ref{fig:is:diffint}-(a). To lighten the notation, let's also pick $T_s = 1$ for now.
The simplest choice for a maximally differentiable curve through a set of $2N+1$ data points is the unique polynomial interpolator of degree $2N$
\begin{equation} \label{eq:is:straightPI}
x_c(t) = a_0 + a_1 t + a_2 t^2 + \ldots + a_{2N} t^{2N}
\end{equation}
which, like all polynomials, belongs to $C^{\infty}$. Computation of the $2N+1$ coefficients $a_k$ is a classic algebraic problem, first solved in the 17th century by Newton. Numerically, one way to arrive at the solution is to work out the system of $2N+1$ equations
\begin{equation*}
\bigl\{\,x_c(n) = x[n]\,\bigr\}_{n = -N,\ldots,0,\ldots,N}
\end{equation*}
which can be carried out algorithmically using Pascal's triangle, for instance. A more interesting approach is to consider the space of finite-degree polynomials over the interval $[-N, N]$ and choose a ``smart'' basis to express $\mathbf{x}_c$. This is not the first time we encounter the idea of using a polynomial basis: in Chapter~\ref{ch:vs}, for instance, we introduced the Legendre basis to solve a function approximation problem. The interpolator in~(\ref{eq:is:straightPI}) clearly expresses $\mathbf{x}_c$ as a linear combination of the $2N+1$ {\em monomial}\/ basis vectors $\{1, t, t^2, \ldots t^{2N}\}$ but, for the task at hand, a more appropriate basis is the set of {\em Lagrange polynomials}.
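As a purely illustrative aside (not part of the original development), the direct approach can be sketched numerically in a few lines of Python; NumPy is assumed, the variable names are our own, and the data are the five taps of Figure~\ref{fig:is:diffint}-(a):
\begin{verbatim}
import numpy as np

# the 5-tap signal, defined for n = -2, ..., 2
n = np.arange(-2, 3)
x = np.array([1.0, 2.0, 0.7, 2.4, -1.5])

# direct approach: solve the Vandermonde system V a = x for the
# monomial coefficients a_0, ..., a_{2N}
V = np.vander(n, increasing=True)     # V[i, j] = n_i ** j
a = np.linalg.solve(V, x)

# evaluate the degree-2N interpolator at arbitrary times
t = np.linspace(-2, 2, 401)
x_c = np.polyval(a[::-1], t)          # polyval wants descending powers
\end{verbatim}
Solving the Vandermonde system directly is exactly the procedure that becomes ill-conditioned as the number of points grows, which is one more reason to look for a smarter basis.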
The Lagrange polynomial basis for the interval $I=[-N, N]$ is the family of $2N+1$ polynomials
\begin{equation} \label{eq:is:lagPoly}
L^{(N)}_n(t) = \mathop{\prod_{k = -N}}_{k\neq n}^{N}\frac{t - k}{n - k}, \qquad n = -N, \ldots, N
\end{equation}
each of which has degree $2N$. As an example, the family of five polynomials for $N=2$ is shown in Figure~\ref{fig:is:LagBasis}. A key property of the Lagrangian basis vectors is that, for integer values of their argument, we have
\begin{equation} \label{eq:is:lagInterpProp}
L^{(N)}_n(m) = \left\{
\begin{array}{ll}
1 & \mbox{if $n=m$}\\
0 & \mbox{if $n \neq m$}
\end{array}\right. \qquad\qquad -N \leq n,m \leq N.
\end{equation}
Using the above result, it is easy to verify that the polynomial interpolator for a discrete-time signal of length $2N+1$ is simply
\begin{equation} \label{eq:is:lagInterp}
x_c(t) = \sum_{n = -N}^{N} x[n]L^{(N)}_n(t)
\end{equation}
that is, a linear combination of the Lagrangian basis vectors where the scalar coefficients are the discrete-time samples themselves. Indeed, since a polynomial of degree $M$ is uniquely determined by $M+1$ of its values and since, because of~(\ref{eq:is:lagInterpProp}), $x_c(n) = x[n]$ for $n = -N, -N+1, \ldots, N$, the expression in~(\ref{eq:is:lagInterp}) is the unique solution to the problem.
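In the same illustrative spirit, the Lagrange solution~(\ref{eq:is:lagInterp}) can be sketched directly, with the basis computed as the product in~(\ref{eq:is:lagPoly}); again this is our own Python/NumPy mock-up, not a reference implementation:
\begin{verbatim}
import numpy as np

def lagrange_basis(n, N, t):
    # evaluate L_n^(N)(t) as a product over k = -N, ..., N with k != n
    t = np.asarray(t, dtype=float)
    L = np.ones_like(t)
    for k in range(-N, N + 1):
        if k != n:
            L *= (t - k) / (n - k)
    return L

def lagrange_interp(x, t):
    # x holds the samples x[-N], ..., x[N]
    N = len(x) // 2
    return sum(x[m + N] * lagrange_basis(m, N, t)
               for m in range(-N, N + 1))

x = [1.0, 2.0, 0.7, 2.4, -1.5]        # the 5-tap signal, N = 2
t = np.linspace(-2, 2, 401)
x_c = lagrange_interp(x, t)           # passes through all data points
\end{verbatim}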
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0,yticks=0.5]{-2.5,2.5}{-0.8,1.6}
\begin{dspClip}%
\LagPoly{green}{a}{-2}%
\LagPoly{blue}{b}{-1}%
\LagPoly{orange}{c}{0}%
\LagPoly{black}{d}{1}%
\LagPoly{cyan}{e}{2}%
\end{dspClip}
\end{dspPlot}
\caption{Lagrange interpolation polynomials $L_n^{(2)}(t)$ for $n=-2,\ldots,2$. Note that $L_n^{(N)}(t)$ is zero for $t$ integer except for $t = n$, where it is $1$.}\label{fig:is:LagBasis}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As an example, Figure~\ref{fig:is:weightedLag} shows how the five polynomials $L_n^{(2)}(t)$ beautifully come together to interpolate a $5$-point discrete-time signal into the smooth curve in Figure~\ref{fig:is:finalLag}. A fundamental characteristic of polynomial interpolation is that it is {\em global}: for all values of $t$, \textit{all} the original discrete-time data points contribute to the instantaneous value of $x_c(t)$.
Although it elegantly solves the problem of finding the smoothest curve through a finite data set, polynomial interpolation suffers from a series of drawbacks. First of all, although the Lagrangian basis provides a way to quickly compute the interpolator's coefficients, if the number of data points changes, then the set of basis vectors needs to be redetermined from scratch. From an engineering point of view it is obvious that we would like a more universal ``interpolation machine'' that does not depend on the size of the input. Additionally, and this is a problem common to all types of polynomial fitting, the method becomes numerically ill-conditioned as the polynomial degree grows large. Finally, the method produces a continuous-time interpolator that does not admit a simple frequency-domain representation, since the resulting function diverges outside the $[-N, N]$ interval.
We will now introduce an interpolation method that solves all of these problems, but the price to pay will be the partial relaxation of the smoothness constraint.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\polyName{-2} & \polyName{-1} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-2}{a}
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-1}{b}%
\end{dspPlot}
\end{tabular}
\polyName{0} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{0}{c}
\end{dspPlot}
\begin{tabular}{cc}
\polyName{1} & \polyName{2} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{1}{d}%
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{2}{e}
\end{dspPlot}
\hspace{1em}
\end{tabular}
\caption{Weighted Lagrange polynomials for the interpolation of a $5$-point signal.}\label{fig:is:weightedLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\begin{dspClip}%
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \lpa \ta mul}%
\dspFunc[linewidth=0.4pt]{x \lpb \tb mul}%
\dspFunc[linewidth=0.4pt]{x \lpc \tc mul}%
\dspFunc[linewidth=0.4pt]{x \lpd \td mul}%
\dspFunc[linewidth=0.4pt]{x \lpe \te mul}%
\plotTaps
\dspFunc[linewidth=2pt,linecolor=ColorCT,xmin=-2,xmax=2]{\lagInterp}
\end{dspClip}
\end{dspPlot}
\caption{Final Lagrange interpolation.}\label{fig:is:finalLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Kernel-Based Local Interpolation}\label{sec:is:locInterp}%
\index{interpolation!local}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% cubic interpolator
\def\cubFunA{abs dup 2 exp -2.25 mul exch 3 exp 1.25 mul 1 add add }
\def\cubFunB{abs dup dup 8 mul exch 2 exp -5 mul add exch 3 exp -4 add add -0.75 mul }
\def\cubFun#1{
#1 sub
dup dup dup dup %
-2 lt {pop pop pop pop 0} {
2 gt {pop pop pop 0 } {
-1 lt {pop \cubFunB } {
1 gt {\cubFunB }
{\cubFunA}
ifelse }%
ifelse }%
ifelse }%
ifelse }
\def\triInterp{%
x \cubFun{-2} \ta mul %
x \cubFun{-1} \tb mul %
x \cubFun{0} \tc mul %
x \cubFun{1} \td mul %
x \cubFun{2} \te mul %
add add add add}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The idea behind kernel-based interpolation is to build a continuous-time signal by summing together scaled copies of the same compact-support prototype function (the ``kernel''). Consider for instance a device that builds a continuous-time signal simply by holding its output constant, over each interpolation interval, at the value of the nearest discrete-time data point:
\begin{equation*}
x_0(t) = x[\, \lfloor t + 0.5 \rfloor \,], \qquad -N \leq t \leq N;
\end{equation*}
the result of this interpolation method, known as Zero-Order Hold (ZOH), is shown in Figure~\ref{fig:is:zoh} for the same 5-point signal as in Figure~\ref{fig:is:diffint}-(a). The above expression can be rewritten as
\begin{equation} \label{eq:is:zoh}
x_0(t) = \sum_{n = -N}^{N}x[n]\rect(t - n),
\end{equation}
which shows a remarkable similarity to~(\ref{eq:is:lagInterp}). The differences, however, are extremely significant:
\begin{itemize}
\item the continuous-time term in the sum (i.e.\ the rect function) no longer depends on the length of the original data set;
\item the continuous-time term depends on the interpolation index $n$ only via a simple time shift;
\item the value of the output at any instant $t$ is dependent only on one input data point: the interpolation is \textit{local} rather than global and can be performed in real time as the discrete-time data flows in.
\end{itemize}
The ZOH creates a continuous-time signal by stitching together delayed and scaled versions of a rectangular kernel, independently of the amount of data to interpolate: the resulting ``interpolation machine'' is both conceptually and practically a very simple device. The price to pay for this simplicity, however, is a ``jumpy'' output signal (that is, the interpolator is discontinuous).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linewidth=2pt,linecolor=ColorCT](-2,\ta)(-1.5,\ta)(-1.5,\tb)(-.5,\tb)(-.5,\tc)(0.5,\tc)(0.5,\td)(1.5,\td)(1.5,\te)(2,\te)
\end{dspPlot}
\caption{Zero-Order Hold interpolation.}\label{fig:is:zoh}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Generally speaking, local kernel-based interpolators produce a signal via what we could call a ``mixed-domain convolution''
\begin{equation} \label{eq:is:interplocal}
x(t) = \sum_{n = -N}^{N}x[n]i(t - n);
\end{equation}
where the kernel $i(t)$ is a compact-support function fulfilling the properties
\begin{equation*}
\left\{\begin{array}{ll}
i(0) & = 1 \\
i(t) & = 0 \quad \mbox{for $t$ a nonzero integer}
\end{array}\right.
\end{equation*}
(compare this to~(\ref{eq:is:lagInterpProp})). By crafting a good kernel we can achieve an interpolated signal with better continuity properties and we will now explore a few options; note that, for an arbitrary interpolation interval $T_s$, the interpolation is obtained simply by using a scaled kernel:
\begin{equation} \label{eq:is:interpkernel}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, i\left(\frac{t - nT_s}{T_s}\right).
\end{equation}
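Before looking at specific kernels, here is a minimal sketch of such an ``interpolation machine'' implementing~(\ref{eq:is:interpkernel}); the code is our own Python/NumPy illustration, shown with the zero- and first-order kernels discussed next:
\begin{verbatim}
import numpy as np

def i0(t):                            # zero-order kernel (rect)
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

def i1(t):                            # first-order kernel (triangle)
    return np.maximum(1.0 - np.abs(t), 0.0)

def kernel_interp(x, n, t, Ts, kernel):
    # x_c(t) = sum over n of x[n] * i((t - n Ts) / Ts)
    t = np.asarray(t, dtype=float)
    return sum(xn * kernel((t - k * Ts) / Ts) for xn, k in zip(x, n))

n = np.arange(-2, 3)                  # indices of the 5-tap signal
x = [1.0, 2.0, 0.7, 2.4, -1.5]
t = np.linspace(-2.5, 2.5, 501)
x_zoh = kernel_interp(x, n, t, 1.0, i0)   # staircase output
x_foh = kernel_interp(x, n, t, 1.0, i1)   # piecewise-linear output
\end{verbatim}
Note how changing the interpolation method only requires swapping the kernel function: this is the ``universal machine'' that polynomial interpolation could not provide.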
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{ccc}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \dspRect{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \dspTri{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \cubFun{0}}
\end{dspPlot}
\\
(a) & (b) & (c)
\end{tabular}
\caption{Interpolation kernels of order zero, one and three.}\label{fig:is:interpKernels}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Zero-Order Hold.}\index{zero-order hold}%
\index{interpolation!local!zero-order}
The kernel used in the ZOH is, as we have seen, the simple rect function shown in Figure~\ref{fig:is:interpKernels}-(a):
\begin{equation}
i_0(t) = \begin{cases}
1 & \mbox{ if } |t| < 1/2 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is discontinuous (note that, with the definition of $C^M$ given above, it does not even belong to $C^0$) and so is the interpolated signal. Only one value of the input is used to produce the current value of the output.
\itempar{First-Order Interpolation.}\index{first-order interpolation}%
\index{interpolation!local!first-order}
The kernel used in first-order interpolation is the triangle function shown in Figure~\ref{fig:is:interpKernels}-(b):
\begin{equation}
i_1(t) = \begin{cases}
1- |t| & \mbox{ if } |t| < 1 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is continuous but not everywhere differentiable, so it belongs to $C^0$; the same holds for the output signal. An example of first-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:firstOrdInterp}. The interpolation strategy is akin to ``connecting the dots'' between discrete-time samples so that, at any time $t$, \textit{two} discrete-time samples contribute to the output value. If implemented in real time, first-order interpolation would therefore introduce a delay of one interpolation period.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \dspTri{-2}{1} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{-1}{1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{0}{1} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{1}{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{2}{1} \te mul}%
\plotTaps
\psline[linewidth=2pt,linecolor=ColorCT](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
\caption{First-Order interpolation; shown in light gray are the scaled and translated copies of the first-order kernel.}\label{fig:is:firstOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Although the first-order interpolator is only marginally more complex than the ZOH, the continuity of its output implies a much better quality of the result, as can be appreciated visually for familiar signals in Figure~\ref{fig:is:interpCompare}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=ColorDT]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=ColorCT]%
{x x 0 gt {-0.5} {0.5} ifelse sub truncate 0.5235 mul RadtoDeg sin}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=ColorDT]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=ColorCT]%
{x x floor sub dup 1 exch sub %
x floor 0.5235 mul RadtoDeg sin mul exch %
x ceiling 0.5235 mul RadtoDeg sin mul add}
\end{dspPlot}
\caption{Comparison between the zero- and first-order interpolation of a discrete-time sinusoidal signal.}\label{fig:is:interpCompare}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Third-Order Interpolation.}\index{third-order interpolation}%
\index{interpolation!local!third-order}
In third-order interpolation\footnote{Even-order kernels are not used in practice since they are not symmetric around a central interpolation instant.} a commonly used kernel is the cubic piecewise function shown in Figure~\ref{fig:is:interpKernels}-(c):
\begin{equation}
i_3(t) = \begin{cases}
1.25|t|^3 - 2.25|t|^2 + 1 & \mbox{for $|t| \leq 1$} \\
-0.75(|t|^3 - 5|t|^2 + 8|t| - 4) & \mbox{for $1 < |t| \leq 2$} \\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
The kernel belongs to $C^1$: its first derivative is continuous everywhere, while the second derivative jumps at $|t| = 2$; the same smoothness holds for the output signal. The kernel's support has length four and, as a consequence, each value of the continuous-time signal depends on four neighboring discrete-time samples. An example of third-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:thirdOrdInterp}.
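In the same illustrative style, the cubic kernel can be coded as follows (our own sketch; the function can be passed directly to the kernel_interp mock-up shown earlier):
\begin{verbatim}
import numpy as np

def i3(t):
    # piecewise-cubic kernel, support of length 4
    a = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(a)
    inner = a <= 1
    outer = (a > 1) & (a <= 2)
    out[inner] = 1.25 * a[inner]**3 - 2.25 * a[inner]**2 + 1
    out[outer] = -0.75 * (a[outer]**3 - 5 * a[outer]**2
                          + 8 * a[outer] - 4)
    return out
\end{verbatim}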
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \cubFun{-2} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{-1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{0} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{2} \te mul}%
\plotTaps
\dspFunc[xmin=-2,xmax=2,linewidth=2pt,linecolor=ColorCT]{\triInterp}
\end{dspPlot}
\caption{Third-Order interpolation; shown in light gray are the scaled and translated copies of the third-order kernel.}\label{fig:is:thirdOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Higher-Order Interpolation.}
Local interpolation schemes can be extended to higher-order kernels: in general, a kernel of order $2N+1$ is composed of polynomials of degree $2N+1$ over a support of length $2N+2$. The resulting interpolation gains more continuous derivatives as the order grows but it never becomes infinitely differentiable, and the residual lack of smoothness in the higher-order derivatives ultimately causes undesired high-frequency content in the output, as we will see momentarily.
\subsection{Ideal Sinc Interpolation}
\label{sec:is:sincinterp}
The tradeoff, so far, seems clear:
\begin{itemize}
\item global interpolation schemes such as polynomial interpolation provide maximum smoothness at the price of a complex procedure that depends on the length of the data set;
\item local kernel-based methods are simple to implement but ultimately lack in smoothness (which leads to unwanted spectral artifacts).
\end{itemize}
However, a small miracle is in store: in the limit, as the size of the data set grows to infinity, polynomial interpolation and kernel-based interpolation become one and the same, leading to what is perhaps one of the most remarkable mathematical results in discrete-time signal processing.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% L_0^N(t) = \prod_{k=1}^{N} [t^2/k^2 - 1]
\def\sincApp#1{%
1
1 1 #1 {%
dup mul
x dup mul
exch div
1 exch sub
mul
} for}
%
\center
\begin{dspPlot}[sidegap=0,xout=true]{-15,15}{-.3,1.1}
\dspFunc[linecolor=lightgray]{\sincApp{200}}
\dspFunc[linecolor=ColorCT,linewidth=0.8pt]{x \dspSinc{0}{1}}
\end{dspPlot}
\caption{A portion of the sinc function vs the Lagrange basis vector $L_{0}^{(200)}(t)$ (light gray).}\label{fig:is:Lag2Sinc}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Consider again the expression for the Lagrange polynomial (\ref{eq:is:lagPoly}) for $T_s = 1$ and let's try to determine what happens when the size of the data set, and therefore the degree of the Lagrange polynomials, goes to infinity:
\[
\lim_{N\rightarrow\infty} L^{(N)}_n(t) = \prod_{ {\scriptstyle k = -\infty \atop \scriptstyle k\neq n }}^{\infty} \frac{t - k}{n - k} = \ ?
\]
By using the change of variable $m = n-k$ we have
\begin{align}
\lim_{N\rightarrow\infty} L^{(N)}_n(t) &= \prod_{{ \scriptstyle m = -\infty \atop \scriptstyle m\neq 0}}^{\infty} \frac{t - n+m}{m} \nonumber \\
& = \prod_{{\scriptstyle m = -\infty \atop\scriptstyle m\neq 0 }}^{\infty}\left(1+\frac{t - n}{m}\right) \nonumber \\
& = \prod_{m=1}^{\infty}\left(1-\left(\frac{t - n}{m}\right)^{\!2}\right) \label{eq:is:lag2sinc}
\end{align}
We can now use Euler's infinite product expansion for the sine function (a somewhat esoteric formula whose proof is in the appendix),
\begin{equation}
\sin (\pi\tau) = (\pi\tau) \prod_{k = 1}^{\infty}\left(1 - \frac{\tau^2}{k^2}\right),
\end{equation}
with which we finally obtain
\begin{equation}
\lim_{N\rightarrow\infty} L^{(N)}_n(t) = \sinc\left(t - n \right)
\end{equation}
Remarkably, as $N$ goes to infinity, all Lagrange polynomials converge to simple time shifts of the same function and the function in question is the sinc; a graphical illustration of the convergence process for $L^{(N)}_0(t)$ is shown in Figure~\ref{fig:is:Lag2Sinc}. This means that the maximally smooth interpolation for an infinite-length sequence can be obtained using a kernel-based method where the kernel is the sinc function:
\begin{equation} \label{eq:is:sincInterpOne}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc(t-n)
\end{equation}
or, for an arbitrary $T_s$:
\begin{equation}\label{eq:is:sincInterp}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc\left(\frac{t - nT_s}{T_s}\right)
\end{equation}
Figure~\ref{fig:is:sincInterp} shows how the scaled and time-shifted copies of the sinc kernel come together to interpolate a discrete-time sequence; note how the interpolation property of the sinc that we introduced in Section~\ref{sec:is:SincProp} guarantees that $x_c(nT_s) = x[n]$.
Obviously, since the sinc is a two-sided function with infinite support, sinc interpolation falls into the same abstract category as ideal filters: a paradigm that we can try to approximate but that cannot be exactly implemented in practice. Still, this mathematical construction is a fundamental cornerstone of the translation machinery between discrete and continuous time, and it will be the starting point for the development of sampling theory.
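In practice one can only sum a finite number of terms; a truncated version of~(\ref{eq:is:sincInterp}) is nonetheless easy to sketch numerically, together with a spot check of the convergence in~(\ref{eq:is:lag2sinc}). As before, this is our own Python/NumPy illustration:
\begin{verbatim}
import numpy as np

def sinc_interp(x, n, t, Ts):
    # truncated sinc interpolation: exact only for infinite data;
    # np.sinc(u) = sin(pi u) / (pi u), the convention used in the text
    t = np.asarray(t, dtype=float)
    return sum(xn * np.sinc((t - k * Ts) / Ts) for xn, k in zip(x, n))

n = np.arange(-32, 33)
x = np.cos(2 * np.pi * 0.05 * n)      # a smooth test sequence
t = np.linspace(-4, 4, 801)
x_c = sinc_interp(x, n, t, 1.0)       # x_c(k) = x[k] at the integers

def lag0(t, N):
    # L_0^(N)(t) as a finite product, as in the limit computation
    return np.prod([1 - (t / m)**2 for m in range(1, N + 1)], axis=0)

print(lag0(0.5, 200), np.sinc(0.5))   # ~0.637 vs 2/pi = 0.6366...
\end{verbatim}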
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% smooth DT signal
\def\smooth{ 10 div 360 mul dup sin exch dup 2 mul sin exch dup mul 360 div sin add add 2.2 div 0.2 add }
\def\sinterpolant#1{ \dspSinc{#1}{1} #1 \smooth mul }
\center
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\dspSignal[linecolor=blue!70]{x \smooth}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=ColorDT]{x \smooth}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{0}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{1}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{2}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{3}}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=ColorDT]{x \smooth}
\multido{\n=-4+1}{12}{%
\dspFunc[linewidth=0.5pt,linecolor=lightgray]{x \dspSinc{\n}{1} \n \smooth mul}}
\dspFunc[linecolor=ColorCT]{x \smooth}
\end{dspPlot}
\caption{Sinc interpolation: discrete-time signal (top); first four scaled copies of the sinc kernel for $n = 0,1,2,3$ (center); final interpolation (bottom).}\label{fig:is:sincInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Spectral Properties}
For an arbitrary interval $T_s$, the Fourier transform of a kernel-based interpolant can be easily computed from~(\ref{eq:is:interpkernel}) as
\begin{align}
X_c(f) &= \int_{-\infty}^{\infty} x_c(t)\,e^{-j2\pi f t} \, dt \nonumber \\
&= \sum_{n = -\infty}^{\infty} x[n] \int_{-\infty}^{\infty} i\left(\frac{t - nT_s}{T_s}\right) e^{-j2\pi f t} \, dt \nonumber \\
&= T_s \, \sum_{n = -\infty}^{\infty} x[n] e^{-j 2\pi n fT_s} I(f T_s) \nonumber \\
&= T_s \, I(f T_s) \, X(e^{j2\pi f T_s}) \\
&= \frac{1}{F_s}\, I\left(\frac{f}{F_s}\right) \, X(e^{j2\pi f/F_s}) . \label{eq:is:interpSpec}
\end{align}
The resulting spectrum is therefore the product of three factors:
\begin{enumerate}
\item a scaling factor $T_s = 1/F_s$;
\item the term $X(e^{j2\pi f/F_s})$, which is the DTFT of the original sequence, rescaled so that $\omega = \pi$ in the discrete-time spectrum is mapped to $f = F_s/2$ in the continuous-time spectrum; note that, since the DTFT is $2\pi$-periodic, $X(e^{j2\pi f/F_s})$ is an $F_s$-periodic function;
\item $I(f/F_s)$, the Fourier transform of the rescaled kernel, which acts as a continuous-time filter.
\end{enumerate}
\itempar{Ideal sinc interpolation.} The ideal interpolation kernel $\sinc(t/T_s)$ is $F_s$-bandlimited, with $F_s = 1/T_s$ and therefore the sinc interpolation of a discrete-time sequence is itself bandlimited:
\begin{align}
X_c(f) &= \frac{1}{F_s} \, \rect\left(\frac{f}{F_s}\right) \, X(e^{j2\pi f/F_s}) \nonumber \\
& = \begin{cases}
(1/F_s) \, X(e^{j2\pi f/F_s}) & \mbox{ for $ |f| \leq F_s/2$} \\
0 & \mbox{ otherwise}
\end{cases}. \label{eq:is:sincInterpSpec}
\end{align}
In other words, when using sinc interpolation, the continuous-time spectrum is just a scaled and stretched version of the DTFT between $-\pi$ and $\pi$. The duration of the interpolation interval $T_s$ is inversely proportional to the resulting bandwidth of the interpolated signal: intuitively, a slow interpolation ($T_s$ large) results in a spectrum concentrated around the low frequencies while, conversely, a fast interpolation ($T_s$ small) results in a spread-out spectrum (more high frequencies are present); Figure~\ref{fig:is:interpSpeed} illustrates the concept graphically.\footnote{
%
To find a simple everyday (yesterday's?) analogy, think of a $45$~rpm vinyl\index{vinyl} record played at either $33$~rpm (slow interpolation) or at $78$~rpm (fast interpolation) and recall the acoustic effect on the sound.}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\plotSpec#1#2{\dspFunc[#2]{x \dspPorkpie{0}{#1} #1 div}}
\def\plotCurrent#1#2#3#4#5{%
\FPupn\o{#2 1 / 2 trunc}
\plotSpec{\o}{linecolor=ColorCF}
\dspCustomTicks[axis=x]{0 0 {-\o} #5 {\o} #4}
\dspText(-2,1.8){$T_s=$#3}
\dspCustomTicks[axis=y]{1 $T_0$ 2 $2T_0$}}
%
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{1}{1}{$T_0$}{$F_s/2=1/(2T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{2}{2}{$2T_0$}{$F_s/2=1/(4T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{3}{0.5}{$T_0/2$}{$F_s/2=1/T_0$}{$-F_s/2$}
\end{dspPlot}
\caption{Spectral characteristics of a sinc-interpolated signal: DTFT of the original discrete-time signal $x[n]$ (top panel); Fourier transform of its sinc interpolation with interpolation interval $T_s = T_0$ (second panel); spectrum of a sinc interpolation using a slower interpolation rate ($T_s = 2T_0$): the spectrum concentrates around the low frequencies (third panel); spectrum of a sinc interpolation using a faster interpolation rate ($T_s = T_0/2$): the spectrum spreads out to higher frequencies (bottom panel). In all cases the spectral shape and the total energy remain the same.}\label{fig:is:interpSpeed}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Practical interpolation.} In practical interpolation schemes, where the kernel is a compact-support function, the spectrum of the interpolation depends on the Fourier transform of the kernel and exhibits unwanted artifacts with respect to the ideal interpolation scheme. Figure~\ref{fig:is:zohfreq}, for instance, shows the factors in~(\ref{eq:is:interpSpec}) and their product when the kernel is the zero-order hold. We can make the following remarks (a quick numerical check is sketched after the list):
\begin{itemize}
\item the interpolator acts as a non-ideal lowpass filter that largely preserves the baseband copy of the periodic spectrum but merely attenuates, rather than eliminates, the remaining copies;
\item the interpolator distorts the baseband component since its response is not flat;
\item the spectral width of the baseband component is again inversely proportional to the interpolation period.
\end{itemize}
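These remarks can be made quantitative with a couple of lines of code; the following sketch (ours, using NumPy's normalized sinc and the first-order kernel transform derived at the end of this section) measures the residual leakage in the middle of the first spectral repetition:
\begin{verbatim}
import numpy as np

f = np.linspace(-3, 3, 601)           # frequency in units of Fs
I0 = np.abs(np.sinc(f))               # ZOH response |sinc(f / Fs)|
I1 = np.sinc(f) ** 2                  # first-order response

# leakage halfway into the first spectral copy, i.e. at f = 1.5 Fs
print(np.abs(np.sinc(1.5)))           # ZOH:         ~0.212
print(np.sinc(1.5) ** 2)              # first-order: ~0.045
\end{verbatim}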
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspTri{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{2} abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} abs mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{Zero-order hold interpolation in the frequency domain.}\label{fig:is:zohfreq}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X(e^{j2\pi f/F_s})I(f/F_s)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{2} dup mul abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} dup mul mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{First-order interpolation in the frequency domain.}\label{fig:is:fohfreq}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Higher-order interpolators ameliorate the situation; a first-order interpolator, for instance, whose kernel can be expressed as
\[
\mathbf{i}_1 = \mathbf{i}_0 \ast \mathbf{i}_0
\]
has a sharper characteristic since
\begin{equation}
I_1(f) = \sinc^2(f).
\end{equation}
As a result, as shown in Figure~\ref{fig:is:fohfreq}, the smoothness property that made sense in the time domain also leads to a continuous-time spectrum that more closely mirrors its discrete-time counterpart by rejecting most of the out-of-band energy. It is clear that we would like the kernel to approximate as well as possible an ideal lowpass filter with cutoff $F_s/2$, which leads once again to the ideal sinc interpolator!
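The convolution identity $\mathbf{i}_1 = \mathbf{i}_0 \ast \mathbf{i}_0$ is also easy to verify numerically; a last quick sketch of ours:
\begin{verbatim}
import numpy as np

dt = 1e-3                             # fine discretization step
t = np.arange(-0.5, 0.5, dt)
rect = np.ones_like(t)                # samples of i0 over its support

tri = np.convolve(rect, rect) * dt    # Riemann-sum approximation
print(tri.max())                      # ~1.0, the peak of the triangle
print(len(tri) * dt)                  # ~2.0, the support of i1
\end{verbatim}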
