\section{Interpolation}
\label{sec:is:interp}\index{interpolation|(}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% finite-length signal, 5 taps
\def\ta{1 } \def\tb{2 } \def\tc{0.7 } \def\td{2.4 } \def\te{-1.5 }
%% the tap plotting string
\def\taps{-2 \ta -1 \tb 0 \tc 1 \td 2 \te}
\def\plotTaps{\dspTaps[linecolor=dspDTColor]{\taps}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Interpolation is the procedure by which we convert a discrete-time sequence $x[n]$ into a continuous-time function $x(t)$. At the core of the interpolation procedure, as we have mentioned, is the association of a {\em physical}\/ time duration $T_s$ to the intervals between the samples in the discrete-time sequence; next, we need to ``fill the gaps'' between sample values. To develop the intuition, let's consider a simple example where we want to interpolate a finite-length signal as in Figure~\ref{fig:is:diffint}-(a); Figures~\ref{fig:is:diffint}-(b), (c) and (d) show three possible interpolations of the given dataset to a continuous-time signal $x(t)$ using $T_s = 1$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=dspCTColor](-2, \ta)(0,0)(2, \te)
\end{dspPlot}
\\ (a) & (b) \\
\begin{dspPlot}[sidegap=0,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linecolor=dspCTColor](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=dspCTColor](-2, \ta)(-1.75, 2)(-1.5, -1.5)(-1.25, 1.5)%
(-1, \tb)(-0.75, 0.5)(-0.5, .6)(-0.25, .5)%
(0, \tc)(0.25, -1)(0.5, 1)(0.75, -1)%
(1, \td)(1.25, 1.5)(1.5, 2.5)(1.75, 1.8)(2, \te)
\end{dspPlot}
\\ (c) & (d) \\
\end{tabular}
\caption{(a): original 5-tap discrete-time signal; (b), (c), (d): different possible interpolations of (a) to continuous time.}\label{fig:is:diffint}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Although we haven't yet formalized the requirements for the interpolation process, it is rather intuitive that none of the three proposed interpolations is entirely satisfactory:
\begin{itemize}
\item[(b)] the interpolation does not ``go through'' the data points; indeed, it seems intuitive that the interpolation should respect the original dataset;
\item[(c)] while the function goes through the data points, it does not look smooth; in general, we are right to be suspicious of non-smooth curves, since no natural phenomenon can create discontinuous jumps in amplitude (achievable only via infinite speed) or curvature (achievable only via infinite acceleration);
\item[(d)] while the function goes through the data points and is smooth, it seems to ``wander around'' too much, as if interpolating a more complicated dataset; indeed, we want our interpolation strategy to be ``minimalistic'' in using the information contained in the discrete-time signal.
\end{itemize}
In the following sections we will try to make our requirements more precise and see how we can arrive at a universal interpolation scheme that is both intuitively and mathematically sound. To formalize the problem, and without loss of generality, we will assume in the following that $T_s = 1$ and that the finite-length discrete-time signal is defined for $n = -N, \ldots, 0, \ldots, N$ for an odd length of $M = 2N+1$ points.
\subsection{Polynomial Interpolation}
\label{sec:is:lagrange}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Lagrange polynomials
\def\lpa{%
dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul }
\def\lpb{%
dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul }
\def\lpc{%
dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul }
\def\lpd{%
dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul }
\def\lpe{%
dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul }
\def\LagPoly#1#2#3{%
\dspFunc[linecolor=#1]{x \csname lp#2\endcsname}%
%\dspText(0,1.3){\color{#1} $L_{#3}^{2}(t)$}
}
\def\lagInterp{%
x \lpa \ta mul %
x \lpb \tb mul %
x \lpc \tc mul %
x \lpd \td mul %
x \lpe \te mul %
add add add add}
\def\interpolant#1#2{%
\begin{dspClip}
\plotTaps%
\dspTaps[linecolor=dspColorFour]{#1 \csname t#2\endcsname}%
\dspFunc[linewidth=0.5pt,linecolor=dspColorFour]{x \csname lp#2\endcsname \csname t#2\endcsname mul}%
\end{dspClip}}
\def\polyName#1{{$x[#1]L^{(2)}_{#1}(t)$}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Given a discrete-time signal $x[n]$, once we have chosen an interpolation interval $T_s$, the first requirement for the interpolating function $x(t)$ is that its values at multiples of $T_s$ be equal to the corresponding points of the discrete-time sequence, i.e.
\[
x(t) \Bigr|_{t = nT_s} = x[n];
\]
with our choice of $T_s = 1$ the above condition becomes simply $x(n) = x[n]$. The second requirement is that we want the interpolating function to be smooth and, mathematically, the smoothness of a function increases with the number of its continuous derivatives. For maximal smoothness, therefore, we require $x(t)$ to be in $C^{\infty}$, where $C^{M}$ is the class of functions for which all derivatives up to order $M$ exist and are continuous.
One maximally differentiable curve through a set of $2N+1$ data points is the unique polynomial interpolator of degree $2N$
\begin{equation} \label{eq:is:straightPI}
p(t) = a_0 + a_1 t + a_2 t^2 + \ldots + a_{2N} t^{2N}
\end{equation}
which, like all polynomials, belongs to $C^{\infty}$. Computation of the interpolator's $2N+1$ coefficients is a classic algebraic problem, first solved in the 17th century by Newton. Numerically, one way to arrive at the solution is to work out the system of $2N+1$ equations
\begin{equation*}
\bigl\{\,p(n) = x[n]\,\bigr\}_{n = -N,\ldots,0,\ldots,N}
\end{equation*}
which can be carried out algorithmically using Pascal's triangle, for instance. A more interesting approach is to consider the space of finite-degree polynomials over the interval $[-N, N]$ and write $p(t)$ as a linear combination of basis vectors for the space. This is not the first time we encounter the need for a polynomial basis: in Chapter~\ref{ch:vs} we briefly encountered the Legendre polynomial basis in the context of approximation, while the interpolator in~(\ref{eq:is:straightPI}) is clearly expressed as a linear combination of the $2N+1$ {\em monomial}\/ basis vectors $\{1, t, t^2, \ldots, t^{2N}\}$. For the task at hand, however, a more appropriate basis is the set of {\em Lagrange polynomials}.
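To make the direct approach concrete, the system of equations above can be set up and solved numerically in the monomial basis. The following sketch (names are illustrative; NumPy is assumed) uses the five-tap signal of Figure~\ref{fig:is:diffint}-(a):

```python
import numpy as np

# five-tap signal of the running example, x[n] for n = -2, ..., 2
x = np.array([1.0, 2.0, 0.7, 2.4, -1.5])
N = 2
n = np.arange(-N, N + 1)

# Vandermonde system: a_0 + a_1 n + ... + a_{2N} n^{2N} = x[n]
V = np.vander(n, 2 * N + 1, increasing=True)
a = np.linalg.solve(V, x)

# evaluate the interpolating polynomial p(t)
# (np.polyval expects coefficients from the highest degree down)
p = lambda t: np.polyval(a[::-1], t)
```

For growing $N$ the Vandermonde matrix $V$ becomes severely ill-conditioned, which is one of the practical drawbacks of polynomial interpolation discussed later in this section.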
The Lagrange polynomial basis for the interval $I=[-N, N]$ is the family of $2N+1$ polynomials
\begin{equation} \label{eq:is:lagPoly}
L^{(N)}_n(t) = \prod_{{\scriptstyle k = -N \atop \scriptstyle k\neq n}}^{N}\frac{t - k}{n - k}, \qquad n = -N, \ldots, N
\end{equation}
each of which has degree $2N$. As an example, the family of five polynomials for $N=2$ is shown in Figure~\ref{fig:is:LagBasis}. A key property of the Lagrangian basis vectors is that, for integer values of their argument, we have
\begin{equation} \label{eq:is:lagInterpProp}
L^{(N)}_n(m) = \left\{
\begin{array}{ll}
1 & \mbox{if $n=m$}\\
0 & \mbox{if $n \neq m$}
\end{array}\right. \qquad\qquad -N \leq n,m \leq N.
\end{equation}
Using the above result, it is easy to verify that the polynomial interpolator for a discrete-time signal of length $2N+1$ is simply
\begin{equation} \label{eq:is:lagInterp}
x(t) = \sum_{n = -N}^{N} x[n]L^{(N)}_n(t)
\end{equation}
that is, a linear combination of the Lagrangian basis vectors in which the scalar coefficients are the discrete-time samples themselves. Indeed, since a polynomial of degree $M$ is uniquely determined by its values at $M+1$ distinct points, and since~(\ref{eq:is:lagInterp}) is equal to $x[n]$ at the $2N+1$ integer values of $t$, the interpolator is the unique solution to the problem.
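The basis expansion in~(\ref{eq:is:lagInterp}) translates directly into code; here is a sketch (assuming NumPy; function names are illustrative):

```python
import numpy as np

def lagrange_basis(n, N, t):
    # L_n^{(N)}(t) = product over k != n of (t - k)/(n - k), k = -N, ..., N
    L = np.ones_like(t, dtype=float)
    for k in range(-N, N + 1):
        if k != n:
            L = L * (t - k) / (n - k)
    return L

def lagrange_interp(x, t):
    # x holds the samples x[-N], ..., x[N]; the samples themselves
    # are the scalar coefficients of the basis expansion
    N = (len(x) - 1) // 2
    return sum(x[n + N] * lagrange_basis(n, N, t) for n in range(-N, N + 1))
```

Evaluating the interpolator at the integers returns the original samples, which is exactly the interpolation property of the Lagrangian basis.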
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0,yticks=0.5]{-2.5,2.5}{-0.8,1.6}
\begin{dspClip}%
\LagPoly{green}{a}{-2}%
\LagPoly{blue}{b}{-1}%
\LagPoly{orange}{c}{0}%
\LagPoly{black}{d}{1}%
\LagPoly{cyan}{e}{2}%
\end{dspClip}
\end{dspPlot}
\caption{Lagrange interpolation polynomials $L_n^{(2)}(t)$ for $n=-2,\ldots,2$. Note that $L_n^{(N)}(t)$ is zero for $t$ integer except for $t = n$, where it is $1$.}\label{fig:is:LagBasis}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As an example, Figure~\ref{fig:is:weightedLag} shows how the five polynomials $L_n^{(2)}(t)$ beautifully come together to interpolate a $5$-point discrete-time signal into the smooth curve in Figure~\ref{fig:is:finalLag}. A fundamental characteristic of polynomial interpolation is that it is {\it global}: for all values of $t$, \textit{all} the original discrete-time data points contribute to the instantaneous value of $x(t)$.
Although it elegantly solves the problem of finding the smoothest curve through a finite data set, polynomial interpolation suffers from two serious drawbacks. Firstly, although the Lagrangian basis provides a way to compute the necessary coefficients quickly, the set of basis vectors is completely different for each input data length. From an engineering point of view, we would clearly prefer a universal interpolation machine to one that must be redesigned every time the problem size changes. The second difficulty, which is common to all types of polynomial fitting, is that the method becomes numerically ill-conditioned as the polynomial degree grows large.
A way out of these problems is to relax, at least partially, the smoothness constraint; this leads to the simpler interpolation schemes that we will look at in the next section.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\polyName{-2} & \polyName{-1} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-2}{a}
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-1}{b}%
\end{dspPlot}
\end{tabular}
\polyName{0} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{0}{c}
\end{dspPlot}
\begin{tabular}{cc}
\polyName{1} & \polyName{2} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{1}{d}%
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{2}{e}
\end{dspPlot}
\hspace{1em}
\end{tabular}
\caption{Weighted Lagrange polynomials for the interpolation of a $5$-point signal.}\label{fig:is:weightedLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\begin{dspClip}%
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \lpa \ta mul}%
\dspFunc[linewidth=0.4pt]{x \lpb \tb mul}%
\dspFunc[linewidth=0.4pt]{x \lpc \tc mul}%
\dspFunc[linewidth=0.4pt]{x \lpd \td mul}%
\dspFunc[linewidth=0.4pt]{x \lpe \te mul}%
\plotTaps
\dspFunc[linewidth=2pt,linecolor=dspCTColor,xmin=-2,xmax=2]{\lagInterp}
\end{dspClip}
\end{dspPlot}
\caption{Final Lagrange interpolation.}\label{fig:is:finalLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Local (Kernel-Based) Interpolation}\label{sec:is:locInterp}%
\index{interpolation!local}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% cubic interpolator
\def\cubFunA{abs dup 2 exp -2.25 mul exch 3 exp 1.25 mul 1 add add }
\def\cubFunB{abs dup dup 8 mul exch 2 exp -5 mul add exch 3 exp -4 add add -0.75 mul }
\def\cubFun#1{
#1 sub
dup dup dup dup %
-2 lt {pop pop pop pop 0} {
2 gt {pop pop pop 0 } {
-1 lt {pop \cubFunB } {
1 gt {\cubFunB }
{\cubFunA}
ifelse }%
ifelse }%
ifelse }%
ifelse }
\def\triInterp{%
x \cubFun{-2} \ta mul %
x \cubFun{-1} \tb mul %
x \cubFun{0} \tc mul %
x \cubFun{1} \td mul %
x \cubFun{2} \te mul %
add add add add}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
If we relax the smoothness constraint, we can design much simpler interpolation schemes. Consider for instance a device that builds a continuous-time signal simply by holding its output constant, between interpolation instants, at the value of the nearest discrete-time input data point:
\begin{equation*}
x_0(t) = x[\, \lfloor t + 0.5 \rfloor \,], \qquad -N \leq t \leq N;
\end{equation*}
the result of this interpolation method, known as Zero-Order Hold (ZOH), is shown in Figure~\ref{fig:is:zoh} for the same 5-point signal as in Figure~\ref{fig:is:diffint}-(a). The above expression can be rewritten as
\begin{equation} \label{eq:is:zoh}
x_0(t) = \sum_{n = -N}^{N}x[n]\rect(t - n),
\end{equation}
which shows a remarkable similarity to~(\ref{eq:is:lagInterp}). The differences, however, are highly significant:
\begin{itemize}
\item the continuous-time term in the sum (i.e.\ the rect function) no longer depends on the length of the original data set;
\item the dependence of the continuous-time term on the interpolation interval (that is, on $n$) is only via a simple time shift;
\item the value of the output at any instant $t$ depends on one input data point only: the interpolation is \textit{local} rather than global.
\end{itemize}
In summary, the ZOH creates a continuous-time signal by stitching together delayed and scaled versions of the \textit{same} interpolation function, also known as the interpolation \textit{kernel}, independently of the amount of data to interpolate. The resulting ``interpolation machine'' is both conceptually and practically a much simpler device. Of course the price to pay for this simplicity is a discontinuous signal, but we can easily extend the local interpolation paradigm to improve the result.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linewidth=2pt,linecolor=dspCTColor](-2,\ta)(-1.5,\ta)(-1.5,\tb)(-.5,\tb)(-.5,\tc)(0.5,\tc)(0.5,\td)(1.5,\td)(1.5,\te)(2,\te)
\end{dspPlot}
\caption{Zero-Order Hold interpolation.}\label{fig:is:zoh}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Generally speaking, local kernel-based interpolators produce a signal via a ``mixed-domain convolution''
\begin{equation} \label{eq:is:interplocal}
x(t) = \sum_{n = -N}^{N}x[n]I(t - n);
\end{equation}
where the kernel $I(t)$ is a compact-support function fulfilling the fundamental properties
\begin{equation*}
\left\{\begin{array}{ll}
I(0) & = 1 \\
I(t) & = 0 \quad \mbox{for $t$ a nonzero integer}
\end{array}\right.
\end{equation*}
(compare this to~(\ref{eq:is:lagInterpProp})). By crafting a good kernel we can achieve better properties in the interpolated signal.
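In code, the whole family of local interpolators shares a single ``machine'' in which only the kernel changes; here is a sketch assuming NumPy (the kernels $I_0$ and $I_1$ below are the ones described in the following paragraphs):

```python
import numpy as np

def kernel_interp(x, t, I):
    # mixed-domain convolution: x(t) = sum_n x[n] I(t - n)
    N = (len(x) - 1) // 2
    return sum(x[n + N] * I(t - n) for n in range(-N, N + 1))

# zero-order kernel (rect) and first-order kernel (triangle)
I0 = lambda t: np.where(np.abs(t) < 0.5, 1.0, 0.0)
I1 = lambda t: np.where(np.abs(t) < 1.0, 1.0 - np.abs(t), 0.0)
```

Both kernels satisfy the fundamental properties above, so the interpolation goes through the data points; with $I_1$, for instance, the value at $t = 0.5$ is the average of $x[0]$ and $x[1]$.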
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{ccc}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=dspCTColor]{x \dspRect{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=dspCTColor]{x \dspTri{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=dspCTColor]{x \cubFun{0}}
\end{dspPlot}
\\
(a) & (b) & (c)
\end{tabular}
\caption{Interpolation kernels of order zero, one and three.}\label{fig:is:interpKernels}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Zero-Order Hold.}\index{zero-order hold}%
\index{interpolation!local!zero-order}
The kernel used in the ZOH is, as we have seen, the simple rect function shown in Figure~\ref{fig:is:interpKernels}-(a):
\begin{equation}
I_0(t) = \begin{cases}
1 & \mbox{ if } |t| < 1/2 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is discontinuous, and so is the interpolated signal. Only one value of the input is used to produce the current value of the output.
\itempar{First-Order Interpolation.}\index{first-order interpolation}%
\index{interpolation!local!first-order}
The kernel used in first-order interpolation is the triangle function shown in Figure~\ref{fig:is:interpKernels}-(b):
\begin{equation}
I_1(t) = \begin{cases}
1- |t| & \mbox{ if } |t| < 1 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is continuous (it belongs to $C^0$) but its first derivative is not, and the same holds for the output signal. An example of first-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:firstOrdInterp}. The interpolation strategy is akin to ``connecting the dots'' between discrete-time samples so that, at any time $t$, \textit{two} discrete-time samples contribute to the output value.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \dspTri{-2}{1} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{-1}{1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{0}{1} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{1}{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{2}{1} \te mul}%
\plotTaps
\psline[linewidth=2pt,linecolor=dspCTColor](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
\caption{First-Order interpolation; shown in light gray are the scaled and translated copies of the first-order kernel.}\label{fig:is:firstOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Although the first-order interpolator is only marginally more complex than the ZOH, the continuity of the output implies a much better-quality result, as can be appreciated visually for familiar signals in Figure~\ref{fig:is:interpCompare}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=dspDTColor]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=dspCTColor]%
{x x 0 gt {-0.5} {0.5} ifelse sub truncate 0.5235 mul RadtoDeg sin}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=dspDTColor]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=dspCTColor]%
{x x floor sub dup 1 exch sub %
x floor 0.5235 mul RadtoDeg sin mul exch %
x ceiling 0.5235 mul RadtoDeg sin mul add}
\end{dspPlot}
\caption{Comparison between the zero- and first-order interpolation of a discrete-time sinusoidal signal.}\label{fig:is:interpCompare}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Third-Order Interpolation.}\index{third-order interpolation}%
\index{interpolation!local!third-order}
In third-order interpolation\footnote{Even-order kernels are not used in practice since they are not symmetric around a central interpolation instant.} a commonly used kernel is the cubic piecewise function shown in Figure~\ref{fig:is:interpKernels}-(c):
\begin{equation}
I_3(t) = \begin{cases}
1.25|t|^3 - 2.25|t|^2 + 1 & \mbox{for $|t| \leq 1$} \\
-0.75(|t|^3 - 5|t|^2 + 8|t| - 4) & \mbox{for $1 < |t| \leq 2$} \\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
The kernel belongs to $C^1$: it is continuous together with its first derivative, although the second derivative is discontinuous at $|t| = 2$; the same smoothness properties hold for the output signal. The kernel's support is four and, as a consequence, the values of the continuous-time signal depend on four neighboring discrete-time samples. An example of third-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:thirdOrdInterp}.
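The piecewise definition of $I_3$ can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

def I3(t):
    # third-order kernel: cubic polynomial pieces, support of length 4
    a = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(a)
    inner = a <= 1
    outer = (a > 1) & (a <= 2)
    out[inner] = 1.25 * a[inner]**3 - 2.25 * a[inner]**2 + 1
    out[outer] = -0.75 * (a[outer]**3 - 5 * a[outer]**2 + 8 * a[outer] - 4)
    return out
```

The kernel takes the value one at the origin and vanishes at all other integers, as required of an interpolation kernel, and the left and right derivatives match at the junction $|t| = 1$.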
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \cubFun{-2} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{-1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{0} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{2} \te mul}%
\plotTaps
\dspFunc[xmin=-2,xmax=2,linewidth=2pt,linecolor=dspCTColor]{\triInterp}
\end{dspPlot}
\caption{Third-Order interpolation; shown in light gray are the scaled and translated copies of the third-order kernel.}\label{fig:is:thirdOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Higher-Order Interpolation.}
Local interpolation schemes can be extended to higher-order kernels; in general, a kernel of order $2N+1$ is composed of polynomial pieces of degree $2N+1$ over a support of length $2N+2$. The smoothness of the resulting interpolation grows with the kernel's order but always remains finite, and the residual lack of continuity in the higher-order derivatives will ultimately cause undesired high-frequency content in the output, as we will see later.
The tradeoff, however, seems rather clear:
\begin{itemize}
\item global interpolation schemes such as polynomial interpolation provide maximum smoothness at the price of a complex procedure whose design depends on the length of the data set;
\item local kernel-based methods are simple to implement but ultimately lack smoothness.
\end{itemize}
Interestingly, however, as the size of the data set increases to infinity, the two methods become one and the same! This is perhaps one of the most remarkable mathematical results in discrete-time signal processing, and one that will lead naturally to the sampling theorem.
\subsection{Sinc Interpolation}
\label{sec:is:sincinterp}
Consider again the expression for the Lagrange polynomial (\ref{eq:is:lagPoly}) and let's try to determine what happens when the size of the data set to interpolate (and therefore the degree of the Lagrange basis vectors) goes to infinity:
\[
\lim_{N\rightarrow\infty} L^{(N)}_n(t) = \prod_{ {\scriptstyle k = -\infty \atop \scriptstyle k\neq n }}^{\infty} \frac{t - k}{n - k} = \ ?
\]
By using the change of variable $m = n-k$ we have
\begin{align}
\lim_{N\rightarrow\infty} L^{(N)}_n(t) &= \prod_{{ \scriptstyle m = -\infty \atop \scriptstyle m\neq 0}}^{\infty} \frac{t - n+m}{m} \nonumber \\
& = \prod_{{\scriptstyle m = -\infty \atop\scriptstyle m\neq 0 }}^{\infty}\left(1+\frac{t - n}{m}\right) \nonumber \\
& = \prod_{m=1}^{\infty}\left(1-\left(\frac{t - n}{m}\right)^{\!2}\right) \label{eq:is:lag2sinc}
\end{align}
We can now use Euler's infinite product expansion for the sine function (a somewhat esoteric formula whose derivation is in the appendix)
\begin{equation}
\sin (\pi\tau) = (\pi\tau) \prod_{k = 1}^{\infty}\left(1 - \frac{\tau^2}{k^2}\right)
\end{equation}
with which we finally obtain
\begin{equation}
\lim_{N\rightarrow\infty} L^{(N)}_n(t) = \sinc\left(t - n \right)
\end{equation}
Remarkably, as $N$ goes to infinity, all Lagrange polynomials converge to simple time shifts of the same function and the function in question is the sinc; a graphical illustration of the convergence process for $L^{(N)}_0(t)$ is shown in Figure~\ref{fig:is:Lag2Sinc}. In other words, the maximally smooth interpolation for an infinite-length sequence can be obtained using a kernel-based method where the kernel is the sinc function:
\begin{equation} \label{eq:is:sincInterpOne}
x(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc(t-n)
\end{equation}
Obviously, since the sinc is a two-sided, infinite support function, sinc interpolation falls in the same unattainable category as ideal filters: something that we can try to approximate but that we will never be able to exactly implement in practice. Nevertheless, the mathematical construction represents a fundamental cornerstone of the translation machinery between discrete and continuous time and will represent the starting point for the development of sampling theory.
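The convergence of the truncated product to the sinc can also be verified numerically; a sketch assuming NumPy, whose `np.sinc` implements exactly the normalized sinc $\sin(\pi t)/(\pi t)$ used in the text:

```python
import numpy as np

def lagrange_limit(t, M):
    # truncated version of the infinite product in (eq. lag2sinc), with n = 0
    m = np.arange(1, M + 1)
    return np.prod(1.0 - (t / m)**2)
```

For $M$ in the thousands the truncated product already matches the sinc to a few decimal places, and it vanishes exactly at the nonzero integers covered by the product, mirroring the interpolation property of the Lagrangian basis.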
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% L_0^N(t) = \prod_{k=1}^{N} [t^2/k^2 - 1]
\def\sincApp#1{%
1
1 1 #1 {%
dup mul
x dup mul
exch div
1 exch sub
mul
} for}
%
\center
\begin{dspPlot}[sidegap=0,xout=true]{-15,15}{-.3,1.1}
\dspFunc[linecolor=lightgray]{\sincApp{200}}
\dspFunc[linecolor=dspCTColor,linewidth=0.8pt]{x \dspSinc{0}{1}}
\end{dspPlot}
\caption{A portion of the sinc function vs the Lagrange basis vector $L_{0}^{(200)}(t)$ (light gray).}\label{fig:is:Lag2Sinc}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As a final note, when the interpolation interval $T_s$ is not equal to one, the sinc interpolation formula takes on the more general form:
\begin{equation}\label{eq:is:sincInterp}
x(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc\left(\frac{t - nT_s}{T_s}\right)
\end{equation}
Figure~\ref{fig:is:sincInterp} shows how the scaled and time-shifted copies of the sinc kernel come together to interpolate a discrete-time sequence (in the figure, $T_s = 1$). Note how the interpolation property of the sinc that we introduced in Section~\ref{sec:is:SincProp} guarantees that $x(nT_s) = x[n]$.
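Restricted to a finite set of samples, the interpolation formula can be sketched as follows (assuming NumPy; even for a truncated sum the equality $x(nT_s) = x[n]$ holds exactly, since all the discarded sinc terms vanish at the sample instants):

```python
import numpy as np

def sinc_interp(x, n, t, Ts=1.0):
    # truncated sinc interpolation: x(t) = sum_n x[n] sinc((t - n Ts)/Ts)
    t = np.asarray(t, dtype=float)
    # one scaled, shifted copy of the kernel per sample
    # (np.sinc is the normalized sinc, sin(pi t)/(pi t))
    kernels = np.sinc((t[None, :] - n[:, None] * Ts) / Ts)
    return np.sum(x[:, None] * kernels, axis=0)
```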
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% smooth DT signal
\def\smooth{ 10 div 360 mul dup sin exch dup 2 mul sin exch dup mul 360 div sin add add 2.2 div 0.2 add }
\def\sinterpolant#1{ \dspSinc{#1}{1} #1 \smooth mul }
\center
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\dspSignal[linecolor=blue!70]{x \smooth}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=dspDTColor]{x \smooth}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{0}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{1}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{2}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{3}}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=dspDTColor]{x \smooth}
\multido{\n=-4+1}{12}{%
\dspFunc[linewidth=0.5pt,linecolor=lightgray]{x \dspSinc{\n}{1} \n \smooth mul}}
\dspFunc[linecolor=dspCTColor]{x \smooth}
\end{dspPlot}
\caption{Sinc interpolation: discrete-time signal (top); first four scaled copies of the sinc kernel for $n = 0,1,2,3$ (center); final interpolation (bottom).}\label{fig:is:sincInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Spectral Properties of the Sinc Interpolation.}
The interpolation kernel $\sinc(t/T_s)$ is $F_s$-bandlimited, with $F_s = 1/T_s$. Therefore the sinc interpolation of a discrete-time sequence, being a linear combination of bandlimited functions, is itself bandlimited. If the DTFT $X(e^{j\omega})$ of the discrete-time sequence is well defined, the spectrum of the interpolated signal can be obtained as follows:
\begin{align*}
X(f) & = \int_{-\infty}^{\infty} \sum_{n = -\infty}^{\infty}x[n]\, \sinc\left(\frac{t - nT_s}{T_s}\right) e^{-j2\pi f t} \, dt \\
& = \sum_{n = -\infty}^{\infty} x[n] \int_{-\infty}^{\infty} \sinc\left(\frac{t - nT_s}{T_s}\right) e^{-j2\pi f t} \, dt
\end{align*}
The integral is the Fourier transform of a scaled and shifted sinc; using~(\ref{eq:is:protoBLfreq}) we have
\begin{align}
X(f) &= \sum_{n = -\infty}^{\infty} x[n] \left(\frac{1}{F_s}\right) \rect\left(\frac{f}{F_s}\right) e^{-j2\pi(f/F_s)n} \nonumber \\
&= \left(\frac{1}{F_s}\right)\rect\left(\frac{f}{F_s}\right) \sum_{n = -\infty}^{\infty} x[n] \, e^{-j2\pi(f/F_s)n} \nonumber \\
& = \begin{cases}
(1/F_s) \, X(e^{j2\pi f/F_s}) & \mbox{ for $ |f| \leq F_s/2$} \\
0 & \mbox{ otherwise}
\end{cases} \label{eq:is:interpSpec}
\end{align}
In other words, the continuous-time spectrum is just a scaled and stretched version of the DTFT of the discrete-time sequence between $-\pi$ and $\pi$. The duration of the interpolation interval $T_s$ is inversely proportional to the resulting bandwidth of the interpolated signal. Intuitively, a slow interpolation ($T_s$ large) results in a spectrum concentrated around the low frequencies; conversely, a fast interpolation ($T_s$ small) results in a spread-out spectrum (more high frequencies are present).\footnote{To find a simple everyday (yesterday's?) analogy, think of a $45$ rpm vinyl\index{vinyl} record played at either $33$~rpm (slow interpolation) or at $78$ rpm (fast interpolation) and remember the acoustic effect on the sounds.} Figure~\ref{fig:is:interpSpeed} illustrates the concept graphically.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\plotSpec#1#2{\dspFunc[#2]{x \dspPorkpie{0}{#1} #1 div}}
\def\plotCurrent#1#2#3#4#5{%
\FPupn\o{#2 1 / 2 trunc}
\plotSpec{\o}{linecolor=dspCTFColor}
\dspCustomTicks[axis=x]{0 0 {-\o} #5 {\o} #4}
\dspText(-2,1.8){$T_s=$#3}
\dspCustomTicks[axis=y]{1 $T_0$ 2 $2T_0$}}
%
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=dspDTFColor]{x \dspPorkpie{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{1}{1}{$T_0$}{$F_s/2=1/(2T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{2}{2}{$2T_0$}{$F_s/2=1/(4T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{3}{0.5}{$T_0/2$}{$F_s/2=1/T_0$}{$-F_s/2$}
\end{dspPlot}
\caption{Spectral characteristics of a sinc-interpolated signal: DTFT of the original discrete-time signal $x[n]$ (top panel); Fourier transform of its sinc interpolation with interpolation interval $T_s = T_0$ (second panel); spectrum of a sinc interpolation using a slower interpolation rate ($T_s = 2T_0$), with the spectrum concentrating around the low frequencies (third panel); spectrum of a sinc interpolation using a faster interpolation rate ($T_s = T_0/2$), with the spectrum spreading out to the higher frequencies (bottom panel). In all cases the spectral shape and the total energy remain the same.}\label{fig:is:interpSpeed}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
