diff --git a/writing/sp4comm.multipub/110-multirate/10-mr-multirate.tex b/writing/sp4comm.multipub/110-multirate/10-mr-multirate.tex
index 76f4e20..148bfe0 100644
--- a/writing/sp4comm.multipub/110-multirate/10-mr-multirate.tex
+++ b/writing/sp4comm.multipub/110-multirate/10-mr-multirate.tex
@@ -1,1105 +1,1102 @@
\chapter{Multirate Signal Processing}
\label{ch:mr}
As we have seen, a continuous-time signal can be converted to a discrete-time sequence via sampling. By changing the value of the sampling rate we can obtain an arbitrary number of discrete-time signals from the same original continuous-time source; the number of samples per second will increase or decrease linearly with the sampling rate and, according to whether the conditions of the sampling theorem are satisfied or not, the resulting discrete-time sequences will be an exact representation of the original signal or will be affected by aliasing. Multirate theory studies the relationship between such sequences; or, in other words, it addresses the question of whether we can transform a sampled representation into another with a different underlying sampling frequency purely from within discrete time.
The primary application of multirate theory is digital sampling rate conversion, an operation that becomes necessary when the original sampling frequency of a stored signal is incompatible with the working rate of the available D/A device. But we will see that multirate also plays an important role in a variety of applied signal processing algorithms; in particular, the technique of \textit{oversampling} is often used in situations where an increase of the data rate improves the performance of the analog elements in the processing chain. Since speeding up digital computations is in general much cheaper than using high-performance analog circuits, oversampling is commonly used in consumer-level devices. Other applications of multirate range from efficient filter design, to spectral shaping for communication systems, to data compression standards. And, finally, from a more theoretical point of view, multirate is a fundamental ingredient of advanced analysis techniques which go under the name of time-frequency decompositions.
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{Upsampling}
+\label{sec:mr:up}\index{upsampling|(}
+
+Upsampling a sequence by an integer factor $N$ produces a higher-rate sequence by creating $N-1$ new samples for every sample in the original signal. In its basic form, an upsampler simply inserts $N-1$ zeros after every input sample, as shown in Figure~\ref{fig:mr:up}. If we denote by $\mathcal{U}_N$ the upsampling operator, we have
+\begin{equation} \label{eq:mr:up}
+ (\mathcal{U}_N \mathbf{x})[n] = x_{N\uparrow}[n] = \
+ \begin{cases}
+ x[k] & \mbox{ for $n = kN,\;\; k \in \mathbb{Z}$} \\
+ 0 & \mbox{ otherwise}
+ \end{cases} % \right.
+\end{equation}
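+
+In code, the upsampler of~(\ref{eq:mr:up}) reduces to a single strided assignment. The following NumPy fragment is a minimal sketch (the function name and signal layout are our own choices):
+\begin{verbatim}
+import numpy as np
+
+def upsample(x, N):
+    """Insert N-1 zeros after every input sample."""
+    x = np.asarray(x, dtype=float)
+    x_up = np.zeros(N * len(x))
+    x_up[::N] = x        # x_up[kN] = x[k]; all other samples stay zero
+    return x_up
+
+print(upsample([1, 2, 3], 3))    # [1. 0. 0. 2. 0. 0. 3. 0. 0.]
+\end{verbatim}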
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[b]
+ \center
+ \begin{dspBlocks}{1.3}{0.6}
+ $x[n]$ & \BDupsmp{N} & $x_{N\uparrow}[n]$
+ \psset{arrows=->,linewidth=1pt}
+ \ncline{1,1}{1,2} \ncline{1,2}{1,3}
+ \end{dspBlocks}
+ \caption{Symbol for the upsampling operator.}\label{fig:mr:upSym}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+No information is lost via upsampling and the original signal can be easily recovered by discarding the extra samples (as we will see in the next section, upsampling by $N$ followed by downsampling by $N$ returns the original signal). The spectrum of an upsampled signal can be easily obtained by first considering its \ztrans\:
+\begin{align}
+ X_{N\uparrow}(z) &= \sum_{n=-\infty}^{\infty} x_{N\uparrow}[n]z^{-n} \nonumber \\
+ &= \sum_{k=-\infty}^{\infty} x[k]z^{-kN} = X(z^N)
+\end{align}
+and therefore
+\begin{equation} %\label{}
+ X_{N\uparrow}(e^{j\omega}) = X(e^{j\omega N}).
+\end{equation}
+In the frequency domain, therefore, upsampling is simply a contraction of the frequency axis by a factor of $N$. The inherent $2\pi$-periodicity of the spectrum must be taken into account so that, in this contraction, the periodic repetitions of the base spectrum are ``pulled into'' the $[-\pi, \pi]$ interval. The effects of upsampling on a signal's spectrum are shown graphically for a simple signal in Figures~\ref{fig:mr:upA} and~\ref{fig:mr:upB}; in both figures the top panel shows the original spectrum $X(e^{j\omega})$ over $[-\pi, \pi]$; the middle panel shows the same spectrum over a wider range to make the $2\pi$-periodicity explicit; the last panel shows the upsampled spectrum $X_{N\uparrow}(e^{j\omega})$, highlighting the rescaling of the $[-N\pi, N\pi]$ interval.\index{upsampling|)} As a rule of thumb, upsampling ``brings in'' exactly $N$ copies of the original spectrum over the $[-\pi, \pi]$ interval, although, in the case of an even upsampling factor, one copy ends up split between the negative and positive frequencies.
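+
+The frequency-axis contraction has an exact finite-length counterpart that is easy to verify numerically: the length-$NL$ DFT of an upsampled signal is the length-$L$ DFT of the original signal repeated $N$ times. A quick check, assuming the \texttt{upsample} routine sketched above:
+\begin{verbatim}
+import numpy as np
+
+L, N = 16, 4
+x = np.random.randn(L)
+X_up = np.fft.fft(upsample(x, N))   # DFT of the zero-stuffed signal
+X_rep = np.tile(np.fft.fft(x), N)   # N periodic copies of the original DFT
+print(np.allclose(X_up, X_rep))     # True: the copies are "pulled in"
+\end{verbatim}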
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[t]
+%
+\def\dtSig{5 sub 0.3 mul RadtoDeg %
+ dup cos 0.3 mul exch %
+ 0.5 mul sin 0.6 mul %
+ add 0.5 add }
+%
+ \center
+ \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=1]{-8,8}{-0.5,1.1}
+ \dspSignal[linecolor=ColorDT]{x \dtSig}
+ \end{dspPlot}
+ \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
+ \dspSignal[linecolor=ColorDT!65]{x dup cvi 4 mod 0 eq {4 div \dtSig} {pop 0} ifelse}
+ \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
+ \end{dspPlot}
+ \caption{Upsampling by $4$ in the time domain: original signal (top panel); upsampled signal, where 3 zeros have been appended to each original sample (bottom panel). Note the difference in time indexes between top and bottom panels. }\label{fig:mr:up}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[!htb]
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
+ \end{dspPlot}
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-5,5}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
+ \pnode(-2,0){leftA}\pnode( 2,0){rightA}
+ \end{dspPlot}
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\uparrow}(e^{j\omega})$}]{-1,1}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x 2 mul \dspPeriodize \dspTri{0}{1}}
+ \pnode(-1,1.2){leftB}\pnode( 1,1.2){rightB}
+ \ncline[linewidth=2pt,linecolor=blue!40]{leftA}{rightA}
+ \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{leftA}{leftB}
+ \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{rightA}{rightB}
+ \end{dspPlot}
+ \caption{Upsampling by $2$.}\label{fig:mr:upA}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[!htb]
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
+ \end{dspPlot}
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-5,5}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
+ \pnode(-3,0){leftA}\pnode( 3,0){rightA}
+ \end{dspPlot}
+ \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{3\uparrow}(e^{j\omega})$}]{-1,1}{0,1.2}
+ \dspFunc[linecolor=ColorDF]{x 3 mul \dspPeriodize \dspTri{0}{1}}
+ \pnode(-1,1.2){leftB}\pnode( 1,1.2){rightB}
+ \ncline[linewidth=2pt,linecolor=blue!40]{leftA}{rightA}
+ \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{leftA}{leftB}
+ \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{rightA}{rightB}
+ \end{dspPlot}
+ \caption{Upsampling by $3$.}\label{fig:mr:upB}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+\subsection{Upsampling and Interpolation}
+\label{sec:mr:upfilt}
+The upsampled signal in~(\ref{eq:mr:up}), with its $N-1$ zeros between original samples, exhibits two problems. In the time domain, the upsampled signal looks ``jumpy'' because of the periodic zero runs. This is the discrete-time equivalent to a lack of smoothness, since the signal keeps dropping back to zero, and it is apparent in the bottom panel of Figure~\ref{fig:mr:up}. In the frequency domain, simple upsampling has ``brought in'' copies of the original spectrum in the $[-\pi, \pi]$ interval, creating spurious high frequency content. These two issues are actually one and the same and they can be solved, as we will see, by using an appropriate filter.
+
+The problem of filling the gaps between nonzero samples in an upsampled sequence is, in many ways, similar to the discrete- to continuous-time interpolation problem of Section~\ref{sec:is:interp}, except that now we are operating entirely in discrete time. If we adapt the interpolation schemes that we have already studied, we have the following cases\index{interpolation!in multirate}:
+
+\itempar{Zero-Order Hold.}\index{zero-order hold!(discrete-time)}
+In this discrete-time interpolation scheme, also known as \emph{piecewise-constant interpolation}, after upsampling by $N$, we use a filter with impulse response
+\begin{equation}\label{eq:mr:zoh}
+ h_0 [n] =
+ \begin{cases}
+ 1 & \ n = 0,1, \ldots, N-1 \\
+ 0 & \mbox{ otherwise}
+ \end{cases}
+\end{equation}
+which is shown in Figure~\ref{fig:mr:zoh}-(a). This interpolation filter simply repeats each original input sample $N$ times, giving a staircase approximation as shown in the top panel of Figure~\ref{fig:mr:upinterp}.
+
+\itempar{First-Order Hold.}\index{first-order hold!(discrete-time)}
+In this discrete-time interpolation scheme, we obtain a piecewise-linear interpolation after upsampling by $N$ by using the filter
+\begin{equation}\label{eq:mr:foi}
+ h_1 [n] =
+ \begin{cases}
+ \displaystyle 1 - \frac{|n|}{N} & \ |n| < N \\
+ 0 & \mbox{ otherwise}
+ \end{cases}
+\end{equation}
+The impulse response is the familiar triangular function\footnote{
+ Once again, let us note that the triangle is the convolution of two rects, $h_1[n] = (1/N) \bigl(h_0[\cdot] \ast h_0[\cdot] \bigr)[n]$.}
+shown in Figure~\ref{fig:mr:zoh}-(b). An example of the resulting interpolation is shown in the bottom panel of Figure~\ref{fig:mr:upinterp}.
+
+\itempar{Sinc Interpolation.}\index{sinc interpolation!(discrete-time)}
+We know that, in continuous time, the smoothest interpolation is obtained by using a sinc function. This holds in discrete time as well, and the resulting interpolation filter is the discrete-time sinc:
+\begin{equation}
+ h[n] = \sinc\left(\frac{n}{N}\right)
+\end{equation}
+Note that the sinc above is equal to one for $n = 0$ and is equal to zero at all nonzero integer multiples of $N$, $n = kN$; this fulfills the interpolation condition, that is, after interpolation, the output equals the input at multiples of $N$: $(h \ast x_{N\uparrow})[kN] = x_{N\uparrow}[kN] = x[k]$.
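+
+All three kernels are easy to generate and apply numerically; in the following sketch (our own illustration, with the ideal sinc truncated to an arbitrary finite length) each filter is applied to a zero-stuffed signal:
+\begin{verbatim}
+import numpy as np
+
+def h_zoh(N):                  # zero-order hold, eq. (mr:zoh)
+    return np.ones(N)
+
+def h_foh(N):                  # first-order hold, eq. (mr:foi)
+    n = np.arange(-N + 1, N)
+    return 1 - np.abs(n) / N
+
+def h_sinc(N, half_len=16):    # truncated discrete-time sinc
+    n = np.arange(-half_len * N, half_len * N + 1)
+    return np.sinc(n / N)
+
+N = 4
+x = np.random.randn(32)
+x_up = np.zeros(N * len(x))
+x_up[::N] = x                  # upsampling by zero insertion
+y0 = np.convolve(x_up, h_zoh(N), mode='same')    # staircase
+y1 = np.convolve(x_up, h_foh(N), mode='same')    # piecewise linear
+y2 = np.convolve(x_up, h_sinc(N), mode='same')   # near-ideal interpolation
+\end{verbatim}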
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[!t]
+ \center
+ \begin{tabular}{cc}
+ \begin{dspPlot}[sidegap=0.5,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-4,4}{0,1.1}
+ \dspSignal[linecolor=ColorDT]{x 0 ge {x 4 lt {1} {0} ifelse} {0} ifelse}
+ \end{dspPlot}
+ &
+ \begin{dspPlot}[sidegap=0.5,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-4,4}{0,1.1}
+ \dspSignal[linecolor=ColorDT]{x abs 4 lt {1 x abs 4 div sub} { 0} ifelse}
+ \end{dspPlot}
+ \\ (a) & (b)
+ \end{tabular}
+ \caption{Discrete-time zero-order (a) and first-order (b) interpolators for $N = 4$.}\label{fig:mr:zoh}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[!b]
+%
+\def\dtSig{5 sub 0.3 mul RadtoDeg %
+ dup cos 0.3 mul exch %
+ 0.5 mul sin 0.6 mul %
+ add 0.5 add }
+\def\upFact{4 }
+%
+ \center
+ \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
+ \dspSignal[linecolor=ColorDT!65]{x 4 div floor \dtSig}
+ \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
+ \end{dspPlot}
+ \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
+ \dspSignal[linecolor=ColorDT!65]{%
+ x \upFact div floor % last upsampling point n
+ dup \upFact mul x exch sub \upFact div % intra-interval factor p
+ 2 copy % n p n p
+ 1 exch sub exch \dtSig mul % n p (1-p)f(n)
+ 3 1 roll exch % A p n
+ 1 add \dtSig % A p f(n+1)
+ mul add}
+ \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
+ \end{dspPlot}
+ \caption{Upsampling by $4$ followed by interpolation: zero-order hold (top panel); linear interpolation (bottom panel).}\label{fig:mr:upinterp}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+The three impulse responses above are all lowpass filters; in particular, the sinc interpolator is an ideal lowpass with cutoff frequency $\omega_c = \pi/N$ while the others are approximations of the same. As a consequence, the effect of the interpolator in the frequency domain is the removal of the $N-1$ spectral copies ``drawn in'' the $[-\pi, \pi]$ interval by the upsampler. An example is shown in
+Figure~\ref{fig:mr:upfilt} where the signal in Figure~\ref{fig:mr:up} is filtered by an ideal lowpass filter with cutoff $\pi/4$.
+
+It turns out that the smoothest possible interpolation in the time domain corresponds to the perfect removal of the spectral repetitions in the frequency domain. Interpolating with a zero-order or a first-order kernel, by contrast, only attenuates the replicas instead of removing them completely, as we can readily see by considering their frequency responses. Since we are in discrete time, however, there are no difficulties associated with the design of a digital lowpass filter which closely approximates an ideal filter, so that alternative kernel designs (such as optimal FIRs) can be employed.
+This is in contrast to discrete- to continuous-time interpolators, which are analog designs; that is why sampling rate changes are much more attractive in the discrete-time domain.
+
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Downsampling}\index{downsampling|(}
\label{sec:mr:down}
-Downsampling by~$N$\index{downsampling|mie}\index{decimation} (also called subsampling or decimation\footnote{%
- Technically, decimation means $9$ out of $10$ and refers to a roman custom of killing every $10$th soldier of a defeated army\ldots})
+Downsampling by~$N$\index{downsampling|mie}\index{decimation} (also called subsampling or decimation)
produces a lower-rate sequence by keeping only one out of $N$ samples in the original signal. If we denote by $\mathcal{S}_N$ the downsampling operator\footnote{
- We use the letter $\mathcal{S}$ rather than $\mathcal{D}$ since the latter indicates the delay operator.},
+ We use the letter $\mathcal{S}$ rather than $\mathcal{D}$ since the latter is used for the delay operator.},
we can write
\begin{equation}
- \mathcal{S}_N \bigl\{x[\cdot] \bigr\}\,[n] = x_{N\downarrow}[n] = x[nN]
+ (\mathcal{S}_N \mathbf{x})[n] = x_{N\downarrow}[n] = x[nN]
\end{equation}
Downsampling, as shown in Figure~\ref{fig:mr:down}, effectively {\em discards} $N-1$ out of $N$ samples and therefore a loss of information may be incurred. Indeed, in terms of the underlying sampling frequency, decimation produces the signal that would have been obtained by sampling $N$ times more slowly. The potential problems with this data reduction will take the form of aliasing, as we will see shortly.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[h]
+\begin{figure}[b]
\center
\begin{dspBlocks}{1.3}{0.6}
$x[n]$ & \BDdwsmp{N} & $x_{N\downarrow}[n]$
\psset{arrows=->,linewidth=1pt}
\ncline{1,1}{1,2} \ncline{1,2}{1,3}
\end{dspBlocks}
  \caption{Symbol for the downsampling operator.}\label{fig:mr:downSym}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Properties of the Downsampling Operator}
-The downsampling operator is linear but it is not time invariant. This can be easily verified with an example; given the signal $x[n]$ we have
+The downsampling operator is linear but it is not time invariant. This can be easily verified with an example; if we subsample by~2 the signal $\mathbf{x}$ we obtain the sequence
\[
- \mathcal{S}_2 \bigl\{x[\cdot] \bigr\}\,[n] = x[2n] = \ldots,\, x[-4],\, x[-2], \,x[0], \,x[2], \,x[4],\, \ldots
+ \mathcal{S}_2 \mathbf{x} = \ldots,\, x[-4],\, x[-2], \,x[0], \,x[2], \,x[4],\, \ldots
\]
-but
+On the other hand, if we delay $\mathbf{x}$ by one before subsampling we obtain
\[
- \mathcal{S}_2 \bigl\{\mathcal{D}\bigl\{ x[\cdot] \bigr\}\bigr\}\,[n] = x[2n+1] = \ldots ,\, x[-5],\, x[-3], \, x[1], \, x[3], \, x[5],\, \ldots
+ \mathcal{S}_2 (\mathcal{D} \mathbf{x}) = \ldots ,\, x[-5],\, x[-3],\, x[-1], \, x[1], \, x[3],\, \ldots
\]
so that clearly
\[
- \mathcal{S}_2 \bigl\{\mathcal{D}\bigl\{ x[\cdot] \bigr\}\bigr\}\,[n] \neq \mathcal{D} \bigl\{\mathcal{S}_2\bigl\{ x[\cdot] \bigr\}\bigr\}\,[n].
+ \mathcal{D} (\mathcal{S}_2 \mathbf{x}) \neq \mathcal{S}_2 (\mathcal{D} \mathbf{x}).
\]
-The downsampling operator is sometimes classified as {\em periodically time-varying\/} since
+The downsampling operator is {\em periodically time-varying\/} since, for all $k \in \mathbb{N}$,
\begin{equation}
- \mathcal{S}_N \bigl\{\mathcal{D}_{kN} \bigl\{ x[\cdot] \bigr\}\bigr\}\,[n] = \mathcal{S}_N \bigl\{x[\cdot] \bigr\}\,[n - k].
+ \mathcal{S}_N (\mathcal{D}_{kN} \mathbf{x}) = \mathcal{D}_{k}(\mathcal{S}_N \mathbf{x}).
\end{equation}
One of the consequences of the lack of time-invariance is that complex sinusoids are not eigensequences for the downsampling operator; for instance, if $x[n] = e^{j \pi n} = (-1)^n$, we have
\begin{equation}
\label{eq:mr:12}
x_{2\downarrow}[n] = x[2n] = e^{j 2\pi n} = 1;
\end{equation}
in other words, the highest digital frequency has been mapped to the lowest frequency. This looks very much like aliasing, as we will now explore in detail.
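
Both the lack of commutativity and the frequency folding in~(\ref{eq:mr:12}) can be observed in a two-line numerical experiment; here is a minimal sketch, with array slicing playing the role of $\mathcal{S}_2$:
\begin{verbatim}
import numpy as np

x = (-1.0) ** np.arange(16)   # x[n] = e^{j pi n}, the highest frequency
print(x[::2])                 # S_2 x: all ones, frequency pi maps to 0
print(x[1::2])                # S_2 (D x): all -1, not a delayed S_2 x
\end{verbatim}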
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
%
\def\dtSig{5 sub 0.3 mul RadtoDeg %
dup cos 0.3 mul exch %
0.5 mul sin 0.6 mul %
add 0.5 add }
%
\center
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
\dspSignal[linecolor=ColorDT]{x \dtSig}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
\dspSignal[linecolor=ColorDT!20]{x \dtSig}
\dspSignal[linecolor=ColorDT,plotpoints=17]{x \dtSig}
%\dspSignal{x 4 mul \dtSig}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=1]{-8,8}{-0.5,1.1}
\dspSignal[linecolor=ColorDT]{x 4 mul \dtSig}
\end{dspPlot}
\caption{Downsampling by $4$ in the time domain: original signal (top panel); samples ``killed'' by the downsampler (middle panel); downsampled signal (bottom panel). Note the difference in time indexes between top and bottom panels.}\label{fig:mr:down}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Frequency-Domain Representation}
Let's consider first the \ztrans\ of a downsampled signal:
\begin{equation}
X_{N\downarrow}(z) = \sum_{n=-\infty}^{\infty} x[nN] z^{-n}.
\end{equation}
Our goal is to find a relationship between $X_{N\downarrow}(z)$ and $X(z)$; to do so let's introduce an ``auxiliary'' \ztrans\ $A(z)$ defined as
\begin{equation}
A(z) = \sum_{n=-\infty}^{\infty} x[nN] z^{-nN};
\end{equation}
clearly
\begin{equation} %\label{}
X_{N\downarrow} (z) = A(z^{1/N})
\end{equation}
so that now we need to express $A(z)$ in terms of $X(z)$, which is easier (although not entirely straightforward). The first step is to write
\begin{equation}
A(z) = \sum_{n=-\infty}^{\infty} \xi_N[n] x[n] z^{-n}
\end{equation}
-where $\xi_N[n]$ is a ``killer sequence'' defined as
+where $\boldsymbol{\xi}_N$ is a ``kill sequence'' defined as
\[
\xi_N[n] =
\begin{cases}
1 & \mbox{ for $n$ multiple of $N$} \\
0 & \mbox{ otherwise}
\end{cases}
\]
-and shown in Figure~\label{fig:mr:xi}; indeed, $\xi_N[n]$ will ``kill off'' all the terms in the sum for which the index is not a multiple of $N$. The sequence $\xi_N[n]$ is $N$-periodic and one way to represent it is as the inverse DFS of size $N$ of a vector of all ones. In other words,
-\begin{align}
- \xi_N[n] &= \mbox{IDFS}_N\big\{
- \begin{bmatrix}
- 1 & 1 & \ldots & 1
- \end{bmatrix}^T \big\} \\
- &= \frac{1}{N}\sum_{k=0}^{N-1} e^{j\frac{2\pi}{N}nk}
-\end{align}
+and shown in Figure~\ref{fig:mr:xi}; indeed, multiplication by $\boldsymbol{\xi}_N$ will ``kill off'' all the terms in the sum for which the index is not a multiple of $N$. The sequence $\boldsymbol{\xi}_N$ is $N$-periodic and one way to represent it is as the inverse DFS of size $N$ of a vector of all ones, as in~(\ref{eq:fa:unitDFT1}):
+\[
+ \boldsymbol{\xi}_N = \mbox{IDFS}_N\{\mathbf{1}\};
+\]
+explicitly:
+\begin{equation}
+ \xi_N[n] = \frac{1}{N}\sum_{k=0}^{N-1} e^{j\frac{2\pi}{N}nk}.
+\end{equation}
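+
+As a sanity check, this identity is a one-liner in NumPy, where the inverse DFT of a vector of ones yields a discrete impulse and tiling it produces the periodic sequence (a minimal sketch):
+\begin{verbatim}
+import numpy as np
+
+N = 4
+xi_period = np.fft.ifft(np.ones(N)).real   # one period: [1, 0, 0, 0]
+xi = np.tile(xi_period, 4)                 # the N-periodic kill sequence
+print(xi)                                  # 1, 0, 0, 0, 1, 0, 0, 0, ...
+\end{verbatim}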
With this we can write
\begin{align}
A(z) &= \sum_{n=-\infty}^{\infty} \xi_N[n] x[n] z^{-n} \nonumber \\
&= \sum_{n=-\infty}^{\infty}\frac{1}{N} \sum_{k=0}^{N-1} e^{j\frac{2\pi}{N}nk} x[n] z^{-n} \nonumber \\
&= \frac{1}{N} \sum_{k=0}^{N-1} \sum_{n=-\infty}^{\infty} x[n] \big(e^{-j\frac{2\pi}{N}k} z \big)^{-n} \nonumber \\
&= \frac{1}{N} \sum_{k=0}^{N-1} X\big(e^{-j\frac{2\pi}{N}k} z \big)
\end{align}
so that finally:
\begin{equation}\label{eq:mr:dss}
X_{N\downarrow} (z) = \frac{1}{N} \sum_{k=0}^{N-1} X \big(e^{-j\frac{2\pi}{N}k} z^{\frac{1}{N}} \big)
\end{equation}
The Fourier transform of the downsampled signal is obtained by evaluating $X_{N\downarrow} (z)$ on the unit circle:
\begin{equation} %\label{}
X_{N\downarrow} (e^{j\omega}) = \frac{1}{N}\sum_{k=0}^{N-1} X\left(e^{j\frac{\omega - 2\pi k}{N}} \right).
\end{equation}
To understand the shape of the downsampled spectrum, let's first examine $A(z)$ on the unit circle:
\begin{equation} \label{eq:mr:nonscaled}
A(e^{j\omega}) = \frac{1}{N}\sum_{k=0}^{N-1} X\left(e^{j(\omega - \frac{2\pi}{N}k)} \right);
\end{equation}
we have the scaled sum of $N$ superimposed copies of the original spectrum $X(e^{j\omega})$ where each copy is shifted in frequency by a multiple of $2\pi/N$. We are in a situation similar to that of equation~(\ref{eq:is:periodizedFT}) where sampling created a periodization of the underlying spectrum; here the spectra are already inherently $2\pi$-periodic, and downsampling creates $N-1$ additional interleaved copies. The final spectrum $X_{N\downarrow} (e^{j\omega})$ is simply a stretched version of $A(e^{j\omega})$, so that the interval $[-\pi/N, \pi/N]$ becomes $[-\pi, \pi]$.
Because of the superposition, aliasing\index{aliasing!in multirate} can occur; this is a consequence of the potential loss of information that occurs when samples are discarded. For baseband signals, it is easy to verify that in order for the spectral copies in~(\ref{eq:mr:dss}) not to overlap, the maximum (positive) frequency $\omega_M$ of the original spectrum\footnote{
Here, for simplicity, we are imagining a lowpass real signal whose spectral magnitude is symmetric. More complex cases exist and some examples will be described next.}
must be less than $\pi/N$; this is the \emph{non-aliasing condition} for the downsampling operator. Conceptually, fulfillment of the non-aliasing condition indicates that the discrete-time representation of the original signal is intrinsically redundant; $(N-1)/N$ of the information can be safely discarded and this is mirrored by the fact that only $1/N$ of the spectral frequency support is nonzero. We will see shortly that, in this case, the original signal can be perfectly reconstructed with an upsampling and filtering operation.\index{downsampling|)}
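
On finite-length signals, (\ref{eq:mr:dss}) has an exact DFT counterpart that can be verified numerically: the length-$L$ DFT of the downsampled signal is the average of $N$ copies of the length-$NL$ DFT of the original signal, shifted by $L$ bins each. A sketch of the check:
\begin{verbatim}
import numpy as np

N, L = 3, 16
x = np.random.randn(N * L)
lhs = np.fft.fft(x[::N])        # DFT of the downsampled signal
X = np.fft.fft(x)
m = np.arange(L)
rhs = sum(X[(m + k * L) % (N * L)] for k in range(N)) / N
print(np.allclose(lhs, rhs))    # True: averaged, interleaved copies
\end{verbatim}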
-\subsection{Examples}
In Figures~\ref{fig:mr:exA} to~\ref{fig:mr:exE} the top panel shows the original spectrum $X(e^{j\omega})$; the second panel shows the same spectrum but plotted over a wider interval so as to make its periodic nature explicit; the third panel shows in different colors the individual terms in the sum in~(\ref{eq:mr:nonscaled}); the fourth panel shows $A(e^{j\omega})$ \emph{before} scaling and stretching by $N$; finally, the last panel shows $X_{N\downarrow}(e^{j\omega})$ over the usual $[-\pi, \pi]$ interval.
\itempar{Downsampling by 2.} If the downsampling factor is $2$, the \ztrans\ and the Fourier transform of the output are simply
\begin{align*}
X_{2\downarrow}(z) &= \frac{1}{2}\, \bigl[ X(z) + X(-z) \bigr] \\
X_{2\downarrow}(e^{j\omega}) &= \frac{1}{2} \left[X\bigl(e^{j\frac{\omega}{2}} \bigr) + X\bigl(e^{j(\frac{\omega}{2} - \pi)} \bigr) \right]
\end{align*}
Figure~\ref{fig:mr:exA} shows an example for a lowpass signal whose maximum frequency is $\omega_M = \pi/2$ (i.e.\ a half-band signal). The non-aliasing condition is fulfilled and, in the superposition, the two shifted versions of the spectrum do not overlap. As the frequency axis stretches by a factor of $2$, the original half-band signal becomes full band.
Figure~\ref{fig:mr:exB} shows an example in which the non-aliasing condition is violated. In this case, $\omega_M = 2\pi/3 > \pi/2$ and the spectral copies do overlap. We can see that, as a consequence, the downsampled signal loses its lowpass characteristics. Information is irretrievably lost and the original signal cannot be reconstructed.
\itempar{Downsampling by 3.} If the downsampling factor is $ 3 $ we have
\begin{equation*}
X_{3\downarrow}(e^{j\omega}) = \frac{1}{3}\, \left[ X \bigl(e^{j\frac{\omega}{3}} \bigr) + X \bigl(e^{j(\frac{\omega - 2\pi}{3})} \bigr) + X\bigl(e^{j(\frac{\omega - 4\pi}{3})} \bigr) \right]
\end{equation*}
Figure~\ref{fig:mr:exC} shows an example in which the non-aliasing condition is violated ($\omega_M = 2\pi/3 > \pi/3$).
-\subsection{Downsampling a Highpass Signal.} Figure~\ref{fig:mr:exD} shows an example of downsampling by $2$ applied to a half-band {\em highpass} signal. Since the signal occupies only the upper half of the $[0, \pi]$ frequency band (and, symmetrically, only the lower half of the $[-\pi, 0]$ interval), the interleaved copies do not overlap and, technically, there is no aliasing. The shape of the signal, however, is changed by the downsampling operation and what started out as a highpass signal is transformed into a lowpass signal. To make the details of the transformation clearer in this example we have used a {\em complex-valued} highpass signal for which the positive and negative parts of the spectrum have different shapes; it is apparent how the original left and right spectral halves are end up in reverse positions in the final result. The original signal can be exactly reconstructed (since there is no destructive overlap between spectral copies) but the required procedure is a bit more creative and will be left as an exercise.
+\itempar{Downsampling a Highpass Signal.} Figure~\ref{fig:mr:exD} shows an example of downsampling by $2$ applied to a half-band {\em highpass} signal. Since the signal occupies only the upper half of the $[0, \pi]$ frequency band (and, symmetrically, only the lower half of the $[-\pi, 0]$ interval), the interleaved copies do not overlap and, technically, there is no aliasing. The shape of the signal, however, is changed by the downsampling operation and what started out as a highpass signal is transformed into a lowpass signal. To make the details of the transformation clearer, in this example we have used a {\em complex-valued} highpass signal for which the positive and negative parts of the spectrum have different shapes; it is apparent how the original left and right spectral halves end up in reverse positions in the final result. The original signal can be exactly reconstructed (since there is no destructive overlap between spectral copies) but the required procedure is a bit more creative and will be left as an exercise.
+
-\itempar{Antialiasing Filters in Downsampling.} We have seen in Section~\ref{sec:is:antialias} that, in order to control the error when sampling a non-bandlimited signal, our best strategy is to bandlimit the signal using a lowpass filter. The same holds when downsampling by $N$ a signal whose spectral support extends beyond $\pi/N$: before downsampling we should apply a lowpass with cutoff $\omega_c = \pi/N$ as shown in Figure~\ref{fig:mr:downfilt}. While a loss of information is still unavoidable, the filtering operation allows us to control said loss and prevent the disrupting effects of aliasing.
+\subsection{Antialiasing Filters in Downsampling}
+We have seen in Section~\ref{sec:is:antialias} that, in order to control the error when sampling a non-bandlimited signal, our best strategy is to bandlimit the signal using a lowpass filter. The same holds when downsampling by $N$ a signal whose spectral support extends beyond $\pi/N$: before downsampling we should apply a lowpass filter with cutoff $\omega_c = \pi/N$, as shown in Figure~\ref{fig:mr:downfilt}. While a loss of information is still unavoidable, the filtering operation allows us to control that loss and prevent the disruptive effects of aliasing.
An example of the processing chain is shown in Figure~\ref{fig:mr:exE} for a downsampling factor of~$2$; a half-band lowpass \index{half-band filter} filter is used to truncate the signal's spectrum outside of the $[-\pi/2, \pi/2]$ interval and then downsampling proceeds as usual with non-overlapping spectral copies.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\begin{dspBlocks}{1.3}{0.6}
$x[n]$ & \BDfilter{LP$\{\pi/N\}$} & \BDdwsmp{N} & $y[n]$
\end{dspBlocks}
\psset{arrows=->,linewidth=1pt}
\ncline{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}
\caption{Anti-aliasing filter before downsampling.}\label{fig:mr:downfilt}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.5}}
\dspCustomTicks[axis=x]{0.5 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.5}}
\dspCustomTicks[axis=x]{0.5 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j(\omega - 2\pi k/N)})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.5}}
\dspFunc[linecolor=ColorDF,linestyle=dashed]{x 1 sub \dspPeriodize \dspTri{0}{0.5}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$A(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x \dspPeriodize \dspTri{0}{0.5}
x 1 sub \dspPeriodize \dspTri{0}{0.5}
add 2 div}
\pnode(-.5,0){A}\pnode(0.5,0){B}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x 2 div \dspPeriodize \dspTri{0}{0.5}
x 2 div 1 sub \dspPeriodize \dspTri{0}{0.5}
add 2 div}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Downsampling by $2$; the highest frequency is $\pi/2$ and no aliasing occurs.}\label{fig:mr:exA}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.75}}
\dspCustomTicks[axis=x]{0.75 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.75}}
\dspCustomTicks[axis=x]{0.75 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j(\omega - 2\pi k/N)})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.75}}
\dspFunc[linecolor=ColorDF,linestyle=dashed]{x 1 sub \dspPeriodize \dspTri{0}{0.75}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$A(e^{j\omega})$}]{-3,3}{0,1.2}
\dspCustomTicks[axis=x]{0.5 $\pi/2$}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x \dspPeriodize \dspTri{0}{0.75} 2 div}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x 1 sub \dspPeriodize \dspTri{0}{0.75} 2 div}
\dspFunc[linecolor=ColorDF]{
x \dspPeriodize \dspTri{0}{0.75}
x 1 sub \dspPeriodize \dspTri{0}{0.75}
add 2 div}
\pnode(-.5,0){A}\pnode(0.5,0){B}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x 2 div \dspPeriodize \dspTri{0}{0.75}
x 2 div 1 sub \dspPeriodize \dspTri{0}{0.75}
add 2 div}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Downsampling by $2$; the highest frequency is larger than $\pi/2$ (here, $\omega_M = 2\pi/3$) and aliasing occurs.}\label{fig:mr:exB}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.6667}}
\dspCustomTicks[axis=x]{0.6667 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.6667}}
\dspCustomTicks[axis=x]{0.6667 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j(\omega - 2\pi k/N)})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.6667}}
\dspFunc[linecolor=ColorDF,linestyle=dashed]{x 0.6667 sub \dspPeriodize \dspTri{0}{0.6667}}
\dspFunc[linecolor=ColorDF,linestyle=dotted]{x 1.3333 sub \dspPeriodize \dspTri{0}{0.6667}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$A(e^{j\omega})$}]{-3,3}{0,1.2}
\dspCustomTicks[axis=x]{0.33333 $\pi/3$}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x \dspPeriodize \dspTri{0}{0.6667} 3 div}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x 0.6667 sub \dspPeriodize \dspTri{0}{0.6667} 3 div}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x 1.3333 sub \dspPeriodize \dspTri{0}{0.6667} 3 div}
\dspFunc[linecolor=ColorDF]{0.3333}
\pnode(-.3333,0){A}\pnode(0.3333,0){B}
\end{dspPlot}
  \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{3\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{0.33333}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Downsampling by $3$; the highest frequency is larger than $\pi/3$ (here, $\omega_M = 2\pi/3$) and aliasing occurs. Notice how three spectral replicas contribute to the final spectrum.}\label{fig:mr:exC}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
%
\def\specHW{0.5 }
\def\specFunPar{abs 1 sub \specHW 2 mul div dup mul 1 exch sub }
\def\specFun{abs \specHW sub 1 \specHW sub div }
\def\periodSpec{%
1 sub 2 div dup floor sub 2 mul 1 sub dup dup % periodization
-\specHW lt {pop \specFunPar } {%
\specHW gt {\specFun } {pop 0} ifelse } %
ifelse }
%
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=2,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \periodSpec}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \periodSpec}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j(\omega - 2\pi k/N)})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \periodSpec}
\dspFunc[linecolor=ColorDF,linestyle=dashed]{x 1 sub \periodSpec}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$A(e^{j\omega})$}]{-3,3}{0,1.2}
\dspCustomTicks[axis=x]{0.5 $\pi/2$}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x \periodSpec 2 div}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x 1 sub \periodSpec 2 div}
\dspFunc[linecolor=ColorDF]{
x \periodSpec
x 1 sub \periodSpec
add 2 div}
\pnode(-.5,0){A}\pnode(0.5,0){B}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x 2 div \periodSpec
x 2 div 1 sub \periodSpec
add 2 div}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Downsampling by~$2$ of a {\em complex} highpass signal; the asymmetric spectrum helps to understand how non-destructive aliasing works.}\label{fig:mr:exD}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
%
\def\dspTriTru#1#2#3{ #1 sub abs #3 div dup 0.5 gt {pop 0} {#3 mul #2 div 1 exch sub} ifelse }
%
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{0.75}}
\dspFunc[linecolor=ColorDFilt,linestyle=dashed]{x \dspRect{0}{1}}
\dspCustomTicks[axis=x]{0.75 $\omega_M$ 0.5 $\pi/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$H(e^{j\omega})X(e^{j\omega})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTriTru{0}{0.75}{1}}
\dspCustomTicks[axis=x]{0.75 $\omega_M$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j(\omega - 2\pi k/N)})$}]{-3,3}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTriTru{0}{0.75}{1}}
\dspFunc[linecolor=ColorDF,linestyle=dashed]{x 1 sub \dspPeriodize \dspTriTru{0}{0.75}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$A(e^{j\omega})$}]{-3,3}{0,1.2}
\dspCustomTicks[axis=x]{0.5 $\pi/2$}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x \dspPeriodize \dspTriTru{0}{0.75}{1} 2 div}
\dspFunc[linecolor=ColorDF!50,linestyle=dashed]{x 1 sub \dspPeriodize \dspTriTru{0}{0.75}{1} 2 div}
\dspFunc[linecolor=ColorDF]{
x \dspPeriodize \dspTriTru{0}{0.75}{1}
x 1 sub \dspPeriodize \dspTriTru{0}{0.75}{1}
add 2 div}
\pnode(-.5,0){A}\pnode(0.5,0){B}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x 2 div \dspPeriodize \dspTriTru{0}{0.75}{1}
x 2 div 1 sub \dspPeriodize \dspTriTru{0}{0.75}{1}
add 2 div}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Downsampling by $2$ with preliminary anti-aliasing filtering.}\label{fig:mr:exE}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{Upsampling}
-\label{sec:mr:up}\index{upsampling|(}
-
-Upsampling by $N$ produces a higher-rate sequence by creating $N$ samples for each sample in the original signal. In its simplest form, an upsampler just inserts $N-1$ zeros after every input sample, as shown in Figure~\ref{fig:mr:up}. If we denote by $\mathcal{U}_N$ the upsampling operator, we have
-\begin{equation} \label{eq:mr:up}
- \mathcal{U}_N \bigl\{x[\cdot] \bigr\}[n] = x_{N\uparrow}[n] = \
- \begin{cases}
- x[k] & \mbox{ for $n = kN,\;\; k \in \mathbb{Z}$} \\
- 0 & \mbox{ otherwise}
- \end{cases} % \right.
-\end{equation}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[h]
- \center
- \begin{dspBlocks}{1.3}{0.6}
- $x[n]$ & \BDupsmp{N} & $x_{N\uparrow}[n]$
- \psset{arrows=->,linewidth=1pt}
- \ncline{1,1}{1,2} \ncline{1,2}{1,3}
- \end{dspBlocks}
- \caption{Symbol for the upsampling operator.}\label{fig:mr:downSym}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-Upsampling is a much ``nicer'' operation than downsampling since no information is lost; the original signal can always be exactly recovered by applying a congruent downsampling operation:
-\begin{equation}
- \mathcal{S}_N \left\{ \mathcal{U}_N \bigl\{x[\cdot] \bigr\} \right\}[n] = x[n]
-\end{equation}
-Also, the spectral description of upsampling is extremely simple; in the \ztrans\ domain we have
-\begin{align}
- X_{N\uparrow}(z) &= \sum_{n=-\infty}^{\infty} x_{NU}[n]z^{-n} \nonumber \\
- &= \sum_{k=-\infty}^{\infty} x[k]z^{-kN} = X(z^N)
-\end{align}
-and therefore
-\begin{equation} %\label{}
- X_{N\uparrow}(e^{j\omega}) = X(e^{j\omega N})
-\end{equation}
-so that upsampling is simply a contraction of the frequency axis by a factor of $N$. The inherent $2\pi$-periodicity of the spectrum must be taken into account so that, in this contraction, the periodic repetitions of the base spectrum are ``pulled in'' the $[-\pi, \pi]$ interval. The effects of upsampling on a signal's spectrum are shown graphically for a simple signal in Figures~\ref{fig:mr:upA} and~\ref{fig:mr:upB}; in all figures the top panel shows the original spectrum $X(e^{j\omega})$ over $[-\pi, \pi]$; the middle panel shows the same spectrum over a wider range to make the $2\pi$-periodicity explicitly; the last panel shows the upsampled spectrum $X_{N\uparrow}(e^{j\omega})$, highlighting the rescaling of the $[-N\pi, N\pi]$ interval.\index{upsampling|)} As a rule of thumb, upsampling ``brings in'' exactly $N$ copies of the original spectrum over the $[-\pi, \pi]$ interval even if, in the case of an even upsampling factor, one copy is split between the negative and positive frequencies.
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[t]
-%
-\def\dtSig{5 sub 0.3 mul RadtoDeg %
- dup cos 0.3 mul exch %
- 0.5 mul sin 0.6 mul %
- add 0.5 add }
-%
- \center
- \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=1]{-8,8}{-0.5,1.1}
- \dspSignal[linecolor=ColorDT]{x \dtSig}
- \end{dspPlot}
- \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
- \dspSignal[linecolor=ColorDT!65]{x dup cvi 4 mod 0 eq {4 div \dtSig} {pop 0} ifelse}
- \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
- \end{dspPlot}
- \caption{Upsampling by $4$ in the time domain: original signal (top panel); upsampled signal, where 3 zeros have been appended to each original sample (bottom panel). Note the difference in time indexes between top and bottom panels. }\label{fig:mr:up}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[!htb]
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
- \end{dspPlot}
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-5,5}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
- \pnode(-2,0){leftA}\pnode( 2,0){rightA}
- \end{dspPlot}
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x 2 mul \dspPeriodize \dspTri{0}{1}}
- \pnode(-1,1.2){leftB}\pnode( 1,1.2){rightB}
- \ncline[linewidth=2pt,linecolor=blue!40]{leftA}{rightA}
- \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{leftA}{leftB}
- \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{rightA}{rightB}
- \end{dspPlot}
- \caption{Upsampling by $2$.}\label{fig:mr:upA}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[!htb]
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
- \end{dspPlot}
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-5,5}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspTri{0}{1}}
- \pnode(-3,0){leftA}\pnode( 3,0){rightA}
- \end{dspPlot}
- \begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_{2\downarrow}(e^{j\omega})$}]{-1,1}{0,1.2}
- \dspFunc[linecolor=ColorDF]{x 3 mul \dspPeriodize \dspTri{0}{1}}
- \pnode(-1,1.2){leftB}\pnode( 1,1.2){rightB}
- \ncline[linewidth=2pt,linecolor=blue!40]{leftA}{rightA}
- \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{leftA}{leftB}
- \ncline[linewidth=1pt,linecolor=blue!40,linestyle=dashed]{->}{rightA}{rightB}
- \end{dspPlot}
- \caption{Upsampling by $3$.}\label{fig:mr:upB}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-\subsection{Upsampling and Interpolation}
-\label{sec:mr:upfilt}
-The upsampled signal in~(\ref{eq:mr:up}), with its $N-1$ zeros between original samples, exhibits two problems. In the time domain, the upsampled signal looks ``jumpy'' because of the periodic zero runs. This is the discrete-time equivalent to a lack of smoothness, since the signal keeps dropping back to zero, and it is apparent in the bottom panel of Figure~\ref{fig:mr:up}. In the frequency domain, simple upsampling has ``brought in'' copies of the original spectrum in the $[-\pi, \pi]$ interval, creating spurious high frequency content. These two issues are actually one and the same and they can be solved, as we will see, by using an appropriate filter.
-
-
-The problem of filling the gaps between nonzero samples in an upsampled sequence is, in many ways, similar to the discrete- to continuous-time interpolation problem of Section~\ref{sec:is:interp}, except that now we are operating entirely in discrete time. If we adapt the interpolation schemes that we have already studied, we have the following cases\index{interpolation!in multirate}:
-
-\itempar{Zero-Order Hold.}\index{zero-order hold!(discrete-time)}
-In this discrete-time interpolation scheme, also known as \emph{piecewise-constant interpolation}, after upsampling by $N$, we use a filter with impulse response
-\begin{equation}\label{eq:mr:zoh}
- h_0 [n] =
- \begin{cases}
- 1 & \ n = 0,1, \ldots, N-1 \\
- 0 & \mbox{ otherwise}
- \end{cases}
-\end{equation}
-which is shown in Figure~\ref{fig:mr:zoh}-(a). This interpolation filter simply repeats the last original input samples $N$ times, giving a staircase approximation as shown in the top panel of Figure~\ref{fig:mr:upinterp}.
-
-\itempar{First-Order Hold.}\index{first-order hold!(discrete-time)}
-In this discrete-time interpolation scheme, we obtain a piecewise linear interpolation after upsampling by $N$ by using
-\begin{equation}\label{eq:mr:foi}
- h_1 [n] =
- \begin{cases}
- \displaystyle 1 - \frac{|n|}{N} & \ |n| < N \\
- 0 & \mbox{ otherwise}
- \end{cases}
-\end{equation}
-The impulse response is the familiar triangular function\footnote{
- Once again, let us note that the triangle is the convolution of two rects, $h_1[n] = (1/N) \bigl(h_0[\cdot] \ast h_0[\cdot] \bigr)[n]$.}
-shown in Figure~\ref{fig:mr:zoh}-(b). An example of the resulting interpolation is shown in the bottom panel of Figure~\ref{fig:mr:upinterp}.
-
-\itempar{Sinc Interpolation.}\index{sinc interpolation!(discrete-time)}
-We know that, in continuous time, the smoothest interpolation is obtained by using a sinc function. This holds in discrete-time as well, and the resulting interpolation filter is the discrete-time sinc:
-\begin{equation}
- h[n] = \sinc\left(\frac{n}{N}\right)
-\end{equation}
-Note that the sinc above is equal to one for $n = 0$ and is equal to zero at all integer multiples of $N$, $n = kN$; this fulfills the interpolation condition, that is, after interpolation, the output equals the input at multiples of $N$: $(h \ast x_{N\downarrow})[kN] = x_{N\downarrow}[kN] = x[k]$.
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[!t]
- \center
- \begin{tabular}{cc}
- \begin{dspPlot}[sidegap=0.5,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-4,4}{0,1.1}
- \dspSignal[linecolor=ColorDT]{x 0 ge {x 4 lt {1} {0} ifelse} {0} ifelse}
- \end{dspPlot}
- &
- \begin{dspPlot}[sidegap=0.5,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-4,4}{0,1.1}
- \dspSignal[linecolor=ColorDT]{x abs 4 lt {1 x abs 4 div sub} { 0} ifelse}
- \end{dspPlot}
- \\ (a) & (b)
- \end{tabular}
- \caption{Discrete-time zero-order (a) and first-order (b) interpolators for $N = 4$.}\label{fig:mr:zoh}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[!b]
-%
-\def\dtSig{5 sub 0.3 mul RadtoDeg %
- dup cos 0.3 mul exch %
- 0.5 mul sin 0.6 mul %
- add 0.5 add }
-\def\upFact{4 }
-%
- \center
- \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
- \dspSignal[linecolor=ColorDT!65]{x 4 div floor \dtSig}
- \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
- \end{dspPlot}
- \begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=4]{-32,32}{-0.5,1.1}
- \dspSignal[linecolor=ColorDT!65]{%
- x \upFact div floor % last upsampling point n
- dup \upFact mul x exch sub \upFact div % intra-interval factor p
- 2 copy % n p n p
- 1 exch sub exch \dtSig mul % n p (1-p)f(n)
- 3 1 roll exch % A p n
- 1 add \dtSig % A p f(n+1)
- mul add}
- \dspSignal[linecolor=ColorDT,xunit=4,xmax=8,xmin=-8]{x \dtSig}
- \end{dspPlot}
- \caption{Upsampling by $4$ followed by interpolation: zero-order hold (top panel); linear interpolation (bottom panel).}\label{fig:mr:upinterp}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-The three impulse responses above are all lowpass filters; in particular, the sinc interpolator is an ideal lowpass with cutoff frequency $\omega_c = \pi/N$ while the others are approximations of the same. As a consequence, the effect of the interpolator in the frequency domain is the removal of the $N-1$ spectral copies ``drawn in'' the $[-\pi, \pi]$ interval by the upsampler. An example is shown in
-Figure~\ref{fig:mr:upfilt} where the signal in Figure~\ref{upsamplingFigC} is filtered by an ideal lowpass filter with cutoff $\pi/4$.
-
-It turns out that the smoothest possible interpolation in the time domain corresponds to the perfect removal of the spectral repetitions in the frequency domain. Interpolating with a zero-order or a first-order kernel, by contrast, only attenuates the replicas instead of performing a full removal, as we can readily see by considering their frequency responses. Since we are in discrete-time, however, there are no difficulties associated to the design of a digital lowpass filter which closely approximates an ideal filter, so that alternate kernel designs (such as optimal FIRs) can be employed.
-This is in contrast to the design of discrete---to continuous---time interpolators, which are analog designs. That is why sampling rate changes are much more attractive in the discrete-time domain.
-
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Fractional Sampling Rate Changes}
The conversion from one sampling rate to another can always take the ``obvious'' route of interpolating to continuous time and resampling the resulting signal at the desired rate; but this would require dedicated analog equipment and would introduce some quality loss because of the inevitable analog noise. We have just seen, however, that we can increase or decrease the implicit rate of a sequence by an integer factor entirely in the discrete-time domain, by using an upsampler or a downsampler and a digital lowpass filter. For fractional sampling rate changes we simply need to cascade the two operators.
The order in which upsampling and downsampling are performed in a rate changer is crucial since, in general, the operators are not commutative. It is easy to appreciate this fact by means of
a simple example:
\begin{align*}
- \mathcal{S}_2 \left\{ \mathcal{U}_2 \bigl\{x[\cdot] \bigr\} \right\}[n] &= x[n] \\
- \mathcal{U}_2 \left\{ \mathcal{S}_2 \bigl\{x[\cdot] \bigr\} \right\}[n] &=
+ (\mathcal{S}_2 ( \mathcal{U}_2 \mathbf{x}))[n] &= x[n] \\
+ (\mathcal{U}_2 ( \mathcal{S}_2 \mathbf{x}))[n] &=
\begin{cases}
x[n] & \mbox{ for $n$ even} \\
0 & \mbox{ for $n$ odd}
\end{cases}.
\end{align*}
Intuitively it's clear that, since no information is lost when using an upsampler, in a fractional sampling rate change the upsampling operation will come first. Typically, a rate change by $N/M$ is obtained by cascading an upsampler by~$N$, a lowpass filter, and a downsampler by~$M$. Since normally the upsampler is followed by a lowpass with cutoff $\pi/N$ while the downsampler is preceded by a lowpass with cutoff $\pi/M$, we can use a single lowpass whose cutoff frequency is the minimum of the two. A block diagram of this system is shown in Figure~\ref{fig:mr:frac}.\index{fractional sampling rate change} \index{rational sampling rate change}
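
A direct, non-optimized implementation of this rate changer needs only a lowpass prototype; the sketch below uses a windowed-sinc design of arbitrary length (our own choice; practical systems would use an optimized FIR in an efficient polyphase structure):
\begin{verbatim}
import numpy as np

def rate_change(x, N, M, half_len=64):
    """Change the implicit rate by N/M: upsample, lowpass, downsample."""
    x_up = np.zeros(N * len(x))
    x_up[::N] = x                          # upsampler by N
    wc = 1.0 / max(N, M)                   # cutoff min(pi/N, pi/M)
    n = np.arange(-half_len, half_len + 1)
    h = N * wc * np.sinc(wc * n) * np.hamming(len(n))
    return np.convolve(x_up, h, mode='same')[::M]   # filter, downsample

y = rate_change(np.random.randn(240), 4, 3)   # e.g. 24 kHz to 32 kHz
\end{verbatim}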
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\begin{dspBlocks}{1}{0.6}
$x[n]$ & \BDupsmp{N} & \BDfilter{LP$\{\min(\pi/N,\pi/M)\}$} & \BDdwsmp{M} & $y[n]$
\psset{arrows=->,linewidth=1pt}
\ncline{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\end{dspBlocks}
\caption{Block diagram for a fractional sampling rate change.}\label{fig:mr:frac}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As an example, suppose we want to increase the rate of a sequence originally sampled at 24~kHz up to 32~kHz. For this we need a fractional change of $32/24$ which, after simplification, corresponds to an upsampler by 4 followed by a downsampler by 3, as shown in the top panel of Figure~\ref{fig:mr:fracex}; the lowpass filter's cutoff frequency is $\pi/4$ and, in this case, the lowpass filter acts solely as an interpolator since the overall rate is increased. Conversely, if we want to convert a 32~kHz signal to 24~kHz, that is, apply a sampling rate change by $3/4$, we can use the cascade shown in the bottom panel of Figure~\ref{fig:mr:fracex}; the cutoff frequency of the filter does not change but the filter, in this case, acts as an anti-aliasing filter.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b]
\center
\begin{dspBlocks}{1}{0.6}
$x[n]$ & \BDupsmp{4} & \BDfilter{LP$\{\pi/4\}$} & \BDdwsmp{3} & $y[n]$ \\
\\
$x[n]$ & \BDupsmp{3} & \BDfilter{LP$\{\pi/4\}$} & \BDdwsmp{4} & $y[n]$ \\
\psset{arrows=->,linewidth=1pt}
\ncline{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\ncline{3,1}{3,2} \ncline{3,2}{3,3}
\ncline{3,3}{3,4}\ncline{3,4}{3,5}
\end{dspBlocks}
  \caption{Fractional sampling rate changers: rate increase by $4/3$ (top) and rate reduction by $3/4$ (bottom).}\label{fig:mr:fracex}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Practical resamplers}
\index{resampling}
If the ratio between the two sampling frequencies cannot be decomposed into a ratio of small coprime factors, the intermediate rates in a fractional rate changer can be prohibitively high. That was the rationale, for instance, behind an infamous engineering decision taken by the audio industry in the early 90's when the first digital audio recorders (called DAT, for Digital Audio Tape) were introduced on the market; in order to make it impossible for users to create perfect copies of CDs on digital tapes, the sampling frequency of the recorders was set to 48~KHz. Since CDs are encoded at 44.1~KHz, this requires a fractional rate change of 160/147. At the time, a 160-fold upsampling was simply not practical to implement, so users necessarily had to go through the analog route to copy CDs. Incidentally, although digital audio tapes have quickly faded into obsolescence, the problem of converting audio from 44.1~KHz to 48~KHz remains relevant today since the sampling frequency used in DVDs is also 48~KHz. The good news is that fractional resamplers can be implemented using local interpolation techniques rather than via a formal upsampling/downsampling chain. To understand the procedure, let's first analyze the practical version of a subsample interpolator and then apply the idea to a resampler.
\itempar{Subsample Interpolation.}
\index{subsample interpolation}
-Consider an $F_s$-bandlimited continuous-time signal $x(t)$ and its sampled version $x[n] = x(nT_s)$, with $T_s \leq 1/F_s$. Given a fractional delay $\tau T_s$, with $|\tau|< 1/2$, we want to determine the sequence
+Consider an $F_s$-bandlimited continuous-time signal $\mathbf{x}_c$ and its sampled version $\mathbf{x}$ defined by $x[n] = x_c(nT_s)$, with $T_s \leq 1/F_s$. Given a fractional delay $\tau T_s$, with $|\tau|< 1/2$, we want to determine the sequence
\[
- x_\tau[n] = x(nT_s + \tau T_s)
+ x_\tau[n] = x_c(nT_s + \tau T_s)
\]
-using only discrete-time processing; for simplicity, let's just assume $T_s=1$. We know from Section~\ref{sec:is:duality} that the theoretical way to obtain $x_\tau[n]$ from $x[n]$ is to use an ideal fractional delay filter:
+using only discrete-time processing; for simplicity, let's just assume $T_s=1$. We know from Section~\ref{sec:is:duality} that the theoretical way to obtain $\mathbf{x}_\tau$ from $\mathbf{x}$ is to use an ideal fractional delay filter:
\[
- x_\tau[n] = \left( d_\tau[\cdot] \ast x[\cdot] \right)[n]
+ \mathbf{x}_\tau = \mathbf{d}_\tau \ast \mathbf{x}
\]
where
\begin{align*}
D_\tau(e^{j\omega}) &= e^{j\omega\tau} \\
 d_\tau[n] &= \sinc(n + \tau).
\end{align*}
As we have seen in Section~\ref{sec:is:sincinterp}, the sinc interpolator originates as the limit of polynomial interpolation when the number of interpolation points goes to infinity. In this case we can work backwards, and replace the sinc with a low-degree, \textit{local} Lagrange interpolation as in Section~\ref{sec:is:lagrange}. Given $N$ samples to the left and to the right of $x[n]$, we can build the continuous-time signal\index{Lagrange interpolation}
\begin{equation} \label{eq:mr:LagInterp}
- x_L(n; t) = \sum_{k=-N}^{N} x[n - k] \,L_k^{(N)}(t)
+ l_x(n; t) = \sum_{k=-N}^{N} x[n + k] \,L_k^{(N)}(t)
\end{equation}
where $L_k^{(N)}(t)$ is the $k$-th Lagrange polynomial of order $2N$ defined in~(\ref{eq:is:lagPoly}). With this, we can use the approximation
\begin{equation} \label{eq:mr:subApprox}
- x_\tau[n] = x_L(n; \tau) \approx x(n + \tau).
+ x_\tau[n] = l_x(n; \tau) \approx x_c(n + \tau).
\end{equation}
Figure~\ref{fig:mr:lagPoly} shows for instance the quadratic Lagrange polynomials that would be used for a three-point local interpolation ($N=1$); an example of the interpolation and approximation procedures is shown graphically in Figure~\ref{fig:mr:approx}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% finite-length signal, 3 taps
\def\ta{.5 } \def\tb{2 } \def\tc{-1 }
%
%% the tap plotting string
\def\taps{-1 \ta 0 \tb 1 \tc }
\def\plotTaps{\dspTaps[linecolor=ColorDT]{\taps}}
%
%% Lagrange polynomials
\def\lpa{%
dup -1 div exch %
-1 add -2 div %
mul }
\def\lpb{%
dup 1 add -1 div exch %
-1 add %
mul }
\def\lpc{%
dup exch %
1 add 2 div %
mul }
%
\def\lagInterp{%
dup \lpa \ta mul exch %
dup \lpb \tb mul exch %
\lpc \tc mul %
add add}
%
\def\LagPoly#1#2#3#4{%
\dspFunc[linecolor=#2]{x \csname lp#3\endcsname}%
\dspText(! #1 2 sub 2 div 1.3){\color{#2} $L_{#4}^{1}(t)$}}
%
\def\sample#1#2#3{%
\psline[linewidth=\dspStemWidth,#1](#2,0)(! #2 dup #3)
\psdot[dotstyle=*,dotsize=\dspDotSize,#1](! #2 dup #3)}
%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-1.5,1.5}{-0.8,1.6}
\begin{dspClip}%
\LagPoly{1}{green}{a}{-1}%
\LagPoly{2}{blue}{b}{0}%
\LagPoly{3}{orange}{c}{1}%
\end{dspClip}
\end{dspPlot}
\caption{Quadratic (second-order) Lagrange interpolation polynomials $L_k^{(1)}(t)$.}\label{fig:mr:lagPoly}
\end{figure}
\begin{figure}[t!]
\begin{dspPlot}[sidegap=0,xticks=custom]{-1.5,1.5}{-2,3}
\begin{dspClip}%
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \lpa \ta mul}%
\dspFunc[linewidth=0.4pt]{x \lpb \tb mul}%
\dspFunc[linewidth=0.4pt]{x \lpc \tc mul}%
\dspFunc[linewidth=2pt,linecolor=ColorCT,xmin=-2,xmax=2]{x \lagInterp}%
\end{dspClip}
\plotTaps%
\sample{linecolor=ColorDT2}{0.3}{\lagInterp}%
\dspCustomTicks[axis=x]{-1 $n-1$ 0 $n$ 1 $n+1$ 0.3 $n+\tau$}%
\end{dspPlot}
- \caption{Local quadratic Lagrange interpolation $x_L(n; t)$ around $n$ and approximation $x_\tau[n] = x_L(n; \tau)$ for $\tau=0.3$}\label{fig:mr:approx}
+ \caption{Local quadratic Lagrange interpolation $l_x(n; t)$ around $n$ and approximation $x_\tau[n] = l_x(n; \tau)$ for $\tau=0.3$}\label{fig:mr:approx}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Once the value for $\tau$ is known, it can be plugged into~(\ref{eq:mr:LagInterp}) so that~(\ref{eq:mr:subApprox}) becomes
\begin{equation}\label{eq:mr:FIRapprox}
- x_\tau[n] = \sum_{k=-N}^{N} x[n - k] \,L_k^{(N)}(\tau) = \left(\hat{d}_\tau[\cdot] \ast x[\cdot]\right)[n]
+ x_\tau[n] = \sum_{k=-N}^{N} x[n + k] \,L_k^{(N)}(\tau) = \left(\hat{\mathbf{d}}_\tau \ast \mathbf{x}\right)[n]
\end{equation}
which is the convolution, computed in $n$, between the input signal and the $(2N+1)$-tap FIR
\begin{equation}\label{eq:mr:FIRcoeffs}
 \hat{d}_\tau[n] = \begin{cases}
 L_{-n}^{(N)}(\tau) & \mbox{for $|n| \leq N$} \\
 0 & \mbox{otherwise.}
 \end{cases}
\end{equation}
For instance, for a quadratic interpolation as in Figure~\ref{fig:mr:approx}, the nonzero coefficients are
\begin{align*}
 \hat{d}_\tau [-1] & = \tau \frac{ \tau +1 }{2} \\
 \hat{d}_\tau [0] & = - (\tau +1)(\tau -1) \\
 \hat{d}_\tau [ 1] & = \tau \frac{\tau -1}{2}
\end{align*}
The FIR interpolator is expressed in noncausal form purely out of convenience; in practical implementations an additional delay would be used to make the whole processing chain causal.
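As a quick numerical sanity check of the quadratic interpolator, the following sketch (variable names are our own) applies the three taps to a slowly varying sinusoid and compares the result to the exact subsample values:
\begin{verbatim}
import numpy as np

n = np.arange(100)
x = np.cos(0.1 * np.pi * n)     # test signal
tau = 0.3
d_m1 = tau * (tau + 1) / 2      # \hat{d}_tau[-1]
d_0  = 1 - tau * tau            # \hat{d}_tau[0]
d_p1 = tau * (tau - 1) / 2      # \hat{d}_tau[1]
# x_tau[n] = d[-1] x[n+1] + d[0] x[n] + d[1] x[n-1]
x_tau = d_m1 * np.roll(x, -1) + d_0 * x + d_p1 * np.roll(x, 1)
# compare to the exact values cos(0.1 pi (n + tau)), away from the edges
err = np.max(np.abs(x_tau[1:-1] - np.cos(0.1 * np.pi * (n[1:-1] + tau))))
\end{verbatim}
For such a smooth signal the maximum error is small; it grows as the signal's bandwidth approaches the Nyquist frequency, as expected from a low-degree local interpolation.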
-\itempar{Local resampling.} The principle of the subsample interpolator we just described can be used to perform fractional resampling. Again, consider an $F_s$-bandlimited continuous-time signal $x(t)$ and its sampled version $x[n] = x(n T_1)$, with $T_1 \leq 1/F_s$. Given a second sampling period $T_2 \leq 1/F_s$, we want to estimate $y[n] = x(nT_2)$ using only $x[n]$. Call
+\itempar{Local resampling.} The principle of the subsample interpolator we just described can be used to perform fractional resampling. Again, consider an $F_s$-bandlimited continuous-time signal $\mathbf{x}_c$ and its sampled version $x[n] = x_c(n T_1)$, with $T_1 \leq 1/F_s$. Given a second sampling period $T_2 \leq 1/F_s$, we want to estimate $y[n] = x_c(nT_2)$ using only the discrete-time samples $x[n]$. Call
\begin{equation}
\beta = \frac{T_2}{T_1} = \frac{F_1}{F_2}
\end{equation}
the ratio between sampling frequencies; if $\beta < 1$ we are interpolating to a higher underlying rate (i.e. creating more samples overall) whereas if $\beta > 1$ we are downsampling (discarding samples overall). For any output index $n$ we can always write
\begin{equation}%\label{eq:mr:FIRcoeffs}
nT_2 = [m(n) + \tau(n)]\,T_1
\end{equation}
with\footnote{``nint'' denotes the nearest integer function, so that $\mbox{nint}(0.2) = \mbox{nint}(-0.4) = 0$.}
\begin{align}
 m(n) &= \mbox{nint}\left( n\beta \right) \, \in \mathbb{N} \\
 \tau(n) & = n\beta - m(n), \qquad \left| \tau(n) \right| \leq \frac{1}{2} \label{eq:mr:taun}
\end{align}
that is, we can associate each output sample $y[n]$ to a reference index $m(n)$ in the input sequence plus a fractional delay $\tau(n)$ which is kept no larger than one half in magnitude. With this, we can use subsample interpolation to approximate each sample
\[
y[n] \approx \sum_{k=-N}^{N} x[m(n) - k] \,\hat{d}_{\tau(n)}[k]
\]
where the coefficients $\hat{d}_{\tau(n)}[k]$, defined in~(\ref{eq:mr:FIRcoeffs}), are now dependent on the output index $n$. In theory, a new set of FIR coefficients is needed for each output sample but, if $F_1$ and $F_2$ are commensurable, we only need to precompute a finite set of interpolation filters. Indeed, if we can write
\[
 \beta = \frac{A}{B}, \qquad A, B \in \mathbb{N} \quad \mbox{(coprime)}
\]
then it is easy to verify from~(\ref{eq:mr:taun}) that
\[
\tau(n + kB) = \tau(n) \quad \mbox{for all $k\in\mathbb{Z}$}.
\]
In other words, there are only $B$ distinct values of $\tau$ that are needed for the subsample interpolation. In the case of our CD to DVD conversion, for instance, we need 160~three-tap filters.
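In code, the table of anchor points and fractional delays can be precomputed once; a minimal sketch (\texttt{np.rint} plays the role of the nearest-integer function):
\begin{verbatim}
import numpy as np

A, B = 147, 160                     # 44.1 KHz to 48 KHz: beta = 147/160
n = np.arange(B)
m = np.rint(n * A / B).astype(int)  # anchor indices m(n) = nint(n beta)
tau = n * A / B - m                 # the B distinct fractional delays
\end{verbatim}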
In practical algorithms, which need to work causally, resampling takes place incrementally; this is particularly important in digital communication system design, where resampling is vital in order to compensate for slight timing differences between the transmitter's D/A and the receiver's A/D, as we will see in Chapter~\ref{ch:cs}. In general, when $\beta < 1$ we will sometimes need to reuse the same anchor sample with a different $\tau$ in order to produce more samples; conversely, for $\beta > 1$, sometimes we will need to skip one or more input samples. The first case is illustrated in Figure~\ref{fig:mr:resampleup} for $\beta = 0.78$, i.e. for a 28\% sampling rate increase, and the computation of the first few resampled values proceeds as follows (a code sketch follows the list):
\begin{itemize}
\item $n=0$: with initial synchronism, $m(0) = 0$, $\tau(0) = 0$ so $y[0] = x[0]$.
\item $n=1$: $m(1) = \mbox{nint}(0.78) = 1$, $\tau(1) = \beta - m(1) = -0.22$ so $y[1] = x_{-0.22}[1]$
\item $n=2$: $m(2) = \mbox{nint}(2\beta) = \mbox{nint}(1.56) = 2$, $\tau(2) = -0.44$ so $y[2] = x_{-0.44}[2]$
\item $n=3$: $m(3) = \mbox{nint}(3\beta) = \mbox{nint}(2.34) = 2$ and therefore we must \textit{reuse} the previous interpolation anchor $x[2]$; $\tau(3) = 2.34 - 2 = 0.34$ so $y[3] = x_{0.34}[2]$
 \item $n=4$: $m(4) = \mbox{nint}(3.12) = 3$, $\tau(4) = 0.12$ so $y[4] = x_{0.12}[3]$
\item $\ldots$
\end{itemize}
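The procedure translates directly into a causal loop; in the following sketch (our own helper, with border handling deliberately simplified) the anchor and delay computations are combined with the three-tap interpolator derived earlier:
\begin{verbatim}
import numpy as np

def resample(x, beta, num_out):
    y = np.zeros(num_out)
    for n in range(num_out):
        m = int(np.rint(n * beta))   # anchor index m(n)
        tau = n * beta - m           # fractional delay, |tau| <= 1/2
        if 1 <= m < len(x) - 1:      # skip the borders for simplicity
            y[n] = (tau * (tau + 1) / 2) * x[m + 1] \
                 + (1 - tau * tau) * x[m] \
                 + (tau * (tau - 1) / 2) * x[m - 1]
    return y
\end{verbatim}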
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\def\polyFun{%
dup 2.315 mul exch %
dup dup mul -3.74583333 mul exch %
dup dup dup mul mul 2.05 mul exch %
dup dup dup dup mul mul mul -0.45416667 mul exch %
dup dup dup dup mul mul mul mul 0.035 mul %
0.5 add add add add add %
3 mul }
%
\def\subs#1{%
\psline[linecolor=ColorDT2,linewidth=\dspStemWidth](#1,0)(! #1 dup \polyFun)
\psdot[linecolor=ColorDT2,dotstyle=*,dotsize=\dspDotSize](! #1 dup \polyFun)}
%
\center
\begin{dspPlot}[sidegap=0.2,yticks=none,xout=true]{0,4}{0,3}
\dspFunc[linecolor=ColorCT,linestyle=dashed,xmin=-.12,xmax=0]{x \polyFun}
\dspFunc[linecolor=ColorCT,linestyle=dashed,xmin=4,xmax=4.1]{x \polyFun}
\dspFunc[linecolor=ColorCT,xmin=0,xmax=4]{x \polyFun}
\multips(0,0)(0.78,0){6}{\psline[linewidth=0.5pt,linestyle=dashed,linecolor=lightgray](0,3)}
\subs{0}
\subs{0.78}
\subs{1.56}
\subs{2.34}
\subs{3.12}
\subs{3.9}
\dspSignal[linecolor=ColorDT]{x \polyFun}
\rput*[b](.89,.38){$\tau(1)$}\psline[arrows=->](1,.3)(.78,.3)
\rput*[b](1.78,.38){$\tau(2)$}\psline[arrows=->](2,.3)(1.56,.3)
\rput*[b](2.18,.38){$\tau(3)$}\psline[arrows=->](2,.3)(2.34,.3)
\rput*[b](3.18,.38){$\tau(4)$}\psline[arrows=->](3,.3)(3.12,.3)
\end{dspPlot}
\caption{On-the-fly fractional resampling with a 28\% frequency increase ($\beta = 0.78$).}\label{fig:mr:resampleup}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The second case is illustrated in Figure~\ref{fig:mr:resampledown} for $\beta = 1.22$ (i.e. an 18\% rate reduction) and the computation of the first few resampled values proceeds as follows:
\begin{itemize}
\item $n=0$: with initial synchronism, $m(0) = 0$, $\tau(0) = 0$ so $y[0] = x[0]$.
\item $n=1$: $m(1) = \mbox{nint}(1.22) = 1$, $\tau(1) = \beta - m(1) = 0.22$ so $y[1] = x_{0.22}[1]$
\item $n=2$: $m(2) = \mbox{nint}(2\beta) = \mbox{nint}(2.44) = 2$, $\tau(2) = 0.44$ so $y[2] = x_{0.44}[2]$
\item $n=3$: $m(3) = \mbox{nint}(3\beta) = \mbox{nint}(3.66) = 4$ and therefore we must \textit{skip} $x[3]$; $\tau(3) = 3.66 - 4 = -0.34$ so $y[3] = x_{-0.34}[4]$
 \item $n=4$: $m(4) = \mbox{nint}(4\beta) = \mbox{nint}(4.88) = 5$, $\tau(4) = -0.12$ so $y[4] = x_{-0.12}[5]$
\item $\ldots$
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b]
\def\polyFun{%
dup 2.315 mul exch %
dup dup mul -3.74583333 mul exch %
dup dup dup mul mul 2.05 mul exch %
dup dup dup dup mul mul mul -0.45416667 mul exch %
dup dup dup dup mul mul mul mul 0.035 mul %
0.5 add add add add add %
3 mul }
%
\def\subs#1{%
\psline[linecolor=ColorDT2,linewidth=\dspStemWidth](#1,0)(! #1 dup \polyFun)
\psdot[linecolor=ColorDT2,dotstyle=*,dotsize=\dspDotSize](! #1 dup \polyFun)}
%
\center
\begin{dspPlot}[sidegap=0.2,yticks=none,xout=true]{0,4}{0,3}
\dspFunc[linecolor=ColorCT,linestyle=dashed,xmin=-.12,xmax=0]{x \polyFun}
\dspFunc[linecolor=ColorCT,linestyle=dashed,xmin=4,xmax=4.1]{x \polyFun}
\dspFunc[linecolor=ColorCT,xmin=0,xmax=4]{x \polyFun}
\multips(0,0)(1.22,0){4}{\psline[linewidth=0.5pt,linestyle=dashed,linecolor=lightgray](0,3)}
\subs{0}
\subs{1.22}
\subs{2.44}
\subs{3.66}
\dspSignal[linecolor=ColorDT]{x \polyFun}
\rput*[b](1.11,.38){$\tau(1)$} \psline[arrows=->](1,.3)(1.22,.3)
\rput*[b](2.22,.38){$\tau(2)$} \psline[arrows=->](2,.3)(2.44,.3)
\rput*[b](3.83,.38){$\tau(3)$} \psline[arrows=<-](3.66,.3)(4,.3)
\end{dspPlot}
 \caption{On-the-fly fractional resampling with an 18\% frequency reduction ($\beta = 1.22$).}\label{fig:mr:resampledown}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Oversampling}
\label{sec:mr:oversampling}
The term ``oversampling'' describes a situation in which a signal's sampling rate is deliberately increased beyond the minimum value required by the sampling theorem in order to improve the performance of A/D and D/A converters at a lower cost than would be required by the use of more sophisticated analog circuitry.
\subsection{Oversampled A/D Conversion}\index{oversampling!in A/D conversion}
-The sampling theorem guarantees that we can losslessly convert an $F_s$-band\-limited signal $x(t)$ into a sequence $x[n]$, provided that the sampling frequency is larger than $F_s$. In a full A/D conversion, therefore, the only remaining source of distortion is introduced by quantization. Assuming a uniformly distributed input and a matching uniform quantizer, in Section~\ref{sec:da:quantization}, we have modeled this distortion as an additive noise source:
+The sampling theorem guarantees that we can losslessly convert an $F_s$-band\-limited signal $\mathbf{x}_c$ into a discrete-time sequence, provided that the sampling frequency is larger than $F_s$. In a full A/D conversion, therefore, the only remaining source of distortion is introduced by quantization. Assuming a uniformly distributed input and a matching uniform quantizer, in Section~\ref{sec:da:quantization}, we have modeled this distortion as an additive noise source,
\begin{equation}\label{eq:mr:overAD}
- \hat{x}[n] = x[n] + e[n].
+ \hat{\mathbf{x}} = \mathbf{x + e},
\end{equation}
-In the above expression $e[n]$ is a white process, \textit{uncorrelated with $x[n]$}, and whose uniform power spectral density
+where $\mathbf{e}$ is a white process, \textit{uncorrelated with $\mathbf{x}$}, and whose uniform power spectral density
\[
P_e(e^{j\omega}) = \sigma^2_e = \frac{\Delta^2}{12}
\]
depends only on $\Delta$, the width of the quantization interval or, equivalently, on the number of bits per sample. This is represented pictorially in the top panel of Figure~\ref{fig:mr:overAD}, which shows the PSDs of a critically sampled signal and of the associated quantization noise; the total SNR is the ratio between the areas under the two curves. One way to improve the SNR, as we know, is to increase $R$, the number of bits per sample used by the quantizer; unfortunately, the number of electronic components in an A/D converter grows proportionally to $R^2$, and so this is an expensive option.
-A clever digital workaround is provided by the observation that the sampling frequency used to convert $x(t)$ to $x[n]$ does not appear explicitly in~(\ref{eq:mr:overAD}) and therefore if we oversample the analog signal by, say, a factor of two, we will ``shrink'' the support of the signal's PSD without affecting the noise; this is shown in the middle panel of Figure~\ref{fig:mr:overAD}. Increasing the sampling rate does not modify the power of the signal, so the overall SNR does not change: in the figure, note how the shrinking support is matched by a proportional increase in amplitude for the signal's PSD\footnote{
- Although we haven't explicitly talked about sampling random processes, the formula for the spectrum of a sampled signal in~(\ref{eq:is:noAliasingSpecEq}) formally holds for power spectral densities as well, and the change in amplitude is due to the sampling frequency appearing as a scaling factor.}.
-At this point, however, we are in the digital domain and therefore it is simple (and cheap) to filter the out-of-band noise with a sharp lowpass filter, as represented by a dashed line in Figure~\ref{fig:mr:overAD}. In the example, this will halve the total power of the noise, increasing the SNR by a factor of 2 (that is, by 3~dB). We can now digitally downsample the result to obtain a critically sampled input signal with an improved SNR as illustrated in the bottom panel of Figure~\ref{fig:mr:overAD}.
+A clever digital workaround is provided by the observation that the sampling frequency used to convert $\mathbf{x}_c$ into a digital signal does not appear explicitly in~(\ref{eq:mr:overAD}) and therefore if we oversample the analog signal by, say, a factor of two, we will ``shrink'' the support of the signal's PSD without affecting the noise; this is shown in the middle panel of Figure~\ref{fig:mr:overAD}. Increasing the sampling rate does not modify the power of the signal, so the overall SNR does not change: in the figure, note how the shrinking support is matched by a proportional increase in amplitude for the signal's PSD\footnote{
+ Although we haven't explicitly introduced a sampling theorem for random processes, the formula for the spectrum of a sampled signal in~(\ref{eq:is:noAliasingSpecEq}) formally holds for power spectral densities as well, and the change in amplitude is due to the sampling frequency appearing as a scaling factor.}.
+At this point, however, we are in the digital domain and therefore it is simple (and cheap) to filter the out-of-band noise with a sharp lowpass filter with a magnitude response close to the dashed line in Figure~\ref{fig:mr:overAD}. In the example, this will halve the total power of the noise, increasing the SNR by a factor of 2 (that is, by 3~dB). We can now digitally downsample the result to obtain a critically sampled input signal with an improved SNR, as illustrated in the bottom panel of Figure~\ref{fig:mr:overAD}. To build intuition, we can look at the process in the time domain: at a high sampling rate, neighboring samples can be seen as repeated measurements of the same value (the signal varies slowly compared to the speed of sampling), each affected by quantization noise that is assumed to be independent from sample to sample; the lowpass filter acts as a local averager, which reduces the variance of the noise for the subsampled sequence.
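The 3~dB figure is easy to verify numerically, at least for a model in which the assumptions hold; in the following sketch (all parameters are illustrative) a signal occupying half the band plays the role of the twice-oversampled input:
\begin{verbatim}
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
delta = 1 / 64                             # quantization step
q = lambda s: delta * np.round(s / delta)  # uniform quantizer

h = signal.firwin(301, 0.5)                # half-band lowpass
x = signal.lfilter(h, 1, rng.standard_normal(500000))  # 2x-oversampled model

e_direct = q(x) - x                        # plain quantization error
xq = signal.lfilter(h, 1, q(x))[::2]       # quantize, lowpass, downsample
xr = signal.lfilter(h, 1, x)[::2]          # same chain without the quantizer
e_over = xq - xr

gain_db = 10 * np.log10(np.mean(e_direct**2) / np.mean(e_over**2))
print(gain_db)                             # close to 3 dB if the model holds
\end{verbatim}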
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1]{-1,1}{0,2.2}
\psline[linewidth=1.2pt,linecolor=gray](-1,0.5)(1,0.5)
\psframe[linewidth=0pt,fillstyle=vlines,hatchcolor=gray](-1,0.5)(1,0)
\dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=4]{-1,1}{0,2.2}
\psline[linewidth=1.2pt,linecolor=gray](-1,0.5)(1,0.5)
\psframe[linewidth=0pt,fillstyle=vlines,hatchcolor=gray](-1,0.5)(1,0)
\dspFunc[linecolor=ColorDF,xmin=-.5,xmax=0.5]{x \dspPorkpie{0}{.5} 2 mul}
\dspFunc[linecolor=ColorDFilt,linestyle=dashed]{x \dspRect{0}{1} 2 mul}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=4]{-1,1}{0,2.2}
\psline[linewidth=1.2pt,linecolor=gray](-1,0.25)(1,0.25)
\psframe[linewidth=0pt,fillstyle=vlines,hatchcolor=gray](-1,0.25)(1,0)
\dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{1}}
\end{dspPlot}
\caption{Oversampling for A/D conversion: signal's PSD and quantization error's PSD (gray) for critically sampled signal (top panel); oversampled signal and lowpass filter (middle panel); filtered and downsampled signal (bottom panel).}\label{fig:mr:overAD}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-In general, the block diagram for an oversampled A/D is shown in Figure~\ref{fig:mr:overADblock} and, theoretically, the SNR of the quantized signal improves by 3~dB for every doubling of the sampling rate with respect to the baseline SNR provided by the quantizer. In practice, things are not so straightforward. The technique is predicated on two fundamental assumptions: the independence between signal and noise and the decorrelation (whiteness) between successive noise samples; intuitively, it is clear that both hypotheses no longer hold when the sampling rate increases. In fact, with high oversampling factors, successive samples become \textit{very} correlated (in the limit, if one samples fast enough, a lot of samples will just have the same value) and therefore most of the quantization noise falls within the band of the signal. More efficient oversampling technique use feedback and nonlinear processing to push more and more of the quantization noise out of band; these converters, known under the name of Sigma-Delta quantizers, are however very difficult to analyze theoretically.
+In general, the block diagram for an oversampled A/D is as in Figure~\ref{fig:mr:overADblock} and, theoretically, the SNR of the quantized signal improves by 3~dB for every doubling of the sampling rate with respect to the baseline SNR provided by the quantizer. However, as we just illustrated with a time-domain analysis, the technique is predicated on two fundamental assumptions, the statistical independence between signal and noise and the absence of correlation between successive noise samples, both of which become invalid as the sampling rate increases. With high oversampling factors the correlation between successive noise samples increases and therefore most of the quantization noise will leak into the band of the signal. More efficient oversampling techniques use feedback and nonlinear processing to push more and more of the quantization noise out of band; these converters, known under the name of Sigma-Delta quantizers, are however very difficult to analyze theoretically.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\begin{dspBlocks}{0.8}{0.3}
$x(t)$~ & \BDsamplerFramed[0.46em] & \BDfilter{$\mathcal{Q}\{\cdot\}$} & \BDfilter{LP$\{\pi/N\}$} & \BDdwsmp{$N$} & $x[n]$ \\
& $F_o = NF_s$
\end{dspBlocks}
\psset{linewidth=1pt}
\ncline{-}{1,1}{1,2} \ncline{->}{1,2}{1,3}
\ncline{->}{1,3}{1,4}\ncline{->}{1,4}{1,5}
\ncline{->}{1,5}{1,6}
\caption{Oversampled A/D conversion.}\label{fig:mr:overADblock}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Oversampled D/A Conversion}
\index{oversampling!in D/A conversion}
-Ideally, D/A conversion would require a sinc interpolator or, in other words, an ideal analog lowpass filter with cutoff frequency $F_s/2$. Again, in Chapter~\ref{ch:is} we showed how the sinc represented an ideal kernel-based interpolator, that is, an interpolator whose shape is not dependent on the interpolation instant but also with infinite smoothness. By relaxing the smoothness constraint, intepolation becomes a much easier job and, in fact, most practical interpolators simply implement a zero-order hold. Recall from Section~\ref{sec:is:locInterp} that the continuous-time signal returned by a kernel-based interpolator $i(t)$ is
+All practical D/A converters use a kernel-based interpolator; recall from Section~\ref{sec:is:locInterp} that the interpolated continuous-time signal in this case is
\[
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, i\left(\frac{t - nT_s}{T_s}\right);
\]
-assuming that $x[n]$ is absolutely summable it is immediate to show that\index{spectrum!of kernel interpolation}
+where $i(t)$ is the shape of the kernel and $T_s$ is the interpolation period. In the frequency domain, the spectrum is
\begin{equation}\label{eq:mr:interpSpec}
X_c(f) = \frac{1}{F_s}\,X\left(e^{j2\pi f/F_s}\right)\, I\left(f/F_s\right)
\end{equation}
-with, as usual, $F_s = 1/T_s$. The above expression is the product of two terms; the first is the periodic digital spectrum, rescaled so that $\pi \rightarrow F_s/2$, and the second is the frequency response of the interpolation kernel. When using a sinc, the frequency response $I(f)$ is an ideal lowpass with cutoff frequency $F_s/2$, which ``kills off'' all the spectral repetitions outside of the baseband; the result is exemplified in Figure~\ref{fig:mr:sincInterp}, where the top panel shows a digital spectrum, the middle panel shows the two terms of Equation~(\ref{eq:mr:interpSpec}), and the bottom panel shows the final analog spectrum.
+with, as usual, $F_s = 1/T_s$. The above expression is the product of two terms; the first is the periodic digital spectrum, rescaled so that $\pi \rightarrow F_s/2$, and the second is the frequency response of the interpolation kernel.
+
+An ideal D/A converter would require a sinc interpolator which, as we know, is not realizable in practice, but which would completely remove the out-of-band components from the spectrum of the interpolated signal, as shown in Figure~\ref{fig:mr:sincInterp}, where the top panel shows a digital spectrum, the middle panel shows the two terms of Equation~(\ref{eq:mr:interpSpec}), and the bottom panel shows the final analog spectrum.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspTri{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X(e^{j2\pi f/F_s})I(f/F_s)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspRect{0}{2}}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspTri{0}{1}}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{Sinc interpolation in the frequency domain.}\label{fig:mr:sincInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspTri{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X(e^{j2\pi f/F_s})I(f/F_s)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{2} abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} abs mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{Zero-order hold interpolation in the frequency domain.}\label{fig:mr:zohInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=none]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} abs mul}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x abs 1 ge {0} {1 x \dspSinc{0}{2} abs div 0.7 mul} ifelse}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{Analog filter needed to compensate for the ZOH interpolation.}\label{fig:mr:zohComp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-With a realizable interpolator, however, the frequency response of the interpolation kernel cannot be uniformly zero outside of the $[-F_s/2, F_s/2]$ interval, and its transition band cannot be infinitely sharp. As a consequence, the spectral copies to the left and to the right of the baseband will ``leak'' in the reconstructed analog signal. As an example, the results of using a zero-order hold for interpolation are shown in Figure~\ref{fig:mr:sincInterp}, in which the contents of the three panels are as in Figure~\ref{fig:mr:sincInterp}. The final spectrum is affected in two ways:
+At the other end of the interpolation gamut lies the zero-order hold which, as we have seen, is easy to implement but has terrible spectral properties; the problems are shown in Figure~\ref{fig:mr:zohInterp}, in which the contents of the three panels are as in Figure~\ref{fig:mr:sincInterp}. The ZOH-interpolated spectrum is affected in two ways:
\begin{enumerate}
\item there is significant out-of-band energy due to the spectral copies that are only attenuated and not eliminated by the interpolator;
\item there is in-band distortion due to the fact that the interpolator has a non-flat passband.
\end{enumerate}
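Both effects can be read directly off the frequency response of the interpolation kernel; for the zero-order hold, assuming the causal unit-width kernel $i_0(t) = 1$ for $0 \leq t < 1$ and zero elsewhere, a one-line computation gives
\[
 I_0(f) = \int_{0}^{1} e^{-j2\pi ft}\, dt = e^{-j\pi f}\, \frac{\sin (\pi f)}{\pi f},
\]
whose magnitude decays only as $1/f$ (hence the surviving spectral copies) and is not flat over the baseband (hence the in-band distortion).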
-In theory, to compensate for these two problems, we would need an \textit{analog} filter whose frequency response is sketched in Figure~\ref{fig:mr:zohComp}; such a filter, however, and assuming its design was even doable, would be a costly device since analog filter with arbitrary responses are in general quite complex to implement. Please note that the present analysis remains valid also with higher order kernels, such as the first- and third-order interpolators detailed in Section~\ref{sec:is:locInterp}, since their frequency response is similar to that of the zero-order hold.
+Yet, the zero-order hold is so easy and cheap to implement that most D/A circuits use it exclusively.
+In theory, to compensate for the resulting problems, we would need an \textit{analog} filter whose frequency response is sketched in Figure~\ref{fig:mr:zohComp}; such a filter, however, even assuming we could design it exactly, would be a costly device since an analog filter with a sophisticated response requires high-precision electronic components.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\begin{dspBlocks}{0.8}{0.3}
$x[n]$~ & \BDupsmp{$N$} & \BDfilter{$F(e^{j\omega})$} & \BDfilter{$I(f)$} & $x(t)$ \\
& & & $F_o = NF_s$
\end{dspBlocks}
\psset{linewidth=1pt}
\ncline{-}{1,1}{1,2} \ncline{->}{1,2}{1,3}
\ncline{->}{1,3}{1,4}\ncline{->}{1,4}{1,5}
\ncline{->}{1,5}{1,6}
 \caption{Oversampled D/A conversion.}\label{fig:mr:overDAblock}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-Again, rather than using complicated and expensive analog filters, the performance of the D/A converter can be dramatically
-improved if we are willing to perform the conversion at a higher rate than the nominal sampling frequency, as shown in Figure~\ref{fig:mr:overDAblock}. The digital signal is first oversampled by a factor of $N$ and then filtered with a discrete-time filter $F(e^{j\omega})$ that combines the out of band frequency rejection of a sharp lowpass with cutoff $\pi/N$ with an in-band frequency shaping that mimics the characteristic of the filter in Figure~\ref{fig:mr:zohComp}. Since the filter is digital, there is no difficulty in designing, say, an unconditionally stable FIR with the desired characteristic.
+Again, rather than using complicated and expensive analog filters, the performance of the D/A converter can be dramatically improved if we are willing to perform the conversion at a higher rate than the nominal sampling frequency, as shown in Figure~\ref{fig:mr:overDAblock}. The digital signal is first oversampled by a factor of $N$ and then filtered with a discrete-time filter $F(e^{j\omega})$ that combines the out of band frequency rejection of a sharp lowpass with cutoff $\pi/N$ with an in-band frequency shaping that mimics the characteristic of the filter in Figure~\ref{fig:mr:zohComp}. Since the filter is digital, there is no difficulty in designing, say, an unconditionally stable FIR with the desired characteristic.
The oversampled D/A procedure using a zero-order hold and an oversampling factor $N=2$ is illustrated in Figure~\ref{fig:mr:overDA}. The top panel shows the DTFT of the original signal; the spectrum that enters the interpolator is
\[
X_o(e^{j\omega}) = X_{N\uparrow}(e^{j\omega})\,F(e^{j\omega}) = X(e^{j\omega N})\,F(e^{j\omega});
\]
the two terms of the above expression are shown in the second panel. We can decompose the filter $F(e^{j\omega})$ as
\[
F(e^{j\omega}) = N\,\rect(\omega N/(2\pi))\,C(e^{j\omega})
\]
where the rect\footnote{
We use an ideal filter for convenience but of course, in practical implementations, this would be a realizable sharp lowpass.}
matches the upsampler's rate and where $C(e^{j\omega})$ compensates for the nonflat in-band characteristics of the interpolator; the resulting spectrum is shown in the third panel. Note how oversampling has created some ``room'' to the left and the right of the spectrum; this will be important for the analog part of the D/A. If we now interpolate $x_o[n]$ at a rate of $F_o = NF_s$~Hz, we have
\begin{align}
X_o(f) &= \frac{1}{F_o}\,X_o(e^{j2\pi f / F_o})\, I_0\left(\frac{f}{F_o}\right) \nonumber \\
 &= \frac{N}{F_o}\, \left[ X(e^{j\omega N})\, \rect\left(\frac{\omega N}{2\pi}\right)\, C(e^{j\omega}) \right]_{\omega = 2\pi f/F_o}\, I_0\left(\frac{f}{F_o}\right) \nonumber \\
&= \frac{1}{F_s}\, X(e^{j2\pi f/F_s})\, \rect\left(\frac{f}{F_s}\right)\, C(e^{j2\pi f/F_o})\, I_0\left(\frac{f}{F_o}\right)
\end{align}
The fourth panel in Figure~\ref{fig:mr:overDA} shows the digital spectrum mapped on the real frequency axis and the magnitude response of the zero-order hold; note that now the first zero crossing of the latter occurs at $NF_s$ (compare this to Figure~\ref{fig:mr:zohInterp}). Since $C(e^{j\omega})$ is designed to compensate for $I_0(f)$, we have that
\[
 X_o(f) = X(f) \quad \mbox{for $f\in [-F_s/2, F_s/2]$}
\]
-that is, the interpolation is equivalent to a sinc interpolation over the baseband. The only remaining problem is the spurious high frequency content at multiples of $F_o$, as shown in the last panel of Figure~\ref{fig:mr:overDAblock}. This needs to be eliminated with an analog filter but, because of the room between spectral replicas created by oversampling, the required filter can have a wide transition band as shown in Figure~\ref{fig:mr:overDAlast} and therefore can be implemented very cheaply using for instance a low-order Butterworth lowpass.
+that is, the interpolation is equivalent to a sinc interpolation over the baseband. The only remaining problem is the spurious high-frequency content at multiples of $F_o$, as shown in the last panel of Figure~\ref{fig:mr:overDA}. This needs to be eliminated with an analog filter but, because of the room between spectral replicas created by oversampling, the required filter can have a wide transition band, as shown in Figure~\ref{fig:mr:overDAlast}, and therefore can be implemented very cheaply using, for instance, a low-order Butterworth lowpass. Finally, note that the present analysis remains valid also with higher-order kernels, such as the first- and third-order interpolators detailed in Section~\ref{sec:is:locInterp}, since their frequency response is similar to that of the zero-order hold.
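The shaping filter is easy to prototype digitally; the sketch below (parameter values are our own) uses \texttt{scipy.signal.firwin2}, a frequency-sampling design, to build for $N=2$ a lowpass whose passband gain $N/\sinc(f/2)$ inverts the ZOH droop:
\begin{verbatim}
import numpy as np
from scipy import signal

N = 2                          # oversampling factor
f = np.linspace(0, 1, 513)     # normalized frequency, 1 = Nyquist (pi)
desired = np.zeros_like(f)
band = f < 1.0 / N             # passband [0, pi/N)
# inverse of the ZOH magnitude response, expressed at the rate F_o
desired[band] = N / np.sinc(f[band] / 2)
taps = signal.firwin2(601, f, desired)
\end{verbatim}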
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=2,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspTri{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=2,ylabel={$X_{2\uparrow}(e^{j\omega})F(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x 2 mul \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorDFilt,linestyle=dashed]{x abs 0.5 ge {0} {1 x \dspSinc{0}{1} abs div 0.7 mul} ifelse}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X_o(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x abs 0.5 ge {0} {x 2 mul \dspTri{0}{1} x \dspSinc{0}{1} abs div } ifelse}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xtype=freq,xticks=custom,yticks=1,ylabel={$X_o(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriod{2} dup \dspTri{0}{1} exch .5 mul \dspSinc{0}{1} abs dup 0 gt {} {pop 1} ifelse div}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{4} abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s$ 2 $F_o/2$ 4 $F_o$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xtype=freq,xticks=custom,yticks=1,ylabel={$X_o(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriod{2} \dspTri{0}{1} x \dspSinc{0}{4} abs mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s$ 2 $F_o/2$ 4 $F_o$}
\end{dspPlot}
\caption{Oversampled D/A for $N=2$.}\label{fig:mr:overDA}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[xtype=freq,xtype=freq,xticks=custom,yticks=1]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriod{2} \dspTri{0}{1} x \dspSinc{0}{4} abs mul}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x 0.8 mul 4 exp 1 add 1 exch div}
\dspCustomTicks[axis=x]{0 0 1 $F_s$ 2 $F_o/2$ 4 $F_o$}
\end{dspPlot}
\caption{Final analog filtering in oversampled D/A.}\label{fig:mr:overDAlast}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
diff --git a/writing/sp4comm.multipub/110-multirate/90-mr-examples.tex b/writing/sp4comm.multipub/110-multirate/90-mr-examples.tex
index 0aaa502..4136ec4 100644
--- a/writing/sp4comm.multipub/110-multirate/90-mr-examples.tex
+++ b/writing/sp4comm.multipub/110-multirate/90-mr-examples.tex
@@ -1,127 +1,126 @@
\section{Examples}
\label{sec:mr:examples}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{example}{Radio over the phone}
-
+\begin{example}[Radio over the phone]
In the early days of radio (up to the 1950's), a main station often needed to provide audio content ahead of time to the ancillary broadcasters in its geographically distributed network. Before digital signal processing even existed, and before high-bandwidth communication lines became a possibility, the only form of point-to-point real-time communication was a standard telephone line. Telephone lines, however, have a much smaller bandwidth \index{bandwidth!of telephone channel} \index{telephone channel!bandwidth} than what is required to broadcast good-quality audio, as we have all experienced when listening to mind-numbing jingles while on hold on a call. To overcome this problem, the original audio tape would be played at a lower speed so that the resulting bandwidth could fit into the available band. At the other end of the line, a tape would record the signal while also rolling at a lower speed; when played back at normal speed, the original audio would emerge. For a continuous-time signal we know that a change in time scale produces an inversely proportional change in the frequency support, as pointed out in~(\ref{eq:is:scalingFT}):
\[
 \mbox{FT} \bigl\{ x (at) \bigr\} = \frac{1}{a}\, X(f/a),
\]
so that by slowing down the tape by a factor of two ($a = 1/2$) the spectral occupancy of the signal would be halved.
Today, with digital signal processing at our disposal, we have many more choices and here we will explore the difference between a discrete-time version of the analog scheme of yore and a full-fledged digital communication system such as the one we will study in detail in Chapter~\ref{ch:cs}. Assume we have a DVD-quality audio signal $s[n]$; the signal is finite-length and it corresponds to $30$~minutes of playback time. Recall that ``DVD-quality''\index{DVD} means that the audio is sampled at $48$~KHz with $24$~bits per sample; using $24$~bits means that we can practically neglect the distortion introduced by quantization. We want to send this signal over a telephone line knowing that the line is bandlimited to a maximum positive frequency of $3840$~Hz and that the impairment introduced by the transmission can be modeled as a source of noise that results in an SNR of $40$~dB for the received signal.\index{telephone channel!SNR}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspBlocks}{0.8}{0.5}
$s[n]$~ & \BDfilterMulti{multirate \\ converter (TX)} & \BDfilterMulti{48KHz D/A} & \raisebox{-1.2em}{ \includegraphics[height=3em]{\localpath{figs/phone.eps}}} \\
%
 \raisebox{-1.2em}{ \includegraphics[height=3em]{\localpath{figs/phone.eps}}} & \BDfilterMulti{48KHz A/D} & \BDfilterMulti{multirate \\ converter (RX)} & $\hat{s}[n]$
\\
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2} \ncline{->}{1,2}{1,3}
\ncline{->}{1,3}{1,4}
\ncline{->}{2,1}{2,2} \ncline{->}{2,2}{2,3}
\ncline{->}{2,3}{2,4}
\end{dspBlocks}
\caption{Transmission scheme for high-quality audio over a phone line.}\label{fig:mr:radioPhone}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Consider the transmission scheme in Figure~\ref{fig:mr:radioPhone} where the D/A is obviously designed to match the original audio and therefore operates at $48$~KHz. The (positive) bandwidth of the DVD-audio signal is $24$~KHz, while the telephone channel is limited to $3840$~Hz, so we need to shrink the spectrum of the audio signal using multirate processing. Since we're slowing down the signal we need a net increase in the number of samples and the upsampling factor is
\[
\frac{24,000}{3,840} = 6.25;
\]
this can be achieved with a combination of a $25$-times upsampler followed by a lowpass filter and a $4$-times downsampler, as in
Figure~\ref{fig:mr:radioPhoneChanger}; the filter's cutoff frequency is $\pi/25$ and its gain is $L_0 = 25$. At the
receiver the chain is inverted, with an upsampler by four, followed by a lowpass filter with cutoff frequency $\pi/4$ and gain $L_0 = 4$, followed by a $25$-times downsampler.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[!h]
\center
\begin{dspBlocks}{1}{0.6}
$x[n]$ & \BDupsmp{25} & \BDfilter{LP$\{\pi/25\}$} & \BDdwsmp{4} & $y[n]$
\psset{arrows=->,linewidth=1pt}
\ncline{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\end{dspBlocks}
\caption{A $6.25$-times upsampler.}\label{fig:mr:radioPhoneChanger}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Because of the upsampling, it will take a little over three hours to send the slowed-down signal over the phone ($6.25 \times 30 = 187.5$ minutes). The quality of the received signal is determined by the SNR of the telephone line; the in-band noise is unaffected by multirate processing and so the final audio will have an overall SNR of $40$~dB.
We can compare the above solution to a fully digital communication scheme. A standard voiceband modem \index{modem} can reliably transmit about $32$~kbits per second over a telephone line whose bandwidth and SNR are the ones described above. The 30-minute DVD-audio file contains $(30 \times 60 \times 48,000 \times 24)$~bits, which will therefore require a transmission time of approximately $18$ hours. The upside, however, is that the received audio will indeed be identical to the source, i.e.\ it will have the same SNR as the original DVD-quality audio. If we are willing to sacrifice quality for time, we could quantize the original signal at $8$~bits per sample, so that the SNR is approximately $48$~dB, and the transmission time would reduce to $6$~hours. Clearly, a modern audio transmission system would employ some advanced data compression scheme to reduce the necessary throughput.
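The arithmetic, for the record (a trivial check):
\begin{verbatim}
bits = 30 * 60 * 48000 * 24     # 30 minutes at 48 KHz, 24 bits/sample
print(bits / 32000 / 3600)      # 18.0 hours at 32 kbit/s
print(bits / 3 / 32000 / 3600)  # 6.0 hours with 8 bits/sample
\end{verbatim}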
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{example}{Spectral cut and paste}
+\begin{example}[Spectral cut and paste]
By using a suitable combination of upsampling and downsampling we can implement some nice tricks, such as swapping the upper and lower parts of a signal's spectrum. Consider a discrete-time signal $x[n]$ whose spectrum is shown in the top panel of Figure~\ref{fig:mr:swap}. The bottom panel shows the result of processing the signal with the network in Figure~\ref{fig:mr:swapCircuit}, where $L(z)$ is an ideal half-band \index{half-band filter} lowpass with cutoff $\pi/2$ and $H(z)$ is a complementary half-band highpass.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[!t]
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=2]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x \dspTri{0}{0.5}
x \dspQuad{1}{0.5}
x \dspQuad{-1}{0.5}
add add}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=2]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x \dspQuad{0}{0.5}
x \dspTri{1}{0.5}
x \dspTri{-1}{0.5}
add add}
\end{dspPlot}
\caption{Swapping the high and low parts of a spectrum.}\label{fig:mr:swap}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b]
\center\small
\begin{dspBlocks}{0.5}{0.4}
& & \BDfilter{$L(z)$} & \BDdwsmp{$2$} & \BDupsmp{$2$} & \BDfilter{$H(z)$} & & \\
$x[n]$~ & \BDsplit & & & & & \BDadd & $y[n]$ \\
& & \BDfilter{$H(z)$} & \BDdwsmp{$2$} & \BDupsmp{$2$} & \BDfilter{$L(z)$} & & \\
\psset{arrows=->,linewidth=1pt}
\ncline{-}{2,1}{2,2} \ncline{-}{1,2}{3,2}
\ncline{-}{1,6}{1,7}\ncline{-}{3,6}{3,7}
\ncline{1,7}{2,7} \ncline{3,7}{2,7}
\ncline{1,2}{1,3}\ncline{1,3}{1,4}\ncline{1,4}{1,5}\ncline{1,5}{1,6}
\ncline{3,2}{3,3}\ncline{3,3}{3,4}\ncline{3,4}{3,5}\ncline{3,5}{3,6}
\ncline{2,7}{2,8}
\end{dspBlocks}
\caption{Spectral ``swapper''.}\label{fig:mr:swapCircuit}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Further Reading}
\label{sec:mr:fr}
Historically, the topic of different sampling rates in signal processing was first treated in detail in R.\ E.\ Crochiere and
L.\ R.\ Rabiner's \textit{Multirate Digital Signal Processing\/} (Prentice-Hall, 1983). With the advent of filter banks and
wavelets, more recent books give a detailed treatment as well, such as P.\ P.\ Vaidyanathan's
\textit{Multirate Systems and Filter Banks\/} (Prentice Hall, 1992) and M.\ Vetterli and J.\ Kovacevic's \textit{Wavelets and Subband Coding\/} (Prentice Hall, 1995). The latter is available in open
access at www.waveletsandsubbandcoding.org.
diff --git a/writing/sp4comm.multipub/110-multirate/99-mr-exercises.tex b/writing/sp4comm.multipub/110-multirate/99-mr-exercises.tex
index 5158d72..2b4c3c9 100644
--- a/writing/sp4comm.multipub/110-multirate/99-mr-exercises.tex
+++ b/writing/sp4comm.multipub/110-multirate/99-mr-exercises.tex
@@ -1,232 +1,231 @@
\section{Exercises}
\label{sec:mr:ex}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Multirate identities}
Prove the following two identities:
\begin{enumerate}
\item Downsampling by $2$ followed by filtering by $H(z)$ is equivalent to filtering by $H(z^2)$ followed by downsampling by $2$.
\item Filtering by $H(z)$ followed by upsampling by $2$ is equivalent to upsampling by $2$ followed by filtering by $H(z^2)$.
\end{enumerate}
\end{exercise}
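These identities (often called the noble identities) can also be checked numerically; the following sketch (arbitrary test data, not a proof) verifies the first one:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(5)                    # H(z), arbitrary FIR
h2 = np.zeros(2 * len(h) - 1); h2[::2] = h    # H(z^2)

lhs = np.convolve(x[::2], h)                  # downsample, then H(z)
rhs = np.convolve(x, h2)[::2]                 # H(z^2), then downsample
assert np.allclose(lhs, rhs[:len(lhs)])
\end{verbatim}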
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Multirate systems}
-Consider the input-output characteristic of the following multirate systems. Remember that, technically, one cannot talk of transfer functions in the case of multirate systems since sampling rate changes are not time invariant. It may happen, though, that by carefully designing the processing chain, the input-output characteristic does indeed implement a time-invariant transfer function.
+Consider the input-output characteristic of the following multirate systems. Remember that, technically, one cannot talk of transfer functions in the case of multirate systems since sampling rate changes are not time-invariant. It may happen, though, that by carefully designing the processing chain, the input-output characteristic does indeed implement a time-invariant transfer function.
\begin{enumerate}
\item Find the overall transformation operated by the following system:
{
\center\small
\begin{dspBlocks}{0.3}{0.4}
$x[n]$~ & \BDupsmp{$2$} & \BDfilter{$H_2(z^2)$} & \BDdwsmp{$2$} & %
\BDupsmp{$2$} & \BDfilter{$H_1(z^2)$} & \BDdwsmp{$2$} & $y[n]$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\ncline{1,5}{1,6}\ncline{1,6}{1,7}
\ncline{->}{1,7}{1,8}
\end{dspBlocks}
}
\item In the system below, if $H(z)=E_0(z^2)+z^{-1}E_1(z^2)$ for some $E_{0,1}(z)$, prove that $Y(z)=X(z)E_0(z)$.
{
\center\small
\begin{dspBlocks}{0.5}{0.4}
$x[n]$~ & \BDupsmp{$2$} & \BDfilter{$H(z)$} & \BDdwsmp{$2$} & $y[n]$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{->}{1,4}{1,5}
\end{dspBlocks}
}
\item Let $H(z)$, $F(z)$ and $G(z)$ be filters satisfying
\begin{align*}
& H(z)G(z)+H(-z)G(-z) = 2 \\
& H(z)F(z)+H(-z)F(-z) = 0
\end{align*}
Prove that one of the following systems is the identity and the other is the zero system:
{
\center\small
\begin{dspBlocks}{0.5}{0.4}
$x[n]$~ & \BDupsmp{$2$} & \BDfilter{$G(z)$} & \BDfilter{$H(z)$} & \BDdwsmp{$2$} & $y[n]$ \\
$x[n]$~ & \BDupsmp{$2$} & \BDfilter{$F(z)$} & \BDfilter{$H(z)$} & \BDdwsmp{$2$} & $y[n]$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\ncline{->}{1,5}{1,6}
\ncline{->}{2,1}{2,2} \ncline{2,2}{2,3}
\ncline{2,3}{2,4}\ncline{2,4}{2,5}
\ncline{->}{2,5}{2,6}
\end{dspBlocks}
}
\end{enumerate}
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Multirate Signal Processing}
Consider a real-valued discrete-time signal $x[n]$ with the following spectrum:
\begin{center}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=4]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{
x \dspRect{0}{0.5}
x \dspTri{0.5}{0.25} x \dspTri{-0.5}{0.25} add
x \dspQuad{1}{0.25} x \dspQuad{-1}{0.25} add
add add}
\end{dspPlot}
\end{center}
Now consider the following multirate processing scheme in which $L(z)$ is an ideal \emph{lowpass} filter with cutoff
frequency $\pi/2$ and $H(z)$ is an ideal \emph{highpass} filter with cutoff frequency $\pi/2$:
\begin{center}
\small
\psset{linewidth=1pt}
\psset{unit=.6mm}
\begin{pspicture}(0,0)(220,100)
\rput[l](0,50){\rnode{input}{$x[n]$}}
\pnode(20,50){branch1}
\cnode*(20.6,50){1.5pt}{toto}
\ncline{input}{branch1}
\rput(40,70){\rnode{fil1}{\psframebox{$L(z)$}}}
\rput(40,30){\rnode{fil2}{\psframebox{$H(z)$}}}
\ncbar[angle=180,armA=12]{<->}{fil1}{fil2}
\rput(70,70){\rnode{dw1}{\pscirclebox{$\downarrow 2$}}}
\rput(70,30){\rnode{dw2}{\pscirclebox{$\downarrow 2$}}}
\ncline{->}{fil1}{dw1} \ncline{->}{fil2}{dw2}
\rput(100,70){\rnode{up1}{\pscirclebox{$\uparrow 4$}}}
\rput(100,30){\rnode{up2}{\pscirclebox{$\uparrow 4$}}}
\ncline{->}{dw1}{up1} \ncline{->}{dw2}{up2}
\pnode(120,70){branch2} \ncline{up1}{branch2}
\rput(140,80){\rnode{fil3}{\psframebox{$L(z)$}}}
\rput(140,60){\rnode{fil4}{\psframebox{$H(z)$}}}
\ncbar[angle=180,armA=12]{<->}{fil3}{fil4}
\rput(170,80){\rnode{out1}{$y_1[n]$}}
\rput(170,60){\rnode{out2}{$y_2[n]$}} \ncline{->}{fil3}{out1}
\ncline{->}{fil4}{out2}
\pnode(120,30){branch3} \ncline{up2}{branch3}
\rput(140,40){\rnode{fil5}{\psframebox{$L(z)$}}}
\rput(140,20){\rnode{fil6}{\psframebox{$H(z)$}}}
\ncbar[angle=180,armA=12]{<->}{fil5}{fil6}
\rput(170,40){\rnode{out3}{$y_3[n]$}}
\rput(170,20){\rnode{out4}{$y_4[n]$}} \ncline{->}{fil5}{out3}
\ncline{->}{fil6}{out4}
\end{pspicture}
\end{center}
Plot the four spectra $Y_1(e^{j\omega})$, $Y_2(e^{j\omega})$, $Y_3(e^{j\omega})$, $Y_4(e^{j\omega})$.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Digital processing of continuous-time signals}
-In your grandmother's attic you just found a treasure: a collection of super-rare $78$ rpm vinyl\index{vinyl} jazz records.
-The first thing you want to do is to transfer the recordings to compact discs, so you can listen to them without wearing out the originals. Your idea is obviously to play the record on a turntable and use an A/D converter to convert the line-out signal into a discrete-time sequence, which you can then burn onto a CD. The problem is, you only have a ``modern'' turntable, which plays records at $33$ rpm. Since you're a DSP wizard, you know you can just go ahead, play the $78$ rpm record at $33$~rpm and sample the output of the turntable at $44.1$~KHz. You can then manipulate the signal in the discrete-time domain so that, when the signal is recorded on a CD
+In your grandmother's attic you just found a treasure: a collection of super-rare $78$ rpm vinyl\index{vinyl} jazz records. The first thing you want to do is to transfer the recordings to compact discs, so you can listen to them without wearing out the originals. Your idea is obviously to play the record on a turntable and use an A/D converter to convert the line-out signal into a discrete-time sequence, which you can then burn onto a CD. The problem is, you only have a ``modern'' turntable, which plays records at $33$ rpm. Since you're a DSP wizard, you know you can just go ahead, play the $78$ rpm record at $33$~rpm and sample the output of the turntable at $44.1$~KHz. You can then manipulate the signal in the discrete-time domain so that, when the signal is recorded on a CD
and played back, it will sound right.
In order to design a system that performs the above conversion consider the following:
\begin{itemize}
\item Call $s(t)$ the continuous-time signal encoded on the $78$ rpm vinyl (the jazz music).
\item Call $x(t)$ the continuous-time signal you obtain when you play the record at $33$ rpm on the modern turntable.
\item Let $x[n] = x(nT_s)$, with $T_s = 1/44,100$.
\end{itemize}
Answer the following questions:
\begin{enumerate}
\item Express $x(t)$ in terms of $s(t)$.
\item Sketch the Fourier transform $X(f)$ when $S(f)$ is like so:
\begin{center}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=none]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPorkpie{0}{0.25}}
\dspCustomTicks{0 0 0.25 $F_{\max}$}
\end{dspPlot}
\end{center}
The highest nonzero frequency of $S(f)$ is $F_{\max} = 16,000$~Hz (old records have a smaller bandwidth than modern ones).
\item Design a system to convert $x[n]$ into a sequence $y[n]$ so that, when you interpolate $y[n]$ to a continuous-time signal $y(t)$ with interpolation period $T_s$, you obtain $Y(f) = S(f)$.
\item What if you had a turntable which plays records at $45$ rpm? Would your system be different? Would it be better?
\end{enumerate}
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Another fractional delay.}
Consider the following block diagram:
\begin{center}
\small
\begin{dspBlocks}{0.5}{0.4}
$x[n]$~ & \BDupsmp{$M$} & \BDfilter{LP$\{\pi/M\}$} & \BDfilter{$z^{-L}$} & \BDdwsmp{$M$} & $y[n]$
\ncline{->}{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{1,4}{1,5}
\ncline{->}{1,5}{1,6}
\end{dspBlocks}
\end{center}
and show that this system implements a fractional delay (i.e.\ show that the transfer function of the system is that of a pure delay, where the delay is not necessarily an integer).
To see a practical use of this structure, consider a data transmission system over an analog channel. The transmitter builds a discrete-time signal $s[n]$; this is converted to an analog signal $s_c(t)$ via an interpolator with period $T_s$, and finally $s_c(t)$ is transmitted over the channel. The signal takes a finite amount of time to travel all the way to the receiver; say that the
transmission time over the channel is $t_0$~seconds: the received signal $\hat{s}_c(t)$ is therefore just a delayed version of the transmitted signal,
\[
\hat{s}_c(t) = s_c (t-t_0)
\]
At the receiver, $\hat{s}_c(t)$ is sampled with period $T_s$ to obtain $\hat{s}[n]$; assume that the sampling introduces no aliasing.
\begin{enumerate}
\item Write out the Fourier Transform of $\hat{s}_c(t)$ as a function of $S_c(f)$.
\item Write out the DTFT of the sampled received signal $\hat{s}[n]$.
\item Now we want to use the above multirate structure to compensate for the transmission delay. Assume $t_0 = 4.6\,T_s$; determine the values for $M$ and $L$ in the above block diagram so that, with $\hat{s}[n]$ as the input, the output is $y[n] = s[n - D]$, where $D \in \mathbb{N}$ has the smallest possible value (assume an ideal lowpass filter in the multirate structure).
\end{enumerate}
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Multirate filtering}
Assume $H(z)$ is an ideal lowpass filter with cutoff frequency $\pi/10$. Consider the system described by the following block diagram:
\begin{center}
\small
\begin{dspBlocks}{0.5}{0.4}
$x[n]$~ & \BDupsmp{$M$} & \BDfilter{$H(z)$} & \BDdwsmp{$M$} & $y[n]$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2} \ncline{1,2}{1,3}
\ncline{1,3}{1,4}\ncline{->}{1,4}{1,5}
\end{dspBlocks}
\end{center}
\begin{enumerate}
\item Compute the transfer function of the system for $M = 2$.
\item Compute the transfer function of the system for $M = 5$.
\item Compute the transfer function of the system for $M = 9$.
\item Compute the transfer function of the system for $M = 10$.
\end{enumerate}
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{exercise}{Oversampled sequences}
Consider a real-valued sequence $x[n]$ for which
\[
X(e^{j \omega })=0 \qquad \quad \frac{\pi}{3} \leq | \omega | \leq \pi
\]
One sample of $x[n]$ may have been corrupted and we would like to approximately or exactly recover it. We denote by $n_0$ the time index of the corrupted sample and by $\hat{x}[n]$ the corresponding corrupted sequence.
\begin{enumerate}
\item Specify a practical algorithm for exactly or approximately recovering $x[n]$ from $\hat{x}[n]$ if $n_0$ is known.
\item What would you do if the value of $n_0$ is not known?
\item Now suppose we have $k$ corrupted samples at either known or unknown locations. What is the condition that $X(e^{j \omega })$ must satisfy to be able to exactly recover $x[n]$? Specify the algorithm.
\end{enumerate}
\end{exercise}
diff --git a/writing/sp4comm.multipub/20-signals/90-dt-examples.tex b/writing/sp4comm.multipub/20-signals/90-dt-examples.tex
index 423d095..399d8e6 100644
--- a/writing/sp4comm.multipub/20-signals/90-dt-examples.tex
+++ b/writing/sp4comm.multipub/20-signals/90-dt-examples.tex
@@ -1,166 +1,166 @@
\section{Examples}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{example}[Discrete time in the Far West]
\label{exa:dt:wagonwheel}\index{wagonwheel effect|mie}
\index{wagon wheel effect} \index{complex exponential!aliasing}
The fact that the ``fastest'' digital frequency is $2\pi$ can be readily appreciated when watching an old movie, and in particular a good old western. Consider the classic scene of a stagecoach leaving town: the wagon's spoked wheels start to turn forward, faster and faster as the coach gains speed, but then they appear to stop and to start turning backwards. The phenomenon is known as ``the wagonwheel effect'' and it is an instance of a digital angular frequency wrapping around its possible maximum value.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\def\wheel#1#2{
\pscircle(#1,50){#2}
}
\def\spokes#1#2#3#4{
\def\a{#3}
\def\c{#1}
\def\r{#2}
\FPupn\xa{\r{} \a{} sin mul \c{} add}
\FPupn\xb{-\r{} \a{} sin mul \c{} add}
\FPupn\ya{\r{} \a{} cos mul 50 add}
\FPupn\yb{-\r{} \a{} cos mul 50 add}
\psline[#4](\xa, \ya)(\xb, \yb)
\FPupn\xa{-\r{} \a{} cos mul \c{} add}
\FPupn\xb{\r{} \a{} cos mul \c{} add}
\FPupn\ya{\r{} \a{} sin mul 50 add}
\FPupn\yb{-\r{} \a{} sin mul 50 add}
\psline[#4](\xa, \ya)(\xb, \yb)
\ifx&#4&
\qdisk(\xb, \yb){2pt}
\else
\relax
\fi
}
\psset{unit=0.4mm, linewidth=1pt}
\framebox{
\begin{pspicture}(0, 0)(300, 100)
\wheel{50}{30}
\spokes{50}{30}{0}{}
\wheel{150}{30}
\spokes{150}{30}{0}{linecolor=gray}
\spokes{150}{30}{-0.2}{}
\wheel{250}{30}
\spokes{250}{30}{0}{linecolor=lightgray}
\spokes{250}{30}{-0.2}{linecolor=gray}
\spokes{250}{30}{-0.4}{}
\end{pspicture}}
\vspace{2em}
\framebox{
\begin{pspicture}(0, 0)(300, 100)
\wheel{50}{30}
\spokes{50}{30}{0}{}
\wheel{150}{30}
\spokes{150}{30}{0}{linecolor=gray}
\spokes{150}{30}{-1.4}{}
\wheel{250}{30}
\spokes{250}{30}{0}{linecolor=lightgray}
\spokes{250}{30}{-1.4}{linecolor=gray}
\spokes{250}{30}{-2.8}{}
\end{pspicture}}
\caption{Forward motion (top panel, $\omega_0 = \pi/15$) and apparent backward motion (bottom panel, $\omega_0 = 0.9\,\pi/2$).}
\label{fig:dt:wagonwheel}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To better understand the phenomenon, remember that each frame in the film is a static snapshot of a rotating wheel. In stationary conditions (i.e., when the wheel's angular speed is constant), if the angular speed of the wheel is $\omega_0$ radians per second and the frame rate of the movie is $F$ frames per second, the angular displacement between two successive frames is $\Delta = \omega_0/F$.
If the wheel has $K$ spokes (and has perfect circular symmetry), two wheel positions $2\pi/K$ radians apart will be indistinguishable. That gives us a first upper limit on the maximum angular speed:
\[
 \omega_0 < 2\pi \frac{F}{K} \quad \mbox{rad/sec}
\]
Things are however slightly more complicated because of the perceptual mechanisms of the human visual system. The illusion of movement in motion pictures relies on the brain's ability to ``fill in the gaps'' between successive static images, provided that the images come fast enough: anything below approximately~12 frames per second will not create the illusion of motion. The full mechanism at work is very complex, but one of the underlying principles is that the brain applies a ``plausibility test'' to the virtual motion and goes with the most realistic hypothesis. When looking at a revolving spoked wheel, the brain will perceive a rotation in the direction that minimizes the apparent motion of the spokes between successive frames. So, for small angular displacements, as in the top panel of Figure~\ref{fig:dt:wagonwheel}, the motion will appear counterclockwise; for displacements larger than $\pi/K$ (half the angular spacing between spokes), on the other hand, the motion will appear clockwise as in the bottom panel of Figure~\ref{fig:dt:wagonwheel}. In both figures, the current position is drawn in black and previous positions are drawn in progressively lighter shades of gray; the current position for the reference spoke is indicated by a dot. This observation gives us a second upper limit, which we will call $\omega_1$, on the maximum angular speed for ``forward'' motion:
\[
 \omega_0 < \omega_1 = \pi \frac{F}{K} \quad \mbox{rad/sec}
\]
In practice things are actually a bit more complicated still. For instance, when the angular speed is equal to $\omega_1$ above, the perceived image may be that of a stationary wheel with twice the number of spokes, i.e., the preferred interpretation on the part of the brain is to superimpose two successive frames into a slightly blurred but still image rather than postulating such a wide motion. The net effect is that, at speeds close to $\omega_1$, the wheel appears to have twice the number of spokes and therefore the maximum forward speed is $\omega_2 = \omega_1/2$.
Finally, consider a ``real'' wheel with a diameter of $d$ meters. The linear speed of the wagon for an angular speed of $\omega_2$ is
\[
v = \frac{d}{2}\, \frac{\pi}{2}\, \frac{F}{K}
\]
For an 8-spoke wheel with a diameter of one meter in a movie shot at 24~fps, the maximum forward velocity is reached when the wagon travels at approximately $8.5$~km/h. Since the apparent forward/backward switch in rotation takes place each time the angular velocity reaches a multiple of $\omega_2$, by the time the stagecoach reaches full speed the wagonwheel effect has happened several times.
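Plugging in the numbers:
\[
v = \frac{d}{2}\,\frac{\pi}{2}\,\frac{F}{K} = \frac{1}{2}\cdot\frac{\pi}{2}\cdot\frac{24}{8} \approx 2.36~\mbox{m/s} \approx 8.5~\mbox{km/h}.
\]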
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{example}{Building periodic signals}\label{exa:dt:periodization}\index{periodization|mie}
+\begin{example}[Building periodic signals] \label{exa:dt:periodization}\index{periodization|mie}
Given an infinite sequence $\mathbf{x}$ and an integer $N > 0$ we can always formally write
\begin{equation}\label{eq:dt:periodization}
\tilde{\mathbf{y}} = \sum_{k=-\infty}^{\infty} \mathcal{D}^{kN} \mathbf{x}
\end{equation}
or, explicitly,
\[
\tilde{y}[n] = \sum_{k=-\infty}^{\infty} x[n - kN]
\]
The signal $\tilde{\mathbf{y}}$, if it exists, is an $N$-periodic sequence, {}``manufactured''{} by superimposing infinite copies of the original signal spaced $N$~samples apart. We can distinguish three cases:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[p]
\center
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.2}
\dspSignal[linecolor=ColorDT]{x \dspTri{0}{10}}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.2}
\dspSignalOpt[linecolor=ColorDT]{/S 35 def}{0 -5 1 5 {S mul x add \dspTri{0}{10} add} for}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.2}
\dspSignalOpt[linecolor=ColorDT]{/S 19 def}{0 -5 1 5 {S mul x add \dspTri{0}{10} add} for}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.2}
\dspSignalOpt[linecolor=ColorDT]{/S 15 def}{0 -5 1 5 {S mul x add \dspTri{0}{10} add} for}
\end{dspPlot}
\caption{Periodization of a simple finite-support signal (support length $L=19$); original signal (top panel) and
periodized versions with $N = 35 > L$, $N = 19 = L$, $N = 15 < L$ respectively.}\label{fig:dt:periodization}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{enumerate}
\item If $\mathbf{x}$ is finite-support and $N$ is larger than the size of the support, then the copies in the sum do not overlap, as illustrated in the second panel of Figure~\ref{fig:dt:periodization}; when $N$ is exactly equal to the size of the support, then $\tilde{\mathbf{y}}$ corresponds to the periodic extension of the nonzero part of the signal, as shown in the third panel of Figure~\ref{fig:dt:periodization}.
\item If $\mathbf{x}$ is finite-support and $N$ is smaller than the size of the support then the copies in the sum do overlap and the shape of the periodized signal will no longer match the shape of the original, as illustrated in the last panel of Figure~\ref{fig:dt:periodization}; since, for each $n$, the value of $\tilde{y}[n]$ is the sum of a finite number of terms, the periodized signal always exists.
\item If $\mathbf{x}$ has infinite support, then each value of $\tilde{\mathbf{y}}$ is the sum of an infinite number of terms and therefore the periodized signal only exists if the sum converges; a sufficient condition for this to happen is that the original sequence be absolutely summable. An example is shown in Figure~\ref{fig:dt:periodizationExp} for an exponential decay of the form $x[n] = \alpha^{-n}\, u[n]$. The periodization formula yields
\[
\tilde{y}[n] = \sum_{k=-\infty}^{\infty} \alpha^{-(n - kN)}u[n-kN] = \sum_{k=-\infty}^{\lfloor n/N \rfloor} \alpha^{-(n - kN)}
\]
since $u[n-kN] = 0$ for $k > \lfloor n/N \rfloor$. Now write $n = mN + i$ with $m = \lfloor n/N \rfloor$ and $i = n \mod N$ to obtain
\[
\tilde{y}[n] = \sum_{k=-\infty}^{m} \alpha^{-(m - k)N -i} = \alpha^{-i} \sum_{h=0}^{\infty} \alpha^{-hN};
\]
for $|\alpha|>1$ the sum converges to
\[
\tilde{y}[n] = \frac{ \alpha^{-(n \mod N)}}{1 - \alpha^{-N}}
\]
which is indeed $N$-periodic; a quick numerical check of this closed form is sketched right after this list.
\end{enumerate}
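The closed form just derived is easy to verify numerically; a minimal NumPy sketch (with the infinite sum truncated to enough copies for the tail to be negligible):
\begin{verbatim}
import numpy as np

alpha, N = 1.1, 21
n = np.arange(-40, 41)

def x(n):
    # x[n] = alpha^(-n) u[n]
    return np.where(n >= 0, alpha ** (-n), 0.0)

y = sum(x(n - k * N) for k in range(-10, 11))        # direct periodization
y_closed = alpha ** (-np.mod(n, N)) / (1 - alpha ** (-N))
print(np.max(np.abs(y - y_closed)))                  # tiny truncation error
\end{verbatim}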
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\def\period{21 }
\def\alphaVal{1.1 }
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.4}
\dspSignal[linecolor=ColorDT]{x 0 ge {\alphaVal x neg exp} {0} ifelse}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xticks=10]{-40,40}{0,1.4}
\dspSignal[linecolor=ColorDT]{\alphaVal x x \period div floor \period mul sub neg exp
1 \alphaVal -\period exp sub
div}
\end{dspPlot}
\caption{Periodization of $x[n] = (\alphaVal)^{-n}\,u[n]$ with $N = \period$; original signal (top panel) and
periodized version (bottom panel).}\label{fig:dt:periodizationExp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Further Reading}
\textit{The} reference on discrete-time signals is undoubtedly the classic \textit{Discrete-Time Signal Processing\/},
by A.\ V.\ Oppenheim and R.\ W.\ Schafer (Prentice-Hall, last edition in 1999). Other books of interest for an introductory overview include: B. Porat, \textit{A Course in Digital Signal Processing\/} (Wiley, 1997) and R. L. Allen and D. W. Mills' \textit{Signal Analysis\/} (IEEE Press, 2004).
diff --git a/writing/sp4comm.multipub/20-signals/99-dt-exercises.tex b/writing/sp4comm.multipub/20-signals/99-dt-exercises.tex
index 789f97f..00b26fa 100644
--- a/writing/sp4comm.multipub/20-signals/99-dt-exercises.tex
+++ b/writing/sp4comm.multipub/20-signals/99-dt-exercises.tex
@@ -1,236 +1,232 @@
\section{Exercises}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ifexercises{%
-
\begin{exercise}{Review of complex numbers.}
\begin{enumerate}
\item Let $s[n] = \frac{1}{2^n} + j\frac{1}{3^n}$. Compute $\sum_{n=1}^\infty s[n].$
\item Do the same with $s[n] =\left(\frac{j}{3}\right)^n$.
\item Characterize the set of complex numbers satisfying $z^*=z^{-1}$.
\item Find 3 distinct complex numbers $\{z_0, z_1, z_2\}$ for which $z_i^3 = 1$.
\item Compute the infinite product $\prod_{n=1}^\infty e^{j\pi/2^n}$
\end{enumerate}
\end{exercise}
-
}\fi
\ifanswers{%
-
\begin{solution}{}
\begin{enumerate}
\item A classic result for the partial sum of a geometric series states that:
\[
\sum_{k=0}^{N} z^k=
\begin{cases}
\frac{1-z^{N+1}}{1-z} & \text{for } z\neq 1\\
N+1 & \text{for } z=1\,.
\end{cases}
\]
For a simple proof, we can start from the following equations:
\begin{align*}
s &= 1+z+z^2+\ldots +z^N\,, \\
-z s & =-z-z^2-\ldots -z^N-z^{N+1}\,.
\end{align*}
which, added together, yield
\[
(1-z)s=1-z^{N+1} \Rightarrow s=\frac{1-z^{N+1}}{1-z}\,.
\]
From this we can easily derive
\[
\sum_{k=N_1}^{N_2} z^k = z^{N_1} \sum_{k=0}^{N_2-N_1}z^k = \frac{z^{N_1}-z^{N_2+1}}{1-z}\,.
\]
We can now write
\begin{align*}
\sum_{n=1}^{N} s[n]&=\sum_{n=1}^{N} 2^{-n} + j\sum_{n=1}^{N} 3^{-n}\\
&=\frac{1}{2}\cdot \frac{1-2^{-N}}{1-2^{-1}} + j \frac{1}{3}\cdot
\frac{1-3^{-N}}{1-3^{-1}} = (1-2^{-N})+j\frac{1}{2}(1-3^{-N})\,.
\end{align*}
Since
\[
\lim_{N \to \infty} 2^{-N} = \lim_{N \to \infty} 3^{-N} = 0
\]
we have
\[
\sum_{n=1}^{\infty} s[n]= 1+\frac{1}{2}j\,.
\]
\item We have
\[
\sum_{k=1}^{N}s[k]=\frac{j}{3}\cdot\frac{1-(j/3)^N}{1-j/3}\,.
\]
Since $|\frac{j}{3}|=\frac{1}{3} <1$, $\lim_{N\to\infty}(j/3)^N=0$. Therefore
\[
\sum_{k=1}^{\infty} s[k]=\frac{j}{3-j} = \frac{j(3+j)}{10} = -\frac{1}{10}+j\cdot \frac{3}{10}\,.
\]
\item If $z^* = z^{-1}$, which requires $z \neq 0$, then
\[
 z z^* = 1.
\]
Since $z z^* = |z|^2$, the condition is equivalent to $|z|^2 = 1$; it holds for all complex numbers that lie on the unit circle in the complex plane.
\item Since $e^{j2k\pi}=1$, for all $k\in\mathbb{Z}$, $z_k=e^{j\frac{2k\pi}{3}}$ will satisfy $z_k^3=1$. The sequence $e^{j\frac{2k\pi}{3}}$ is periodic with period 3, so there are only three distinct values as $k$ takes on all possible values and these are:
\[
z_0=1, \quad z_1=e^{j\frac{2\pi}{3}} \quad \mbox{and} \quad z_2=e^{j\frac{4\pi}{3}}.
\]
\item We have
\[
\prod_{n=1}^{N}e^{j\frac{\pi}{2^n}}=e^{j\pi\sum_{n=1}^{N}2^{-n}}=e^{j\pi\frac{1}{2}\cdot\frac{1-2^{-N}}{1-1/2}}\,.
\]
Since $\lim_{N\to\infty}2^{-N}=0$,
\[
\prod_{n=1}^{\infty}e^{j\frac{\pi}{2^n}}=e^{j\pi}=-1.
\]
\end{enumerate}
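These results are easy to sanity-check numerically; for instance, in NumPy (a sketch, with truncated sums and products standing in for the infinite ones):
\begin{verbatim}
import numpy as np

n = np.arange(1, 60)
print(np.sum(0.5 ** n + 1j / 3.0 ** n))        # ~ 1 + 0.5j     (item 1)
print(np.sum((1j / 3) ** n))                   # ~ -0.1 + 0.3j  (item 2)
print(np.prod(np.exp(1j * np.pi / 2.0 ** n)))  # ~ -1           (item 5)
\end{verbatim}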
\end{solution}
-
}\fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ifexercises{%
-
\begin{exercise}{Periodic signals}
For each of the following discrete-time signals, state whether the signal is periodic and, if so, specify the
period:
\begin{enumerate}
\item $x[n] = e^{j\frac{n}{\pi}}$
\item $x[n] = \cos(n)$
\item $x[n] = \displaystyle \sqrt{\cos \left(\pi \frac{n}{7} \right)}$
\item $x[n] = \displaystyle \sum_{k = -\infty}^{+\infty}y[n - 100\,k]$, with $y[n]$ absolutely summable.
\end{enumerate}
\end{exercise}
}\fi
\ifanswers{%
+\begin{solution}{}
To verify periodicity in discrete time, in all cases we need to check whether the equation $x[n+N] = x[n]$ admits a solution for some positive integer $N$.
\begin{enumerate}
\item $x[n+N] = e^{j\frac{n+N}{\pi}} = e^{j\frac{n}{\pi}}e^{j\frac{N}{\pi}} = e^{j\frac{N}{\pi}}x[n]$. Therefore we need to check whether $e^{j\frac{N}{\pi}} = 1$ for some $N\in \mathbb{Z}$. Clearly this is not the case: $e^{j\alpha} = 1$ only for $\alpha = 2k\pi$ with $k$ integer; here $\alpha = N/\pi$, so we would need $N = 2k\pi^2$, which is never a positive integer. The signal is not periodic.
\item Using the same reasoning as before, $\cos(n+N) = \cos(n)$ would require $N = 2k\pi$, which is never a positive integer; a function with period $2\pi$ in its argument cannot have an integer-valued period.
\item Since the square root does not affect periodicity, we just need to verify whether $\cos(\pi(n+N)/7) = \cos(\pi n/7)$ for some integer $N$. When $N = 14$ we have $\cos(\pi(n+14)/7) = \cos(\pi n/7 + 2\pi) = \cos(\pi n/7)$, so the signal is periodic with period 14.
\item In this case $x[n]$ is a periodized sequence, as discussed in Example~\ref{exa:dt:periodization} and its period is $N=100$.
\end{enumerate}
-
+\end{solution}
}\fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ifexercises{%
-
\begin{exercise}{Delay Operator in Matrix Form} \label{dt:ex:matrixop1}
We have seen in Section~\ref{sec:dt:operators} that the (circular) delay operator for finite-length signals can be expressed as a matrix-vector multiplication. For example, for signals of length four, the delay by one is represented by the matrix
\[
\mathbf{D} = \begin{bmatrix}
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}.
\]
\begin{enumerate}
\item Verify that the delay-by-two operator, $\mathcal{D}^2$, is indeed expressed by the square of the matrix $\mathbf{D}$.
\item Verify that the advancement by one, $\mathcal{D}^{-1}$, is indeed expressed by the inverse of the matrix $\mathbf{D}$.
\item Verify that the advancement by two, $\mathcal{D}^{-2}$, is expressed by the matrix $\mathbf{D}^{-2}$.
\item Write the $4\times 4$ matrix $\mathbf{D}_L$ that expresses a \textit{logical} shift by one to the right of a vector, that is, a matrix that implements the delay operator using the finite-support extension model for finite-length signals.
\item Show mathematically why the logical shift is not an invertible operator, as opposed to the circular shift.
\end{enumerate}
\end{exercise}
-
}\fi
\ifanswers{%
+\begin{solution}{}
\begin{enumerate}
\item Indeed,
\[
\mathbf{D}^2 = \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\end{bmatrix}
\]
\item It is easy to verify that
\[
\mathbf{D}^{-1} = \begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{bmatrix}
\]
 \item Immediate from the previous two points, since $\mathbf{D}^{-2} = (\mathbf{D}^{-1})^2$.
\item
\[
\mathbf{D}_L = \begin{bmatrix}
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\]
 \item The first row of $\mathbf{D}_L$ is identically zero, so the determinant of the matrix is zero and the matrix cannot be inverted; intuitively, the sample that is shifted out of the support is lost, so the operation cannot be undone.
\end{enumerate}
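All of the above can be verified in a few lines of NumPy (a sketch; \texttt{np.roll} builds the circular shift):
\begin{verbatim}
import numpy as np

D = np.roll(np.eye(4), 1, axis=0)         # circular delay by one
print(np.allclose(np.linalg.matrix_power(D, 2),
                  np.roll(np.eye(4), 2, axis=0)))   # delay by two: True
print(np.allclose(np.linalg.inv(D), D.T)) # advancement by one: True
DL = np.eye(4, k=-1)                      # logical shift (finite support)
print(np.linalg.det(DL))                  # 0.0: not invertible
\end{verbatim}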
-
-
+\end{solution}
}\fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ifexercises{%
-
\begin{exercise}{Other Operators in Matrix Form} \label{dt:ex:matrixop2}
Assume we are in the space of length-$4$ signals and express in matrix form the differentiation operator using the circular shift model for the delay.
\begin{enumerate}
\item Is the operator invertible?
\item Give an intuitive explanation as to why it is not.
\item Now let's use the finite-support model for the delay operator and rewrite the matrix for the differentiation. What is the inverse matrix and what operator does it represent?
\item Why is the differentiation now invertible?
\end{enumerate}
\end{exercise}
}\fi
\ifanswers{%
+\begin{solution}{}
The differentiation operator can be expressed as $\mathcal{V} = \mathcal{I} - \mathcal{D}$, where $\mathcal{I}$ is the identity operator, $\mathcal{I}\mathbf{x} = \mathbf{x}$. In matrix form, for signals of length four, this translates to the matrix
\[
\mathbf{V} = \begin{bmatrix}
1 & 0 & 0 & -1 \\
-1 & 1 & 0 & 0 \\
0 & -1 & 1 & 0 \\
0 & 0 & -1 & 1
\end{bmatrix}.
\]
\begin{enumerate}
\item The operator is not invertible since the matrix is not full rank; this is readily apparent since the last column is the negative of the sum of the other three.
\item The differentiation operator returns the same result for signals that differ by a constant offset, i.e., applying the operator to $\begin{bmatrix} a & b & c & d \end{bmatrix}$ and to $\begin{bmatrix} a + C & b + C & c + C & d + C \end{bmatrix}$ returns $\begin{bmatrix} a-d & b-a & c-b & d-c \end{bmatrix}$ in both cases. This ambiguity prevents invertibility.
\item Using the finite support model we obtain
\[
\mathbf{V}_L = \begin{bmatrix}
1 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 \\
0 & -1 & 1 & 0 \\
0 & 0 & -1 & 1
\end{bmatrix}.
\]
The matrix is now full rank and its inverse is
\[
\mathbf{S} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1
\end{bmatrix}
\]
which is the matrix associated with the integration operator.
- \item Since $\mathbf{V}_L$ leaves the first element unchanged, it no longer has the offset ambiguity caused by the circular shift. \end{enumerate}
+ \item Since $\mathbf{V}_L$ leaves the first element unchanged, it no longer has the offset ambiguity caused by the circular shift.
+ \end{enumerate}
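Again, a quick numerical check (sketch):
\begin{verbatim}
import numpy as np

I = np.eye(4)
V = I - np.roll(I, 1, axis=0)             # circular difference operator
print(np.linalg.matrix_rank(V))           # 3: not invertible
VL = I - np.eye(4, k=-1)                  # finite-support difference
print(np.linalg.inv(VL))                  # lower-triangular ones: integration
\end{verbatim}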
+\end{solution}
}\fi
diff --git a/writing/sp4comm.multipub/40-fourier/10-fa-DFT.tex b/writing/sp4comm.multipub/40-fourier/10-fa-DFT.tex
index 975ea06..4f49f30 100644
--- a/writing/sp4comm.multipub/40-fourier/10-fa-DFT.tex
+++ b/writing/sp4comm.multipub/40-fourier/10-fa-DFT.tex
@@ -1,651 +1,651 @@
\section{The Discrete Fourier Transform (DFT)}
\label{sec:fa:dft}
Consider $\mathbb{C}^N$, the space of complex-valued signals of finite length $N$; what are all the possible sinusoidal signals that span a \textit{whole} number of periods over the $[0, N-1]$ interval? We will presently show that:
\begin{itemize}
\item there are exactly $N$ such sinusoids
\item their frequencies are all harmonically related, i.e. they are all multiples of the fundamental frequency $2\pi/N$:
\begin{equation}\label{eq:fa:fund_freqs}
\omega_k = \frac{2\pi}{N}k, \quad k = 0, 1, \ldots, N-1;
\end{equation}
\item the $N$ length-$N$ complex exponentials at frequencies $\omega_k$ form a set of orthogonal vectors and therefore a basis for $\mathbb{C}^N$.
\end{itemize}
With this basis, we are able to express \textit{any} signal in $\mathbb{C}^N$ as a linear combination of $N$ harmonically-related sinusoids; the set of $N$ coefficients in the linear combination is called the \textit{Discrete Fourier Transform} of the signal, and it can be easily computed algorithmically for any input data vector.
\subsection{The Fourier Basis for $\mathbb{C}^N$}
The fundamental oscillatory signal, the discrete-time complex-exponential $e^{j\omega n}$, is equal to $1$ for $n=0$; therefore, if the signal is to span a whole number of periods over $N$ points, we must have
\[
e^{j\omega N} = 1.
\]
In the complex field, the equation $z^N = 1$ has $N$ distinct solutions, given by the $N$ roots of unity
\[
z_k = e^{j\frac{2\pi}{N}k}, \quad k=0, \ldots, N-1,
\]
and so the $N$ possible frequencies that fulfill the $N$-periodicity requirements are those given in~(\ref{eq:fa:fund_freqs}). We can now use these frequencies to define a set $\bigl\{ \mathbf{w}_k \bigr\}_k$ containing $N$ signals of length $N$, where
\begin{equation}
w_k[n] = e^{j\frac{2\pi}{N}kn} \quad n, k = 0, 1, \dots, N-1.
\end{equation}
The real and imaginary parts of $\mathbf{w}_k$ for $N = 32$ and for some values of $k$ are plotted in Figures~\ref{fig:fa:basis_vector_0} to~\ref{fig:fa:basis_vector_31}; note how $\mathbf{w}_k = \mathbf{w}_{N-k}^*$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\basisVector#1{
\begin{figure}[p]
\center
\begin{dspPlot}[height=0.3\dspWidth,xticks=5,yticks=1,ylabel={$\Real{\mathbf{w}_{#1}}$}]{0, 31}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x 6.28 32 div #1 mul mul RadtoDeg cos}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=5,yticks=1,ylabel={$\Imag{\mathbf{w}_{#1}}$}]{0, 31}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x 6.28 32 div #1 mul mul RadtoDeg sin}
\end{dspPlot}
\caption{Fourier basis vector $\mathbf{w}_{#1}$.}\label{fig:fa:basis_vector_#1}
\end{figure}
}
\basisVector{0}
\basisVector{1}
\basisVector{2}
\basisVector{3}
\basisVector{30}
\basisVector{31}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The vectors in $\bigl\{ \mathbf{w}_k \bigr\}_k$ are mutually orthogonal:
\begin{align}\label{eq:fa:orthogonality}
 \langle \mathbf{w}_h, \mathbf{w}_k \rangle
 &= \sum_{n=0}^{N-1} \bigl(e^{j\frac{2\pi}{N}hn}\bigr)^* \, e^{j\frac{2\pi}{N}kn} \nonumber \\
 &= \sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(k-h)n} \nonumber \\[0.5em]
 &= \begin{cases}
 N & \mbox{ for } h = k \\[0.6em]
 \displaystyle \frac{1-e^{j\frac{2\pi}{N}(k-h)N}}{1-e^{j\frac{2\pi}{N}(k-h)}} = 0 & \mbox{ for } h \neq k
\end{cases}.
\end{align}
Because of orthogonality, as explained in section~\ref{sec:vs:bases}, the vectors form a basis for $\mathbb{C}^N$, called the \textit{Fourier basis} for the space of finite-length signals. In compact form, we can express the orthogonality of the Fourier vectors as:
\begin{equation}\label{eq:fa:orthogonalityCompact}
\langle \mathbf{w}_h, \mathbf{w}_k \rangle = N\,\delta[h-k].
\end{equation}
Clearly the basis is not orthonormal; while it could be normalized by multiplying each vector by $1/\sqrt{N}$, in signal processing practice it is customary to keep the normalization factor explicit in the change of basis formulas. We too will follow this convention, which exists primarily for computational reasons.
\subsection{The DFT as a Change of Basis}
In section~\ref{sec:vs:dt_space} we have illustrated how we can efficiently perform an orthonormal change of basis in $\mathbb{C}^N$; the Discrete Fourier Transform is such a transformation, allowing us to move from the time domain, represented by the canonical basis $\bigl\{ \boldsymbol{\delta}_k \bigr\}_k$, to the frequency domain, spanned by the Fourier basis $\bigl\{ \mathbf{w}_k \bigr\}_k$. Here, since the Fourier basis is orthogonal but not orthonormal, we simply need to slightly adjust the formulas in section~\ref{sec:vs:dt_space} to take into account the required normalization factors.
Given a vector $\mathbf{x} \in \mathbb{C}^N$ and the Fourier basis $\bigl\{ \mathbf{w}_k \bigr\}_k$, %the \textit{synthesis formula} allows us to
we can always express $\mathbf{x}$ as the following linear combination of basis vectors:
\begin{equation}\label{eq:fa:synthesis}
\mathbf{x} = \frac{1}{N}\sum_{k=0}^{N-1} X[k] \, \mathbf{w}_k.
\end{equation}
Using the orthogonality of the Fourier basis in~(\ref{eq:fa:orthogonalityCompact}), and taking the inner product of both sides of~(\ref{eq:fa:synthesis}) with each basis vector $\mathbf{w}_k$, we can see that the $N$~complex scalars $X[k]$, called the \textit{Fourier coefficients}, can be obtained simply as
\begin{equation}\label{eq:fa:fouriercoefficient}
 X[k] = \langle \mathbf{w}_k, \mathbf{x} \rangle.
\end{equation}
The coefficients capture the similarity between $\mathbf{x}$ and each of the basis vectors via an inner product; they are themselves a vector $\mathbf{X} \in \mathbb{C}^N$ and they can be computed all at once via the following matrix-vector multiplication, also known as the \textit{analysis formula}:
\[
\mathbf{X = Wx};
\]
the matrix $\mathbf{W}$ is built by stacking the Hermitian transposes of the basis vectors as
\begin{equation}
\def\arraystretch{1.5}
\mathbf{W} = \left[ \begin{array}{c}
\mathbf{w}_0^H \\ \hline
\mathbf{w}_1^H \\ \hline
\vdots \\ \hline
\mathbf{w}_{N-1}^H
\end{array} \right] = \def\arraystretch{1.4} \begin{bmatrix}
W_N^0 & W_N^0 & W_N^0 & \ldots & W_N^0 \\
W_N^0 & W_N^1 & W_N^2 & \ldots & W_N^{N-1} \\
W_N^0 & W_N^2 & W_N^4 & \ldots & W_N^{2(N-1)} \\
& & & \ldots & \\
W_N^0 & W_N^{N-1} & W_N^{2(N-1)} & \ldots & W_N^{(N-1)^2}
\end{bmatrix}
\end{equation}
where, for convenience, we have introduced the scalar $W_N = e^{-j\frac{2\pi}{N}}$. Again using~(\ref{eq:fa:orthogonalityCompact}), it is immediate to show that $\mathbf{W}^H\mathbf{W} = N\,\mathbf{I}$, where $\mathbf{I}$ is the identity matrix. Since the change of basis is invertible, both $\mathbf{x}$ and $\mathbf{X}$ represent the same information, albeit from two different ``points of view'': $\mathbf{x}$ lives in the time domain, while $\mathbf{X}$ lives in the frequency domain.
In order to ``go back'' we can use the synthesis formula in matrix form, taking into account the explicit normalization factor:
\begin{equation} \label{eq:fa:idft_matrix}
\mathbf{x} = \frac{1}{N}\mathbf{W}^H\mathbf{X}.
\end{equation}
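In NumPy, the analysis and synthesis formulas in matrix form can be sketched as follows (\texttt{np.fft.fft} uses the same sign convention as $\mathbf{W}$):
\begin{verbatim}
import numpy as np

N = 8
k = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, k) / N)  # W[k, n] = W_N^(nk)
x = np.random.default_rng(0).standard_normal(N)
X = W @ x                                     # analysis: X = Wx
print(np.allclose(x, W.conj().T @ X / N))     # synthesis: True
print(np.allclose(X, np.fft.fft(x)))          # matches the FFT: True
\end{verbatim}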
Some examples of Fourier matrices for low-dimensional spaces are:
\begin{align}
\mathbf{W}_2 &=
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
\mathbf{W}_3 &=
\begin{bmatrix}
1 & 1 & 1 \\
1 & W_3 & W_3^2 \\
1 & W_3^2 & W_3^4
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1 \\
1 & W_3 & W_3^2 \\
1 & W_3^2 & W_3
\end{bmatrix} \nonumber \\
&=
\begin{bmatrix}
1 & 1 & 1 \\
1 & \frac{-1 - j\sqrt{3}}{2} & \frac{-1 + j\sqrt{3}}{2} \\
1 & \frac{-1 + j\sqrt{3}}{2} & \frac{-1 - j\sqrt{3}}{2}
\end{bmatrix} \\
\mathbf{W}_4 &=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & W_4 & W_4^2& W_4^3 \\
1 & W_4^2 & W_4^4 & W_4^6 \\
1 & W_4^3 & W_4^6 & W_4^9
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & W_4 & W_4^2& W_4^3 \\
1 & W_4^2 & 1 & W_4^2\\
1 & W_4^3 & W_4^2 & W_4
\end{bmatrix} \nonumber \\
&=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & -j & -1 & j \\
1 & -1 & 1 & -1 \\
1 & j & -1 & -j
\end{bmatrix}
\end{align}
Please note that:
\begin{itemize}
\item the elements in the first row and the first column are all equal to one since $W_N^0 = 1$ for all $N$;
\item powers of $W_N$ can be computed modulo $N$ because of the ``aliasing'' property of complex exponentials: $W^n_N = W_N^{n \mod N}$;
\item the matrices display a regular internal structure, which allows for extremely efficient computation methods such as the Fast Fourier Transform (FFT) algorithm (see Section~\ref{sec:fa:fft}).
\end{itemize}
\begin{comment}
In summary, the Discrete Fourier Transform is a change of basis in $\mathbb{C}^N$; from the canonical basis (the time domain) we move to the Fourier basis (the frequency domain). The elements of the vector in the frequency domain are obtained via the inner products
\begin{equation} \label{eq:fa:dft_inner_prods}
X_k = \langle \mathbf{x}, \mathbf{w}_k \rangle, \qquad k = 0, 1, \ldots, N-1
\end{equation}
or, more compactly, via~(\ref{eq:fa:dft_matrix}). The elements $X_k$ are referred to as the \emph{spectrum} of the signal. The transform is perfectly invertible and, to move back to the time domain, we use the linear combination
\begin{equation}
\mathbf{x} = \frac{1}{N}\sum_{k=0}^{N-1} X_k \mathbf{w}_k.
\end{equation}
or, more compactly, the matrix-vector product in~(\ref{eq:fa:idft_matrix}).
\end{comment}
\subsection{The DFT in Algorithmic Form}
The analysis and synthesis formulas of the previous section can be written out explicitly in terms of the samples of the original vector and of the vector of Fourier coefficients. This formulation highlights the algorithmic nature of the DFT and provides a straightforward way to implement the transform numerically.
The DFT coefficients can be computed using the following formula:
\begin{equation}\label{eq:fa:dft}
X[k] = \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi}{N}nk}, \qquad \quad k = 0, \ldots, N-1
\end{equation}
while the inverse DFT is computed from the Fourier coefficients as
\begin{equation}\label{eq:fa:idft}
x[n] = \frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j\frac{2\pi}{N}nk}, \qquad \quad n = 0, \ldots, N-1.
\end{equation}
The explicit formulas allow us to appreciate the highly structured form of the summations, which is exploited in the various efficient algorithmic implementations available in most numerical packages.
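As an illustration, here is a direct $O(N^2)$ NumPy implementation of~(\ref{eq:fa:dft}), checked against the library FFT (a didactic sketch; in practice one would always call the built-in routine):
\begin{verbatim}
import numpy as np

def dft(x):
    # direct implementation of the analysis formula
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N))
                     for k in range(N)])

x = np.random.default_rng(1).standard_normal(16)
print(np.allclose(dft(x), np.fft.fft(x)))     # True
\end{verbatim}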
%Another useful notation that we will use in the following is
%\begin{align}
% X[k] &= \DFT{x[n]} \\
% x[n] &= \IDFT{X[k]}.
%\end{align}
\subsection{DFT of Elementary Signals}
In the following examples we will compute the DFT of some elementary signals in $\mathbb{C}^{N}$, illustrating the examples for $N=64$; the associated plots will show the DFT coefficients in magnitude and phase.
\itempar{Impulse.} The DFT of the discrete-time delta is the constant signal $\mathbf{1}$ since
\[
\sum_{n=0}^{N-1}\delta[n]e^{-j\frac{2\pi}{N}nk} = e^{-j\frac{2\pi}{N}nk}\bigl|_{n=0} = 1 \quad \forall k.
\]
For the shifted delta $\boldsymbol{\delta}_m = \mathcal{D}_m\{\boldsymbol{\delta}\}$, the $k$-th coefficient of the DFT is
\[
\sum_{n=0}^{N-1}\delta[n-m]e^{-j\frac{2\pi}{N}nk} = e^{-j\frac{2\pi}{N}mk}
\]
so that, in general,
\begin{equation}\label{eq:fa:DFTdelta}
\DFT{\boldsymbol{\delta}_m} = \mathbf{w}^*_m;
\end{equation}
the DFT of an element of the canonical basis in time is the conjugate of the corresponding element of the Fourier basis. Note that, in the DFT of an impulse, all the coefficients have unit magnitude, that is, the most ``concentrated'' signal in time has nonzero Fourier coefficients at every frequency. This inverse relationship between the support in time and the support in frequency is a general property of Fourier analysis and will reappear frequently in the rest of the book.
\itempar{Rectangular signal.} Consider the rectangular signal $\mathbf{x}$ defined by
\begin{equation}
x[n] = \begin{cases}
1 & \mbox{for $0 \leq n < M$} \\
0 & \mbox{for $M \leq n < N$,}
\end{cases} \label{eq:fa:step}
\end{equation}
shown in Figure~\ref{fig:fa:ex4} for $M=5$ and $N=64$. We can express the signal as
\[
\mathbf{x} = \sum_{m=0}^{M-1} \boldsymbol{\delta}_m
\]
and, exploiting~(\ref{eq:fa:DFTdelta}) and the linearity of the DFT, obtain
\[
\DFT{\mathbf{x}} = \mathbf{X} = \sum_{m=0}^{M-1} \mathbf{w}^*_m.
\]
The coefficients can be computed explicitly as:
\begin{align}
X[k] &= \sum_{n=0}^{M-1}e^{-j\frac{2\pi}{N}nk} \label{eq:fa:stepDFT} = \frac{1 - e^{-j\frac{2\pi}{N}Mk}}{1 - e^{-j\frac{2\pi}{N}k}} \\[1em]
&= \frac{e^{-j\frac{\pi}{N}Mk} \, (e^{j\frac{\pi}{N}Mk} - e^{-j\frac{\pi}{N}Mk})}{e^{-j\frac{\pi}{N}k} \, (e^{j\frac{\pi}{N}k} - e^{-j\frac{\pi}{N}k})} \nonumber \\[1em]
 &= \frac{\sin(\pi Mk/N)}{\sin(\pi k/N)} \, e^{-j\frac{\pi}{N}(M-1)k}.
\end{align}
In the derivation above, valid for $k \neq 0$ (for $k = 0$ the sum yields $X[0] = M$ directly), we have manipulated the expression for $X[k]$ into the product of a real-valued term (which captures the magnitude) and a pure phase term; this allows us to easily plot the DFT as in Figure~\ref{fig:fa:ex4}. Note that, while the phase is a linear function of $k$, it is customary in phase plots to ``wrap'' the value over a $2\pi$-wide interval; in signal processing the interval of reference is $[-\pi, \pi]$.
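The closed form, including the sign of the phase term, can be checked against a numerical DFT (a sketch; recall that the formula holds for $k \neq 0$):
\begin{verbatim}
import numpy as np

N, M = 64, 5
x = np.zeros(N); x[:M] = 1
k = np.arange(1, N)
Xc = (np.sin(np.pi * M * k / N) / np.sin(np.pi * k / N)
      * np.exp(-1j * np.pi * (M - 1) * k / N))
print(np.allclose(np.fft.fft(x)[1:], Xc))     # True (and X[0] = M = 5)
\end{verbatim}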
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[p]
\def\M{5 }
\center
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=1,ylabel={$x[n]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x \M lt {1} {0} ifelse}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=5,ylabel={$|X[k]|$}]{0, 63}{0, 7}
\dspTapsAt[linecolor=ColorDF]{0}{5.0000 4.9519 4.8093 4.5768 4.2620 3.8750 3.4283 2.9362 2.4142 1.8786 1.3458 0.8317 0.3512 0.0824 0.4576 0.7655 1.0000 1.1576 1.2379 1.2435 1.1796 1.0539 0.8765 0.6590 0.4142 0.1558 0.1024 0.3473 0.5665 0.7491 0.8862 0.9712 1.0000 0.9712 0.8862 0.7491 0.5665 0.3473 0.1024 0.1558 0.4142 0.6590 0.8765 1.0539 1.1796 1.2435 1.2379 1.1576 1.0000 0.7655 0.4576 0.0824 0.3512 0.8317 1.3458 1.8786 2.4142 2.9362 3.4283 3.8750 4.2620 4.5768 4.8093 4.9519}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,xout=true,yticks=custom,ylabel={$\angle X[k]$}]{0, 63}{-1.2, 1.2}
\dspTapsAt[linecolor=ColorDF]{0}{0.0000 -0.0625 -0.1250 -0.1875 -0.2500 -0.3125 -0.3750 -0.4375 -0.5000 -0.5625 -0.6250 -0.6875 -0.7500 0.1875 0.1250 0.0625 -0.0000 -0.0625 -0.1250 -0.1875 -0.2500 -0.3125 -0.3750 -0.4375 -0.5000 -0.5625 0.3750 0.3125 0.2500 0.1875 0.1250 0.0625 0.0000 -0.0625 -0.1250 -0.1875 -0.2500 -0.3125 -0.3750 0.5625 0.5000 0.4375 0.3750 0.3125 0.2500 0.1875 0.1250 0.0625 0.0000 -0.0625 -0.1250 -0.1875 0.7500 0.6875 0.6250 0.5625 0.5000 0.4375 0.3750 0.3125 0.2500 0.1875 0.1250 0.0625}
\dspCustomTicks[axis=y]{-1 $-\pi$ 1 $\pi$}
\end{dspPlot}
 \caption{DFT of a rectangular signal.}\label{fig:fa:ex4}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Constant signal.} By setting $M=N$ in~(\ref{eq:fa:stepDFT}) we obtain the DFT of the constant signal $\mathbf{x=1}$; the sum, as in~(\ref{eq:fa:orthogonality}), shows the orthogonality of the roots of unity and we have
-\[
+\begin{equation} \label{eq:fa:unitDFT1}
 X[k] = \sum_{n=0}^{N-1}e^{-j\frac{2\pi}{N}nk}
= \begin{cases}
N & \mbox{ for } k=0 \\
0 & \mbox{ otherwise}
\end{cases}
-\]
+\end{equation}
so that
-\begin{equation}
+\begin{equation} \label{eq:fa:unitDFT2}
\DFT{\boldsymbol{1}} = N\boldsymbol{\delta}.
\end{equation}
The above result, up to a normalization factor, is the dual of what we obtained when we computed the DFT of the delta signal; in this case the time-domain signal with the largest support yields a transform that has a single nonzero coefficient.
\itempar{Harmonic sinusoids.} The fundamental frequency for $\mathbb{C}^{N}$ is $2\pi/N$ and a complex exponential at a multiple of this frequency will coincide with a Fourier basis vector. Given the signal $\mathbf{x}$ defined by
\[
x[n] = e^{j\frac{2\pi}{N}mn}
\]
it is clear that $\mathbf{x} = \mathbf{w}_m$, and its DFT coefficients can be easily computed using~(\ref{eq:fa:orthogonalityCompact}) as
\[
X[k] = \langle \mathbf{w}_m, \mathbf{w}_k \rangle = N\delta[m-k]
\]
so that
\begin{equation}
\DFT{\mathbf{w}_m} = N\boldsymbol{\delta}_m;
\end{equation}
up to a normalization factor, this is the dual of~(\ref{eq:fa:DFTdelta}). Note that the result in the previous section, for the constant signal $\mathbf{x=1}$, is just a particular case of the above relationship when $m=0$.
We can now easily compute the DFT of standard trigonometric functions in which the frequency is harmonically related to the fundamental frequency for the space. Consider for instance the signal $\mathbf{x}$ defined by
\[
x[n] = \cos \left(\frac{\pi}{8} n \right), \qquad n=0,1,2,\ldots, 63.
\]
With a simple manipulation we can write:
\begin{align*}
\cos \left(4\frac{2\pi}{64} n \right) &= \frac{1}{2}\left( e^{j4\frac{2\pi}{64}n} + e^{-j4\frac{2\pi}{64}n}\right) \\
&= \frac{1}{2}\left( e^{j4\frac{2\pi}{64}n} + e^{j60\frac{2\pi}{64}n}\right)
\end{align*}
where we have used the fact that we can take all frequency indexes modulo 64 because of the aliasing property of complex exponentials; we can therefore express $\mathbf{x}$ as
\[
\mathbf{x} = \frac{1}{2}\mathbf{w}_4 + \frac{1}{2}\mathbf{w}_{60}.
\]
Since the signal is the sum of two Fourier basis vectors, orthogonality implies that only two of the inner products in~(\ref{eq:fa:fouriercoefficient}) will be nonzero:
\begin{align*}
 X[4] &= \langle \mathbf{w}_4, \mathbf{x} \rangle = \frac{1}{2}\,\langle \mathbf{w}_4, \mathbf{w}_4 \rangle = 32 \\
 X[60] &= \langle \mathbf{w}_{60}, \mathbf{x} \rangle = \frac{1}{2}\,\langle \mathbf{w}_{60}, \mathbf{w}_{60} \rangle = 32.
\end{align*}
The DFT of the signal is plotted in Figure~\ref{fig:fa:ex1}; the spectrum shows how the entire frequency content of the signal is concentrated over two single frequencies. Since the original signal is real-valued, the DFT component at $k=60$ ensures that the imaginary parts in the reconstruction formula cancel out; this symmetry is a general property of the Fourier transform that we will examine in more detail later.
Consider now a slight variation of the previous signal obtained by introducing a phase offset:
\[
x[n] = \cos \left(\frac{\pi}{8} n + \frac{2\pi}{3} \right), \qquad n=0,1,2,\ldots, 63.
\]
Again, we can easily manipulate the signal to obtain
\[
\mathbf{x} = \frac{e^{j2\pi/3}}{2}\mathbf{w}_4 + \frac{e^{-j2\pi/3}}{2}\mathbf{w}_{60}
\]
so that the resulting DFT coefficients are all zero except for
\begin{align*}
X[4] &= 32\,e^{j2\pi/3} \\
X[60] &= 32\,e^{-j2\pi/3}.
\end{align*}
The resulting DFT is plotted in Figure~\ref{fig:fa:ex2}; the magnitude does not change but the phase offset is reflected in nonzero values for phase at $k=4, 60$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[p]
\center
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=1,ylabel={$x[n]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x 6.28 63 div 4 mul mul RadtoDeg cos}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=32,ylabel={$|X[k]|$}]{0, 63}{0, 34}
\dspSignal[linecolor=ColorDF]{x 4 eq {32} {x 60 eq {32} {0} ifelse} ifelse}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,xout=true,yticks=custom,ylabel={$\angle X[k]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDF]{0}
\dspCustomTicks[axis=y]{-1 $-\pi$ 1 $\pi$}
\end{dspPlot}
\caption{DFT of $x[n] = \cos\big( (\pi/8) n \big)$.}\label{fig:fa:ex1}
\end{figure}
\begin{figure}[p]
\center
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=1,ylabel={$x[n]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x 6.28 63 div 4 mul mul 1.05 add RadtoDeg cos}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=32,ylabel={$|X[k]|$}]{0, 63}{0, 34}
\dspSignal[linecolor=ColorDF]{x 4 eq {32} {x 60 eq {32} {0} ifelse} ifelse}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,xout=true,yticks=custom,ylabel={$\angle X[k]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDF]{x 4 eq {0.6666} {x 60 eq {-0.6666} {0} ifelse} ifelse}
\dspCustomTicks[axis=y]{-1 $-\pi$ 1 $\pi$}
\end{dspPlot}
\caption{DFT of $x[n] = \cos\big( (\pi/8) n + (2\pi/3) \big)$.}\label{fig:fa:ex2}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Non-harmonic sinusoids.}
Consider now a sinusoid whose frequency is {\em not} a multiple of the fundamental frequency for the space, such as
\[
x[n] = \cos \left(\frac{\pi}{5} n \right), \qquad n=0,1,2,\ldots, 63.
\]
In this case we cannot decompose the signal into a sum of basis vectors and we must therefore explicitly compute all the DFT coefficients. We could do this algebraically and work out the resulting geometric sums as we did for the rectangular signal. More conveniently, however, since the DFT is at heart a \textit{numerical} algorithm, we can turn to a standard numerical package (NumPy, Matlab, Octave) and use the built-in \texttt{fft()} function. The resulting DFT is shown in Figure~\ref{fig:fa:ex3}; note how here \textit{all} the DFT coefficients are nonzero. While the magnitude is larger for frequencies close to that of the original signal ($\frac{2\pi}{64}\,6 < \pi/5 < \frac{2\pi}{64}\,7$), to reconstruct $\mathbf{x}$ exactly we need a contribution from each one of the basis vectors.
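For instance, in NumPy the whole computation is a single call:
\begin{verbatim}
import numpy as np

n = np.arange(64)
X = np.fft.fft(np.cos(np.pi * n / 5))
print(np.abs(X).argmax())                     # 6: the bin closest to pi/5
print(np.count_nonzero(np.abs(X) > 1e-9))     # 64: all coefficients nonzero
\end{verbatim}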
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[p]
\center
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=1,ylabel={$x[n]$}]{0, 63}{-1.2, 1.2}
\dspSignal[linecolor=ColorDT]{x 3.14 5 div mul RadtoDeg cos}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,yticks=32,ylabel={$|X[k]|$}]{0, 63}{0, 34}
\dspTapsAt[linecolor=ColorDF]{0}{1.8090 1.8933 2.1689 2.7286 3.8577 6.7026 23.9761 16.4050 6.3256 4.0049 2.9757 2.3952 2.0227 1.7638 1.5736 1.4285 1.3143 1.2226 1.1476 1.0856 1.0338 0.9902 0.9534 0.9223 0.8961 0.8741 0.8559 0.8410 0.8292 0.8203 0.8140 0.8103 0.8090 0.8103 0.8140 0.8203 0.8292 0.8410 0.8559 0.8741 0.8961 0.9223 0.9534 0.9902 1.0338 1.0856 1.1476 1.2226 1.3143 1.4285 1.5736 1.7638 2.0227 2.3952 2.9757 4.0049 6.3256 16.4050 23.9761 6.7026 3.8577 2.7286 2.1689 1.8933}
\end{dspPlot}
\begin{dspPlot}[height=0.3\dspWidth,xticks=10,xout=true,yticks=custom,ylabel={$\angle X[k]$}]{0, 63}{-1.2, 1.2}
\dspTapsAt[linecolor=ColorDF]{0}{0.0000 0.0809 0.1571 0.2255 0.2854 0.3376 0.3832 -0.5763 -0.5399 -0.5067 -0.4761 -0.4476 -0.4206 -0.3950 -0.3704 -0.3467 -0.3238 -0.3015 -0.2797 -0.2583 -0.2373 -0.2166 -0.1962 -0.1760 -0.1560 -0.1362 -0.1165 -0.0969 -0.0774 -0.0580 -0.0386 -0.0193 0.0000 0.0193 0.0386 0.0580 0.0774 0.0969 0.1165 0.1362 0.1560 0.1760 0.1962 0.2166 0.2373 0.2583 0.2797 0.3015 0.3238 0.3467 0.3704 0.3950 0.4206 0.4476 0.4761 0.5067 0.5399 0.5763 -0.3832 -0.3376 -0.2854 -0.2255 -0.1571 -0.0809}
\dspCustomTicks[axis=y]{-1 $-\pi$ 1 $\pi$}
\end{dspPlot}
\caption{DFT of $x[n] = \cos\big( (\pi/5) n \big)$.}\label{fig:fa:ex3}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Physical Interpretation}
An instructive way to explain the physical interpretation of the DFT is to start with the synthesis formula~(\ref{eq:fa:idft}). The expression tells us that we can build \textit{any} signal in $\mathbb{C}^N$ as a suitable combination of $N$ complex sinusoids with harmonically related frequencies; the magnitude and initial phase of each oscillation are given by the complex-valued Fourier coefficients. Consider a ``complex exponential generator'' as in Figure~\ref{fig:fa:gen}; the oscillator works at a frequency $(2\pi/N)k$ and it can be tuned in magnitude and phase. The DFT synthesis formula can be represented graphically as in Figure~\ref{fig:fa:idft}, that is, as a bank of $N$ oscillators working in parallel; to reproduce any signal $\mathbf{x}$ from its DFT $\mathbf{X}$:
\begin{itemize}
\item set the amplitude $A_k$ of the $k$-th generator to $\bigl|X[k]\bigr|$, i.e.\ to the magnitude of the $k$-th DFT coefficient;
\item set the phase $\phi_{k}$ of the $k$-th generator to $\measuredangle{X[k]}$, i.e.\ to the phase of the $k$-th DFT coefficient;
\item start all the generators at the same time and sum their outputs for $N$ cycles
\item divide the result by $N$.
\end{itemize}
This ``machine'' shows that each Fourier coefficient captures ``how much of'' and ``how in phase'' an oscillation at frequency $(2\pi/N)k$ is contained in $\mathbf{x}$; this is consistent with the fact that each $X[k]$ is computed as the inner product between $\mathbf{x}$ and $\mathbf{w}_k$, and that the inner product is a measure of similarity.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\Gen#1{%
\raisebox{-1.2em}{%
\psset{xunit=1em,yunit=1em,linewidth=1.5pt}%
\pspicture(-1,0)(7.1,-3)%
\rput[r]{0}(0,-1.5){$A_{#1}$}\rput[r]{0}(0,-2.5){$\phi_{#1}$}
\rput[l](1.9,-1.5){\psframebox[framesep=.2]{\parbox{4em}{~~\pscirclebox{\Large$\mathbf{\sim}$}$_{~~#1}$}}}
\psline[linewidth=0.8pt]{->}(0.1,-1.5)(1.8,-1.5)
\psline[linewidth=0.8pt]{->}(0.1,-2.5)(1.8,-2.5)
\endpspicture}}
\begin{figure}[b]
\center
\begin{dspBlocks}{2}{0.2}
\Gen{\,k} & $A_k\,e^{j(\frac{2\pi}{N}kn + \phi_k)}$
\ncline{->}{1,1}{1,2}
\end{dspBlocks}
\caption{A tunable sinusoidal generator for $\mathbb{C}^N$.}\label{fig:fa:gen}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspBlocks}{2}{0.2}
\rnode{A}{\Gen{\,0}} & & \\
\Gen{\,1} & & \\
\Gen{\,2} & & \BDadd & $x[n]$ \\
\hspace{1em}$\ldots$ & & \\
\Gen{N-2} & & \\
\rnode{B}{\Gen{N-1}} & &
\ncline{1,1}{1,2}\ncline{->}{1,2}{3,3}
\ncline{2,1}{2,2}\ncline{->}{2,2}{3,3}
\ncline{3,1}{3,2}\ncline{->}{3,2}{3,3}
\ncline{5,1}{5,2}\ncline{->}{5,2}{3,3}
\ncline{6,1}{6,2}\ncline{->}{6,2}{3,3}
\ncline{->}{3,3}{3,4}\taput{$1/N$}
\end{dspBlocks}
\caption{The DFT synthesis as a block diagram.}\label{fig:fa:idft}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Note that each sinusoidal generator produces a length-$N$ signal whose energy is
\[
\sum_{n=0}^{N-1} \big| Ae^{j(2\pi/N)kn} \big|^2 = N|A|^2;
\]
consequently, the square magnitude of a DFT coefficient, $|X[k]|^2$, is proportional, via a scale factor $N$, to the energy of $\mathbf{x}$ at the frequency $(2\pi/N)k$: the magnitude of the DFT therefore shows how the global energy of the original signal is distributed in the frequency domain. The phase of each DFT coefficient specifies the initial phase of each oscillator in the reconstruction formula, i.e.\ the \emph{relative alignment} of each complex exponential at the onset of the signal. While this does not affect the energy distribution in frequency, it does have a significant impact on the \textit{shape} of the signal in the time domain, as we will see in more detail shortly.
On a historical note, far from being a purely academic exercise, the synthesis of signals via a bank of oscillators is a topic that attracted great interest and effort long before the advent of digital computers. Figure~\ref{fig:fa:tidemachine} shows a mechanical implementation of the inverse DFT as we just described it; in this case the device was designed to predict the evolution of the sea tide. The original design of these tide prediction machines dates back to Lord Kelvin in the 1860s.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\paperwidth]{\localpath{figs/tidemachine.eps}}
\caption{A mechanical IDFT machine from the early 20th century.}\label{fig:fa:tidemachine}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Reading a DFT Plot}
The DFT, being a change of basis in $\mathbb{C}^N$, maps a length-$N$ vector $\mathbf{x}$ onto another length-$N$ vector $\mathbf{X}$. The graphical representation of $\mathbf{x}$ is intuitively clear as a succession of values in time; for the DFT vector, as we said, we choose a graphical representation that splits the information into magnitude and phase. The (discrete) horizontal axis for the plot corresponds to the DFT coefficients' index $k$, which uniquely identifies the underlying frequency of the Fourier vector $\omega_k = (2\pi/N)k$; as we move along the horizontal axis from left to right, therefore, we will encounter coefficients that correspond to increasingly faster frequencies up to the midpoint $k = \lfloor N/2 \rfloor$, after which the frequencies will start to decrease again as shown in Figure~\ref{fig:fa:dftread}. The first half of the Fourier coefficients correspond to counterclockwise rotations, while the second half to clockwise rotations, as we saw in Section~\ref{sec:fa:cexp}.
\itempar{Symmetries.} When the original signal $\mathbf{x}$ is real-valued, the DFT is Hermitian-symmetric:
\[
X[k] = X^*[N-k];
\]
this implies that the DFT magnitude of a real-valued signal is a symmetric function of the frequency index.
\itempar{Labeling the frequency axis.}
Each DFT coefficient is the inner product between the ``input'' signal and a sinusoidal signal with frequency $\omega_k = 2\pi k /N$; each DFT coefficient, therefore, indicates how much of the input signal's content is located at the frequency $\omega_k$.
\itempar{Energy distribution.}
Recall Parseval's theorem, $\|\mathbf{x}\|^2 = \sum_k |\alpha_k|^2$, which relates the energy of a signal to the energy of its expansion coefficients; for the DFT, because of the non-normalized basis vectors, it takes the form
\[
 \sum_{n=0}^{N-1}|x[n]|^2 = \frac{1}{N}\sum_{k=0}^{N-1}|X[k]|^2.
\]
The square magnitude of the $k$-th DFT coefficient is thus proportional to the signal's energy at frequency $\omega = (2\pi/N)k$, in agreement with the examples of the previous section: a sinusoid concentrates all of its energy on a single frequency, the rectangular signal concentrates it in the low frequencies, and the delta spreads it uniformly over all frequencies.
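A one-line numerical confirmation of this form of Parseval's theorem (sketch):
\begin{verbatim}
import numpy as np

x = np.random.default_rng(4).standard_normal(64)
X = np.fft.fft(x)
print(np.allclose(np.sum(x ** 2), np.sum(np.abs(X) ** 2) / 64))   # True
\end{verbatim}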
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[yticks=custom,xticks=custom,xout=true,height=\dspHeightCol,ylabel={$|X[k]|$}]{0, 63}{0, 50}
\dspTapsAt[linecolor=ColorDF]{0}{1.8090 1.8933 2.1689 2.7286 3.8577 6.7026 23.9761 16.4050 6.3256 4.0049 2.9757 2.3952 2.0227 1.7638 1.5736 1.4285 1.3143 1.2226 1.1476 1.0856 1.0338 0.9902 0.9534 0.9223 0.8961 0.8741 0.8559 0.8410 0.8292 0.8203 0.8140 0.8103 0.8090 0.8103 0.8140 0.8203 0.8292 0.8410 0.8559 0.8741 0.8961 0.9223 0.9534 0.9902 1.0338 1.0856 1.1476 1.2226 1.3143 1.4285 1.5736 1.7638 2.0227 2.3952 2.9757 4.0049 6.3256 16.4050 23.9761 6.7026 3.8577 2.7286 2.1689 1.8933}
\dspCustomTicks[axis=x]{0 0 32 $N/2$ 63 $N-1$}
\psset{braceWidthInner=3pt,braceWidthOuter=3pt,braceWidth=1pt}
\def\y{32}
\pnode(0,\y){CC1} \pnode(31,\y){CC2}
\pnode(32,\y){C1} \pnode(63,\y){C2}
\psbrace[linecolor=blue,ref=C,nodesep=-1.5ex,rot=-90](CC2)(CC1){frequencies $< \pi$ (counterclockwise)}
\psbrace[linecolor=blue,ref=C,nodesep=-1.5ex,rot=-90](C2)(C1){frequencies $> \pi$ (clockwise)}
\def\y{-15}
\pnode(0,\y){L1}\pnode(8,\y){L2}
\pnode(55,\y){L3}\pnode(63,\y){L4}
\pnode(23,\y){H1}\pnode(40,\y){H2}
\psbrace[linecolor=green,ref=C,nodesep=2ex,rot=90](L1)(L2){low frequencies (slow)}
\psbrace[linecolor=green,ref=C,nodesep=2ex,rot=90](L3)(L4){low frequencies (slow)}
\psbrace[linecolor=red,ref=C,nodesep=2ex,rot=90](H1)(H2){high frequencies (fast)}
\end{dspPlot}
\vspace{3em}
\caption{Reading a DFT plot.}\label{fig:fa:dftread}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
 \begin{dspPlot}[xticks=32,xout=true,ylabel={$|X[k]|$}]{0, 63}{0, 66}
 \dspSignal{x 0 eq {64} {0} ifelse}
 \end{dspPlot}
 \caption{DFT magnitude of the constant signal $\mathbf{x} = \mathbf{1}$ for $N=64$: a single nonzero coefficient at $k=0$.}
\end{figure}
\begin{figure}
 \begin{dspPlot}[xticks=32,xout=true,ylabel={$|X[k]|$}]{0, 63}{0, 66}
 \dspSignal{x 32 eq {64} {0} ifelse}
 \end{dspPlot}
 \caption{DFT magnitude of the fastest sinusoid $x[n] = e^{j\pi n} = (-1)^n$ for $N=64$: a single nonzero coefficient at $k = N/2$.}
\end{figure}
\subsection{The DFT as an Analysis Tool}
\itempar{Denoising.} Since broadband noise spreads its energy over all frequencies while many signals of interest concentrate their energy over a few DFT coefficients, a simple denoising strategy is to compute the DFT of the noisy signal, set to zero the coefficients whose magnitude falls below a given threshold, and invert the transform.
\itempar{Detecting musical pitches.} A sustained musical note can be modeled as a sinusoid, whose energy is concentrated around a single frequency; the pitches present in a short audio recording can therefore be estimated by locating the largest peaks in its magnitude DFT.
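As an illustration of the second idea, here is a minimal sketch (assuming NumPy and an audio snippet acquired at a rate of 8000 samples per second, a notion formalized in the sampling chapter) that recovers the pitch of a pure tone from the location of the DFT magnitude peak:
\begin{verbatim}
import numpy as np

Fs = 8000                                # assumed sampling rate (Hz)
t = np.arange(Fs) / Fs                   # one second of samples
x = np.sin(2 * np.pi * 440 * t)          # a pure A4 tone as test input

X = np.abs(np.fft.fft(x))
k = np.argmax(X[:len(X) // 2])           # largest peak among freqs < pi
print(k * Fs / len(x))                   # prints 440.0
\end{verbatim}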
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The DFS (Discrete Fourier Series)}
Back to the theoretical side, consider again the DFT reconstruction formula in~(\ref{eq:fa:idft}) (or, equivalently, the ``machine'' in Figure~\ref{fig:fa:gen}); normally, the expression is defined for $0 \leq n < N$ but, if the index $n$ is outside of this interval, we can always write $n = mN + i$ with $m \in \mathbb{Z}$ and $i= n \mod N$. With this,
\begin{equation}
x[n] = \frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j\frac{2\pi}{N}ik}\,e^{j2\pi mk} = x[n \mod N].
\end{equation}
In other words, due to the aliasing property of the complex exponential, the inverse DFT formula generates an infinite, \emph{periodic} sequence of period $N$; this should not come as a surprise, given the $N$-periodic nature of the basis vectors for $\mathbb{C}^N$. Similarly, the DFT analysis formula remains valid if the frequency index $k$ is allowed to take values outside the $[0, N-1]$ interval and the resulting sequence of DFT coefficients is also an $N$-periodic sequence.
The natural Fourier representation of periodic signals is called the Discrete Fourier Series (DFS) and its explicit analysis and synthesis formulas are identical to~(\ref{eq:fa:dft}) and~(\ref{eq:fa:idft}), modified only with respect to the range of the indexes, which now span $\mathbb{Z}$; the DFS represents a change of basis in the space of periodic sequences $\tilde{\mathbb{C}}^N$. Since there is no mathematical difference between the DFT and the DFS, it is important to remember that even in the space of finite-length signals everything is implicitly $N$-periodic.
\subsection{Circular shifts revisited}
In Section~\ref{sec:dt:operators} we stated that circular shifts are the ``natural'' way to interpret how the delay operator applies to finite-length signals; considering the inherent periodicity of the DFT/DFS, the reason should now be clear. Indeed, the delay operator is always well-defined for a periodic signal $\mathbf{\tilde{x}}$ and, given its DFS $\mathbf{\tilde{X}}$, the $k$-th DFS coefficient of the sequence shifted by $m$ is easily computed as
\begin{align}
  \DFS{\mathcal{D}_m \bigl\{\mathbf{\tilde{x}}\bigr\}}[k] &= \sum_{n = 0}^{N-1} \tilde{x}[n-m] \, e^{-j\frac{2\pi}{N}nk} \nonumber \\
   &= \sum_{n' = -m}^{N-1-m} \tilde{x}[n'] \, e^{-j\frac{2\pi}{N}(n'+m)k} \nonumber \\
   &= e^{-j\frac{2\pi}{N}mk}\, \tilde{X}[k],
\end{align}
where we have used the change of variable $n' = n - m$ and the fact that both $\mathbf{\tilde{x}}$ and the complex exponential are $N$-periodic, so that summing over any $N$ consecutive indexes yields the same result. In other words, a delay by $m$ samples in the time domain becomes a multiplication by the linear phase term $e^{-j\frac{2\pi}{N}mk}$ in the frequency domain.
With a finite-length signal $\mathbf{x}$, for which time shifts are not well defined, we can still always compute the DFT, multiply the DFT coefficients by a linear phase shift and compute the inverse DFT. The result is always well defined and, by invoking the mathematical equivalence between DFT and DFS, it is straightforward to show that
\[
\frac{1}{N} \sum_{k = 0}^{N-1} \left(e^{-j\frac{2\pi}{N}mk} X[k]\right) \, e^{j\frac{2\pi}{N}nk} = x[(n-m) \mod N],
\]
which justifies the circular interpretation for shifts of finite-length signals.
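This is easily verified numerically; a minimal sketch, assuming NumPy (whose \texttt{roll} function implements a circular shift):
\begin{verbatim}
import numpy as np

N, m = 16, 3
x = np.arange(N, dtype=float)            # a test finite-length signal
k = np.arange(N)

X = np.fft.fft(x)
y = np.fft.ifft(np.exp(-2j * np.pi * m * k / N) * X).real

assert np.allclose(y, np.roll(x, m))     # y[n] = x[(n - m) mod N]
\end{verbatim}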
\subsection{DFT of multiple periods}
All the information carried by an $N$-periodic discrete-time signal is contained in $N$ consecutive samples and therefore its complete frequency representation is provided by the DFS, which coincides with the DFT of one period. This intuitive fact is confirmed if we try to compute the DFT of $L$ consecutive periods:
\begin{align*}
\sum_{n=0}^{LN-1} \tilde{x}[n] e^{-j\frac{2\pi}{LN}nk} &= \sum_{p=0}^{L-1} \sum_{n=0}^{N-1} \tilde{x}[n + pN] e^{-j\frac{2\pi}{LN}(n+pN)k} \\
&= \sum_{p=0}^{L-1} \sum_{n=0}^{N-1} \tilde{x}[n] e^{-j\frac{2\pi}{LN}nk} e^{-j\frac{2\pi}{L}pk} \\
&= \left(\sum_{p=0}^{L-1} e^{-j\frac{2\pi}{L}pk} \right) \sum_{n=0}^{N-1} \tilde{x}[n]\, e^{-j\frac{2\pi}{LN}nk} \\
&= \begin{cases}
L\, X[k/L] & \mbox{if $k = 0, L, 2L, 3L, \ldots$} \\
0 & \mbox{otherwise}
\end{cases}
\end{align*}
where we have exploited once again the orthogonality of the roots of unity:
\[
\sum_{p=0}^{L-1} e^{-j\frac{2\pi}{L}pk} = \begin{cases}
L & \mbox{if $k$ multiple of $L$}\\
0 & \mbox{otherwise}
\end{cases}.
\]
The above result shows that the DFT of $L$ periods is obtained simply by multiplying the DFT coefficients of one period by $L$ and appending $L-1$ zeros after each one of them.
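A numerical sketch of this result, assuming NumPy:
\begin{verbatim}
import numpy as np

N, L = 8, 3
x = np.random.randn(N)                   # one period
X = np.fft.fft(x)
XL = np.fft.fft(np.tile(x, L))           # DFT of L consecutive periods

assert np.allclose(XL[::L], L * X)       # L * X[k/L] at multiples of L
XL[::L] = 0
assert np.allclose(XL, 0)                # zero everywhere else
\end{verbatim}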
\subsection{Pushing the DFS to the limit}
In the next section we will derive a complete frequency representation for aperiodic, infinite-length signals, which is still missing; but right now, the DFS can help us gain some initial intuition if we imagine such signals as the limit of periodic sequences when the period grows to infinity. Consider an aperiodic, infinite-length and absolutely summable sequence $\mathbf{x}$; given any integer $N > 0$ we can always build an $N$-periodic sequence $\mathbf{\tilde{x}}_N$, with \index{periodization}
\begin{equation}\label{eq:fa:periodization}
    \tilde{x}_N[n] = \sum_{p = -\infty}^{\infty} x[n + pN];
\end{equation}
the convergence of the sum for all $n$ is guaranteed by the absolute summability of $\mathbf{x}$ (see also Example~\ref{exa:dt:periodization}). As $N$ grows larger, the copies in the periodization will be spaced more and more apart and, in the limit,
\[
\lim_{N \rightarrow \infty} \mathbf{\tilde{x}}_N = \mathbf{x}.
\]
For all finite values of $N$, the natural frequency representation for $\mathbf{\tilde{x}}_N$ is its DFS, which can be computed as
\begin{equation}
    \tilde{X}_N[k] = \sum_{n=0}^{N-1} \tilde{x}_N[n] \, e^{-j\frac{2\pi}{N}nk} = \sum_{p = -\infty}^{\infty} \left( \sum_{n=0}^{N-1} x[n + pN] \, e^{-j\frac{2\pi}{N}(n + pN)k} \right);
\end{equation}
in the above, we have used the definition of $\mathbf{\tilde{x}}_N$ and exploited the fact that $e^{-j(2\pi /N)nk} = e^{-j(2\pi /N)(n+pN)k}$. Now, for every value of $p$ in the outer sum, the argument of the inner sum varies between $pN$ and $pN + N - 1$ so that the double sum can be simplified to
\begin{equation}
\tilde{X}_N[k] = \sum_{n = -\infty}^{\infty} x[n] \, e^{-j\frac{2\pi}{N}kn}.
\end{equation}
If we define the following \textit{function} of a real-valued variable $\omega$
\begin{equation} \label{eq:fa:dtftAnteLitteram}
X(\omega) = \sum_{n = -\infty}^{\infty} x[n] \, e^{-j\omega n}
\end{equation}
it is immediate to see that, for every value of the period $N$, the DFS coefficients of $\mathbf{\tilde{x}}_N$ are given by regularly spaced samples of $X(\omega)$ computed at multiples of $2\pi/N$:
\[
\tilde{X}_N[k] = \left. X(\omega)\right|_{\omega = \frac{2\pi}{N}k};
\]
Figure~\ref{fig:fa:dsf2dtft} shows some examples for different values of $N$. As $N$ grows large, the set of samples will grow denser in the $[0, 2\pi]$ interval; since, in the limit, $\mathbf{\tilde{x}}_N$ tends to $\mathbf{x}$, it appears that $X(\omega)$ would indeed be a suitable frequency-domain representation for $\mathbf{x}$, as we will show momentarily.
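This relationship is easy to check numerically; a minimal sketch, assuming NumPy and using the absolutely summable signal $x[n] = a^{|n|}$ truncated to a long finite support:
\begin{verbatim}
import numpy as np

a, N = 0.8, 16
n = np.arange(-200, 201)
x = a ** np.abs(n)                       # absolutely summable signal

# periodization: group the samples of x by their index modulo N
xt = np.array([x[n % N == i].sum() for i in range(N)])
Xt = np.fft.fft(xt)                      # DFS of the periodized signal

# samples of X(omega) at omega = 2 pi k / N
w = 2 * np.pi * np.arange(N) / N
Xw = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

assert np.allclose(Xt, Xw)               # DFS = samples of X(omega)
\end{verbatim}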
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
%
%% absolutely summable signal
\def\sig#1{dup \dspPorkpie{#1}{10} 0.2 mul exch #1 sub 2 div dup mul 1 add 1 exch div sub -1 mul }
%% widening periodization
\def\perSigPlot#1{%
\hspace{-3em}
\begin{dspPlot}[width=0.5\dspWidth,height=0.5\dspHeight,xticks=50,xout=true,yticks=none,sidegap=0,ylabel={$\mathbf{\tilde{x}}_{#1}$}]{-50,50}{-.2, 1.2}
\dspSignal[linecolor=ColorDT]{0 -80 #1.0 80 {/i exch def x \sig{i} add } for }
%\dspText(30,0.5){$N=#1$}
\end{dspPlot}
&
\hspace{-5em}
\begin{dspPlot}[width=0.5\dspWidth,height=0.5\dspHeight,xtype=freq,xticks=1,yticks=none,ylabel={$\mathbf{\tilde{X}}_{#1}$}]{0,2}{0, 1.4}
\dspFunc[linecolor=ColorDF!30]{x \dspPeriodize \dspPorkpie{0}{.8}}
\dspSignal[plotpoints=#1,linecolor=ColorDF]{x \dspPeriodize \dspPorkpie{0}{.8}}
\end{dspPlot}}
%
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\hspace{-3em}
\begin{dspPlot}[width=0.5\dspWidth,height=0.5\dspHeight,xticks=50,xout=true,yticks=none,sidegap=0,ylabel={$\mathbf{x}$}]{-50,50}{-.2, 1.2}
\dspSignal[linecolor=ColorDT]{x \sig{0}}
\end{dspPlot}
&
\hspace{-5em}
\begin{dspPlot}[width=0.5\dspWidth,height=0.5\dspHeight,xtype=freq,xticks=1,yticks=none,ylabel={$X(\omega)$}]{0,2}{0, 1.4}
\dspFunc[linecolor=ColorDF]{x \dspPeriodize \dspPorkpie{0}{.8}}
\end{dspPlot}
\\ \\
\perSigPlot{10}
\\
\perSigPlot{20}
\\
\perSigPlot{40}
\\
\perSigPlot{80}
\end{tabular}
  \caption{Top row: original infinite-length, absolutely summable signal (left) and the function $X(\omega)$ defined in~(\ref{eq:fa:dtftAnteLitteram}) (right); rows 2--5: periodized signal $\mathbf{\tilde{x}}_N$ and its DFS for increasing values of $N$; all DFS values coincide with samples of $X(\omega)$.}\label{fig:fa:dsf2dtft}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
diff --git a/writing/sp4comm.multipub/90-sampling/00-is-intro.tex b/writing/sp4comm.multipub/90-sampling/00-is-intro.tex
index 974aff9..2fc331e 100644
--- a/writing/sp4comm.multipub/90-sampling/00-is-intro.tex
+++ b/writing/sp4comm.multipub/90-sampling/00-is-intro.tex
@@ -1,252 +1,252 @@
\setcounter{chapter}{8}
\chapter{Interpolation and Sampling}
\label{ch:samp}\label{ch:is}
A signal, as we stated in the beginning of this book, is the description of a phenomenon evolving over time. The {\it language} used in such a description is based on an agreed-upon model of the world and so far our paradigm of choice has been discrete time, that is, a model in which events can be successfully described as countable sequences of complex numbers.
In physics and engineering, however, the ``traditional'' model of the world is one in which physical quantities such as time possess an infinite granularity and are therefore best described in terms of real numbers. This so-called {\it continuous-time}\/ paradigm speaks the language of calculus and, in it, signals are described by functions of a real-valued variable. This model is extraordinarily versatile and it is still the framework of choice in theoretical disciplines such as physics or in applied domains like electronics or mechanics.
In this chapter we will finally build a solid mathematical bridge between these two worlds, the discrete- and continuous-time paradigms. We will soon see that for most signals of interest, these two models are equivalent, in the sense that the description of a signal can be freely translated from one language to the other and back again with no loss of information.
Historically, most of mankind's attempts to record and understand the world have taken place in discrete time: {\it time series}, as we have seen in the introduction, represented the only available way to commit a phenomenon to paper (or clay tablet); it was only during the 17th century that a great intellectual drive to idealized abstraction put calculus at the forefront of scientific research, where it solidly remains even today. Still, because of the current ubiquity of digital processing devices, the granular nature of our data records has become ``native'' again; we will therefore start our journey in the world of continuous time by considering interpolation first, that is, the way to turn a sequence of samples into a function of a real variable.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\psset{unit=3.5cm}%
\begin{pspicture}(-1,-1)(1,1)%
\uput[0](0.6,0){$x[n]$}
\uput[90](0,0.6){sampling}
\uput[180](-0.6,0){$x(t)$}
\uput[270](0,-0.6){interpolation}
\psarc[linewidth=3pt,linecolor=gray]{<-}(0,0){0.73}{10}{65}
\psarc[linewidth=3pt,linecolor=gray]{<-}(0,0){0.73}{115}{170}
\psarc[linewidth=3pt,linecolor=gray]{<-}(0,0){0.73}{190}{235}
\psarc[linewidth=3pt,linecolor=gray]{<-}(0,0){0.73}{305}{350}
\end{pspicture}
\caption{From discrete to continuous time... and back.}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Preliminaries and Notation}
-This chapter, all math notwithstanding, is ideally supposed to read like a story. We start from our familiar sequences, explore a few practical ways to convert them into functions, and then we happen upon the absolutely perfect way to do so. From there, we try to invert this perfect method so that we can represent functions as sequences. And finally, we bring this ideal method down to earth and turn it into a practical tool. To best navigate this story, we need to first put some terminology and notation in place.
+This chapter is supposed to read like a story, all mathematical details notwithstanding: we start the journey with our familiar discrete-time sequences and we explore some of the ways in which we can transform them into functions; we finally discover the perfect method to do so, a method so good, in fact, that it can be turned the other way around and used to represent \textit{functions} as discrete-time sequences! But alas, as for all perfect things in signal processing, the method is only an ideal abstraction: and so we set ourselves to the task of coming back down to earth and turning it into a practical tool. To best navigate this story, we need to first put some terminology and notation in place.
\itempar{Interpolation.}\index{interpolation}
Interpolation comes into play when discrete-time signals need to be converted to continuous-time signals. The need arises at the interface between the digital world and the analog world; as an example, consider a discrete-time waveform synthesizer which is used to drive an analog amplifier and loudspeaker.
In this case, it is useful to express the input to the amplifier as a function of a real variable, defined over the entire real line; this is because the behavior of analog circuitry is best modeled by continuous-time functions. We will see that at the core of the interpolation process is the association of a time duration $T_s$, measured in physical units of seconds, to the intervals between samples of the original sequence. The fundamental questions concerning interpolation involve the spectral properties of the interpolated function with respect to those of the original sequence.
\itempar{Sampling.}\index{sampling}
Sampling is the method by which a continuous-time phenomenon is described by a discrete-time sequence. The simplest systems just record the value of a physical variable at successive instants in time and collate the resulting values in a discrete-time sequence. In this book we will only consider \emph{uniform sampling\/}, that is, sampling operations in which the time instants are uniformly spaced $T_s$ seconds apart; $T_s$ is called the \emph{sampling period} while its inverse, $F_s = 1/T_s$, is called the \emph{sampling frequency}\index{sampling!frequency} of the system\footnote{Other, more esoteric sampling schemes exist, where the sampling instants are nonuniform or where the sampling instants are determined by the signal's amplitude crossing certain thresholds -- the sampling literature is extremely vast!}. The fundamental question here is whether any information is lost by using a discrete-time representation instead of a continuous-time one. We will show that virtually all signals of interest can be sampled with no loss of information, which means that all the processing tools developed in the discrete-time domain can be successfully applied to continuous-time signals as well.
\itempar{Notation.}
In the rest of this Chapter we will encounter a series of variables which are all interrelated and whose different forms will be used interchangeably according to convenience. They are summarized in Table~\ref{tab:is:vars} for a quick reference.
\begin{table}[h!]
\vskip-3mm
\renewcommand{\arraystretch}{1.6}
\begin{center}\small
\begin{tabular}[h]{|c|l|l|c|}
\hline
\bf Name & \bf Description & \bf Units & \bf Relations \\
\hline
$T_s$ & interpolation/sampling period & seconds & $T_s = 1/F_s$ \\
\hline
$F_s$ & sampling frequency & Hertz & $F_s = 1/T_s$ \\
\hline
$f_N$ & Nyquist frequency & Hertz & $f_N = F_s/2$ \\
\hline
\end{tabular}
\end{center}
\caption{Notation used in the Chapter.}\label{tab:is:vars}
\end{table}
Finally, Table~\ref{tab:is:DTvsCT} concisely sums up the differences between the mathematical tools used in discrete and continuous time.
\begin{table}[h!]
\vskip-3mm
\renewcommand{\arraystretch}{1.6}
\begin{center}\small
\begin{tabular}[h]{|l|l|}
\hline
\bf discrete time & \bf continuous time \\
\hline
countable integer index $n$ & real-valued time $t$ (sec) \\
\hline
sequences $x[n] \in \ell_2(\mathbb{Z})$ & functions $x(t) \in L_2(\mathbb{R})$ \\
\hline
frequency $\omega \in [-\pi, \pi]$ & frequency $f \in \mathbb{R}$ (Hz) \\
\hline
DTFT: $\ell_2(\mathbb{Z}) \mapsto L_2([-\pi,\pi])$ & FT: $L_2(\mathbb{R}) \mapsto L_2(\mathbb{R})$ \\
\hline
\end{tabular}
\end{center}
\caption{Discrete vs. continuous time}\label{tab:is:DTvsCT}
\end{table}
\section{Continuous-Time Signals}
In this section we will take a quick tour of the key concepts associated with continuous-time signals and continuous-time signal processing, which we simply illustrate here without formal proofs. As we will see, most ideas can be readily understood as simple counterparts to the discrete-time properties and theorems that we have studied so far.
-Continuous-time signals are represented by complex-valued functions of a real variable $t$ which usually represents time, expressed in physical units of seconds. In general, no stringent requirements are imposed on the functions; we will see that in practice it is reasonable to expect a certain degree of smoothness and that, just as in the discrete-time case, a common condition on an aperiodic signal is that the function be square integrable which corresponds to the signal having finite energy.
+Continuous-time signals are represented by complex-valued functions of a real variable $t$ which usually represents time, expressed in physical units of seconds. We will use the standard signal notation $\mathbf{x}$ to indicate the entire signal and $x(t)$ to express its value in $t$; when necessary for clarity, the subscript ``$c$'' will be used to explicitly indicate a continuous-time quantity, as in $\mathbf{x}_c$. In general, no stringent requirements are imposed on the functions; we will see that in practice it is reasonable to expect a certain degree of smoothness and that, just as in the discrete-time case, a common condition on an aperiodic signal is that the function be square integrable which corresponds to the signal having finite energy.
\subsection{Inner Product and Convolution}
-We have already encountered some examples of continuous-time signals in conjunction with Hilbert spaces; in Section~\ref{sec:vs:funcs}, for instance, we introduced $L_2(\mathbb{[a,b]})$, the space of square integrable functions over an interval; shortly, we will introduce the space of bandlimited functions. In continuous as in discrete time, we will consider signals as elements in an appropriate vector space; it is tacitly assumed that all signals are at least in $L_2(\mathbb{R})$ where the inner product is defined as
+We have already encountered some examples of continuous-time signals in conjunction with Hilbert spaces; in Section~\ref{sec:vs:funcs}, for instance, we introduced $L_2([a,b])$, the space of square integrable functions over an interval; shortly, we will introduce the space of bandlimited functions. In continuous as in discrete time, we will consider signals as elements in an appropriate vector space; it is tacitly assumed that all signals are at least in $L_2(\mathbb{R})$ where the inner product is defined as
\index{inner product!for functions}
\begin{equation}\label{eq:is:inner}
\bigl\langle \mathbf{x, y} \bigr\rangle = \int_{-\infty}^{\infty} x^*(t)y(t) \, dt.
\end{equation}
The squared norm of a signal (that is, its overall energy) is the self inner product, as per usual:
\[
\|\mathbf{x}\|^2 = \langle \mathbf{x, x} \rangle;
\]
finite energy is therefore equivalent to square integrability:
\[
\mathbf{x} \in L_2(\mathbb{R}) \quad \iff \quad \|\mathbf{x}\|^2 = \int_{-\infty}^{\infty} |x(t)|^2 \, dt < \infty.
\]
The \emph{convolution} \index{convolution!in continuous time} of two real-valued continuous-time signals is defined as:
\begin{align}
(\mathbf{x \ast h})(t) & = \int_{-\infty}^{\infty} x(t - \tau) h(\tau) \, d\tau \\
    &= \bigl\langle x(t-\tau), h(\tau) \bigr\rangle.
\end{align}
The continuous-time convolution operator, just as in discrete time, is linear and time invariant. It represents the operation of processing a signal through a continuous-time LTI system $\mathcal{H}$, whose impulse response is in this case a continuous-time function (see Figure~\ref{fig:is:CTfilter}). Typical examples of continuous-time filters are represented by electronic circuits containing resistors, capacitors and coils or by mechanical systems using springs and weights. The inner workings of these devices are described by differential equations (rather than by CCDE's) and their behavior is usually analyzed using a mathematical tool called the Laplace transform (instead of the $z$-transform). In the following, we will not study continuous-time systems in detail, other than by characterizing their effects in the frequency domain.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspBlocks}{1}{1}
$x(t)$~~ & \rnode{F}{\BDfilter{$\mathcal{H}$}} & ~~$y(t)$
\ncline{->}{1,1}{1,2}\ncline{->}{1,2}{1,3}
\end{dspBlocks}
\caption{Continuous-time filtering.}\label{fig:is:CTfilter}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Frequency-Domain Representation}
In continuous time, the spectral representation of a signal is given by its Fourier Transform (FT). The Fourier transform\index{Fourier transform (continuous time)} of a function $x(t)$ and its inversion formula are defined as
\begin{align}
X(f) &= \int_{-\infty}^{\infty} x(t)\, e^{-j2\pi f t} \, dt \\
x(t) &= \int_{-\infty}^{\infty} X(f)\, e^{j2\pi f t}\, df
\end{align}
In the above expressions, if $t$ measures time in seconds, then $f$ measures frequency in Hertz. Another common representation for the Fourier transform uses angular frequency in radians/sec; this is indicated by using the variable $\Omega = 2\pi f$ which slightly changes the analysis and synthesis formulas to:
\begin{align}
X(j\Omega) &= \int_{-\infty}^{\infty} x(t)\, e^{-j\Omega t} \, dt \\
x(t) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\Omega)\, e^{j\Omega t}\, d\Omega.
\end{align}
This notation\footnote{
%
This was the notation of choice in the first edition of this book. We switched to using frequency in Hertz for increased clarity, as experimented in the classroom.}
%
mirrors the specialized notation that we used for the DTFT, seen as the restriction of the $z$-transform on the unit circle; in this case, by writing $X(j\Omega)$ we indicate that the Fourier transform is the (two-sided) Laplace transform $X(s) = \int x(t)\, e^{-st}\, dt$ computed over the imaginary axis.
The convergence of the Fourier integrals is assured for functions which satisfy the so-called Dirichlet conditions; for our purposes, it suffices to know that the FT is always well defined for finite-energy signals, that is, for signals in $L_2(\mathbb{R})$. The Fourier transform in continuous time is a linear operator; for a list of its basic properties, which mirror exactly those of the DTFT, we refer to the bibliography.
The intuitive interpretation of the FT is also the same as for the DTFT; by using the definition of the inner product in~(\ref{eq:is:inner}) we can observe that, formally,
\[
X(f) = \langle e^{j2\pi f t}, x(t) \rangle
\]
that is, the value of the FT at a particular frequency $f$ measures the similarity between the original signal and a complex sinusoid of frequency $f$. It is very important to notice, however, that in continuous time the Fourier Transform is {\em not} a $2\pi$-periodic function since the time variable $t$ in the exponent is real-valued; as a consequence, $X(f)$ is a non-periodic function of $f$ over the entire real axis.
%It suffices
%here to recall the conservation of energy, also known as
%Parseval's theorem:
%\[
% \int_{-\infty}^{\infty} \bigl|x(t) \bigr|^2 \,dt
%= \frac{1}{2\pi}\int_{-\infty}^{\infty} \bigl|X(j\Omega) \bigr|^2\, d\Omega
%\]
The FT representation can be formally extended to signals which are not square integrable by means of the Dirac delta notation as we saw in Section~\ref{FourierDirac}. In particular we have
\begin{equation}
\mbox{FT}\{e^{j2\pi f_0 t}\} = \delta(f - f_0)
\end{equation}
from which the Fourier transforms of sine, cosine, and constant functions can easily be derived. Again, note that in continuous time the FT of a complex sinusoid is \emph{not} a periodic pulse train but just a single impulse.
\itempar{The Convolution Theorem.}\index{convolution!theorem}
In continuous time, the convolution theorem mirrors exactly its discrete-time counterpart as discussed in Section~\ref{convtheosec}:
\begin{equation}
\mathbf{x \ast h} \quad \stackrel{\mathrm{FT}}{\longleftrightarrow} \quad X(f) H(f).
\end{equation}
In particular, we can use the theorem to compute a convolution as the inverse Fourier transform of the product of two spectra:
\begin{equation}
(\mathbf{x \ast h})(t) = \int_{-\infty}^{\infty} X(f)H(f)\, e^{j2\pi f t} \, df
\end{equation}
\subsection{Bandlimited Signals}\label{sec:is:SincProp}
\noindent
A function whose Fourier transform is nonzero only over a finite frequency interval is called \emph{bandlimited}. A signal $\mathbf{x}$ will be called $F_s$-bandlimited if there exists a frequency $F_s$ such that\footnote{
The use of $\geq$ instead of $>$ is a technicality which will be useful in conjunction with the sampling theorem.}
\index{bandlimited signal|mie}
\[
X(f) = 0 \quad \mbox{for } |f| \geq F_s/2.
\]
The {\it total}\/ frequency support of such a signal is $F_s$ Hertz; the maximum positive frequency of the support, $F_s/2$, is often called the \emph{Nyquist frequency} and denoted by the symbol $f_N$. Bandlimited signals play a fundamental role in sampling and interpolation.
In the time domain, a signal that is nonzero only over a finite interval is called \emph{time-limited\/} and a fundamental theorem states that a signal cannot be simultaneously bandlimited and time-limited. We will prove this formally later in the chapter but, intuitively, if we consider the so-called scaling property of the Fourier transform,
\begin{equation}\label{eq:is:scalingFT}
    \mbox{FT} \bigl\{ x (at) \bigr\} = \frac{1}{|a|}\, X(f/a),
\end{equation}
we can see that as a signal gets more ``concentrated'' in time, its frequency support becomes wider. This is another fundamental concept that will affect the spectral occupancy of a continuous-time signal as a function of its sampling frequency.
\itempar{The Sinc Function.}
The simplest $F_s$-bandlimited signal that we can design is just the indicator function for the $[-F_s/2, F_s/2]$ frequency interval, that is, a function whose Fourier transform is constant over the interval $[-F_s/2, F_s/2]$ and zero everywhere else. Consider once again the rect function\index{rect} that we first saw in Section~\ref{idealFilters}:
\[
\rect(\tau) = \begin{cases}
1 & \ |\tau| \leq 1/2 \\
0 & \ |\tau| > 1/2.
\end{cases}
\]
With this, we can express the Fourier transform of our simple $F_s$-bandlimited signal as $\alpha \rect(f/F_s)$ where $\alpha$ is a constant of our choosing. One common convention is to pick $\alpha$ so that the Fourier transform has unit area; this leads to the prototypical bandlimited spectrum
\begin{equation} \label{eq:is:protoBLfreq}
\frac{1}{F_s}\, \rect\left(\frac{f}{F_s}\right).
\end{equation}
The time-domain representation of this signal is easily obtained from the inverse Fourier transform as
\begin{equation}\label{eq:is:protoBLtime}
    \frac{1}{F_s}\int_{-F_s/2}^{F_s/2}e^{j2\pi ft}\, df = \frac{\sin(\pi F_s t)}{\pi F_s t } = \sinc\left(\frac{t}{T_s}\right)
\end{equation}
where $T_s = 1/F_s$ and where we have used the definition of the sinc\index{sinc} function
\[
\sinc(\tau) = \begin{cases}
\displaystyle \frac{\sin(\pi \tau)}{\pi \tau} & \ \tau \neq 0 \\
1 & \ \tau = 0
\end{cases}.
\]
The time-frequency pair
\begin{equation}\label{eq:is:rectsinc}
\sinc\left( \frac{t}{T_s} \right) \quad \stackrel{\mathrm{FT}}{\longleftrightarrow}\quad \frac{1}{F_s}\, \rect\left(\frac{f}{F_s}\right) \qquad\qquad (F_s = 1/T_s)
\end{equation}
is a fundamental building block in continuous-time signal processing and will be used repeatedly throughout the chapter; both functions are plotted in Figure~\ref{fig:is:rectsinc}. The following properties mirror the properties of the discrete-time sinc that we studied in Section~\ref{sec:fil:ideal}:
\begin{itemize}
  \item the sinc function is symmetric: $\sinc(t) = \sinc(-t)$;
  \item the sinc function is square integrable (it has finite energy) but it is not absolutely integrable (hence the discontinuity of its Fourier transform);
  \item its decay is slow, asymptotic to $1/t$;
  \item the sinc function, scaled by $T_s$, represents the impulse response of an ideal continuous-time lowpass filter with cutoff frequency $F_s/2$;
  \item the sinc function is zero for all nonzero integer values of its argument; this is called the \emph{interpolation property} of the sinc and its importance will be apparent shortly.
\end{itemize}
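The pair in~(\ref{eq:is:rectsinc}) can also be verified numerically, if slowly, given the sinc's $1/t$ decay; a minimal sketch, assuming NumPy (whose \texttt{sinc} uses the same normalized definition as above) and taking $F_s = 1$:
\begin{verbatim}
import numpy as np

t = np.linspace(-200, 200, 800001)       # finite integration interval
dt = t[1] - t[0]

for f in (0.2, 0.7):                     # one in-band, one out-of-band
    X = np.sum(np.sinc(t) * np.exp(-2j * np.pi * f * t)) * dt
    print(f, X.real)                     # close to 1, then close to 0
\end{verbatim}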
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
  \begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=custom,yticks=custom,ylabel={$\rect(f/F_s)/F_s$}]{-1.5,1.5}{0,1.4}
\dspFunc[linecolor=ColorCF]{x \dspRect{0}{1}}
\dspCustomTicks[axis=x]{0 0 -0.5 $-F_s/2$ 0.5 $F_s/2$}
\dspCustomTicks[axis=y]{1 $1/F_s$}
\end{dspPlot}
\vspace{1em}
  \begin{dspPlot}[height=\dspHeightCol,xticks=custom,sidegap=0,xout=true,ylabel={$\sinc(t/T_s)$}]{-8,8}{-0.3,1.2}
\dspFunc[linecolor=ColorCT]{x \dspSinc{0}{1}}
\dspCustomTicks[axis=x]{0 0 1 $T_s$ -1 $-T_s$ 2 $2T_s$ 3 $3T_s$ 4 $4T_s$}
\end{dspPlot}
\caption{Frequency-domain and time-domain plots of the prototypical \mbox{$F_s$-bandlimited} function; $T_s = 1/F_s$.}\label{fig:is:rectsinc}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
diff --git a/writing/sp4comm.multipub/90-sampling/10-is-interpolation.tex b/writing/sp4comm.multipub/90-sampling/10-is-interpolation.tex
index 38ff976..143aa57 100644
--- a/writing/sp4comm.multipub/90-sampling/10-is-interpolation.tex
+++ b/writing/sp4comm.multipub/90-sampling/10-is-interpolation.tex
@@ -1,640 +1,640 @@
-\section{Practical Interpolation}
+\section{Interpolation}
\label{sec:is:interp}\index{interpolation|(}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% finite-length signal, 5 taps
\def\ta{1 } \def\tb{2 } \def\tc{0.7 } \def\td{2.4 } \def\te{-1.5 }
%% the tap plotting string
\def\taps{-2 \ta -1 \tb 0 \tc 1 \td 2 \te}
\def\plotTaps{\dspTaps[linecolor=ColorDT]{\taps}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Interpolation is the procedure by which we convert a discrete-time sequence $\mathbf{x}$ to a continuous-time function $\mathbf{x}_c$. At the core of the interpolation procedure, as we have mentioned, is the association of a {\em physical}\/ time duration $T_s$ to the intervals between samples in the discrete-time sequence; next, we need to ``fill the gaps'' between sample values. To develop the intuition, let's consider a simple example where we want to interpolate a finite set of values as in Figure~\ref{fig:is:diffint}-(a); Figures~\ref{fig:is:diffint}-(b), (c) and (d) show three possible interpolations of the given dataset to a continuous-time signal $\mathbf{x}_c$ using $T_s = 1$.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=ColorCT](-2, \ta)(0,0)(2, \te)
\end{dspPlot}
\\ (a) & (b) \\
\begin{dspPlot}[sidegap=0,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linecolor=ColorCT](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=1,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\plotTaps
\pscurve[linecolor=ColorCT](-2, \ta)(-1.75, 2)(-1.5, -1.5)(-1.25, 1.5)%
(-1, \tb)(-0.75, 0.5)(-0.5, .6)(-0.25, .5)%
(0, \tc)(0.25, -1)(0.5, 1)(0.75, -1)%
(1, \td)(1.25, 1.5)(1.5, 2.5)(1.75, 1.8)(2, \te)
\end{dspPlot}
\\ (c) & (d) \\
\end{tabular}
\caption{(a): original 5-tap discrete-time signal; (b), (c), (d): different possible interpolations of (a) to continuous time.}\label{fig:is:diffint}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
Although we haven't really formalized the requirements for the interpolation process, it is rather intuitive that none of the three proposed interpolations is entirely satisfactory:
\begin{itemize}
\item in (b) the interpolation does not go through the data points; indeed, it seems reasonable to require that the interpolation include the original data set exactly;
\item in (c), while the function goes through the data points, it does not look smooth; in general, we are right to be suspicious of non-smooth curves, since no natural phenomenon can create discontinuous jumps in amplitude (achievable only via infinite speed) or curvature (achievable only via infinite acceleration);
\item in (d), while the function goes through the data points and is smooth, it seems to ``wander around'' too much, as if interpolating a more complicated dataset; indeed, we want our interpolation strategy to be ``economical'' in using the information contained in the discrete-time signal.
\end{itemize}
In the following sections we will try to make our requirements more precise and see how we can arrive at a universal interpolation scheme that is both intuitively and mathematically sound.
%To formalize the problem, and without loss of generality, we will assume in the following that $T_s = 1$ and that the finite-length discrete-time signal is defined for $n = -N, \ldots, 0, \ldots, N$ for an odd length of $M = 2N+1$ points.
\subsection{Polynomial Interpolation}
\label{sec:is:lagrange}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Lagrange polynomials
\def\lpa{%
dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul }
\def\lpb{%
dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul }
\def\lpc{%
dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul }
\def\lpd{%
dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul }
\def\lpe{%
dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul }
\def\LagPoly#1#2#3{%
\dspFunc[linecolor=#1]{x \csname lp#2\endcsname}%
%\dspText(0,1.3){\color{#1} $L_{#3}^{2}(t)$}
}
\def\lagInterp{%
x \lpa \ta mul %
x \lpb \tb mul %
x \lpc \tc mul %
x \lpd \td mul %
x \lpe \te mul %
add add add add}
\def\interpolant#1#2{%
\begin{dspClip}
\plotTaps%
\dspTaps[linecolor=ColorFour]{#1 \csname t#2\endcsname}%
\dspFunc[linewidth=0.5pt,linecolor=ColorFour]{x \csname lp#2\endcsname \csname t#2\endcsname mul}%
\end{dspClip}}
\def\polyName#1{{$x[#1]L^{(2)}_{#1}(t)$}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Given a discrete-time signal $\mathbf{x}$, once we have chosen an interpolation interval $T_s$, the first requirement for the interpolating function $\mathbf{x}_c$ is that its values at multiples of $T_s$ be equal to the corresponding points of the discrete-time sequence, i.e.
\[
x_c(nT_s) = x[n].
\]
%with our choice of $T_s = 1$ the above condition becomes simply $x(n) = x[n]$. T
The second requirement is that we want the interpolating function to be smooth and, mathematically, the smoothness of a function increases with the number of its continuous derivatives. For maximal smoothness, therefore, we require $\mathbf{x}_c \in C^{\infty}$, where $C^{M}$ is the class of functions for which all derivatives up to order $M$ exist and are continuous.
For now, let's consider the case where $\mathbf{x}$ is a finite-length signal; without loss of generality, assume that the signal has odd length $2N+1$ and that it is centered in zero, as in Figure~\ref{fig:is:diffint}-(a). To lighten the notation, let's also pick $T_s = 1$ for now.
The simplest choice for a maximally differentiable curve through a set of $2N+1$ data points is the unique polynomial interpolator of degree $2N$
\begin{equation} \label{eq:is:straightPI}
x_c(t) = a_0 + a_1 t + a_2 t^2 + \ldots + a_{2N} t^{2N}
\end{equation}
which, like all polynomials, belongs to $C^{\infty}$. Computation of the $2N+1$ coefficients $a_k$ is a classic algebraic problem, first solved in the 17th century by Newton. Numerically, one way to arrive at the solution is to work out the system of $2N+1$ equations
\begin{equation*}
\bigl\{\,x_c(n) = x[n]\,\bigr\}_{n = -N,\ldots,0,\ldots,N}
\end{equation*}
which can be carried out algorithmically using Pascal's triangle, for instance. A more interesting approach is to consider the space of finite-degree polynomials over the interval $[-N, N]$ and choose a ``smart'' basis to express $\mathbf{x}_c$. This is not the first time we encounter the idea of using a polynomial basis: in Chapter~\ref{ch:vs}, for instance, we introduced the Legendre basis to solve a function approximation problem. The interpolator in~(\ref{eq:is:straightPI}) clearly expresses $\mathbf{x}_c$ as a linear combination of the $2N+1$ {\em monomial}\/ basis vectors $\{1, t, t^2, \ldots t^{2N}\}$ but, for the task at hand, a more appropriate basis is the set of {\em Lagrange polynomials}.
The Lagrange polynomial basis for the interval $I=[-N, N]$ is the family of $2N+1$ polynomials
\begin{equation} \label{eq:is:lagPoly}
L^{(N)}_n(t) = \mathop{\prod_{k = -N}}_{k\neq n}^{N}\frac{t - k}{n - k}, \qquad n = -N, \ldots, N
\end{equation}
each of which has degree $2N$. As an example, the family of five polynomials for $N=2$ is shown in Figure~\ref{fig:is:LagBasis}. A key property of the Lagrangian basis vectors is that, for integer values of their argument, we have
\begin{equation} \label{eq:is:lagInterpProp}
L^{(N)}_n(m) = \left\{
\begin{array}{ll}
1 & \mbox{if $n=m$}\\
0 & \mbox{if $n \neq m$}
\end{array}\right. \qquad\qquad -N \leq n,m \leq N.
\end{equation}
Using the above result, it is easy to verify that the polynomial interpolator for a discrete-time signal of length $2N+1$ is simply
\begin{equation} \label{eq:is:lagInterp}
x_c(t) = \sum_{n = -N}^{N} x[n]L^{(N)}_n(t)
\end{equation}
that is, a linear combination of the Lagrangian basis vectors where the scalar coefficients are the discrete-time samples themselves. Since a polynomial of degree $M$ is uniquely determined by $M+1$ of its values and since, because of~(\ref{eq:is:lagInterpProp}), $x_c(n)$ is equal to $x[n]$ for $n = -N, -N+1, \ldots, N$, the interpolator above is indeed the unique degree-$2N$ solution to the problem.
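A direct implementation of~(\ref{eq:is:lagPoly}) and~(\ref{eq:is:lagInterp}) is straightforward; a minimal sketch, assuming NumPy and using the five-tap signal of Figure~\ref{fig:is:diffint}-(a):
\begin{verbatim}
import numpy as np

def lagrange_interp(x, t, N):
    # x holds the samples x[-N], ..., x[N]; T_s = 1
    xc = np.zeros_like(t)
    for i, n in enumerate(range(-N, N + 1)):
        Ln = np.ones_like(t)             # Lagrange polynomial L_n^(N)(t)
        for k in range(-N, N + 1):
            if k != n:
                Ln *= (t - k) / (n - k)
        xc += x[i] * Ln
    return xc

x = np.array([1.0, 2.0, 0.7, 2.4, -1.5])
t = np.linspace(-2, 2, 401)
xc = lagrange_interp(x, t, N=2)          # smooth curve through the points

# the interpolation property: x_c(n) = x[n] at the integers
assert np.allclose(lagrange_interp(x, np.arange(-2.0, 3.0), 2), x)
\end{verbatim}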
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0,yticks=0.5]{-2.5,2.5}{-0.8,1.6}
\begin{dspClip}%
\LagPoly{green}{a}{-2}%
\LagPoly{blue}{b}{-1}%
\LagPoly{orange}{c}{0}%
\LagPoly{black}{d}{1}%
\LagPoly{cyan}{e}{2}%
\end{dspClip}
\end{dspPlot}
\caption{Lagrange interpolation polynomials $L_n^{(2)}(t)$ for $n=-2,\ldots,2$. Note that $L_n^{(N)}(t)$ is zero for $t$ integer except for $t = n$, where it is $1$.}\label{fig:is:LagBasis}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As an example, Figure~\ref{fig:is:weightedLag} shows how the five polynomials $L_n^{(2)}(t)$ beautifully come together to interpolate a $5$-point discrete-time signal into the smooth curve in Figure~\ref{fig:is:finalLag}. A fundamental characteristic of polynomial interpolation is that it is {\it global}: for all values of $t$, \textit{all} the original discrete-time data points contribute to the instantaneous value of $x_c(t)$.
Although it elegantly solves the problem of finding the smoothest curve through a finite data set, polynomial interpolation suffers from a series of drawbacks. First of all, although the Lagrangian basis provides a way to quickly compute the interpolator's coefficients, if the number of data points changes, then the set of basis vectors needs to be redetermined from scratch. From an engineering point of view it is obvious that we would like a more universal ``interpolation machine'' that does not depend on the size of the input. Additionally, and this is a problem common to all types of polynomial fitting, the method becomes numerically ill-conditioned as the polynomial degree grows large. Finally, the method produces a continuous-time interpolator that does not admit a simple frequency-domain representation, since the resulting function diverges outside the $[-N, N]$ interval.
We will now introduce an interpolation method that solves all of these problems, but the price to pay will be the partial relaxation of the smoothness constraint.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{cc}
\polyName{-2} & \polyName{-1} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-2}{a}
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{-1}{b}%
\end{dspPlot}
\end{tabular}
\polyName{0} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{0}{c}
\end{dspPlot}
\begin{tabular}{cc}
\polyName{1} & \polyName{2} \\
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{1}{d}%
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.5\dspWidth,height=0.5\dspHeight]{-2.5,2.5}{-2,3}
\interpolant{2}{e}
\end{dspPlot}
\hspace{1em}
\end{tabular}
\caption{Weighted Lagrange polynomials for the interpolation of a $5$-point signal.}\label{fig:is:weightedLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\begin{dspClip}%
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \lpa \ta mul}%
\dspFunc[linewidth=0.4pt]{x \lpb \tb mul}%
\dspFunc[linewidth=0.4pt]{x \lpc \tc mul}%
\dspFunc[linewidth=0.4pt]{x \lpd \td mul}%
\dspFunc[linewidth=0.4pt]{x \lpe \te mul}%
\plotTaps
\dspFunc[linewidth=2pt,linecolor=ColorCT,xmin=-2,xmax=2]{\lagInterp}
\end{dspClip}
\end{dspPlot}
\caption{Final Lagrange interpolation.}\label{fig:is:finalLag}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Kernel-Based Local Interpolation}\label{sec:is:locInterp}%
\index{interpolation!local}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% cubic interpolator
\def\cubFunA{abs dup 2 exp -2.25 mul exch 3 exp 1.25 mul 1 add add }
\def\cubFunB{abs dup dup 8 mul exch 2 exp -5 mul add exch 3 exp -4 add add -0.75 mul }
\def\cubFun#1{
#1 sub
dup dup dup dup %
-2 lt {pop pop pop pop 0} {
2 gt {pop pop pop 0 } {
-1 lt {pop \cubFunB } {
1 gt {\cubFunB }
{\cubFunA}
ifelse }%
ifelse }%
ifelse }%
ifelse }
\def\triInterp{%
x \cubFun{-2} \ta mul %
x \cubFun{-1} \tb mul %
x \cubFun{0} \tc mul %
x \cubFun{1} \td mul %
x \cubFun{2} \te mul %
add add add add}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The idea behind kernel-based interpolation is to build a continuous-time signal by summing together scaled copies of the same compact-support prototype function (the ``kernel''). Consider for instance a device that builds a continuous-time signal simply by keeping its output constant and equal to the last discrete-time input data point between interpolation intervals:
\begin{equation*}
x_0(t) = x[\, \lfloor t + 0.5 \rfloor \,], \qquad -N \leq t \leq N;
\end{equation*}
the result of this interpolation method, known as Zero-Order Hold (ZOH), is shown in Figure~\ref{fig:is:zoh} for the same 5-point signal as in Figure~\ref{fig:is:diffint}-(a). The above expression can be rewritten as
\begin{equation} \label{eq:is:zoh}
x_0(t) = \sum_{n = -N}^{N}x[n]\rect(t - n),
\end{equation}
which shows a remarkable similarity to~(\ref{eq:is:lagInterp}). The differences, however, are extremely significant:
\begin{itemize}
  \item the continuous-time term in the sum (i.e.\ the rect function) is no longer dependent on the length of the original data set;
  \item the dependence of the continuous-time term on the interpolation index $n$ takes place via a simple time shift only;
\item the value of the output at any instant $t$ is dependent only on one input data point: the interpolation is \textit{local} rather than global and can be performed in real time as the discrete-time data flows in.
\end{itemize}
The ZOH creates a continuous-time signal by stitching together delayed and scaled versions of a rectangular kernel, independently of the amount of data to interpolate: the resulting ``interpolation machine'' is both conceptually and practically a very simple device. The price to pay for this simplicity, however, is a ``jumpy'' output signal (that is, the interpolator is discontinuous).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\plotTaps
\psline[linewidth=2pt,linecolor=ColorCT](-2,\ta)(-1.5,\ta)(-1.5,\tb)(-.5,\tb)(-.5,\tc)(0.5,\tc)(0.5,\td)(1.5,\td)(1.5,\te)(2,\te)
\end{dspPlot}
\caption{Zero-Order Hold interpolation.}\label{fig:is:zoh}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Generally speaking, local kernel-based interpolators produce a signal via what we could call a ``mixed-domain convolution''
\begin{equation} \label{eq:is:interplocal}
x(t) = \sum_{n = -N}^{N}x[n]i(t - n);
\end{equation}
where the kernel $i(t)$ is a compact-support function fulfilling the properties
\begin{equation*}
\left\{\begin{array}{ll}
i(0) & = 1 \\
i(t) & = 0 \quad \mbox{for $t$ a nonzero integer}
\end{array}\right.
\end{equation*}
(compare this to~(\ref{eq:is:lagInterpProp})). By crafting a good kernel we can achieve an interpolated signal with better continuity properties and we will now explore a few options; note that, for an arbitrary interpolation interval $T_s$, the interpolation is obtained simply by using a scaled kernel:
\begin{equation} \label{eq:is:interpkernel}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, i\left(\frac{t - nT_s}{T_s}\right).
\end{equation}
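A kernel-based interpolator is equally simple to sketch numerically (assuming NumPy; for convenience, the samples are re-indexed to $n = 0, 1, \ldots$ rather than centered in zero):
\begin{verbatim}
import numpy as np

def i0(t):                               # zero-order kernel (rect)
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

def i1(t):                               # first-order kernel (triangle)
    return np.maximum(1.0 - np.abs(t), 0.0)

def interpolate(x, t, Ts, kernel):
    # sum of scaled, shifted copies of the kernel, as in the text
    return sum(xn * kernel((t - n * Ts) / Ts) for n, xn in enumerate(x))

x = np.array([1.0, 2.0, 0.7, 2.4, -1.5])
t = np.linspace(0.0, 4.0, 401)
x_zoh = interpolate(x, t, Ts=1.0, kernel=i0)   # piecewise constant
x_lin = interpolate(x, t, Ts=1.0, kernel=i1)   # piecewise linear
\end{verbatim}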
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\center
\small
\psset{unit=5mm}
\begin{tabular}{ccc}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \dspRect{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \dspTri{0}{1}}
\end{dspPlot}
&
\hspace{-2em}
\begin{dspPlot}[sidegap=0,yticks=none,xout=true,width=0.3\dspWidth,height=0.5\dspHeight]{-2.8,2.8}{-.5,1.3}
\dspFunc[linecolor=ColorCT]{x \cubFun{0}}
\end{dspPlot}
\\
(a) & (b) & (c)
\end{tabular}
\caption{Interpolation kernels of order zero, one and three.}\label{fig:is:interpKernels}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Zero-Order Hold.}\index{zero-order hold}%
\index{interpolation!local!zero-order}
The kernel used in the ZOH is, as we have seen, the simple rect function shown in Figure~\ref{fig:is:interpKernels}-(a):
\begin{equation}
i_0(t) = \begin{cases}
1 & \mbox{ if } |t| < 1/2 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is discontinuous and so is the interpolated signal. Only one value of the input is used to produce the current value of the output.
\itempar{First-Order Interpolation.}\index{first-order interpolation}%
\index{interpolation!local!first-order}
The kernel used in first-order interpolation is the triangle function shown in Figure~\ref{fig:is:interpKernels}-(b):
\begin{equation}
i_1(t) = \begin{cases}
1- |t| & \mbox{ if } |t| < 1 \\
0 & \mbox{ otherwise}
\end{cases}
\end{equation}
The kernel is continuous but its first derivative is not, that is, it belongs to $C^0$; the same holds for the output signal. An example of first-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:firstOrdInterp}. The interpolation strategy is akin to ``connecting the dots'' between discrete-time samples so that, at any time $t$, \textit{two} discrete-time samples contribute to the output value. If implemented in real time, first-order interpolation would therefore introduce a delay of one interpolation period.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \dspTri{-2}{1} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{-1}{1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{0}{1} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{1}{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \dspTri{2}{1} \te mul}%
\plotTaps
\psline[linewidth=2pt,linecolor=ColorCT](-2,\ta)(-1,\tb)(0,\tc)(1,\td)(2,\te)
\end{dspPlot}
\caption{First-Order interpolation; shown in light gray are the scaled and translated copies of the first-order kernel.}\label{fig:is:firstOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Although the first-order interpolator is only marginally more complex than the ZOH, the continuity of the output implies a much better quality of the result, as can be appreciated visually for a familiar signal in Figure~\ref{fig:is:interpCompare}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=ColorDT]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=ColorCT]%
{x x 0 gt {-0.5} {0.5} ifelse sub truncate 0.5235 mul RadtoDeg sin}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xout=true,sidegap=0.5]{-6,6}{-1.2,1.2}
% 2pi/12
\dspSignal[linecolor=ColorDT]{x 0.5235 mul RadtoDeg sin}
\dspFunc[,linewidth=2pt,linecolor=ColorCT]%
{x x floor sub dup 1 exch sub %
x floor 0.5235 mul RadtoDeg sin mul exch %
x ceiling 0.5235 mul RadtoDeg sin mul add}
\end{dspPlot}
\caption{Comparison between the zero- and first-order interpolation of a discrete-time sinusoidal signal.}\label{fig:is:interpCompare}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Third-Order Interpolation.}\index{third-order interpolation}%
\index{interpolation!local!third-order}
In third-order interpolation\footnote{Even-order kernels are not used in practice since they are not symmetric around a central interpolation instant.} a commonly used kernel is the cubic piecewise function shown in Figure~\ref{fig:is:interpKernels}-(c):
\begin{equation}
i_3(t) = \begin{cases}
1.25|t|^3 - 2.25|t|^2 + 1 & \mbox{for $|t| \leq 1$} \\
-0.75(|t|^3 - 5|t|^2 + 8|t| - 4) & \mbox{for $1 < |t| \leq 2$} \\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
The kernel belongs to $C^1$ (it is continuous together with its first derivative) and the same holds for the output signal. The kernel's support has length four and, as a consequence, the values of the continuous-time signal depend on four neighboring discrete-time samples. An example of third-order interpolation applied to the usual five-point data set is shown in Figure~\ref{fig:is:thirdOrdInterp}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspPlot}[sidegap=0]{-2.5,2.5}{-2,3}
\psset{linecolor=lightgray}
\dspFunc[linewidth=0.4pt]{x \cubFun{-2} \ta mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{-1} \tb mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{0} \tc mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{1} \td mul}%
\dspFunc[linewidth=0.4pt]{x \cubFun{2} \te mul}%
\plotTaps
\dspFunc[xmin=-2,xmax=2,linewidth=2pt,linecolor=ColorCT]{\triInterp}
\end{dspPlot}
\caption{Third-Order interpolation; shown in light gray are the scaled and translated copies of the third-order kernel.}\label{fig:is:thirdOrdInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Higher-Order Interpolation.}
Local interpolation schemes can be extended to higher-order kernels; in general, a kernel of order $2N+1$ is composed of polynomial pieces of degree $2N+1$ over a support of length $2N+2$. Increasing the order buys a larger number of continuous derivatives (the third-order kernel above, for instance, is in $C^1$) but some higher-order derivative always remains discontinuous, and this lack of smoothness will ultimately cause undesired high-frequency content in the output, as we will see momentarily.
\subsection{Ideal Sinc Interpolation}
\label{sec:is:sincinterp}
The tradeoff, so far, seems clear:
\begin{itemize}
\item global interpolation schemes such as polynomial interpolation provide maximum smoothness at the price of a complex procedure that depends on the length of the data set;
  \item local kernel-based methods are simple to implement but ultimately lack smoothness (which leads to unwanted spectral artifacts).
\end{itemize}
However, a small miracle is in store: in the limit, as the size of the data set grows to infinity, polynomial interpolation and kernel-based interpolation become one and the same, leading to what is perhaps one of the most remarkable mathematical results in discrete-time signal processing.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% L_0^N(t) = \prod_{k=1}^{N} [t^2/k^2 - 1]
\def\sincApp#1{%
1
1 1 #1 {%
dup mul
x dup mul
exch div
1 exch sub
mul
} for}
%
\center
\begin{dspPlot}[sidegap=0,xout=true]{-15,15}{-.3,1.1}
\dspFunc[linecolor=lightgray]{\sincApp{200}}
\dspFunc[linecolor=ColorCT,linewidth=0.8pt]{x \dspSinc{0}{1}}
\end{dspPlot}
\caption{A portion of the sinc function vs the Lagrange basis vector $L_{0}^{(200)}(t)$ (light gray).}\label{fig:is:Lag2Sinc}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Consider again the expression for the Lagrange polynomial (\ref{eq:is:lagPoly}) for $T_s = 1$ and let's try to determine what happens when the size of the data set, and therefore the degree of the Lagrange polynomials, goes to infinity:
\[
\lim_{N\rightarrow\infty} L^{(N)}_n(t) = \prod_{ {\scriptstyle k = -\infty \atop \scriptstyle k\neq n }}^{\infty} \frac{t - k}{n - k} = \ ?
\]
By using the change of variable $m = n-k$ we have
\begin{align}
\lim_{N\rightarrow\infty} L^{(N)}_n(t) &= \prod_{{ \scriptstyle m = -\infty \atop \scriptstyle m\neq 0}}^{\infty} \frac{t - n+m}{m} \nonumber \\
& = \prod_{{\scriptstyle m = -\infty \atop\scriptstyle m\neq 0 }}^{\infty}\left(1+\frac{t - n}{m}\right) \nonumber \\
& = \prod_{m=1}^{\infty}\left(1-\left(\frac{t - n}{m}\right)^{\!2}\right) \label{eq:is:lag2sinc}
\end{align}
We can now use Euler's infinite product expansion for the sine function (a somewhat esoteric formula whose proof is in the appendix),
\begin{equation}
\sin (\pi\tau) = (\pi\tau) \prod_{k = 1}^{\infty}\left(1 - \frac{\tau^2}{k^2}\right),
\end{equation}
with which we finally obtain
\begin{equation}
    \lim_{N\rightarrow\infty} L^{(N)}_n(t) = \sinc\left(t - n \right).
\end{equation}
Remarkably, as $N$ goes to infinity, all Lagrange polynomials converge to simple time shifts of the same function and the function in question is the sinc; a graphical illustration of the convergence process for $L^{(N)}_0(t)$ is shown in Figure~\ref{fig:is:Lag2Sinc}. This means that the maximally smooth interpolation for an infinite-length sequence can be obtained using a kernel-based method where the kernel is the sinc function:
\begin{equation} \label{eq:is:sincInterpOne}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc(t-n)
\end{equation}
or, for an arbitrary $T_s$:
\begin{equation}\label{eq:is:sincInterp}
x_c(t) = \sum_{n = -\infty}^{\infty}x[n]\, \sinc\left(\frac{t - nT_s}{T_s}\right)
\end{equation}
Figure~\ref{fig:is:sincInterp} shows how the scaled and time-shifted copies of the sinc kernel come together to interpolate a discrete-time sequence; note how the interpolation property of the sinc that we introduced in Section~\ref{sec:is:SincProp} guarantees that $x_c(nT_s) = x[n]$.
Obviously, since the sinc is a two-sided, infinite-support function, sinc interpolation falls in the same abstract category as ideal filters: a paradigm that we can try to approximate but that cannot be exactly implemented in practice. Still, the mathematical construction represents a fundamental cornerstone of the translation machinery between discrete and continuous time and will be the starting point for the development of sampling theory.
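Numerically, on the other hand, nothing prevents us from truncating the sum in~(\ref{eq:is:sincInterp}) and experimenting; the following minimal Python sketch (using only NumPy, with an arbitrary test sequence and an arbitrary $T_s$) evaluates the truncated sum and verifies the interpolation property $x_c(nT_s) = x[n]$ at the sample instants, where the truncation introduces no error since $\sinc(m) = \delta[m]$ for integer $m$.
\begin{verbatim}
import numpy as np

Ts = 0.25                                     # interpolation interval
n = np.arange(-64, 65)                        # truncated index range
x = np.exp(-0.01 * n**2) * np.cos(0.3 * n)    # arbitrary test sequence

def sinc_interp(t, x, n, Ts):
    # truncated version of x_c(t) = sum_n x[n] sinc((t - n*Ts)/Ts)
    return np.dot(x, np.sinc((t - n * Ts) / Ts))

# interpolation property: x_c(k*Ts) = x[k], exact since sinc(m) = delta[m]
for k in (0, 5, -12):
    print(sinc_interp(k * Ts, x, n, Ts), x[n == k][0])
\end{verbatim}
Note that NumPy's \texttt{np.sinc} uses the same normalized convention as the text, $\sinc(t) = \sin(\pi t)/(\pi t)$.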
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
% smooth DT signal
\def\smooth{ 10 div 360 mul dup sin exch dup 2 mul sin exch dup mul 360 div sin add add 2.2 div 0.2 add }
\def\sinterpolant#1{ \dspSinc{#1}{1} #1 \smooth mul }
\center
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\dspSignal[linecolor=blue!70]{x \smooth}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=ColorDT]{x \smooth}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{0}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{1}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{2}}
\dspFunc[linewidth=0.8pt,linecolor=lightgray]{x \sinterpolant{3}}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,yticks=1,xticks=100,sidegap=0]{-4.5,7.5}{-0.8,1.2}
\SpecialCoor
\dspSignal[linecolor=ColorDT]{x \smooth}
\multido{\n=-4+1}{12}{%
\dspFunc[linewidth=0.5pt,linecolor=lightgray]{x \dspSinc{\n}{1} \n \smooth mul}}
\dspFunc[linecolor=ColorCT]{x \smooth}
\end{dspPlot}
\caption{Sinc interpolation: discrete-time signal (top); first four scaled copies of the sinc kernel for $n = 0,1,2,3$ (center); final interpolation (bottom).}\label{fig:is:sincInterp}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Spectral Properties}
For an arbitrary interval $T_s$, the Fourier transform of a kernel-based interpolant can be easily computed from~(\ref{eq:is:interpkernel}) as
\begin{align}
X_c(f) &= \int_{-\infty}^{\infty} x_c(t)\,e^{-j2\pi f t} \, dt \nonumber \\
&= \sum_{n = -\infty}^{\infty} x[n] \int_{-\infty}^{\infty} i\left(\frac{t - nT_s}{T_s}\right) e^{-j2\pi f t} \, dt \nonumber \\
&= T_s \, \sum_{n = -\infty}^{\infty} x[n] e^{-j 2\pi n fT_s} I(f T_s) \nonumber \\
&= T_s \, I(f T_s) \, X(e^{j2\pi f T_s}) \\
&= \frac{1}{F_s}\, I\left(\frac{f}{F_s}\right) \, X(e^{j2\pi f/F_s}) . \label{eq:is:interpSpec}
\end{align}
The resulting spectrum is therefore the product of three factors:
\begin{enumerate}
\item a scaling factor $T_s = 1/F_s$;
\item the term $X(e^{j2\pi f/F_s})$, which is the DTFT of the original sequence, rescaled so that $\omega = \pi$ in the discrete-time spectrum is mapped to $f = F_s/2$ in the continuous-time spectrum; note that, since the DTFT is $2\pi$-periodic, $X(e^{j2\pi f/F_s})$ is an $F_s$-periodic function;
\item $I(f/F_s)$, the Fourier transform of the rescaled kernel, which acts as a continuous-time filter.
\end{enumerate}
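The three-factor structure of~(\ref{eq:is:interpSpec}) is easy to verify numerically; in the Python sketch below (an illustrative check, which assumes a zero-order hold kernel $i(t) = \rect(t)$ so that $I(f) = \sinc(f)$, and where all parameter values are arbitrary) the kernel-based interpolation is built explicitly on a fine grid and a Riemann approximation of its Fourier integral is compared with the product $T_s\, I(fT_s)\, X(e^{j2\pi fT_s})$.
\begin{verbatim}
import numpy as np

Ts = 0.5
n = np.arange(-40, 41)
x = np.exp(-0.05 * n**2)                 # arbitrary test sequence

dt = 0.001                               # fine grid standing in for continuous time
t = np.arange(-25, 25, dt)
xc = np.zeros_like(t)
for k, xk in zip(n, x):                  # zero-order hold: rect((t - k*Ts)/Ts)
    xc += xk * ((t >= (k - 0.5) * Ts) & (t < (k + 0.5) * Ts))

for f in (0.1, 0.4, 0.9):
    direct = np.sum(xc * np.exp(-2j * np.pi * f * t)) * dt  # FT of x_c(t)
    dtft = np.sum(x * np.exp(-2j * np.pi * f * Ts * n))     # X(e^{j 2 pi f Ts})
    product = Ts * np.sinc(f * Ts) * dtft                   # eq. (interpSpec)
    print(abs(direct - product))         # small, limited by grid resolution
\end{verbatim}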
\itempar{Ideal sinc interpolation.} The ideal interpolation kernel $\sinc(t/T_s)$ is $F_s$-bandlimited, with $F_s = 1/T_s$ and therefore the sinc interpolation of a discrete-time sequence is itself bandlimited:
\begin{align}
X_c(f) &= \frac{1}{F_s} \, \rect\left(\frac{1}{F_s}\right) \, X(e^{j2\pi f/F_s}) \nonumber \\
& = \begin{cases}
(1/F_s) \, X(e^{j2\pi f/F_s}) & \mbox{ for $ |f| \leq F_s/2$} \\
0 & \mbox{ otherwise}
\end{cases}. \label{eq:is:sincInterpSpec}
\end{align}
In other words, when using sinc interpolation, the continuous-time spectrum is just a scaled and stretched version of the DTFT between $-\pi$ and $\pi$. The duration of the interpolation interval $T_s$ is inversely proportional to the resulting bandwidth of the interpolated signal: intuitively, a slow interpolation ($T_s$ large) results in a spectrum concentrated around the low frequencies whereas a fast interpolation ($T_s$ small) results in a spread-out spectrum (more high frequencies are present); Figure~\ref{fig:is:interpSpeed} illustrates the concept graphically.\footnote{
%
To find a simple everyday (yesterday's?) analogy, think of a $45$~rpm vinyl\index{vinyl} record played at either $33$~rpm (slow interpolation) or at $78$~rpm (fast interpolation) and remember the acoustic effect on the sounds.}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\plotSpec#1#2{\dspFunc[#2]{x \dspPorkpie{0}{#1} #1 div}}
\def\plotCurrent#1#2#3#4#5{%
\FPupn\o{#2 1 / 2 trunc}
\plotSpec{\o}{linecolor=ColorCF}
\dspCustomTicks[axis=x]{0 0 {-\o} #5 {\o} #4}
\dspText(-2,1.8){$T_s=$#3}
\dspCustomTicks[axis=y]{1 $T_0$ 2 $2T_0$}}
%
\center
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{1}{1}{$T_0$}{$F_s/2=1/(2T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{2}{2}{$2T_0$}{$F_s/2=1/(4T_0)$}{$-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,xticks=custom,yticks=custom,height=\dspHeightCol,ylabel={$X(f)$}]{-2.5,2.5}{0,2.4}
\plotCurrent{3}{0.5}{$T_0/2$}{$F_s/2=1/T_0$}{$-F_s/2$}
\end{dspPlot}
  \caption{Spectral characteristics of a sinc-interpolated signal: DTFT of the original discrete-time signal $x[n]$ (top); Fourier transform of its sinc interpolation with interpolation interval $T_s = T_0$ (second panel); spectrum of a sinc interpolation using a slower interpolation rate ($T_s = 2T_0$): spectrum concentrates in the low frequencies around the origin (third panel); spectrum of a sinc interpolation using a faster interpolation rate ($T_s = T_0/2$): spectrum spreads out to higher frequencies (bottom panel); the spectral shape and total energy remain the same.}\label{fig:is:interpSpeed}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Practical interpolation.} In practical interpolation schemes, where the kernel is a compact-support function, the spectrum of the interpolation will depend on the Fourier transform of the kernel and will exhibit unwanted artifacts with respect to the ideal interpolation scheme. Figure~\ref{fig:is:zohfreq}, for instance, shows the factors in~(\ref{eq:is:interpSpec}) and their product when the kernel is the zero-order hold. We can make the following remarks:
\begin{itemize}
\item the interpolator acts as a non-ideal lowpass filter that largely preserves the baseband copy of the periodic spectrum but merely attenuates, rather than eliminates, the remaining copies;
\item the interpolator distorts the baseband component since its response is not flat;
\item the spectral width of the baseband component is again inversely proportional to the interpolation period.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=1,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{x \dspTri{0}{1}}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{2} abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} abs mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{Zero-order hold interpolation in the frequency domain.}\label{fig:is:zohfreq}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X(e^{j2\pi f/F_s})I(f/F_s)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{1}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspSinc{0}{2} dup mul abs}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=custom,yticks=custom,ylabel={$X_c(f)$}]{-6,6}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \dspPeriodize \dspTri{0}{0.95} x \dspSinc{0}{2} dup mul mul}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ 2 $F_s$ 4 $2F_s$ 6 $3F_s$}
\dspCustomTicks[axis=y]{1 $F_s$}
\end{dspPlot}
\caption{First-order interpolation in the frequency domain.}\label{fig:is:fohfreq}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Higher-order interpolators ameliorate the situation; a first-order interpolator, for instance, whose kernel can be expressed as
\[
\mathbf{i}_1 = \mathbf{i}_0 \ast \mathbf{i}_0
\]
has a sharper characteristic since
\begin{equation}
I_1(f) = \sinc^2(2f).
\end{equation}
As a result, as shown in Figure~\ref{fig:is:fohfreq}, the smoothness property that made sense in the time domain also leads to a continuous-time spectrum that more closely mirrors its discrete-time counterpart by rejecting most of the out-of-band energy. It is clear that we would like the kernel to approximate as well as possible an ideal lowpass filter characteristic with cutoff $F_s/2$, which leads once again to the ideal sinc interpolator!
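A quick numerical comparison makes the improvement tangible; the sketch below assumes, for a unit interpolation interval, the common normalizations $I_0(f) = \sinc(f)$ and $I_1(f) = \sinc^2(f)$ (the axis scaling used in the figures and in the formula above may differ) and measures the worst-case magnitude of the two kernel responses over the first spectral image, centered at $f = F_s = 1$.
\begin{verbatim}
import numpy as np

# first image band for Ts = 1: frequencies between Fs/2 and 3*Fs/2
f = np.linspace(0.5, 1.5, 2001)

I0 = np.abs(np.sinc(f))          # zero-order hold response magnitude
I1 = np.sinc(f) ** 2             # first-order (triangular) kernel

print(I0.max(), I1.max())        # the FOH attenuates the image band more
\end{verbatim}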
diff --git a/writing/sp4comm.multipub/90-sampling/20-is-sampling.tex b/writing/sp4comm.multipub/90-sampling/20-is-sampling.tex
index 334de38..80b70a5 100644
--- a/writing/sp4comm.multipub/90-sampling/20-is-sampling.tex
+++ b/writing/sp4comm.multipub/90-sampling/20-is-sampling.tex
@@ -1,445 +1,451 @@
-\section{The Sampling Theorem}
+\section{Sampling}
-In the previous section we derived the sinc interpolation scheme as the limit of polynomial interpolation applied to infinite-length discrete-time sequences. A consequence of the previous result is that any finite-energy discrete-time signal can be interpolated into a continuous-time signal which is smooth in time and strictly bandlimited in frequency. This suggests that the class of bandlimited functions must play a special role in bridging the gap between discrete and continuous time and this deserves further investigation. In particular, since a discrete-time signal can be interpolated exactly into a
-bandlimited function, we now ask ourselves whether the converse is true: can any bandlimited signal be transformed into a discrete-time signal with no loss of information? The answer, once again, will show us the power of representing signals in an opportune vector space.
+In the previous section we derived the sinc interpolation scheme as the limit of polynomial interpolation applied to infinite-length discrete-time sequences. A consequence of the previous result is that any finite-energy discrete-time signal can be interpolated into a continuous-time signal which is smooth in time and strictly bandlimited in frequency. This suggests that the class of bandlimited functions must play a special role in bridging the gap between discrete and continuous time and this deserves further investigation. In particular, since a discrete-time signal can be interpolated exactly into a bandlimited function, we now ask ourselves whether the converse is true: can any bandlimited signal be transformed into a discrete-time signal with no loss of information? The answer is yes and, once again, its proof will show us the power of representing signals in an appropriate vector space.
-\itempar{The Space of Bandlimited Signals.}
-The class of $F_s$-bandlimited functions with finite energy is a Hilbert space, where the inner product defined by~(\ref{eq:is:inner}). An orthogonal basis for this space can be obtained using the prototypical $F_s$-bandlimited function, that is, $\sinc(t/T_s)$; indeed, consider the family\index{basis!sinc}:
+
+\subsection{The Space of Bandlimited Signals}
+The class of $F_s$-bandlimited functions with finite energy is a Hilbert space, with the inner product defined by~(\ref{eq:is:inner}). An orthogonal basis for this space can be obtained from the prototypical $F_s$-bandlimited function
+\[
+ \varphi(t) = \sinc\left(\frac{t}{T_s}\right) \qquad\quad T_s = 1/F_s.
+\]
+Indeed, consider the set $\{\boldsymbol{\varphi}_n\}$ with \index{basis!sinc}
\begin{equation}
- \varphi^{(n)}(t) = \sinc\left(\frac{t-nT_s}{T_s}\right), \qquad\quad n \in \mathbb{Z}
+ \varphi_n(t) = \varphi(t - nT_s), \qquad\quad n \in \mathbb{Z},
\end{equation}
-where, once again, $T_s = 1/F_s$. Note that we have $\varphi^{(n)}(t) = \varphi^{(0)}(t -nT_s)$ so that each basis function is simply a shifted version of the prototype basis function $\varphi^{(0)}$. Orthogonality can easily be proved as follows: first of all, because of the symmetry of the sinc function and the time-invariance of the convolution, we can write
+where each vector is simply a shifted version of the prototype basis function; we will show that the vectors in the set are mutually orthogonal and therefore form a basis. First, by exploiting the symmetry of the sinc function and the time-invariance of the convolution, we can write
\begin{align*}
- \bigl\langle \varphi^{(n)}(t), \varphi^{(m)}(t) \bigr\rangle & = \bigl\langle \varphi^{(0)}(t - nT_s), \varphi^{(0)}(t - mT_s) \bigr\rangle \\
- & = \bigl\langle \varphi^{(0)}(nT_s - t), \varphi^{(0)}(mT_s - t) \bigr\rangle \\
- & = (\varphi^{(0)} \ast \varphi^{(0)}) \bigl((n-m)T_s \bigr)
+ \bigl\langle \boldsymbol{\varphi}_n, \boldsymbol{\varphi}_m \bigr\rangle
+ & = \bigl\langle \varphi(t - nT_s), \varphi(t - mT_s) \bigr\rangle \\
+ & = \bigl\langle \varphi(nT_s - t), \varphi(mT_s - t) \bigr\rangle \\
+ & = (\boldsymbol{\varphi} \ast \boldsymbol{\varphi}) \bigl((n-m)T_s \bigr).
\end{align*}
We can now apply the convolution theorem and use the Fourier transform in~(\ref{eq:is:protoBLfreq}) to obtain:
\begin{align*}
- \bigl\langle \varphi^{(n)}(t), \varphi^{(m)}(t) \bigr\rangle & =
- \int_{-\infty}^{\infty} \left(\frac{1}{F_s} \, \rect \left(\frac{f}{F_s}\right) \right)^{\!2} e^{j2\pi(f/F_s)(n-m)}\, df \\[2mm]
+ \bigl\langle \boldsymbol{\varphi}_n, \boldsymbol{\varphi}_m \bigr\rangle
+ & = \int_{-\infty}^{\infty} \left(\frac{1}{F_s} \, \rect \left(\frac{f}{F_s}\right) \right)^{\!2} e^{j2\pi(f/F_s)(n-m)}\, df \\[2mm]
& = \frac{1}{F_s^2} \int_{-F_s/2}^{F_s/2} e^{j2\pi(f/F_s)(n-m)}\, df \\[2mm]
& = \begin{cases}
\displaystyle \frac{1}{F_s} = T_s & \mbox{ if $n = m$} \\
0 & \mbox{ if $n \neq m$}
\end{cases}
\end{align*}
-so that $\bigl\{\varphi^{(n)}(t) \bigr\}_{n\in \mathbb{Z}}$ is orthogonal with normalization factor $1/T_s$.
+so that $\bigl\{\boldsymbol{\varphi}_n \bigr\}_{n\in \mathbb{Z}}$ is an orthogonal set with normalization factor $1/T_s$.
-In order to show that the space of $F_s$-bandlimited functions is indeed a Hilbert space, we should also prove that the space is complete. This is a
-more delicate notion to show\footnote{Completeness of the sinc basis can be proven as a consequence of the completeness of the Fourier series in the continuous-time domain.} and here it will simply be assumed.
+In order to show that the space of $F_s$-bandlimited functions is a Hilbert space, we should also prove that the space is complete. This is a more delicate notion to show\footnote{Completeness of the sinc basis can be proven as a consequence of the completeness of the Fourier series in the continuous-time domain.} and here it will simply be assumed.
-\itempar{Sampling as a Basis Expansion.}
+\subsection{Sampling as a Basis Expansion}
Now that we have an orthogonal basis, we can compute coefficients in the basis expansion of an arbitrary $F_s$-bandlimited function $x(t)$. We have
\begin{align}
- \bigl\langle \varphi^{(n)} (t), x(t) \bigr\rangle & = \bigl\langle \varphi^{(0)}(t - nT_s), x(t) \bigr\rangle \\
- & = \bigl(\varphi^{(0)} \ast x \bigr)(nT_s) \\
+ \bigl\langle \boldsymbol{\varphi}_n, \mathbf{x} \bigr\rangle
+ & = \bigl\langle \varphi(t - nT_s), x(t) \bigr\rangle \\
+ & = \bigl(\boldsymbol{\varphi} \ast \mathbf{x}\bigr)(nT_s) \\
& = \int_{-\infty}^{\infty} \frac{1}{F_s}\, \rect\left(\frac{f}{F_s}\right) X(f)\, e^{j2\pi f n T_s}\, df\\
 & = \frac{1}{F_s} \int_{-F_s/2}^{F_s/2} X(f) \, e^{j2\pi f n T_s}\, df \label{NBLProj}\\
& = T_s \, x(nT_s)
\end{align}
In the derivation, we have first rewritten the inner product as a convolution, then applied the convolution theorem, and finally recognized the penultimate line as the inverse Fourier transform of $X(f)$ evaluated at $t = nT_s$. We therefore have the remarkable result that the $n$-th basis expansion coefficient is \emph{equal to the sampled value of} $x(t)$ at $t = nT_s$ up to a scaling factor $T_s$. For this reason, the sinc basis expansion is also called \emph{sinc sampling}.
Reconstruction of $x(t)$ from its projections can now be achieved via the orthonormal basis reconstruction formula~(\ref{eq:vs:synthesis}); since the sinc basis is only orthogonal, rather than orthonormal, the formula needs to take into account the normalization factor and we have
-\begin{align}
- x(t) &= \frac{1}{T_s}\sum_{n = -\infty}^{\infty} \bigl\langle \varphi^{(n)}(t), x(t) \bigr\rangle \, \varphi^{(n)}(t) \nonumber \\
- &= \sum_{n = -\infty}^{\infty} x(nT_s) \, \sinc\!\left(\frac{t-nT_s}{T_s}\right)
-\end{align}
+\begin{equation}
+ \mathbf{x} = \frac{1}{T_s}\sum_{n = -\infty}^{\infty} \bigl\langle \boldsymbol{\varphi}_n, \mathbf{x} \bigr\rangle \, \boldsymbol{\varphi}_n
+\end{equation}
+or, explicitly,
+\begin{equation}
+ x(t) = \sum_{n = -\infty}^{\infty} x(nT_s) \, \sinc\!\left(\frac{t-nT_s}{T_s}\right)
+\end{equation}
which corresponds to the interpolation formula~(\ref{eq:is:sincInterp}).
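The fact that the expansion coefficients are just scaled samples can also be checked directly; the following Python sketch (a finite-grid, truncated approximation of the inner product integral, with an arbitrary $F_s$-bandlimited test signal) compares $\langle \boldsymbol{\varphi}_n, \mathbf{x} \rangle$ with $T_s\, x(nT_s)$.
\begin{verbatim}
import numpy as np

Ts = 1.0
dt = 0.01
t = np.arange(-200, 200, dt)            # dense grid standing in for the real line

x = np.sinc(t / Ts - 0.3)               # an Fs-bandlimited test signal

for n in (0, 1, 4):
    phi_n = np.sinc((t - n * Ts) / Ts)  # basis vector: shifted sinc
    inner = np.sum(phi_n * x) * dt      # <phi_n, x> as a Riemann sum
    print(inner, Ts * np.sinc(n - 0.3)) # = Ts * x(n*Ts), up to truncation error
\end{verbatim}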
-\itempar{The Sampling: Theorem.}
+
+
+\subsection{The Sampling Theorem}
\index{sampling theorem|mie}%
\index{sampling!theorem}
We now have all the elements in hand to formally state the sampling theorem:
\begin{quote}
If $x(t)$ is an $F_s$-bandlimited continuous-time signal, a \emph{sufficient} representation of $x(t)$ is given by the discrete-time signal $x[n] = x(nT_s)$, with $T_s = 1/F_s$. The continuous-time signal $x(t)$ can be exactly reconstructed from the discrete-time signal $x[n]$ as
\[
x(t) = \sum_{n = -\infty}^{\infty} x[n]\, \sinc\left(\frac{t-nT_s}{T_s}\right).
\]
\end{quote}
The proof of the theorem rests on the properties of the Hilbert space of bandlimited functions, and is immediate once the existence of an orthogonal
basis has been fully proven (but we skipped the completeness part).
The theorem gives us a sufficient condition for converting a continuous-time signal into a discrete-time sequence with no loss of information. If there exists a frequency $f_N$ so that $X(f) = 0$ for $|f| > f_N$, then $x(t)$ is $f$-bandlimited for all choices of $f > 2f_N$; this means that we can safely sample $x(t)$ with any sampling period smaller than $1/(2f_N)$ or, alternatively, with any sampling frequency larger than twice the maximum frequency
\[
F_s > 2f_N.
\]
The original signal $x(t)$ can in this case be perfectly reconstructed from the sequence of samples via sinc interpolation.
-In practice, if we know that the spectral content of a continuous-time signal is zero (or becomes \textit{de facto} negligible) above a certain maximum positive frequency $f_N$, then we know that we can safely sample it for all sampling frequencies larger than twice this maximum frequency. In the next Section we will study what happens when this bandlimitedness condition is not fully satisfied or satisfiable.
+In practice, if we know that the spectral content of a continuous-time signal is zero (or becomes \textit{de facto} negligible) above a certain maximum positive frequency $f_N$, then we know that we can safely sample it for all sampling frequencies larger than twice this maximum frequency. In the next section we will study what happens when this bandlimitedness condition is not fully satisfied or satisfiable.
\section{Aliasing}\index{aliasing|(}
``Naive'' sampling, as we have mentioned, is associated with the very intuitive idea of measuring the value of a phenomenon of interest over time in order to record its evolution. We have all done it in our lives, whether to monitor the temperature of a cake we are cooking or to keep track of weather patterns. In the previous section we have formalized this intuition and shown the condition under which the sampling operation entails no loss of information and allows for perfect reconstruction.
The mathematical derivations showed that sampling should be interpreted as the decomposition of a continuous-time signal into a linear combination of sinc basis functions and that the samples are in fact the coefficients of this linear combination. Since the sinc basis is orthogonal, the process to determine the expansion coefficients is formally identical to what we could call ``raw sampling'', that is, the inner product between the continuous-time signal and the $n$-th basis vector is simply the scaled value of the signal at $t = nT_s$:
\begin{equation}\label{eq:is:sincSamp}
- \bigl\langle \varphi^{(n)} (t), x(t) \bigr\rangle = T_s \, x(nT_s)
+ \bigl\langle \boldsymbol{\varphi}_n, \mathbf{x} \bigr\rangle = T_s \, x(nT_s)
\end{equation}
-The equivalence between sinc sampling and raw sampling of course only holds if $x(t)$ is bandlimited to $F_s = 1/T_s$. If this is not the case but we still want to sample every $T_s$ seconds, then the approximation properties of orthogonal bases described in Section~\ref{sec:vs:approx} state that the minimum-MSE discrete-time representation of $x(t)$ is given by the samples \emph{of its projection over the space of $F_s$-bandlimited signals\/}. Indeed, we can easily reformulate~\ref{eq:is:sincSamp} as:
+The equivalence between sinc sampling and raw sampling of course only holds if $\mathbf{x}$ is bandlimited to $F_s = 1/T_s$. If this is not the case but we still want to sample every $T_s$ seconds, then the approximation properties of orthogonal bases described in Section~\ref{sec:vs:approx} state that the minimum-MSE discrete-time representation of $\mathbf{x}$ is given by the samples \emph{of its projection over the space of $F_s$-bandlimited signals\/}. This projection operation makes sense from a signal processing point of view; the relationship between inner product and convolution allows us to rewrite~(\ref{eq:is:sincSamp}) as:
\begin{equation}
- \bigl\langle \varphi^{(n)} (t), x(t) \bigr\rangle = [\sinc(t/T_s) \ast x(t)](nT_s);
+ \bigl\langle \boldsymbol{\varphi}_n, \mathbf{x} \bigr\rangle = (\boldsymbol{\varphi} \ast \mathbf{x})(nT_s);
\end{equation}
-this shows that sinc sampling is equivalent to filtering $x(t)$ with a continuous-time, ideal lowpass filter with cutoff frequency $F_s/2$, followed by raw sampling, as shown in Figure~\ref{fig:is:sincSampling}. The filter truncates the spectrum of the signal outside of the $[-F_s/2, F_s/2]$ interval; for an already-bandlimited signal the filtering operation is obviously irrelevant (up to a scaling factor) and therefore sinc and raw sampling coincide. This interpretation also gives us the ``recipe'' for sampling real world signals: given a sampling rate of choice $T_s$, filter the input with a good lowpass filter with cutoff frequency $1/(2T_s)$ and then measure the output at regular intervals.
+sinc sampling is therefore equivalent to filtering $\mathbf{x}$ with a continuous-time, ideal lowpass filter with cutoff frequency $F_s/2$, followed by raw sampling, as shown in Figure~\ref{fig:is:sincSampling}. The filter truncates the spectrum of the signal outside of the $[-F_s/2, F_s/2]$ interval; for an already-bandlimited signal the filtering operation is obviously irrelevant (up to a scaling factor) and therefore sinc and raw sampling coincide. This interpretation also gives us the ``recipe'' for sampling real-world signals: given a sampling period of choice $T_s$, filter the input with a good lowpass filter with cutoff frequency $1/(2T_s)$ and then measure the output at regular intervals.
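The recipe is easily emulated in a program by letting a very dense grid stand in for continuous time; in the Python sketch below (an illustration with arbitrary frequencies, using an FFT-based brick-wall filter as a stand-in for the analog lowpass), a 47~Hz component that would otherwise alias down to 7~Hz is removed before ``raw'' sampling at $F_s = 20$~Hz.
\begin{verbatim}
import numpy as np

F_grid = 1000.0                      # dense grid emulating continuous time
t = np.arange(0, 4, 1 / F_grid)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 47 * t)

Fs = 20.0                            # target sampling rate, cutoff Fs/2 = 10 Hz
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / F_grid)
X[f > Fs / 2] = 0                    # ideal anti-alias lowpass filter
x_lp = np.fft.irfft(X, len(x))

x_n = x_lp[:: int(F_grid / Fs)]      # raw sampling every Ts seconds
# x_n retains only the 3 Hz component; without the filter, the 47 Hz
# component would have appeared as a 7 Hz alias (47 = 2*20 + 7)
\end{verbatim}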
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\center
\begin{dspBlocks}{1}{0.4}
$x(t)$~ & \BDlowpass & &
\raisebox{-1.4em}{\psframebox[linewidth=1.5pt]{%
\psset{xunit=1em,yunit=1em,linewidth=1.8pt}%
\pspicture(-3,-1.8)(2,1.8)%
\psline(-2.8,0)(-1.6,0)(1.2,1.4)
\psline(1.1,0)(1.8,0)
\psarc[linewidth=1pt]{<-}(-1.6,0){2em}{-10}{55}
\endpspicture}}
& $x[n]$ \\
& $F_s$ & & $T_s$
\psset{linewidth=1.5pt}
\ncline{->}{1,1}{1,2}
\ncline{1,2}{1,4}%^{$x_{LP}(t)$}
\ncline{->}{1,4}{1,5}
\end{dspBlocks}
\caption{Sinc sampling interpreted as projection over the space of $F_s$-bandlimited functions (lowpass filtering) followed by raw sampling.}\label{fig:is:sincSampling}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In signal processing practice, however, there are many situations in which this idealized setup does not hold:
\begin{itemize}
\item good approximations to a continuous-time ideal filter are difficult to build (continuous-time filters are made of resistors, capacitors and coils!);
\item we may think the signal is bandlimited when in reality it is not;
\item even with a good filter, out-of-band noise will leak into the raw sampler.
\end{itemize}
In these and in many other cases, the hypothesis of perfect bandlimitedness required by the sampling theorem is not fulfilled. We will therefore now study in detail what happens when we raw-sample a signal at a rate that is not adapted to its spectral support.
-\subsection{Aliasing: Intuition}
-In the rest of this section we will assume that we are able to produce a discrete-time sequence by measuring the instantaneous value of a continuous-time signal at multiples of a sampling period $T_s$, as in Figure~\ref{fig:is:rawSampling}.
+\subsection{Intuition}
+In the rest of this section we will assume that we are able to produce a discrete-time sequence by measuring the instantaneous value of a continuous-time signal at multiples of a sampling period $T_s$, as in Figure~\ref{fig:is:rawSampling}. The questions we are interested in concern the potential loss of information introduced by raw sampling, a phenomenon called ``aliasing''. We will start to develop our intuition by considering the problem of raw-sampling a sinusoid.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\begin{dspBlocks}{1}{0.1}
$x(t)$~~ & \BDsampler & ~~$x[n]$ \\
& $T_s$ &
\psset{linewidth=1.5pt}
\ncline{-}{1,1}{1,2}
\ncline{->}{1,2}{1,3}
\end{dspBlocks}
\caption{Raw sampling setup}\label{fig:is:rawSampling}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-The questions we are interested in concern the potential loss of information introduced by raw sampling, a phenomenon called ``aliasing''. We will start to develop the intuition considering the problem of raw-sampling a sinusoid.
-\itempar{Sampling of Sinusoids.}\index{complex exponential!aliasing}
+\itempar{Aliased Sinusoids.}\index{complex exponential!aliasing}
Consider the simple continuous-time signal
\begin{equation}\label{eq:is:CTsinusoid}
x(t) = e^{j2\pi f_0 t}
\end{equation}
that represents a complex oscillation at frequency $f_0$. We can visualize the signal as the position of a point that rotates around the unit circle on the complex plane with angular speed of $2\pi f_0$ radians per second. Note that, in continuous time, a sinusoidal signal is always periodic with period $T=1/f_0$ (in discrete time this only happens for angular frequencies that are rational multiples of $2\pi$) and that all angular speeds are allowed (while in discrete time we are limited to a maximum forward speed of $\pi$ radians per sample).
Clearly, since $x(t)$ contains only one frequency, it is $f$-bandlimited for all $f > 2|f_0|$. The raw-sampled version of the continuous-time complex exponential is the discrete-time sequence
\begin{equation}
x[n] = e^{j2\pi (f_0/F_s)n} = e^{j\omega_0 n}
\end{equation}
where $F_s = 1/T_s$ is the sampling frequency of the raw sampler. If the frequency of the sinusoid satisfies $|f_0| < F_s/2$, then $\omega_0 \in (-\pi, \pi)$ and the frequency of the original sinusoid can be univocally determined from the sampled signal; in other words, we can reconstruct the original continuous-time signal from its raw samples. Now assume that $f_0 = F_s/2$; we have
\[
x[n] = e^{j\pi n} = e^{-j\pi n}
\]
In other words, we encounter a first ambiguity with respect to the direction of rotation of the complex exponential: from the sampled signal we cannot determine whether the original frequency was $f_0 = F_s/2$ or $f_0 = -F_s/2$. If we increase the frequency of the oscillation even more, say $f_0 = (1+\alpha)(F_s/2)$ with $0 < \alpha < 1$, we have
\[
x[n] = e^{j(1+\alpha)\pi n} = e^{-j\alpha\pi n};
\]
if we try to infer the original frequency from the sampled sinusoid, we cannot discriminate between a counterclockwise rotation with $f_0 = (1+\alpha)(F_s/2)$ or a clockwise rotation with $f_0 = -\alpha(F_s/2)$. Two different frequencies are mapped to the same digital frequency (hence the term ``aliasing'', indicating two or more entities sharing the same name). Finally, if $f_0$ grows to be larger than $F_s$ we can see the full obliterating effects of aliasing: write $f_0 = kF_s + f_1$ with $0 \le f_1 < F_s$, that is, $f_1 = f_0 \bmod F_s$; we have
\begin{align}
x[n] &= x(nT_s) = e^{j2\pi (kF_s/F_s)n}\, e^{j 2\pi (f_1/F_s) n} \nonumber \\
&= e^{j 2\pi (f_1/F_s) n} \label{eq:is:twoSinAlias} \\
&= e^{j\omega_1 n} \nonumber
\end{align}
-so that the sinusoid is completely indistinguishable from a sinusoid of frequency $f_1$ sampled at $F_s$.
-
-Another graphical example of aliasing when sampling a sinusoid is depicted in Figure~\ref{fig:is:aliasSinusoid}.
+so that the sinusoid is completely indistinguishable from a sinusoid of frequency $f_1$ sampled at $F_s$. The effect of aliasing on a real-valued sinusoid is illustrated in the time domain in Figure~\ref{fig:is:aliasSinusoid}.
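The indistinguishability expressed by~(\ref{eq:is:twoSinAlias}) can be verified in a few lines of Python; the sketch below (with arbitrary numeric values) compares the samples of a complex exponential at $f_0 = 130$~Hz, taken at $F_s = 100$~Hz, with those of its alias at $f_1 = 30$~Hz.
\begin{verbatim}
import numpy as np

Fs = 100.0
n = np.arange(32)

f0 = 130.0                      # f0 = k*Fs + f1, with k = 1
f1 = f0 % Fs                    # = 30.0 Hz

x0 = np.exp(2j * np.pi * (f0 / Fs) * n)
x1 = np.exp(2j * np.pi * (f1 / Fs) * n)
print(np.allclose(x0, x1))      # True: identical sample sequences
\end{verbatim}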
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\sampCos#1{%
\FPupn\ntaps{#1 \dspXmin{} \dspXmax{} - * 1 + 0 trunc}%
\FPupn\endx{\dspXmin{} #1 \ntaps{} -1 + / +}%
\psplot[plotstyle=dots,dotstyle=*,showpoints=true,%
dotstyle=*,dotsize=\dspDotSize,plotpoints=\ntaps,linecolor=ColorDT]%
{\dspXmin}{\endx}{x 3 mul 360 mul cos}}
\center
\begin{dspPlot}[height=\dspHeightCol,sidegap=0,xout=true]{0,1}{-1.2,1.2}
\dspFunc[linecolor=ColorCT]{x 3 mul 360 mul cos}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,sidegap=0,xout=true]{0,1}{-1.2,1.2}
\dspFunc[linecolor=ColorCT]{x 3 mul 360 mul cos}
\sampCos{24}%
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,sidegap=0,xout=true]{0,1}{-1.2,1.2}
\dspFunc[linecolor=ColorCT]{x 3 mul 360 mul cos}
\sampCos{2.9}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,sidegap=0,xout=true]{0,10}{-1.2,1.2}
\dspFunc[linecolor=ColorCT]{x 3 mul 360 mul cos}
\sampCos{2.9}
\end{dspPlot}
  \caption{Graphical illustration of aliasing. Top panel: 3~Hz sinusoidal signal $x(t) = \cos(6\pi t)$ (note that it covers 3 full periods in one second); second panel: raw samples at $F_s = 24$~Hz (no aliasing); third panel: raw samples at $F_s = 2.9$~Hz (full aliasing); bottom panel: wider view of the signal sampled at 2.9~Hz, showing how the samples describe a sinusoid of frequency $f_1 = 0.1$~Hz (one period in 10 seconds); indeed, we can write $f_0 = F_s + f_1$ as in the derivation of~(\ref{eq:is:twoSinAlias}).}\label{fig:is:aliasSinusoid}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\itempar{Energy Folding of the Fourier Transform.}
-Given a raw sampling frequency $F_s$, consider the continuous-time signal
+\itempar{Spectral Energy Folding.}
+Given a sampling frequency $F_s$, consider a continuous-time signal composed of the sum of two sinusoids
\[
x(t) = Ae^{j2\pi f_a t} + Be^{j2\pi f_b t}
\]
where $f_b = f_a + F_s$. The sampled version of this signal is
\begin{align*}
x[n] &= A\, e ^{j2\pi(f_a/F_s)n} + B\, e ^{j2\pi(f_b/F_s) n}\\
&= A\, e ^{j2\pi(f_a/F_s)n} + B\, e ^{j2\pi(f_a/F_s + 1) n}\\
&= A\, e ^{j\omega_a n} + B\, e ^{j\omega_a n}e^{j2\pi n} \\
&= (A+B) \,e ^{j\omega_a n}
\end{align*}
-In other words, two continuous-time exponentials that are $F_s$~Hz apart are indistinguishable, once sampled at $F_s$~Hz, from a single discrete-time complex exponential whose amplitude is equal to the sum of the amplitudes of the original sinusoids.
+that is, the samples are identical to what would be obtained by sampling a \textit{single} sinusoid with amplitude equal to the sum of the amplitudes of the original sinusoids. The result is similar if the two sinusoids have nonzero initial phase.
-To understand what happens to a general signal after raw sampling, consider the interpretation of the inverse Fourier transform as a bank of (infinitely many) complex oscillators, initialized with phase and amplitude, each contributing to the energy content of the signal at their respective frequency. Sampling the original signal is like sampling its inverse FT, and therefore sampling all the contributing complex exponentials. In the
-sampled version, any two frequencies $F_s$ apart become indistinguishable and so the contributions of all sampled oscillations at multiples of $F_s$ add up to the same discrete-time oscillation in the spectrum of the sampled signal. This aliasing can be represented as a spectral \emph{superposition}: the continuous-time spectrum above $F_s/2$ is shifted back to $-F_s/2$, summed over $[-F_s/2, F_s/2]$, and the process is repeated again and again; the
+If we now consider the effects of raw sampling for a generic signal, we can recall the interpretation of the inverse Fourier transform as a bank of (infinitely many) complex oscillators, initialized with phase and amplitude, whose summed outputs generate the signal. Because of linearity, we can imagine sampling these oscillators independently before the Fourier integral and, in this sampled version, any two oscillators $F_s$ apart become indistinguishable; in general, all sampled oscillations at multiples of $F_s$ add up to the same discrete-time oscillation in the spectrum of the sampled signal. This aliasing can be represented as a spectral \emph{superposition}: the continuous-time spectrum above $F_s/2$ is shifted back to $-F_s/2$, summed over $[-F_s/2, F_s/2]$, and the process is repeated again and again; the
same applies to the spectrum below $-F_s/2$, as illustrated in Figure~\ref{fig:is:periodization}. The result is the familiar periodization of the spectrum:
\[
\sum_{k = -\infty}^{\infty} X(f + kF_s)
\]
as we will prove formally in the next section.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\def\conn#1#2#3#4#5{%
\psbezier[#2,linecolor=ColorOne]{->}(#3,0)(#3,#5)(#4,#5)(#4,0)}
%
\begin{dspPlot}[height=2cm,sidegap=0,xticks=custom,yticks=none,ylabel={$X_c(f)$}]{-5,5}{0,1.2}
\dspCustomTicks[axis=x]{%
0 0
1 $F_s/2$ -1 $-F_s/2$
2 $F_s$ -2 $-F_s$
3 $3F_s/2$ -3 $-3F_s/2$
4 $2F_s$ -4 $-2F_s$}
\psset{linewidth=0.5pt}
\conn{2}{}{2}{0}{0.5}
\conn{3}{}{2.2}{0.2}{0.6}
\conn{4}{}{1.8}{-0.2}{0.4}
\conn{5}{}{-2}{0}{0.5}
\conn{5}{}{-2.2}{-0.2}{0.6}
\conn{5}{}{-1.8}{0.2}{0.4}
\conn{6}{}{4}{0}{0.5}
\conn{6}{}{4.2}{0.2}{0.6}
\conn{6}{}{3.8}{-0.2}{0.4}
\conn{6}{}{-4}{0}{0.5}
\conn{6}{}{-4.2}{-0.2}{0.6}
\conn{6}{}{-3.8}{0.2}{0.4}
\end{dspPlot}
\caption{Spectral folding due to raw sampling.}\label{fig:is:periodization}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\subsection{Aliasing: Formal Derivation}
+\subsection{Aliased Spectra}
In the following, we consider the relationship between the DTFT of a raw-sampled signal $x[n]$ and the FT of the original continuous-time signal $x_c(t)$. For clarity, we add the subscript ``$c$'' to all continuous-time quantities, e.g.:
\[
x[n] = x_c(nT_s);
\]
also, all periodic functions will be denoted by the usual tilde notation.
Assume $x_c(t)$ is absolutely integrable, so that the sampled sequence $x[n] = x_c(nT_s)$ is absolutely summable and therefore its DTFT $X(e^{j\omega})$ is well defined. Using the inversion formula we can write:
\begin{equation}\label{PSFstart}
x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) \, e^{j\omega n} \, d\omega.
\end{equation}
At the same time, we can also use the inverse Fourier transform of the original signal to write
\begin{equation}\label{eq:is:alpo}
x[n] = x_c(nT_s) = \int_{-\infty}^{\infty} X_c(f)\, e^{j2\pi f n T_s} \, df;
\end{equation}
the idea is to manipulate the integral in the above expression until it takes the form
\[
x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} g(\omega)e^{j\omega n}d\omega
\]
at which point we will be able to establish that the DTFT of the raw-sampled sequence is the function $g(\omega)$.
First of all, we know from the discussion in the previous section that all frequencies $F_s$ apart will give indistinguishable contributions to the discrete-time spectrum; we can therefore split the integration in~(\ref{eq:is:alpo}) into a sum of integrals over contiguous, non-overlapping intervals of width equal to $F_s$:
\begin{equation*}
x[n] = \sum_{k = -\infty}^{\infty} \int_{kF_s - F_s/2}^{kF_s + F_s/2} X_c(f)e^{j2\pi f\, T_s n} df.
\end{equation*}
Now, in each term, we perform the change of variable $f \rightarrow f + kF_s$ so that the integration limits become $\pm F_s/2$; since $e^{j2\pi(f + kF_s)T_s n} = e^{j2\pi f\, T_s n}$, the complex exponential is unaffected and, after relabeling the summation index ($k \rightarrow -k$), we obtain:
\begin{align*}
x[n] &= \sum_{k = -\infty}^{\infty} \int_{-F_s/2}^{F_s/2} X_c(f - kF_s)e^{j2\pi f\, T_s n} df \\ \\
&= \int_{-F_s/2}^{F_s/2} \left[\sum_{k = -\infty}^{\infty} X_c(f - kF_s)\right] e^{j2\pi f\, T_s n} df.
\end{align*}
After interchanging the order of integration and summation (which can be safely done if $x_c(t)$ is absolutely integrable as we have assumed), we can recognize the term in brackets as the periodization of the original spectrum $X_c(f)$. Define the $F_s$-periodic function
\begin{equation} \label{eq:is:periodizedFT}
\tilde{X}_c(f) = \sum_{k = -\infty}^{\infty} X_c(f - kF_s)
\end{equation}
so that
\begin{equation*}
x[n] = \int_{-F_s/2}^{F_s/2} \tilde{X}_c(f) e^{j2\pi f\, T_s n} df.
\end{equation*}
With a final change of variable $\omega = 2\pi f\, T_s$, so that $f = \frac{\omega}{2\pi}F_s$, we have
\begin{align*}
x[n] &= \frac{1}{2\pi} \int_{-\pi}^{\pi} F_s\, \tilde{X}_c\left(\frac{\omega}{2\pi}F_s\right)e^{j\omega n} d\omega \\[3mm]
&= \mbox{IDTFT}\left(F_s \tilde{X}_c\left(\frac{\omega}{2\pi}F_s\right)\right)
\end{align*}
In other words, the DTFT of a raw-sampled signal is the Fourier transform of the signal, periodized with period $F_s$ and rescaled to be $2\pi$-periodic. The result is known in Fourier theory under the name of \emph{Poisson sum formula}. Explicitly,
\begin{equation}
X(e^{j\omega}) = F_s \sum_{k = -\infty}^{\infty} X_c\left(\frac{\omega}{2\pi}F_s - kF_s \right)
\end{equation}
When $x_c(t)$ is $F_s$-bandlimited, the copies in the periodized spectrum do not overlap and the $2\pi$-periodic discrete-time spectrum between $-\pi$ and $\pi$ is simply\index{aliasing|mie}
\begin{equation}\label{eq:is:noAliasingSpecEq}
X(e^{j\omega}) = F_s\, X_c\left(\frac{\omega}{2\pi}F_s \right)
\end{equation}
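The alias-free special case~(\ref{eq:is:noAliasingSpecEq}) lends itself to a direct numerical check; the Python sketch below (an illustration with $x_c(t) = \sinc^2(t)$, whose Fourier transform is the triangle $\max(1-|f|,\,0)$, sampled well above the critical rate) compares a truncated DTFT sum with the rescaled continuous-time spectrum.
\begin{verbatim}
import numpy as np

Fs, Ts = 4.0, 0.25                # x_c is 2-bandlimited, so Fs = 4 suffices
n = np.arange(-4000, 4001)
x = np.sinc(n * Ts) ** 2          # x_c(t) = sinc^2(t), X_c(f) = max(1-|f|, 0)

for w in (0.5, 1.0, 2.0):
    dtft = np.sum(x * np.exp(-1j * w * n)).real   # X(e^{jw}), truncated sum
    f = w / (2 * np.pi) * Fs                      # mapped continuous frequency
    print(dtft, Fs * max(1.0 - abs(f), 0.0))      # Fs * X_c(f): the two agree
\end{verbatim}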
-\subsection{Aliasing: Examples}
+\subsection{Typical Cases}
Figures \ref{fig:is:alias1} to \ref{fig:is:alias4} illustrate four different prototypical examples of the relationship between the continuous-time spectrum and the discrete-time spectrum. For all figures, the top panel shows the continuous-time spectrum $X_c(f)$, with labels indicating the sampling frequency. The middle panel shows the periodic function $\tilde{X}_c(f)$ defined in~(\ref{eq:is:periodizedFT}); the repeated, shifted copies of the spectrum are plotted with a dashed line (but they are not visible if there is no overlap), with the main period highlighted. Finally, the last panel shows the DTFT of the sampled sequence over the customary $[-\pi,\pi]$ interval.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% create a slide with aliasing denouement
%% \BLcase{frame title}{spectral shape}{Omega_N}
%%
\def\BLcase#1#2#3#4{%
%% spectral shape function takes 2 args: center and width; this defines a centered version
\def\spec##1{%
#2{##1}{#3}}
%% periodized version
\def\perd{%
0
-6 2 11 {
/i exch def
x \spec{i}
add
} for}
%
\begin{figure}
\center
% original spectrum
\begin{dspPlot}[sidegap=0,height=\dspHeightCol,xticks=none,yticks=none,ylabel={$X_c(f)$}]{-5,5}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \spec{0}}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ -1 $-F_s/2$}
\end{dspPlot}
% periodized spectrum
\begin{dspPlot}[sidegap=0,height=\dspHeightCol,xticks=custom,yticks=none,ylabel={$\tilde{X_c}(f)$}]{-5,5}{0,1.2}
\dspCustomTicks[axis=x]{%
0 0 1 $F_s/2$ -1 $-F_s/2$ 2 $F_s$ -2 $-F_s$ 4 $2F_s$ -4 $-2F_s$}
% first step, copies every 2, dashed
\multido{\n=-6+2}{11}{%
\dspFunc[linecolor=ColorCF!30,linestyle=dashed]{x \spec{\n}}}
% second step, sum of copies
\dspFunc[linecolor=red!30]{\perd}
% show main period
\dspFunc[xmin=-1,xmax=1,linecolor=ColorCF]{\perd}
\pnode(-1,0){A}\pnode(1,0){B}
\end{dspPlot}
% discrete-time spectrum
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=3,yticks=none,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{\perd}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{#1}\label{fig:is:alias#4}
\end{figure}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Oversampling.}\index{oversampling}
Figure~\ref{fig:is:alias1} shows the result of sampling a bandlimited signal with a sampling frequency in excess of the minimum (in this case, $F_s = 3f_N$, where $f_N$ indicates the maximum positive frequency in the original spectrum); in this case we say that the signal has been \emph{oversampled}. The result is that spectral copies do not overlap in the periodization so that the discrete-time spectrum is just a scaled version of the original spectrum,
with a narrower support than the full $[-\pi, \pi]$ range because of the oversampling (in this case $\omega_{\max} = 2\pi / 3$).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\BLcase{Oversampling a continuous-time signal with a sampling frequency larger than twice the maximum positive frequency; no aliasing.}{\dspQuad}{0.666 }{1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Critical Sampling.}
\index{critical sampling}
Figure~\ref{fig:is:alias2} shows the result of sampling a bandlimited signal with a sampling frequency exactly equal to twice the maximum positive frequency; in this case we say that the signal has been \emph{critically sampled}. In the periodized spectrum, once again the copies do not overlap and the discrete-time spectrum is a scaled version of the original spectrum occupying the whole $[-\pi, \pi]$ range.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\BLcase{Critically sampling a continuous-time signal with a sampling frequency equal to twice the maximum positive frequency; no aliasing.}{\dspQuad}{1 }{2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Undersampling (Aliasing).}
Figure~\ref{fig:is:alias3} shows the result of sampling a bandlimited signal with a sampling frequency less than twice the maximum frequency. In this case, copies do overlap in the periodized spectrum and the resulting discrete-time spectrum is an aliased version of the original; the continuous-time signal can\emph{not} be reconstructed from the sampled signal. Note, in particular, that the original lowpass spectral characteristic becomes a highpass shape in
the sampled domain (the energy at $\omega=\pi$ is larger than at $\omega=0$). This example allows us to guess the type of distortion introduced by aliasing: if the source is an audio signal such as music (a naturally lowpass signal), aliasing would introduce extremely disruptive, spurious high-frequency content that would render the signal almost unintelligible. Because of the additive nature of the spectral periodization, no type of post-processing in the digital domain can completely remove the aliased components, making the discrete-time data all but unusable.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\BLcase{Undersampling a continuous-time signal with a sampling frequency smaller than twice the maximum positive frequency; aliasing is incurred.}{\dspQuad}{1.5 }{3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Sampling of Non-Bandlimited Signals.}
Figure~\ref{fig:is:alias4} shows the result of sampling a non-bandlimited signal with a sampling frequency chosen as a tradeoff between aliasing and the number of samples per second. The idea is to disregard the low-energy ``tails'' of the original spectrum so that their aliased contributions do not
excessively impact the discrete-time spectrum. In the periodized spectrum, copies do overlap and the resulting discrete-time spectrum is an aliased version of the original, which is nevertheless similar to the original because of the low amplitude of the aliased tails. This scenario also occurs when the original signal is bandlimited but out-of-band noise leaks into the sampler.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\nbl#1#2{ #1 sub 4 mul abs dup mul 0.5 mul 1 add 1 exch div }
\BLcase{Raw-sampling a non-bandlimited continuous-time signal; aliasing is incurred.}{\nbl}{1}{4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itempar{Sinc Sampling of Non-Bandlimited Signals.}
-\ref{sec:is:antialias}
+\label{sec:is:antialias}
Figure~\ref{fig:is:alias5} shows the preferred design when sampling non-bandlimited signals. An ideal continuous-time lowpass filter $H(f)$ with cutoff frequency $F_s/2$ is used to limit the spectral support of $x_c(t)$. The periodization of the spectrum after the filter is alias-free and the only distortion introduced by the process is the \textit{controlled} loss of high-frequency content caused by the anti-alias filter $H(f)$. This procedure is the sinc sampling operation described at the beginning of this section and, as we showed, it is optimal with respect to the mean square error between the original and reconstructed signals.
\index{aliasing|)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\def\nbl#1{#1 sub 4 mul abs dup mul 0.5 mul 1 add 1 exch div }
\def\nblb#1{#1 sub dup abs 1 gt {pop 0} { 4 mul abs dup mul 0.5 mul 1 add 1 exch div} ifelse }
%% periodized version
\def\perd{%
0
-6 2 11 {
/i exch def
x \nblb{i}
add
} for}
% original spectrum
\begin{dspPlot}[sidegap=0,height=\dspHeightCol,xticks=none,yticks=none,ylabel={$X_c(f)$}]{-5,5}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \nbl{0}}
\dspFunc[linecolor=ColorCFilt,linestyle=dashed]{x \dspRect{0}{2} 1.1 mul }
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ -1 $-F_s/2$}
\end{dspPlot}
\begin{dspPlot}[sidegap=0,height=\dspHeightCol,xticks=none,yticks=none,ylabel={$Y_c(f) = X_c(f)H(f)$}]{-5,5}{0,1.2}
\dspFunc[linecolor=ColorCF]{x \nblb{0}}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ -1 $-F_s/2$}
\end{dspPlot}
% periodized spectrum, in two steps
\begin{dspPlot}[sidegap=0,height=\dspHeightCol,xticks=custom,yticks=none,ylabel={$\tilde{Y_c}(f)$}]{-5,5}{0,1.2}
\dspCustomTicks[axis=x]{0 0 1 $F_s/2$ -1 $-F_s/2$ 2 $F_s$ -2 $-F_s$ 4 $2F_s$ -4 $-2F_s$}
\multido{\n=-6+2}{11}{%
\dspFunc[linecolor=red!30,linestyle=dashed]{x \nblb{\n}}}
\dspFunc[linecolor=red!30]{\perd}
\dspFunc[xmin=-1,xmax=1,linecolor=ColorCF]{\perd}
\pnode(-1,0){A}\pnode(1,0){B}
\end{dspPlot}
\begin{dspPlot}[xtype=freq,height=\dspHeightCol,xticks=3,yticks=none,ylabel={$Y(e^{j\omega})$}]{-1,1}{0,1.2}
\dspFunc[linecolor=ColorDF]{\perd}
\pnode(-1,1.2){a}\pnode(1,1.2){b}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{A}{a}
\ncline[linewidth=1pt,linecolor=gray,linestyle=dashed]{->}{B}{b}
\end{dspPlot}
\caption{Sampling a non-bandlimited continuous-time signal with the use of an anti-alias filter $H(f)$ with cutoff frequency $F_s/2$.}\label{fig:is:alias5}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
diff --git a/writing/sp4comm.multipub/90-sampling/90-is-examples.tex b/writing/sp4comm.multipub/90-sampling/90-is-examples.tex
index e3fa89a..f8f70f9 100644
--- a/writing/sp4comm.multipub/90-sampling/90-is-examples.tex
+++ b/writing/sp4comm.multipub/90-sampling/90-is-examples.tex
@@ -1,321 +1,322 @@
\section{Examples}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\begin{example}{Another way to aliasing}
-Consider a real function $x(t)$ for which the Fourier transform is well defined:
-\begin{equation}\label{eq:is:exSampeq1}
- X(f) = \int_{-\infty}^{\infty}x(t)\, e^{-j2\pi f t}\, dt
-\end{equation}
-Suppose that we only possess a sampled version of $x(t)$, that is, we only know the numeric value of $x(t)$ at times multiples of a sampling interval $T_s$ and that we want to obtain an approximation of the Fourier transform above.
-
-Assume we do not know about the DTFT; an intuitive (and standard) place to start is to express the Fourier integral as a Riemann sum:
-\begin{equation}\label{eq:is:exSampeq2}
- X(f) \approx \hat{X}(f) = \sum_{n=-\infty}^{\infty} T_s x(nT_s) \, e^{-j 2\pi f n T_s }
-\end{equation}
-an expression that only uses the known sampled values of $x(t)$. In order to understand whether~(\ref{eq:is:exSampeq2}) is a good approximation consider the periodization of $X(f)$:
-\begin{equation}\label{eq:is:exSampeq3}
- \tilde{X}(f) = \sum_{k=-\infty}^{\infty} X\left( f + kF_s \right)
-\end{equation}
-in which $X(f)$ is repeated (with overlap) with period $F_s$. We will show that:
-\[
- \hat{X}(f) = \tilde{X}(f)
-\]
-that is, the Riemann approximation is equivalent to a periodization \index{periodization} of the original Fourier transform; in mathematics this is known as a particular form of the {\em Poisson sum formula}\index{Poisson sum formula}.
-
-Consider the periodic nature of $\tilde{X}(j\Omega)$ and remember that any periodic function $s(\tau)$ of period $L$ admits a \emph{Fourier series}\index{Fourier series} expansion:
-\begin{equation}\label{eq:is:fseEx}
- s(\tau) = \sum_{n=-\infty}^{\infty} A_n\, e^{j\frac{2\pi}{L}n\tau}
-\end{equation}
-where
-\begin{equation}\label{fsecEx}
- A_n = \frac{1}{L} \int_{-L/2}^{L/2} s(\tau) \, e^{-j\frac{2\pi}{L}n\tau} \, d\tau
-\end{equation}
-To prove our result we will consider the periodic nature of $\hat{X}(f)$ and compute its Fourier \textit{series} expansion coefficients (that is, we take a Fourier transform of a Fourier transform). Replacing $L$ by $F_s = 1 / T_s$ in~(\ref{eq:is:fseEx}) we can write
-\begin{align*}
- A_n &= (1/F_s) \int_{-F_s/2}^{F_s/2} \tilde{X}(f) \, e^{-j(2\pi/F_s) f n}\, df \\[3mm]
- &= T_s \int_{-F_s/2}^{F_s/2} \sum_{k=-\infty}^{+\infty} X\left( f + kF_s \right) e^{-j2\pi f nT_s } \, df \\
-\end{align*}
-By inverting integral and summation, which we can do if the Fourier transform~(\ref{eq:is:exSampeq2}) is well defined:
-\begin{equation*}
- A_n =T_s \sum_k \int_{-F_s/2}^{F_s/2} X\left(f + kF_s \right) e^{-j2\pi f nT_s} \, df
-\end{equation*}
-and, with the change of variable $f \rightarrow f + kF_s$,
-\begin{align*}
- A_n &= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, e^{j T_s F_s nk} \,df \\
- &= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, df
-\end{align*}
-The integrals in the sum are over contiguous and non-overlapping intervals, therefore:
-\begin{align*}
- A_n & = T_s \int_{-\infty}^{+\infty} X(f) \, e^{-j 2\pi f nT_s} \, df \\
- & = T_s\, f (-nT_s)
-\end{align*}
-so that by replacing the values for all the $A_n$ in~(\ref{eq:is:fseEx}) we obtain $\tilde{X}(f) = \hat{X}(f)$.
-
-What we just found is another derivation of the aliasing\index{aliasing} formula. Intuitively, there is a duality between the time domain and the frequency domain in that a discretization of the time domain leads to a periodization of the frequency domain; similarly, a discretization of the frequency domain leads to a periodization of the time domain (think of the DFS and see also Exercise~\ref{ex:is:aliasTimeEx}).
+\begin{example}[Another way to aliasing]
+ Consider a real function $x(t)$ for which the Fourier transform is well defined:
+ \begin{equation}\label{eq:is:exSampeq1}
+ X(f) = \int_{-\infty}^{\infty}x(t)\, e^{-j2\pi f t}\, dt
+ \end{equation}
+ Suppose that we only possess a sampled version of $x(t)$, that is, we only know the numeric value of $x(t)$ at multiples of a sampling interval $T_s$, and that we want to obtain an approximation of the Fourier transform above.
+
+ Assume we do not know about the DTFT; an intuitive (and standard) place to start is to express the Fourier integral as a Riemann sum:
+ \begin{equation}\label{eq:is:exSampeq2}
+ X(f) \approx \hat{X}(f) = \sum_{n=-\infty}^{\infty} T_s x(nT_s) \, e^{-j 2\pi f n T_s }
+ \end{equation}
+ an expression that only uses the known sampled values of $x(t)$. In order to understand whether~(\ref{eq:is:exSampeq2}) is a good approximation consider the periodization of $X(f)$:
+ \begin{equation}\label{eq:is:exSampeq3}
+ \tilde{X}(f) = \sum_{k=-\infty}^{\infty} X\left( f + kF_s \right)
+ \end{equation}
+ in which $X(f)$ is repeated (with overlap) with period $F_s$. We will show that:
+ \[
+ \hat{X}(f) = \tilde{X}(f)
+ \]
+ that is, the Riemann approximation is equivalent to a periodization \index{periodization} of the original Fourier transform; in mathematics this is known as a particular form of the {\em Poisson sum formula}\index{Poisson sum formula}.
+
+ Consider the periodic nature of $\tilde{X}(f)$ and remember that any periodic function $s(\tau)$ of period $L$ admits a \emph{Fourier series}\index{Fourier series} expansion:
+ \begin{equation}\label{eq:is:fseEx}
+ s(\tau) = \sum_{n=-\infty}^{\infty} A_n\, e^{j\frac{2\pi}{L}n\tau}
+ \end{equation}
+ where
+ \begin{equation}\label{fsecEx}
+ A_n = \frac{1}{L} \int_{-L/2}^{L/2} s(\tau) \, e^{-j\frac{2\pi}{L}n\tau} \, d\tau
+ \end{equation}
+ To prove our result we will exploit the periodic nature of $\tilde{X}(f)$ and compute its Fourier \textit{series} expansion coefficients (that is, we take a Fourier transform of a Fourier transform). Replacing $L$ by $F_s = 1 / T_s$ in~(\ref{fsecEx}) we can write
+ \begin{align*}
+ A_n &= (1/F_s) \int_{-F_s/2}^{F_s/2} \tilde{X}(f) \, e^{-j(2\pi/F_s) f n}\, df \\[3mm]
+ &= T_s \int_{-F_s/2}^{F_s/2} \sum_{k=-\infty}^{+\infty} X\left( f + kF_s \right) e^{-j2\pi f nT_s } \, df \\
+ \end{align*}
+ By inverting integral and summation, which we can do if the Fourier transform~(\ref{eq:is:exSampeq2}) is well defined:
+ \begin{equation*}
+ A_n =T_s \sum_k \int_{-F_s/2}^{F_s/2} X\left(f + kF_s \right) e^{-j2\pi f nT_s} \, df
+ \end{equation*}
+ and, with the change of variable $f \rightarrow f + kF_s$,
+ \begin{align*}
+ A_n &= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, e^{j 2\pi T_s F_s nk} \,df \\
+ &= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, df
+ \end{align*}
+ The integrals in the sum are over contiguous and non-overlapping intervals, therefore:
+ \begin{align*}
+ A_n & = T_s \int_{-\infty}^{+\infty} X(f) \, e^{-j 2\pi f nT_s} \, df \\
+ & = T_s\, x(-nT_s)
+ \end{align*}
+ so that by replacing the values for all the $A_n$ in~(\ref{eq:is:fseEx}) we obtain $\tilde{X}(f) = \hat{X}(f)$.
+
+ What we just found is another derivation of the aliasing\index{aliasing} formula. Intuitively, there is a duality between the time domain and the frequency domain in that a discretization of the time domain leads to a periodization of the frequency domain; similarly, a discretization of the frequency domain leads to a periodization of the time domain (think of the DFS and see also Exercise~\ref{ex:is:aliasTimeEx}).
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{example}{Time-limited vs.\ bandlimited functions}\index{bandlimited signal}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[b!]
- \center
- \begin{dspPlot}[height=\dspHeightCol,sidegap=0,xticks=custom,yticks=none,ylabel={$X(f)$}]{-3,3}{0,1.8}
- \dspFunc[linecolor=ColorCF]{x \dspPorkpie{0}{1}}
- \dspCustomTicks[axis=x]{%
- 0 0
- 1 $f_0$ -1 $-f_0$
- 2 $2f_0=F_s/2$ -2 $-2f_0$}
- \end{dspPlot}
- \begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=2,yticks=none,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.8}
- \dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{0.5}}
- \end{dspPlot}
- \caption{Bandlimited signal and its discrete-time counterpart.}\label{fig:is:tlvsblFig}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{example}[Time-limited vs.\ bandlimited functions]\index{bandlimited signal}
A fundamental result of spectral analysis states that a function cannot have finite support both in time and in frequency; in other words, a signal cannot be both time-limited and band-limited. This can be easily shown by contradiction using the sampling theorem and the properties of the $z$-transform. Let's assume that the continuous-time signal $x_c(t)$ is $2f_0$-bandlimited (that is, $X_c(f) = 0$ for $|f| > f_0$) and that there also exists a value $t_0 > 0$ such that
\[
x_c(t) = 0 \quad \mbox{for } |t| > t_0.
\]
Since the signal is bandlimited, we know that it can be perfectly represented by a sequence of equally spaced samples, provided that the sampling rate satisfies $F_s \ge 2f_0$. Let's for instance pick $F_s = 4f_0$ and call $x[n] = x_c(nT_s)$ the resulting discrete-time signal for $T_s = 1/(4f_0)$. Using~(\ref{eq:is:DTFTsampled}), the DTFT of the sampled sequence over $[-\pi, \pi]$ is simply the rescaled continuous-time spectrum between $[-2f_0, 2f_0]$:
\[
X(e^{j\omega}) = 4f_0\, X_c\left(\frac{\omega}{\pi} 2f_0\right);
\]
since by hypothesis $X_c(f)$ is zero outside of the $[-f_0, f_0]$ interval, as illustrated in Figure~\ref{fig:is:tlvsblFig}, it will be
\begin{equation}\label{eq:is:timefreq}
X(e^{j\omega}) = 0 \quad \mbox{for } \frac{\pi}{2} < |\omega| \le \pi.
\end{equation}
On the other hand, we assumed that $x_c(t)$ is also time-limited so the sequence $x[n]$ is going to have a finite support and its $z$-transform will contain only a finite number of terms:
\[
X(z) = \sum_{n=-M}^{M} x[n] z^{-n}
\]
where
\[
M = \bigg\lfloor \frac{t_0}{T_s} \bigg\rfloor.
\]
Since the DTFT is $X(z)$ evaluated on the unit circle $z=e^{j\omega}$, because of~(\ref{eq:is:timefreq}) we have that $X(z) = 0$ over an entire arc of the unit circle; but,
since the $z$-transform of a finite-support sequence is a finite-degree polynomial with only a finite number of roots, it cannot vanish over an interval unless it is zero everywhere (see also Example~\ref{ex:fil:impossIdealProof}). And so the only signal that can be both time-limited and bandlimited is the null signal.
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{figure}[b!]
+ \center
+ \begin{dspPlot}[height=\dspHeightCol,sidegap=0,xticks=custom,yticks=none,ylabel={$X(f)$}]{-3,3}{0,1.8}
+ \dspFunc[linecolor=ColorCF]{x \dspPorkpie{0}{1}}
+ \dspCustomTicks[axis=x]{%
+ 0 0
+ 1 $f_0$ -1 $-f_0$
+ 2 $2f_0=F_s/2$ -2 $-2f_0$}
+ \end{dspPlot}
+ \begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=2,yticks=none,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.8}
+ \dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{0.5}}
+ \end{dspPlot}
+ \caption{Bandlimited signal and its discrete-time counterpart.}\label{fig:is:tlvsblFig}
+\end{figure}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
\begin{comment}
The trick of periodizing a function and then computing its Fourier series expansion comes in very handy also in proving that a function cannot be both bandlimited and time-limited (that is, have a finite support both in time and in frequency). The proof is by contradiction: assume $x(t)$ has finite time support, i.e. there exists a time $T_0$ such that
\[
x(t) = 0 \quad \mbox{for } |t| > T_0;
\]
assume that $x(t)$ has a well-defined Fourier transform $X(f)$ and that it is {\em also} bandlimited so that we can find a frequency $f_0$ for which
\[
X(f) = 0 \quad \mbox{for } |f| > f_0.
\]
Consider now the periodization\index{periodization} of the function in time with period $S$:
\[
\tilde{x}(t) = \sum_{k=-\infty}^{\infty} x(t - kS);
\]
since $x(t) = 0$ for $|t| > T_0$, if we choose $S > 2T_0$ the copies in the sum do not overlap, as shown in Figure~\ref{fig:is:tlvsblFig}. If we compute the Fourier series expansion~(\ref{eq:is:fsecEx}) for the $S$-periodic function $\tilde{x}(t)$ we have
\begin{align*}
A_n &= \frac{1}{S}\int_{-S/2}^{S/2} \tilde{x}(t)\, e^{-j(2\pi/S)nt} \, dt \\
&= \frac{1}{S}\int_{-T_0}^{T_0} x(t) \, e^{-j(2\pi/S)nt} \, dt \\
&= X\left(\frac{n}{S}\right);
\end{align*}
this indicates that the Fourier series coefficients of the periodized function are samples of the Fourier transform of the original function (another duality between periodization and sampling). Since we assumed that $x(t)$ is bandlimited, there will be only a finite number of nonzero $A_n$ coefficients; indeed
\[
A_n = 0 \quad \mbox{for } |n| > \lfloor f_0 S \rfloor = N_0
\]
and therefore we can write the reconstruction formula~(\ref{eq:is:fseEx}) as:
\[
\tilde{x}(t) = \sum_{n = -N_0}^{N_0} A_n\, e^{j(2\pi/S)nt}
\]
Now consider the complex-valued polynomial (in $z$ and $z^{-1}$) with $2N_0 + 1$ terms
\[
P(z) = \sum_{n = -N_0}^{N_0} A_n z^n
\]
obviously $P\bigl(e^{j(2\pi/S)t} \bigr) = \tilde{x}(t)$ but we also know that $\tilde{x}(t)$ is identically zero over the $[T_0\,,\, S-T_0]$ interval, as shown in Figure~\ref{fig:is:tlvsblFig}. However, a finite-degree polynomial $P(z)$ has only a finite number of roots\index{roots!of complex polynomial} and
therefore it cannot be identically zero over an interval unless it is zero everywhere (see also Example~\ref{ex:fil:impossIdealProof}). Hence, either $x(t) = 0$ everywhere or $x(t)$ cannot be both bandlimited and time-limited.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\spectrumShape{2 mul
dup abs 3.10 gt
{pop 0}
{0.18 mul RadtoDeg %
dup cos exch %
dup 3 mul cos 2 mul exch %
0 mul cos -0.7 mul %
add add 0.31 mul 0.017 add }
ifelse }
%
\center
\begin{dspPlot}[height=\dspHeightCol,xticks=custom,yticks=none,sidegap=0]{-10,10}{0,1.5}
\dspCustomTicks[]{1.6 $T_0$ 7 $S$ -7 $-S$}
\dspFunc[linecolor=ColorCT!30,xmin=2 ]{x 7 sub \spectrumShape}
\dspFunc[linecolor=ColorCT!30,xmax=-2]{x 7 add \spectrumShape}
\dspFunc[linecolor=ColorCT]{x \spectrumShape}
\psbrace[rot=-90,ref=tC,nodesepB=-11pt](5.4,.5)(1.6,.5){$\tilde{x}(t) = 0$}
\end{dspPlot}
\caption{Finite support function $x(t)$ (dark) and non-overlapping periodization (light).}\label{xxfig:is:tlvsblFig}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{comment}
\end{example}
\section{Appendix}
\subsection*{The Sinc Product Expansion Formula}
The goal is to prove the product expansion\index{sinc}
\begin{equation}\label{eq:is:appSinExp}
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\end{equation}
We present two proofs; the first was proposed by Euler
in~1748 and, while it certainly lacks rigor by modern
standards, it has the irresistible charm of elegance and
simplicity in that it relies only on basic algebra. The
second proof is more rigorous, and is based on the theory of
Fourier series for periodic functions; relying on Fourier
theory, however, hides most of the convergence issues.
\itempar{Euler's Proof.}
Consider the $N$ roots of unity for $N$ odd. They comprise
$z = 1$ plus $N-1$ complex roots, in conjugate pairs, of the
form $z = e^{\pm j\omega_N k}$ for $k = 1, \ldots, (N-1)/2$ and
$\omega_N = 2\pi/N$. If we group the complex conjugate roots
pairwise we can factor the polynomial $z^N-1$ as
\[
z^N-1 = (z-1)\prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2z\cos(\omega_N k) + 1 \bigr)
\]
The above expression can immediately be generalized to
\[
z^N-a^N = (z-a) \prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2az \cos(\omega_N k) + a^2 \bigr)
\]
Now replace $z$ and $a$ in the above formula by $z =
(1+x/N)$ and $a = (1-x/N)$; we obtain the following:
\begin{align*}
&\left(1+\frac{x}{N}\right)^{\!N} - \left(1-\frac{x}{N}\right)^{\!N} \nonumber \\
&\qquad
= \frac{4x}{N} \prod_{k=1}^{(N-1)/2}
\left(1-\cos(\omega_N k) + \frac{x^2}{N^2} \,
\bigl(1+\cos(\omega_N k) \bigr)\right) \\
&\qquad =
\frac{4x}{N}\prod_{k=1}^{(N-1)/2}
\bigl( 1-\cos(\omega_N k) \bigr)
\left(1 + \frac{x^2}{N^2}\cdot\frac{1+\cos(\omega_N k)}{1-\cos(\omega_N k)}\right) \\
&\qquad = A\, x \prod_{k=1}^{(N-1)/2}
\left(1 + \frac{x^2 \bigl(1+\cos(\omega_N k) \bigr)}{N^2
\bigl(1-\cos(\omega_N k) \bigr)}\right)
\end{align*}
where $A$ is just the finite product
$(4/N)\prod_{k=1}^{(N-1)/2} \bigl(1-\cos(\omega_N k) \bigr)$.
The value $A$ is also the coefficient of the degree-one
term $x$ on the right-hand side; expanding the left-hand
side with the binomial theorem, the degree-one coefficient
is $2\binom{N}{1}/N = 2$, so $A=2$ for all $N$ (this
property of the binomial expansion was proven in full
generality by Pascal in 1654). As
$N$ grows larger we have that:
\[
\left(1 \pm \frac{x}{N}\right)^{\!N} \approx e^{\pm x}
\]
and
at the same time, if $N$ is large, then $\omega_N = 2\pi/N$
is small and, for small values of the angle, the cosine can
be approximated as
\[
\cos(\omega) \approx 1 - \frac{\omega^2}{2}
\]
so that the denominator in the general product term can, in
turn, be approximated as
\[
N^2
\left(1-\cos \left( \frac{ 2\pi }{ N} \, k \right) \right)
\approx N^2 \cdot \frac{4k^2\pi^2}{2N^2} = 2k^2 \pi^2
\]
By the same token, for large $N$, the numerator can be
approximated as $1+\cos((2\pi/N)k) \approx 2$ and therefore
(by bringing $A=2$ over to the left-hand side)
the above expansion becomes
\[
\frac{e^x - e^{-x}}{2}
= x \! \left( 1 + \frac{x^2}{\pi^2} \right)
\left(1 + \frac{x^2}{4\pi^2}\right) \left( 1 + \frac{x^2}{9\pi^2}\right)
\cdots
\]
Finally, we replace $x$ by $j\pi t$ to obtain:
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)
\]
\itempar{Rigorous Proof.}
Consider the Fourier series expansion of the \emph{even}
function $f(x) = \cos(\tau x)$ periodized over the interval
$[-\pi, \pi]$. We have
\[
f(x) = \frac{1}{2}a_0 + \sum_{n = 1}^{\infty} a_n \cos(nx)
\]
with
\begin{align*}
a_n & = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(\tau x)\cos(n x)\, dx \\
& = \frac{2}{\pi}\int_{0}^{\pi} \frac{1}{2}
\, \left( \cos \bigl( (\tau+n)x \bigr) + \cos \bigl((\tau-n)x \bigr)
\right) \, dx \\
& =
\frac{1}{\pi} \left( \frac{\sin \bigl((\tau+n)\pi \bigr)}{\tau + n}
+ \frac{\sin \bigl((\tau-n)\pi \bigr)}{\tau - n}\right) \\
& = \frac{2\sin (\tau\pi)}{\pi}\, \frac{(-1)^n \tau}{\tau^2 -n^2}
\end{align*}
so that
\[
\cos(\tau x) = \frac{2\tau \sin(\tau\pi)}{\pi}
\left( \frac{1}{2\tau^2} - \frac{\cos( x)}{\tau^2 - 1}
+ \frac{\cos(2x)}{\tau^2 - 2^2} -
\frac{\cos(3x)}{\tau^2 - 3^2}
+ \cdots \right)
\]
In particular, for $x = \pi$ we have
\[
\cot (\pi\tau) = \frac{2\tau}{\pi}
\left( \frac{1}{2\tau^2} + \frac{1}{\tau^2 - 1} +
\frac{1}{\tau^2 - 2^2} + \frac{1}{\tau^2 - 3^2} + \cdots\right)
\]
which we can rewrite as
\[
\pi\left(\cot(\pi\tau) -
\frac{1}{\pi\tau}\right)
= \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}
\]
If we now integrate between $0$ and $t$ both sides of the
equation we have
\[
\int_{0}^{t} \pi \left(\cot(\pi\tau) - \frac{1}{\pi\tau}\right)
d\tau =
\ln\frac{\sin(\pi\tau)}{\pi \tau} \biggr|_0^t
= \ln\left[ \frac{\sin(\pi t)}{\pi t}\right]
\]
and
\[
\int_{0}^{t} \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}\, d\tau
= \sum_{n = 1}^{\infty} \ln
\left[ 1 - \frac{t^2}{n^2}\right]
= \ln \left[ \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)\right]
\]
from which, finally,
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\]
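
Either derivation can be sanity-checked numerically; the following short Python sketch (illustrative only, with an arbitrary test point and truncation) compares a truncated version of the product with the closed form:
\begin{verbatim}
import numpy as np

t = 0.3                          # any non-integer test point
N = 100000                       # truncation of the infinite product
n = np.arange(1, N + 1)
partial = np.prod(1 - t**2 / n**2)
exact = np.sin(np.pi * t) / (np.pi * t)
print(exact, partial)            # the truncated product approaches sinc(t)
\end{verbatim}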
\section{Further Reading}
The sampling theorem is often credited to C.\ Shannon, and indeed it appears with an embryonic proof in his foundational 1948 paper ``A Mathematical Theory of Communication'', \textit{Bell System Technical Journal\/}, Vol.~27, 1948, pp.~379--423 and pp.~623--656.
Contemporary treatments can be found in all signal processing books, but also in more mathematical texts, such as S.\ Mallat's \textit{A Wavelet Tour of Signal Processing\/} (Academic Press, 1998). These more modern treatments take a Hilbert space point of view, which allows the extension of sampling theorems to more general spaces than just bandlimited functions. More recently, a renewed interest in sampling theory has been spurred by applications such as nonuniform sampling or the sampling of signals that, although not bandlimited, possess a finite rate of innovation (FRI sampling).
diff --git a/writing/sp4comm.multipub/90-sampling/99-is-exercises.tex b/writing/sp4comm.multipub/90-sampling/99-is-exercises.tex
index fb9b412..46acb15 100644
--- a/writing/sp4comm.multipub/90-sampling/99-is-exercises.tex
+++ b/writing/sp4comm.multipub/90-sampling/99-is-exercises.tex
@@ -1,428 +1,467 @@
\section{Exercises}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\ifexercises{%
\begin{exercise}{Zero-order hold}
-Consider a discrete-time sequence $x[n]$ with DTFT $X(e^{j\omega})$. Next, consider the continuous-time interpolated signal
-\[
- x_0(t) = \sum_{n=-\infty}^{\infty}x[n] \,\mbox{rect}\,(t-n)
-\]
-i.e.\ the signal interpolated with a zero-order hold and $T_s = 1$~sec.
-\begin{enumerate}
- \item Express $X_0(f)$ (the spectrum of $x_0(t)$) in terms of $X(e^{j\omega})$.
- \item Compare $X_0(f)$ to $X(f)$, the Fourier transform of the signal
+ Consider a discrete-time sequence $x[n]$ with DTFT $X(e^{j\omega})$. Next, consider the continuous-time interpolated signal
\[
- x(t) = \sum_{n\in\mathbb{Z}} x[n]\, \textrm{sinc} (t-n)
+ x_0(t) = \sum_{n=-\infty}^{\infty}x[n] \,\mbox{rect}\,(t-n)
\]
- Comment on the result: you should point out two major problems.
-\end{enumerate}
-As it appears, interpolating with a zero-order hold introduces in-band distortion in the region $|f| < 1/2$ and out-of-band spurious components at higher frequencies. Both problems could however be fixed by a well-designed continuous-time filter $G(f)$ applied to the ZOH interpolation.
-\begin{enumerate}
- \setcounter{enumi}{2}
- \item Sketch the frequency response $G(f)$
- \item Propose two solutions (one in the continuous-time omain, and another in the discrete-time domain) to eliminate or attenuate the in-band distortion due to the zero-order hold. Discuss the advantages and disadvantages of each.
-\end{enumerate}
+ i.e.\ the signal interpolated with a zero-order hold and $T_s = 1$~sec.
+ \begin{enumerate}
+ \item Express $X_0(f)$ (the spectrum of $x_0(t)$) in terms of $X(e^{j\omega})$.
+ \item Compare $X_0(f)$ to $X(f)$, the Fourier transform of the signal
+ \[
+ x(t) = \sum_{n\in\mathbb{Z}} x[n]\, \textrm{sinc} (t-n)
+ \]
+ Comment on the result: you should point out two major problems.
+ \end{enumerate}
+ As it appears, interpolating with a zero-order hold introduces in-band distortion in the region $|f| < 1/2$ and out-of-band spurious components at higher frequencies. Both problems could however be fixed by a well-designed continuous-time filter $G(f)$ applied to the ZOH interpolation.
+ \begin{enumerate}
+ \setcounter{enumi}{2}
+ \item Sketch the frequency response $G(f)$.
+ \item Propose two solutions (one in the continuous-time domain, and another in the discrete-time domain) to eliminate or attenuate the in-band distortion due to the zero-order hold. Discuss the advantages and disadvantages of each.
+ \end{enumerate}
\end{exercise}
+}\fi
+\ifanswers{%
+
+}\fi
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\ifexercises{%
\begin{exercise}{Interpolation.}
Consider a finite-energy discrete-time sequence $x[n]$ with DTFT $X(e^{j\omega})$ and the continuous-time interpolated signal
\[
x_0(t) = \sum_{n=-\infty}^{\infty} x[n] \mbox{rect}\,(t-n)
\]
i.e. a signal obtained from the discrete-time sequence using a zero-centered zero-order hold with interpolation period $T_s = 1$s. Let $X_0(f)$ be the Fourier transform of $x_0(t)$.
\begin{enumerate}
\item Express $X_0(f)$ in terms of $X(e^{j\omega})$.
\item Compare $X_0(f)$ to $X(f)$, where $X(f)$ is the spectrum of the continuous-time signal obtained using an ideal sinc interpolator with $T_s=1$:
\[
x(t) = \sum_{n=-\infty}^{\infty} x[n]\textrm{sinc}(t-n)
\]
Comment on the result: you should point out two major problems.
\item The signal $x(t)$ can be obtained back from the zero-order hold interpolation via a continuous-time filtering operation:
\[
x(t)=x_0(t)\ast g(t).
\]
Sketch the frequency response of the filter $g(t)$.
\item Propose two solutions (one in the continuous-time domain, and another in the discrete-time domain) to eliminate or attenuate the distortion due to the zero-order hold. Discuss the advantages and disadvantages of each.
\end{enumerate}
\end{exercise}
}\fi
\ifanswers{%
-
\begin{solution}{}
\begin{enumerate}
\item
\begin{align*}
X_0(f) &= \int_{-\infty}^{\infty} x_0(t)e^{-j2\pi f t} dt\\
&= \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty}x[n]\mbox{rect}\,(t-n)e^{-j2 \pi f t} dt \\
&= \sum_{n=-\infty}^{\infty} x[n] \int_{-\infty}^{\infty} \mbox{rect}\,(t-n)e^{-j2\pi f t} dt \\
&= \sum_{n=-\infty}^{\infty}x[n] e^{-j2\pi f n} \, \int_{-1/2}^{1/2} e^{-j2\pi f \tau} d\tau \\
&= X(e^{j2\pi f}) \, \frac{\sin(\pi f)}{\pi f}\, \\
&= \mbox{sinc}(f)\,X(e^{j2\pi f}).
\end{align*}
\item To understand the effects of the zero-order hold consider for instance a discrete-time signal with a triangular
spectrum, as shown in the left panel below. We know that sinc interpolation will exactly preserve the shape of the spectrum and return a continuous-time signal that is strictly bandlimited to the $[-F_s/2, F_s/2]$ interval (with $F_s = 1/T_s = 1$), that is:
\[
X(f) = \begin{cases}
X(e^{j2\pi f}) & |f| < 1/2 \\
0 & \mbox{otherwise}
\end{cases}
\]
as shown in the right panel below.
\begin{center}
\begin{tabular}{cc}
$X(e^{j\omega})$ & $X(f)$ \\
\begin{dspPlot}[xtype=freq,xout=true,width=5cm,height=2cm]{-1,1}{0,1.2}
\dspFunc{x \dspTri{0}{1}}
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,xticks=0.5,yticks=none,width=5cm,height=2cm]{-1.5,1.5}{0,1.2}
\dspFunc{x \dspTri{0}{0.5}}
\end{dspPlot}
\end{tabular}
\end{center}
Conversely, the spectrum of the continuous-time signal interpolated by the zero-order hold is the product of $X(e^{j2\pi f})$, which is periodic with period $F_s = 1$ Hz, and the term $\sinc(f)$, whose first spectral null is for $f=1$ Hz. Here are the two terms, and their product, in magnitude:
\begin{center}
\begin{tabular}{cc}
$X(e^{j2\pi f})$ & $\sinc(f)$ \\
\begin{dspPlot}[sidegap=0,xticks=custom,yticks=custom,width=5cm,height=2cm]{-5,5}{0,1.2}
\dspCustomTicks[axis=x]{0 0 2 1 4 2}
\dspFunc{x \dspPeriodize \dspTri{0}{1}}
\end{dspPlot}
&
\begin{dspPlot}[sidegap=0,xticks=custom,yticks=none,width=5cm,height=2cm]{-5,5}{0,1.2}
\dspFunc{x \dspSinc{0}{2} abs}
\dspCustomTicks[axis=x]{0 0 2 1 4 2}
\end{dspPlot}
\end{tabular}
$X_0(f)$ \\
\begin{dspPlot}[sidegap=0,xticks=2,yticks=none]{-4.5,4.5}{0,1.2}
\dspFunc{x \dspPeriodize \dspTri{0}{1} x \dspSinc{0}{2} mul abs}
\dspCustomTicks[axis=x]{0 0 2 1 4 2}
\end{dspPlot}
\end{center}
There are two main problems in the zero-order hold interpolation as compared to the sinc interpolation:
\begin{itemize}
\item The zero-order hold interpolation is NOT bandlimited: the $2\pi$-periodic replicas of the digital spectrum leak into
the continuous-time signal as high-frequency components. This is due to the sidelobes of the interpolation function in the frequency domain (rect in time $\leftrightarrow$ sinc in frequency) and it represents an undesirable high-frequency content which is typical of all local interpolation schemes.
\item There is a distortion in the main portion of the spectrum (that between $-F_s/2$ and $F_s/2 = 0.5$~Hz) due to the non-flat frequency response of the interpolation function. It can be seen if we zoom in on the baseband portion:
\begin{center}
\begin{dspPlot}[sidegap=0,xticks=2,yticks=none]{-1,1}{0,1.2}
\dspFunc{x \dspPeriodize \dspTri{0}{1} x \dspSinc{0}{2} mul abs}
\dspCustomTicks[axis=x]{0 0 1 0.5}
\end{dspPlot}
\end{center}
\end{itemize}
\item As we have seen, $X(f) = X(e^{j2\pi f}) \rect(f)$ while $X_0(f) = \mbox{sinc}(f)\,X(e^{j2\pi f})$. Therefore
\[
G(f) = \begin{cases}
\frac{1}{\textrm{sinc}\left(f\right)} & |f| < 1/2 \\
0 & \mbox{otherwise}
\end{cases}
\]
\item A first solution is to compensate in the discrete-time domain for the in-band distortion introduced by the zero-order hold. This is
equivalent to pre-filtering $x[n]$ with a discrete-time filter of magnitude $1/\sinc\left(\omega/2\pi\right)$, i.e.\ the discrete-time equivalent of $G(f)$ over the baseband. The advantage of this method is that digital filters such as this one are relatively easy to design and that the filtering can be done in the discrete-time domain. The disadvantage is that this approach does not eliminate or attenuate the high-frequency leakage outside of the baseband.
In continuous time, one could cascade the interpolator with an analog lowpass filter to eliminate the leakage. The disadvantage is that it is hard to implement an analog lowpass filter which can also compensate for the in-band distortion introduced by the zero-order hold; such a filter would also introduce unavoidable phase distortion (no analog filter has linear phase).
\end{enumerate}
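As an informal numerical cross-check of point 1 (a sketch only: the test sequence, grids and step size are arbitrary choices), one can compute the Fourier integral of $x_0(t)$ by brute force and compare it with the product $\sinc(f)\,X(e^{j2\pi f})$:
\begin{verbatim}
import numpy as np

x = np.array([1.0, 0.7, -0.2, -1.0, 0.5])    # a short test sequence
f = np.linspace(-3, 3, 301)                  # frequency grid (Ts = 1)

# DTFT of x[n] evaluated at omega = 2 pi f
n = np.arange(len(x))
X_dtft = x @ np.exp(-2j * np.pi * np.outer(n, f))

# brute-force Fourier integral of the ZOH signal (rect centered on each n)
dt = 1e-3
t = np.arange(-0.5, len(x) - 0.5, dt)
x0 = x[np.floor(t + 0.5).astype(int)]
X0 = np.exp(-2j * np.pi * np.outer(f, t)) @ x0 * dt

# the difference is small, limited by the discretization of the integral
print(np.max(np.abs(X0 - np.sinc(f) * X_dtft)))
\end{verbatim}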
\end{solution}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\ifexercises{%
\begin{exercise}{A bizarre interpolator}
-Consider the local interpolation scheme of the previous exercise but assume that the characteristic of the
-interpolator is the following:
-\[
- I(t) = \begin{cases}
- 1 - 2|t| & \mbox{ for } |t| \leq 1/2 \\
- 0 & \mbox{ otherwise}
- \end{cases}
-\]
-This is a triangular characteristic with the same support as the zero-order hold. If we pick an interpolation interval $T_s$ and interpolate a discrete-time signal $x[n]$ with $I(t)$, we obtain the continuous-time signal:
-\[
- x(t) = \sum_n x[n]\, I\! \left( \frac{t-nT_s}{T_s} \right)
-\]
-which looks like this:
-\begin{center}
- \begin{dspPlot}[height=\dspHeightCol,xout=true]{0,5}{-1.2,1.2}
- \psset{linewidth=1pt,linecolor=ColorCT}
- \psline(-.5, 0)(0, 1)(.5, 0)
- \psline(.5, 0)(1, 0.7)(1.5, 0)
- \psline(1.5, 0)(2, -0.2)(2.5, 0)
- \psline(2.5, 0)(3, -1)(3.5, 0)
- \psline(3.5, 0)(4, 0.5)(4.5, 0)
- \psline(4.5, 0)(5, -0.5)(5.5, 0)
- \psset{linecolor=black}
- \dspTaps[linecolor=ColorDT]{0 1 1 0.7 2 -0.2 3 -1 4 0.5 5 -0.5}%
- \end{dspPlot}
-\end{center}
-
-Assume that the spectrum of $x[n]$ between $-\pi$ and $\pi$ is
-\[
- X(e^{j\omega}) =
- \begin{cases}
- 1 & \mbox{ for } |\omega| \leq 2\pi/3 \\
- 0 & \mbox{ otherwise}
- \end{cases}
-\]
-(with the obvious $2\pi$-periodicity over the entire frequency axis).
-\begin{enumerate}
- \item Compute and sketch the Fourier transform $I(f)$ of the interpolating function $I(t)$. (Recall that the triangular function can be expressed as the convolution of $\mbox{rect}(t/2)$ with itself).
- \item Sketch the Fourier transform $X(f)$ of the interpolated signal $x(t)$; in particular, clearly mark the Nyquist frequency $f_N = 1/(2T_s)$.
- \item The use of $I(t)$ instead of a sinc interpolator introduces two types of errors: briefly describe them.
- \item To eliminate the error \emph{in the baseband} $[-f_N, f_N]$ we can pre-filter the signal $x[n]$ with a filter $h[n]$ \emph{before} interpolating with
-$I(t)$. Write the frequency response of the discrete-time filter $H(e^{j\omega})$.
-\end{enumerate}
+ Consider the local interpolation scheme of the previous exercise but assume that the characteristic of the
+ interpolator is the following:
+ \[
+ I(t) = \begin{cases}
+ 1 - 2|t| & \mbox{ for } |t| \leq 1/2 \\
+ 0 & \mbox{ otherwise}
+ \end{cases}
+ \]
+ This is a triangular characteristic with the same support as the zero-order hold. If we pick an interpolation interval $T_s$ and interpolate a discrete-time signal $x[n]$ with $I(t)$, we obtain the continuous-time signal:
+ \[
+ x(t) = \sum_n x[n]\, I\! \left( \frac{t-nT_s}{T_s} \right)
+ \]
+ which looks like this:
+ \begin{center}
+ \begin{dspPlot}[height=\dspHeightCol,xout=true]{0,5}{-1.2,1.2}
+ \psset{linewidth=1pt,linecolor=ColorCT}
+ \psline(-.5, 0)(0, 1)(.5, 0)
+ \psline(.5, 0)(1, 0.7)(1.5, 0)
+ \psline(1.5, 0)(2, -0.2)(2.5, 0)
+ \psline(2.5, 0)(3, -1)(3.5, 0)
+ \psline(3.5, 0)(4, 0.5)(4.5, 0)
+ \psline(4.5, 0)(5, -0.5)(5.5, 0)
+ \psset{linecolor=black}
+ \dspTaps[linecolor=ColorDT]{0 1 1 0.7 2 -0.2 3 -1 4 0.5 5 -0.5}%
+ \end{dspPlot}
+ \end{center}
+
+ Assume that the spectrum of $x[n]$ between $-\pi$ and $\pi$ is
+ \[
+ X(e^{j\omega}) =
+ \begin{cases}
+ 1 & \mbox{ for } |\omega| \leq 2\pi/3 \\
+ 0 & \mbox{ otherwise}
+ \end{cases}
+ \]
+ (with the obvious $2\pi$-periodicity over the entire frequency axis).
+ \begin{enumerate}
+ \item Compute and sketch the Fourier transform $I(f)$ of the interpolating function $I(t)$. (Recall that the triangular function can be expressed as the convolution of $\mbox{rect}(t/2)$ with itself).
+ \item Sketch the Fourier transform $X(f)$ of the interpolated signal $x(t)$; in particular, clearly mark the Nyquist frequency $f_N = 1/(2T_s)$.
+ \item The use of $I(t)$ instead of a sinc interpolator introduces two types of errors: briefly describe them.
+ \item To eliminate the error \emph{in the baseband} $[-f_N, f_N]$ we can pre-filter the signal $x[n]$ with a filter $h[n]$ \emph{before} interpolating with
+ $I(t)$. Write the frequency response of the discrete-time filter $H(e^{j\omega})$.
+ \end{enumerate}
\end{exercise}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\ifexercises{%
\begin{exercise}{Aliasing or not.}
Consider a bandlimited continuous-time signal $x(t)$ with the following spectrum $X(f)$:
\begin{center}
\begin{dspPlot}[sidegap=0,xout=true,xticks=custom,yticks=custom]{-4,4}{0,1.2}
\dspFunc{x \dspQuad{0}{1}}
\dspCustomTicks[axis=x]{1 $2$KHz}
\end{dspPlot}
\end{center}
Sketch the DTFT of the discrete-time signal $x[n] = x(n/F_s)$ for the cases $F_s = 4$KHz and $F_s = 2$KHz.
\end{exercise}
-
}\fi
\ifanswers{%
-
\begin{solution}{}
For $F_s =4$KHz there is no aliasing and the spectrum is like so:
\begin{center}
\begin{dspPlot}[xtype=freq,xout=true,yticks=none]{-1,1}{0,1.2}
\dspFunc{x \dspQuad{0}{1}}
\end{dspPlot}
\end{center}
For $F_s = 2$KHz there is aliasing and we have
\begin{center}
\begin{dspPlot}[xtype=freq,xout=true,yticks=none]{-1,1}{0,1.2}
\dspFunc{
x \dspQuad{-2}{2}
x \dspQuad{0}{2}
x \dspQuad{2}{2}
add add 2 div}
\end{dspPlot}
\end{center}
\end{solution}
+}\fi
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\ifexercises{%
\begin{exercise}{Another view of Sampling}
An alternative way of describing the sampling operation relies on the concept of \textit{modulation by a pulse train}. Given a sampling interval $T_s$, a continuous-time pulse train $p(t)$ is an infinite collection of equally spaced Dirac deltas:
\[
p(t) = \sum_{k=-\infty}^{\infty}\delta(t-kT_s).
\]
The pulse train is then used to modulate a continuous-time signal:
\[
x_s(t) = p(t)\,x(t).
\]
Intuitively, $x_s(t)$ represents a ``hybrid'' signal where the nonzero values are only those of the discrete-time samples that would be obtained by raw-sampling $x(t)$ with period $T_s$; however, instead of representing the samples as a countable sequence (i.e.\ with a different mathematical object) we are still using a continuous-time signal that is nonzero only over infinitesimally short instants centered on the sampling times. Using Dirac deltas allows us to embed the instantaneous sampling values in the signal.
Note that the Fourier Transform of the pulse train is
\[
P(f) = F_s \sum_{k=-\infty}^{\infty} \delta \! \left( f- k F_s \right)
\]
(where, as per usual, $F_s = 1/T_s$). This result is a bit tricky to show rigorously, but the intuition is that a periodic set of pulses in time produces a periodic set of pulses in frequency, and that the spacing between pulses in frequency is inversely proportional to the spacing between pulses in time.
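Although a rigorous derivation requires distribution theory, here is a plausibility argument (a sketch that glosses over all convergence issues): $p(t)$ is $T_s$-periodic, so it admits a Fourier series expansion; over one period only the pulse at the origin contributes to the coefficient integral, so all the coefficients are equal:
\[
A_n = \frac{1}{T_s}\int_{-T_s/2}^{T_s/2} \delta(t)\, e^{-j\frac{2\pi}{T_s}nt}\, dt = \frac{1}{T_s} = F_s.
\]
Therefore $p(t) = F_s \sum_{n} e^{\,j2\pi F_s nt}$ and, transforming the expansion term by term (each complex exponential maps to a Dirac delta in frequency), we recover the expression for $P(f)$ above.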
Derive the Fourier transform of $x_s(t)$ and show that if $x(t)$ is bandlimited to $F_s/2$, where $F_s = 1/T_s$, then we can reconstruct $x(t)$ from $x_s(t)$.
\end{exercise}
+}\fi
+\ifanswers{%
\begin{solution}{}
By using the modulation theorem, the product in time becomes a convolution in frequency:
\begin{align*}
X_s(f) &= X(f) \ast P(f) \\
&= \int_\mathbb{R} X(\varphi)P(f - \varphi)d\varphi \\
&= F_s \int_\mathbb{R} X(\varphi) \sum_{k \in \mathbb{Z}} \delta\left(f - \varphi - k F_s\right)d\varphi \\
&= F_s \sum_{k \in \mathbb{Z}} X\left(f - kF_s\right).
\end{align*}
In other words, the spectrum of the delta-modulated signal is the periodization (with period $F_s=1/T_s$) of the original spectrum. If the latter is bandlimited to $F_s/2$ there will be no overlap between copies in the periodization and therefore $x(t)$ can be obtained simply by lowpass filtering $x_s(t)$ in the continuous-time domain.
\end{solution}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\ifexercises{%
\begin{exercise}{Bandpass Sampling}
-Consider a real, continuous-time signal $x_c(t)$ with the following spectrum $X_c(f)$:
-\begin{center}
- \begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=custom]{-4,4}{0, 1.2}
- \psline[linecolor=ColorCF](-4,0)(-2,0)(-2,1)(-1,0)(1,0)(2,1)(2,0)(4,0)
- \dspCustomTicks{-2 $-2f_0$ -1 $-f_0$ 0 0 1 $f_0$ 2 $2f_0$}
- \end{dspPlot}
-\end{center}
-\begin{enumerate}
- \item What is the bandwidth of the signal? What is the minimum sampling period in order to satisfy the sampling theorem?
- \item Take a sampling period $T_s = 1/(2f_0)$; clearly, with this sampling period, there will be aliasing. Plot the DTFT of the discrete-time signal $x_a[n] = x_c(nT_s)$.
- \item Suggest a block diagram to reconstruct $x_c(t)$ from $x_a[n]$.
- \item With such a scheme available, we can therefore exploit aliasing to reduce the sampling frequency necessary to sample a bandpass signal. In general, what is the minimum sampling frequency to be able to reconstruct, with the above strategy, a real-valued signal whose frequency support on the positive axis is $[f_0, f_1]$ (with the usual symmetry around zero, of course)?
-\end{enumerate}
+ Consider a real, continuous-time signal $x_c(t)$ with the following spectrum $X_c(f)$:
+ \begin{center}
+ \begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=custom]{-4,4}{0, 1.2}
+ \psline[linecolor=ColorCF](-4,0)(-2,0)(-2,1)(-1,0)(1,0)(2,1)(2,0)(4,0)
+ \dspCustomTicks{-2 $-2f_0$ -1 $-f_0$ 0 0 1 $f_0$ 2 $2f_0$}
+ \end{dspPlot}
+ \end{center}
+ \begin{enumerate}
+ \item What is the bandwidth of the signal? What is the minimum sampling period in order to satisfy the sampling theorem?
+ \item Take a sampling period $T_s = 1/(2f_0)$; clearly, with this sampling period, there will be aliasing. Plot the DTFT of the discrete-time signal $x_a[n] = x_c(nT_s)$.
+ \item Suggest a block diagram to reconstruct $x_c(t)$ from $x_a[n]$.
+ \item With such a scheme available, we can therefore exploit aliasing to reduce the sampling frequency necessary to sample a bandpass signal. In general, what is the minimum sampling frequency to be able to reconstruct, with the above strategy, a real-valued signal whose frequency support on the positive axis is $[f_0, f_1]$ (with the usual symmetry around zero, of course)?
+ \end{enumerate}
\end{exercise}
+}\fi
+\ifanswers{%
\begin{solution}{}
\begin{enumerate}
\item The highest nonzero frequency is $2f_0$ and therefore $x_c(t)$ is $4f_0$-bandlimited. The minimum sampling frequency that satisfies the sampling theorem is $F_s = 4f_0$. Note however that the support over which the (positive) spectrum is nonzero is the interval $[f_0, 2f_0]$ so that one could say that the total \emph{effective} bandwidth of the signal is only $2f_0$.
\item The digital spectrum will be the periodized continuous-time spectrum, rescaled to $[-\pi, \pi]$; the periodization after sampling at a frequency $F_a = 2f_0$, yields
\[
\tilde{X}_c(f) = \sum_{k = -\infty}^{\infty} X_c(f - 2kf_0).
\]
The general term $X_c(f - 2kf_0)$ is nonzero for $f_0 \leq |f - 2kf_0| \leq 2f_0$, $k\in\mathbb{Z}$, or, equivalently,
\begin{align*}
(2k+1)f_0 & \leq f \leq (2k+2)f_0 \\
(2k-2)f_0 & \leq f \leq (2k-1)f_0.
\end{align*}
These are non-overlapping intervals and, therefore, no disruptive superposition will occur. The DTFT of the sampled signal is
\[
X_a(e^{j\omega}) = 2f_0 \sum_{k = -\infty}^{\infty} X_c(\frac{\omega}{\pi}f_0 - 2kf_0);
\]
for instance, as $\omega$ goes from zero to $\pi$, the nonzero contribution to the DTFT will be the term $X_c(\frac{\omega}{\pi}f_0 - 2f_0)$ where the argument goes from $-2f_0$ to $-f_0$. The spectrum is represented here with periodicity explicit:
\begin{center}
\begin{dspPlot}[xtype=freq,xout=true,xticks=1,yticks=custom]{-4,4}{0,1.2}
\dspFunc{x \dspPeriodize \dspTri{0}{1}}
\dspCustomTicks[axis=y]{1 $2f_0$}
\end{dspPlot}
\end{center}
\item Here's a possible scheme:
\begin{itemize}
\item Sinc-interpolate $x_a[n]$ with period $T_a = 1/F_a$ to obtain $x_b(t)$; the spectrum will be like so:
\begin{center}
\begin{dspPlot}[xtype=freq,xout=true,xticks=custom,yticks=custom,width=5cm,height=2cm]{-4,4}{0,1.2}
\dspFunc{x -2 lt {0} {x 2 gt {0} {x \dspPeriodize \dspTri{0}{1}} ifelse} ifelse}
\dspCustomTicks[axis=y]{1 $2f_0$}
\end{dspPlot}
\end{center}
\item filter $x_b(t)$ in continuous time with an ideal bandpass filter with (positive) passband equal to $[f_0, 2f_0]$ to obtain $x_c(t)$.
\end{itemize}
\item The effective \emph{positive} bandwidth of a signal whose spectrum is nonzero only over $[-f_1, -f_0] \cup [f_0, f_1]$ is $W = f_1 - f_0$. Obviously the sampling frequency must be at least equal to the \textit{total} effective bandwidth, so a first condition on the sampling frequency is $F_s \geq 2W.$ We can now distinguish two cases.
\begin{itemize}
\item[1)] assume $f_1$ is a multiple of the positive bandwidth, i.e.\ $f_1 = MW$ for some integer $M > 1$ (for $x[n]$ above, it was $M = 2$). Then the argument we made before can be easily generalized: if we pick $F_s = 2W$ and sample we have that
\[
\tilde{X}_c(f) = \sum_{k = -\infty}^{\infty} X_c(f - 2kW).
\]
The general term $X_c(f - 2kW)$ is nonzero only for
\[
f_0 \leq |f - 2kW| \leq f_1 \quad \textrm{for } k\in\mathbb Z.
\]
Since $f_0 = f_1 - W = (M-1)W$, this translates to
\begin{align*}
(2k+M-1)W & \leq f \leq (2k+M)W \\
(2k-M)W & \leq f \leq (2k-M+1)W
\end{align*}
which, once again, are non-overlapping intervals.
\item[2)] if $f_1$ is \emph{not} a multiple of $W$ the easiest thing to do is to decrease the lower frequency $f_0$ to a new {\em smaller} frequency $f_0'$ so that the new positive bandwidth $W' = f_1 - f_0'$ divides $f_1$ exactly. In other words, we set a new lower frequency $f_0'$ so that $f_1 = M(f_1-f_0')$ for some integer $M>1$; it is easy to see that
\[
M = \biggl\lfloor \frac{f_1}{f_1 - f_0} \biggr\rfloor
\]
since this is the maximum number of copies of the $W$-wide spectrum which fit \emph{with no overlap} in the $[0, f_1]$ interval (if $W > f_0$ then $M = 1$ and we cannot hope to reduce the sampling frequency: we fall back to normal sampling). This artificial enlargement of the band will leave a small empty ``gap'' in the new bandwidth $[f_0', f_1]$, but that is not a problem. Now we use the previous result and sample at $F_s = 2(f_1 - f_0')$ with no overlaps. Since $f_1 - f_0' = f_1/M$, we have that, in conclusion, the minimum sampling frequency is
\[
F_s = 2f_1 / \biggl\lfloor \frac{f_1}{f_1 - f_0} \biggr\rfloor
\]
i.e.\ we obtain a sampling frequency reduction factor of $\lfloor f_1/(f_1 - f_0) \rfloor$.
\end{itemize}
\end{enumerate}
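The final formula is easy to package as a small numerical helper (an illustrative sketch; the function name and the test values are ours):
\begin{verbatim}
import math

def min_bandpass_rate(f0, f1):
    # M: maximum number of W-wide copies fitting in [0, f1] with no overlap
    M = math.floor(f1 / (f1 - f0))
    return 2 * f1 / M

print(min_bandpass_rate(1.0, 2.0))   # the case above (f1 = 2 f0): 2.0
print(min_bandpass_rate(1.5, 2.0))   # Fs = 1.0, i.e. twice the bandwidth
\end{verbatim}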
\end{solution}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\ifexercises{%
\begin{exercise}{Digital processing of continuous-time signals}
-For your birthday, you receive an unexpected present: a $4$~MHz sampler, complete with anti-aliasing filter. This means you can safely sample signals up to a frequency of $2$ ~MHz; since this frequency is above the AM radio frequency band, you decide to hook up the sampler to your favorite signal processing system and build an entirely digital radio receiver. In this exercise we will explore how to do so.
-
-To simplify things a bit, assume that the AM radio spectrum extends from $1$~Mhz to $1.2$~Mhz and that in this band you have ten channels side by side, each one of which occupies $20$~KHz.
-\begin{enumerate}
- \item Sketch the digital spectrum at the output of the A/D converter, and show the bands occupied by the channels, numbered from $1$ to $10$, with their beginning and end frequencies.
-\end{enumerate}
-
-The first thing that you need to do is to find a way to isolate
-the channel you want to listen to and to eliminate the rest.
-For this, you need a bandpass filter centered on the band of
-interest. Of course, this filter must be {\em tunable} in
-the sense that you must be able to change its spectral
-location when you want to change station. An easy way to
-obtain a tunable bandpass filter is by modulating a lowpass
-filter with a sinusoidal oscillator whose frequency is
-controllable by the user:
-\begin{enumerate}\setcounter{enumi}{1}
- \item As an example of a tunable filter, assume $h[n]$ is an ideal lowpass filter with cutoff frequency $\pi/8$. Plot the magnitude response of the filter $h_m[n] = \cos(\omega_m n)h[n]$, where $\omega_m = \pi/2$; $\omega_m$ is called the {\em tuning frequency}.
- \item Specify the cutoff frequency of a lowpass filter which can be used to select one of the AM channels above.
- \item Specify the tuning frequencies for channel~$1$, $5$ and $10$.
-\end{enumerate}
-
-Now that you know how to select a channel, all that is left to do is to demodulate the signal and feed it to an interpolator and then to a loudspeaker.
-
-\begin{enumerate}\setcounter{enumi}{4}
- \item Sketch the complete block diagram of the radio receiver, from the antenna going into the sampler to the final loudspeaker. Use only one sinusoidal oscillator. Do not forget the filter before the interpolator (specify its bandwidth).
-\end{enumerate}
-
-The whole receiver now works at a rate of $4$ MHz; since it outputs audio signals, this is clearly a waste.
-\begin{enumerate}\setcounter{enumi}{5}
- \item Which is the minimum interpolation frequency you can use? Modify the receiver's block diagram with the necessary elements to use a lower frequency interpolator.
-\end{enumerate}
+ For your birthday, you receive an unexpected present: a $4$~MHz sampler, complete with anti-aliasing filter. This means you can safely sample signals up to a frequency of $2$~MHz; since this frequency is above the AM radio frequency band, you decide to hook up the sampler to your favorite signal processing system and build an entirely digital radio receiver. In this exercise we will explore how to do so.
+
+ To simplify things a bit, assume that the AM radio spectrum extends from $1$~MHz to $1.2$~MHz and that in this band you have ten channels side by side, each one of which occupies $20$~KHz.
+ \begin{enumerate}
+ \item Sketch the digital spectrum at the output of the A/D converter, and show the bands occupied by the channels, numbered from $1$ to $10$, with their beginning and end frequencies.
+ \end{enumerate}
+
+ The first thing that you need to do is to find a way to isolate
+ the channel you want to listen to and to eliminate the rest.
+ For this, you need a bandpass filter centered on the band of
+ interest. Of course, this filter must be {\em tunable} in
+ the sense that you must be able to change its spectral
+ location when you want to change station. An easy way to
+ obtain a tunable bandpass filter is by modulating a lowpass
+ filter with a sinusoidal oscillator whose frequency is
+ controllable by the user:
+ \begin{enumerate}\setcounter{enumi}{1}
+ \item As an example of a tunable filter, assume $h[n]$ is an ideal lowpass filter with cutoff frequency $\pi/8$. Plot the magnitude response of the filter $h_m[n] = \cos(\omega_m n)h[n]$, where $\omega_m = \pi/2$; $\omega_m$ is called the {\em tuning frequency}.
+ \item Specify the cutoff frequency of a lowpass filter which can be used to select one of the AM channels above.
+ \item Specify the tuning frequencies for channel~$1$, $5$ and $10$.
+ \end{enumerate}
+
+ Now that you know how to select a channel, all that is left to do is to demodulate the signal and feed it to an interpolator and then to a loudspeaker.
+
+ \begin{enumerate}\setcounter{enumi}{4}
+ \item Sketch the complete block diagram of the radio receiver, from the antenna going into the sampler to the final loudspeaker. Use only one sinusoidal oscillator. Do not forget the filter before the interpolator (specify its bandwidth).
+ \end{enumerate}
+
+ The whole receiver now works at a rate of $4$ MHz; since it outputs audio signals, this is clearly a waste.
+ \begin{enumerate}\setcounter{enumi}{5}
+ \item Which is the minimum interpolation frequency you can use? Modify the receiver's block diagram with the necessary elements to use a lower frequency interpolator.
+ \end{enumerate}
\end{exercise}
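Although the full design is left as an exercise, the tuning principle itself is easy to visualize numerically (an informal sketch with a truncated ideal lowpass; all values are arbitrary): modulating the impulse response by $\cos(\omega_m n)$ splits and shifts the passband to $\pm\omega_m$, with the gain halved.
\begin{verbatim}
import numpy as np

n = np.arange(-200, 201)
h = np.sinc(n / 8) / 8                # truncated ideal lowpass, cutoff pi/8
hm = np.cos(np.pi / 2 * n) * h        # tuning frequency wm = pi/2

def mag(w0):                          # |H_m(e^{j w0})|
    return abs(np.exp(-1j * w0 * n) @ hm)

print(mag(0.0), mag(np.pi / 2))       # ~0 at DC, ~0.5 at the tuned band
\end{verbatim}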
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
+\ifexercises{%
\begin{exercise}{Acoustic aliasing}
-Assume $x(t)$ is a continuous-time pure sinusoid at $10$ KHz. It is sampled with a sampler at $8$ KHz and then interpolated back to a continuous-time signal with an interpolator at $8$ KHz. What is the perceived frequency of the interpolated sinusoid?
+ Assume $x(t)$ is a continuous-time pure sinusoid at $10$ KHz. It is sampled with a sampler at $8$ KHz and then interpolated back to a continuous-time signal with an interpolator at $8$ KHz. What is the perceived frequency of the interpolated sinusoid?
\end{exercise}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
+\ifexercises{%
\begin{exercise}{Interpolation subtleties}
-We have seen that any discrete-time sequence can be sinc-interpolated into a continuous-time signal which is $F_s$-bandlimited; $F_s$ depends on the
-interpolation interval $T_s$ via the relation $F_s = 1/T_s$.
-
-Consider the continuous-time signal $x_c(t) = e^{-t/T_s}u(t)$ and the discrete-time sequence $x[n] = e^{-n}u[n]$. Clearly, $x_c(nT_s) = x[n]$; but, can we also say that $x_c(t)$ is the signal we obtain if we apply sinc interpolation to the sequence $x[n] = e^{-n}$ with interpolation interval $T_s$?
-Explain in detail.
+ We have seen that any discrete-time sequence can be sinc-interpolated into a continuous-time signal which is $F_s$-bandlimited; $F_s$ depends on the
+ interpolation interval $T_s$ via the relation $F_s = 1/T_s$.
+
+ Consider the continuous-time signal $x_c(t) = e^{-t/T_s}u(t)$ and the discrete-time sequence $x[n] = e^{-n}u[n]$. Clearly, $x_c(nT_s) = x[n]$; but, can we also say that $x_c(t)$ is the signal we obtain if we apply sinc interpolation to the sequence $x[n] = e^{-n}$ with interpolation interval $T_s$?
+ Explain in detail.
\end{exercise}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
+\ifexercises{%
\begin{exercise}{Time and frequency}
-Consider a real continuous-time signal $x(t)$. All you know about the signal is that $x(t) = 0$ for $|t| > t_0$. Can you determine a sampling frequency $F_s$ so that when you sample $x(t)$, there is no aliasing? Explain.
+ Consider a real continuous-time signal $x(t)$. All you know about the signal is that $x(t) = 0$ for $|t| > t_0$. Can you determine a sampling frequency $F_s$ so that when you sample $x(t)$, there is no aliasing? Explain.
\end{exercise}
+}\fi
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
+\ifexercises{%
\begin{exercise}{Aliasing in time}\label{ex:is:aliasTimeEx}
-Consider an $N$-periodic discrete-time signal
-$\tilde{x}[n]$, with $N$ an \emph{even} number, and let
-$\tilde{X}[k]$ be its DFS:
-\[
- \tilde{X}[k] = \sum_{n=0}^{N-1}\tilde{x}[n]
-\, e^{-j\frac{2\pi}{N}nk}\, ,\qquad \quad k\in \mathbb{Z}
-\]
-Let $\tilde{Y}[m] = \tilde{X}[2m]$, i.e.\ a ``subsampled''
-version of the DFS coefficients; clearly this defines a
-$(N/2)$-periodic sequence of DFS coefficients.
-%Consider
-Now consider
-the $(N/2)$-point inverse DFS of $\tilde{Y}[m]$ and call
-this $(N/2)$-periodic signal $\tilde{y}[n]$:
-\[
- \tilde{y}[n] = \frac{2}{N}\sum_{k=0}^{N/2-1}\tilde{Y}[k]
-\, e^{j\frac{2\pi}{N/2}nk},\qquad \quad n \in \mathbb{Z}
-\]
-Express $\tilde{y}[n]$ in terms of $\tilde{x}[n]$ and
-describe in a few words what has happened to $\tilde{x}[n]$
-and why.
+ Consider an $N$-periodic discrete-time signal
+ $\tilde{x}[n]$, with $N$ an \emph{even} number, and let
+ $\tilde{X}[k]$ be its DFS:
+ \[
+ \tilde{X}[k] = \sum_{n=0}^{N-1}\tilde{x}[n]
+ \, e^{-j\frac{2\pi}{N}nk}\, ,\qquad \quad k\in \mathbb{Z}
+ \]
+ Let $\tilde{Y}[m] = \tilde{X}[2m]$, i.e.\ a ``subsampled''
+ version of the DFS coefficients; clearly this defines a
+ $(N/2)$-periodic sequence of DFS coefficients.
+ %Consider
+ Now consider
+ the $(N/2)$-point inverse DFS of $\tilde{Y}[m]$ and call
+ this $(N/2)$-periodic signal $\tilde{y}[n]$:
+ \[
+ \tilde{y}[n] = \frac{2}{N}\sum_{k=0}^{N/2-1}\tilde{Y}[k]
+ \, e^{j\frac{2\pi}{N/2}nk},\qquad \quad n \in \mathbb{Z}
+ \]
+ Express $\tilde{y}[n]$ in terms of $\tilde{x}[n]$ and
+ describe in a few words what has happened to $\tilde{x}[n]$
+ and why.
\end{exercise}
+}\fi
diff --git a/writing/sp4comm.multipub/sp4comm.tex b/writing/sp4comm.multipub/sp4comm.tex
index 8c1a406..9bfdc1c 100644
--- a/writing/sp4comm.multipub/sp4comm.tex
+++ b/writing/sp4comm.multipub/sp4comm.tex
@@ -1,165 +1,165 @@
%% Test book for multipub
%%
\documentclass[12pt,a4paper,fleqn]{book}
% include multipub package and declare desired format (PRINT | EPUB | KINDLE)
% (note: cannot use package and option formalism because it breaks LateXML)
\input{../multipub/toolbox/tex/multipub}
\multipub{PRINT}
% packages included here must be common to all versions, i.e. they need
% to have LateXML bindings available
\usepackage{makeidx}
\usepackage{amsmath, amssymb}
\usepackage{graphicx}
\usepackage{url}
% now do target-specific includes and inits
\begin{PRINT}
\include{styles/printlayout}
\include{styles/color}
% \include{styles/grayscale}
\end{PRINT}
\begin{KINDLE}
\include{styles/printlayout}
\end{KINDLE}
\begin{HTML}
\include{styles/epublayout}
\end{HTML}
\begin{EPUB}
\include{styles/epublayout}
\end{EPUB}
% Here we can define some macros common to all versions. This, for instance,
% produces numberless sections that still appear in headers and TOC
\newcommand{\codasection}[1]{%
\section*{#1}%
\markright{#1}%
\addcontentsline{toc}{section}{#1}}
\newcommand{\itempar}[1]{\par\vspace{1em}\noindent{\sffamily\bfseries #1}\hspace{1ex}}
\newcommand{\circonv}{\mbox{\,$\bigcirc$\hspace{-1.8ex}\scriptsize{N} \,}}
\newcommand{\Real}[1]{\mbox{Re\{$#1$\}}}
\newcommand{\Imag}[1]{\mbox{Im\{$#1$\}}}
\newcommand{\ztrans}{\mbox{$z$-transform}}
\newcommand{\expt}[1]{\mbox{E$\left[ #1 \right]$}}
\newcommand{\proba}[1]{\mbox{P[$#1$]}}
\renewcommand{\Re}{\operatorname{Re}}
\renewcommand{\Im}{\operatorname{Im}}
\newcommand{\DFT}[1]{\mbox{DFT\big\{$#1$\big\}}}
\newcommand{\DFS}[1]{\mbox{DFS\big\{$#1$\big\}}}
\newcommand{\IDFT}[1]{\mbox{IDFT\big\{$#1$\big\}}}
\newcommand{\DTFT}[1]{\mbox{DTFT\big\{$#1$\big\}}}
\newcommand{\rect}{\operatorname{rect}}
\newcommand{\sinc}{\operatorname{sinc}}
\newcommand{\de}{\, \mathrm{d}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setcounter{secnumdepth}{3}
\makeindex
%% OUTPUT SELECTOR
\newif\ifexercises
\newif\ifanswers
%% choose one
%\exercisestrue\answersfalse % only exercises
%\exercisesfalse\answerstrue % only solutions
\exercisestrue\answerstrue % both
\multipubbegin
\begin{document}
\title{Test Book for PDF and EPUB}
\author{Paolo Prandoni}
\date{}
\maketitle
\frontmatter
\tableofcontents
%\includefile{00-intro}{00-intro}
\mainmatter
\includefile{20-signals}{10-dt-signals}
\begincoda
\includefile{20-signals}{90-dt-examples}
\includefile{20-signals}{99-dt-exercises}
\endcoda
%\includefile{30-vectors}{00-vectors}
%\includefile{60-ztrans}{10-ztrans}
\begincoda
%\includefile{60-ztrans}{90-examples}
%\includefile{60-ztrans}{99-exercises}
\endcoda
% Fourier analysis
%\includefile{40-fourier}{00-intro}
%\includefile{40-fourier}{10-DFT}
%\includefile{40-fourier}{20-DTFT}
%\includefile{40-fourier}{30-tables}
%\begincoda
%\includefile{40-fourier}{100-realtrans}
%\includefile{40-fourier}{110-wagonwheel}
\setcounter{chapter}{8}
%\includefile{90-sampling}{00-is-intro}
-%\includefile{90-sampling}{10-is-interpolation}
+\includefile{90-sampling}{10-is-interpolation}
%\includefile{90-sampling}{20-is-sampling}
%\includefile{90-sampling}{30-is-processing}
%\begincoda
%\includefile{90-sampling}{90-is-examples}
%\includefile{90-sampling}{99-is-exercises}
%\endcoda
\setcounter{chapter}{10}
-%\includefile{110-multirate}{10-multirate}
-%\begincoda
-%\includefile{110-multirate}{90-examples}
-%\includefile{110-multirate}{99-exercises}
-%\endcoda
+\includefile{110-multirate}{10-mr-multirate}
+\begincoda
+\includefile{110-multirate}{90-mr-examples}
+\includefile{110-multirate}{99-mr-exercises}
+\endcoda
%\includefile{130-image}{20-jpg}
%\includefile{130-image}{00-intro}
%\includefile{130-image}{10-improc}
%\includefile{130-image}{20-jpg}
%\includefile{80-stochastic}{00-rn-intro}
%\includefile{80-stochastic}{10-rn-spectral}
%\includefile{80-stochastic}{20-rn-adaptive}
\begincoda
%\includefile{80-stochastic}{90-rn-examples}
%\includefile{80-stochastic}{99-rn-exercises}
\endcoda
% vector space
%\includefile{30-vectors}{00-vectors}
%\includefile{ch02}{chapter02}
\backmatter
\printindex
\end{document}
diff --git a/writing/sp4comm.multipub/styles/printlayout.tex b/writing/sp4comm.multipub/styles/printlayout.tex
index c683ced..f8256f6 100644
--- a/writing/sp4comm.multipub/styles/printlayout.tex
+++ b/writing/sp4comm.multipub/styles/printlayout.tex
@@ -1,144 +1,144 @@
%% definitions of page layout for PRINT version
%% The following macros affect the rendering of sections and other
%% environments and they will have to be redefined for all other
%% styles such as EPUB
%% exercises, solutions, examples
%
\usepackage{amsthm}
%\newtheorem*{solution}{\normalfont\normalsize\sffamily\bfseries Solution}
\newtheoremstyle{dspbook}% % Name
{}% % Space above
{2em}% % Space below
{}% % Body font
{}% % Indent amount
{\bfseries}% % Theorem head font
{.}% % Punctuation after theorem head
{1em}% % Space after theorem head, ' ', or \newline
{\thmname{#1}\thmnumber{ #2}:\hspace{0.5ex}\thmnote{#3}}% % Theorem head spec (can be left empty, meaning `normal')
\theoremstyle{dspbook}
%\theoremstyle{definition}
% to specify an inline title use \begin{exercise}\pstyle{title text}
%\newtheorem{exercise}{\normalfont\normalsize\sffamily\bfseries Exercise}[chapter]
\newtheorem{example}{\normalfont\normalsize\sffamily\bfseries Example}[chapter]
\def\extitle#1{{\normalfont\normalsize\sffamily\bfseries #1.\ }}
\newcounter{exercise}
-\renewcommand{\theexercise}{\arabic{exercise}}
+\renewcommand{\theexercise}{\arabic{chapter}.\arabic{exercise}}
\newenvironment{exercise}[1]
{\refstepcounter{exercise}%
\vspace{1em}\par\noindent%
{\sffamily\bfseries Exercise~\theexercise. \ifthenelse{\equal{#1}{}}{}{{#1}}}%
\parindent=0pt\par\nobreak}
- {\par\nobreak\centerline{\rule{5\baselineskip}{.5pt}}\par\vspace{.2\baselineskip}}%\hfill$\Diamond$\par\sk
+ %{\par\nobreak\centerline{\rule{5\baselineskip}{.5pt}}\par\vspace{.2\baselineskip}}%\hfill$\Diamond$\par\sk
\newcounter{solution}
-\renewcommand{\thesolution}{\arabic{solution}}
+\renewcommand{\thesolution}{\arabic{chapter}.\arabic{solution}}
\newenvironment{solution}[1]
{\refstepcounter{solution}%
- \vspace{1em}\par\noindent%
+ \vspace{0.8em}\par\noindent%
\par\noindent\textbf{\sffamily\bfseries Solution~\thesolution. \ifthenelse{\equal{#1}{}}{}{{#1}}}%
\parindent=0pt\par\nobreak}
- {\par\nobreak\centerline{\rule{5\baselineskip}{.5pt}}\par\vspace{.2\baselineskip}}%\hfill$\Diamond$\par\sk
+ {\par\vspace{.2\baselineskip}\nobreak\centerline{\rule{5\baselineskip}{.5pt}}\par\vspace{.2\baselineskip}}%\hfill$\Diamond$\par\sk
%% this is used for end-of-chapter sections (further reading, appendices, etc)
%% In the PRINT version we change the graphical appearance of section titles
%%
%% numbered sections: gray rounded box with black text (see later for \sectionbox)
\def\normalbanner{\sectionbox{gray!80}{black}{\thesection\hspace{1em}}}
%% unnumbered sections: darkgray rounded box with white text
\def\codabanner{\sectionbox{darkgray}{white}{}}
\def\begincoda{\titleformat{\section}{}{}{0pt}{\codabanner}}
\def\endcoda{\titleformat{\section}{}{}{0pt}{\normalbanner}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% In print layout we can compile the pstricks figures directly:
\usepackage{pstricks}
\usepackage{pst-node,pstricks-add,pst-tree,pst-plot,pst-3dplot}
\usepackage{dsptricks,dspfunctions,dspblocks}
% figure default sizes for dspTricks
\setlength{\dspWidth}{0.8\textwidth}
\setlength{\dspHeight}{0.4\dspWidth}
\def\dspWidthCol{0.4\textwidth}
\def\dspHeightCol{0.2\dspWidth}
\def\dspHeightShort{0.15\dspWidth}
%% fonts
\usepackage{avant}
\usepackage{courier}
\usepackage[utopia]{mathdesign}
%\usepackage[mathcal]{eucal}
\DeclareSymbolFont{usualmathcal}{OMS}{cmsy}{m}{n}
\DeclareSymbolFontAlphabet{\mathcal}{usualmathcal}
\usepackage{titlesec}
\usepackage[parfill]{parskip}
\usepackage[format=hang,font={small,it},labelfont=bf]{caption}
%% postscript macros for section headings
% #1: background color
% #2: foreground color
% #3: section heading (either \thesection or nothing)
% #4: section title
\def\sectionbox#1#2#3#4{%
\psframebox*[cornersize=absolute,linearc=1mm,framesep=2mm,fillcolor=#1]{%
\parbox{\dimexpr\textwidth-4mm}{%
\centering\parbox{\dimexpr\textwidth-4em}{%
\color{#2}\normalfont\Large\sffamily\bfseries#3#4}}}}
%% titles
\titleformat{\chapter}[display]{\normalfont\huge\sffamily\filcenter}%
{\large\chaptertitlename\ \thechapter}{20pt}{\bfseries\huge}
%% modified section headings for normal sections
\titleformat{\section}{}{}{0pt}{\normalbanner}
%% subsections & co.
\titleformat{\subsection}{\normalfont\large\sffamily\bfseries}{\thesubsection}{1em}{}
\titleformat{\subsubsection}{\normalfont\normalsize\sffamily\bfseries}{}{1em}{}
\titleformat{\paragraph}[runin]{\normalfont\normalsize\sffamily\bfseries}{}{1em}{}
%% headers
%
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\chaptermark}[1]{\markboth{{\thechapter \ -- \ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{{\thesection \ -- \ #1}}}
\fancyhead[LE,RO]{\sffamily\thepage}
\fancyhead[RE]{\sffamily\slshape\leftmark}
\fancyhead[LO]{\sffamily\slshape\rightmark}
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
%% Table of contents
%
\usepackage{tocloft}
\renewcommand{\cfttoctitlefont}{\huge\bfseries\sffamily}
\renewcommand{\cftchapfont}{\large\bfseries\sffamily}
\renewcommand{\cftsecfont}{\sffamily}
\renewcommand{\cftsubsecfont}{\sffamily}
\renewcommand{\cftchappagefont} {\sffamily}
\renewcommand{\cftsecpagefont}{\sffamily}
\renewcommand{\cftsubsecpagefont}{\sffamily}
