
\chapter{Interpolation and Sampling}\label{ChSampling}
Signals (in signal processing) are mathematical descriptions of a flow of information. The discrete-time paradigm is the mathematical model of choice in two specific situations: the first, in the tradition of natural observations, captures the process of repeatedly measuring the value of a physical
quantity at successive instants in time for \emph{analysis}
purposes (precipitation levels, stock values, etc.). The
second, which is much more recent and dates back to the
first digital processors, is the ability to
\emph{synthesize} discrete-time signals by means of iterative
numerical algorithms (mathematical simulations, computer
music, etc.). Discrete-time is the mechanized playground of
digital machines.
Continuous-time signals, on the other hand, rely on a
view of the world in which physical phenomena have a
potentially infinitely small granularity, in the sense that
measurements can be arbitrarily dense. In this
continuous-time paradigm, real-world phenomena are modeled as
\emph{functions of a real variable\/}; the definition of a
signal over the real line allows for infinitely small
subdivisions of the function's domain and, therefore,
infinitely precise localization of its values. Whether
philosophically valid\footnote{Remember Zeno's paradoxes\dots}
or physically valid,\footnote{The shortest unit of time at
which the usual laws of gravitational physics still hold is
called \emph{Planck time} and is estimated at
$10^{-43}$~seconds. Apparently, therefore, the universe works in
discrete-time\dots}
the continuous-time paradigm is an
indispensable model in the analysis of analog signal
processing systems.
We will now study the mathematical description of the
(porous) interface between continuous time and discrete
time. The tools that we will introduce will allow us to cross
this boundary, back and forth, with little or no loss of
information for the signals involved.
\section{Preliminaries and Notation}
\itempar{Interpolation.}\index{interpolation}
Interpolation comes into play when discrete-time signals
need to be converted to continuous-time signals. The need
arises at the interface between the digital world and the
analog world; as an example, consider a discrete-time
waveform synthesizer which is used to drive an analog
amplifier and loudspeaker.
In this case, it is useful to
express the input to the amplifier as a function of a real
variable, defined over the entire real line; this is because
the behavior of analog circuitry is best modeled by
continuous-time functions.
We will see that at the core of
the interpolation process is the association of a physical
time duration $T_s$ to the intervals between samples of the
discrete-time sequence. The fundamental questions concerning
interpolation involve the spectral properties of the
interpolated function with respect to those of the original
sequence.
\medskip
\itempar{Sampling.}\index{sampling}
Sampling is the method by which an underlying
continuous-time phenomenon is ``reduced'' to a discrete-time sequence.
The simplest sampling system just records the value of a
physical variable at repeated instants in time and
associates the value to a point in a discrete-time sequence;
in the following, we refer to this scheme as the
``naive'' sampling operator.
Other sampling methods exist
(and we will see the most important one) but, in all cases,
a correspondence is established between time instants in
continuous time and points in the discrete-time sequence.
In the following, we only consider \emph{uniform sampling\/},
in which the time instants are uniformly spaced $T_s$
seconds apart; $T_s$ is called the \emph{sampling period}
and its inverse, $F_s$, is called the \emph{sampling
frequency}\index{sampling!frequency} of a sampling system.
The fundamental question of sampling is whether any
information is lost in the sampling process. If the answer
is in the negative (at least for a given class of signals),
this means that all the processing tools developed in the
discrete-time domain can be applied to continuous-time
signals as well, after sampling.
\begin{table}[H]
\caption{Notation used in the Chapter.}\label{tab9.1}
\vskip-3mm
\renewcommand{\arraystretch}{1.2}
\begin{center}\small
\begin{tabular}[h]{|c|l|l|c|}
\hline
\bf Name & \bf Description & \bf Units & \bf Relations \\
\hline
$T_s$ & Sampling period & seconds & $T_s = 1/F_s$ \\
\hline
$F_s$ & Sampling frequency & hertz & $F_s = 1/T_s$ \\
\hline
$\Omega_s$ & Sampling frequency (angular) & rad/sec & $\Omega_s = 2\pi F_s = 2\pi/T_s$ \\
\hline
$\Omega_N$ & Nyquist frequency (angular) & rad/sec & $\Omega_N = \Omega_s/2 = \pi/T_s$ \\
\hline
\end{tabular}
\end{center}
\end{table}
%\renewcommand{\arraystretch}{1}
\itempar{Notation.}
In the rest of this Chapter we will encounter a series of
variables which are all interrelated and whose different
forms will be used interchangeably according to convenience.
They are summarized in Table~\ref{tab9.1} for quick
reference.
\medskip
\section{Continuous-Time Signals}
\noindent
Interpolation and sampling constitute the bridges between
the discrete- and continuous-time worlds. Before we proceed
to the core of the matter, it is useful to take a quick tour
of the main properties of continuous-time signals, which we
simply state here without formal proofs.
Continuous-time signals are modeled by complex functions of
a real variable $t$ which usually represents time (in
seconds) but which can represent other physical coordinates
of interest. For maximum generality, no special requirement
is imposed on functions modeling signals; just as in the
discrete-time case, the functions can be periodic or
aperiodic, or they can have a finite support (in the sense
that they are nonzero over a finite interval only).
A common condition on an aperiodic signal is that its modeling
function be square integrable; this corresponds to the
reasonable requirement that the signal have finite energy.
\itempar{Inner Product and Convolution.}
We have already encountered some examples of continuous-time
signals in conjunction with Hilbert spaces; in
Section~\ref{vsex}, for instance, we introduced the space of
square integrable functions over an interval and
we will shortly
introduce the space of bandlimited signals.
For inner product spaces whose elements are functions on the
real line, we use the following inner product
definition:\index{inner product!for functions}
\begin{equation}\label{ctinner}
\bigl\langle f(t), g(t) \bigr\rangle
= \int_{-\infty}^{\infty} f^*(t)g(t) \, dt
\end{equation}
The \emph{convolution} \index{convolution!in continuous
time} of two real continuous-time signals is defined as
usual from the definition of the inner product; in
particular:
\begin{align}
(f \ast g)(t) & = \bigl\langle f(t-\tau), g(\tau) \bigr\rangle \\
& = \int_{-\infty}^{\infty} f(t - \tau) g(\tau) \, d\tau
\end{align}
The convolution operator, in continuous time, is linear and
time invariant, as can be easily verified. Note that,
just as in discrete time, convolution represents the operation
of filtering a signal with a continuous-time LTI filter,
whose impulse response is of course a continuous-time
function.
\itempar{Frequency-Domain Representation of Continuous-Time
Signals.}\\
The Fourier transform\index{Fourier transform (continuous time)}
of a continuous-time signal $x(t)$ and its inversion
formula are defined as\footnote{The notation $X(j\Omega)$
mirrors the specialized notation that we used for the DTFT; in
this case, by writing $X(j\Omega)$ we indicate that the
Fourier transform is just the (two-sided) Laplace transform
$X(s) = \int x(t)\, e^{-st}\, dt$ computed on the imaginary
axis.}
\begin{align}
& X(j\Omega) =
\int_{-\infty}^{\infty} x(t)\, e^{-j\Omega t} \, dt \\
& x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\Omega)
\, e^{j\Omega t}\, d\Omega
\end{align}
The convergence of the above integrals is assured for
functions which satisfy the so-called Dirichlet conditions.
In particular, the FT is always well defined for square
integrable (finite-energy) continuous-time signals.
The Fourier transform in continuous time is a linear
operator; for a list of its properties, which mirror those
that
we saw for the DTFT, we refer to the bibliography.
It suffices
here to recall the conservation of energy, also known as
Parseval's theorem:
\[
\int_{-\infty}^{\infty} \bigl|x(t) \bigr|^2 \,dt
= \frac{1}{2\pi}\int_{-\infty}^{\infty} \bigl|X(j\Omega) \bigr|^2\, d\Omega
\]
The FT representation can be formally extended to signals
which are not square summable by means of the Dirac delta
notation as we saw in Section~\ref{FourierDirac}. In
particular we have
\begin{equation}
\mbox{FT}\{e^{j\Omega_0 t}\} = 2\pi\, \delta(\Omega - \Omega_0)
\end{equation}
from which the Fourier transforms of sine, cosine, and
constant functions can easily be derived. Please note that,
in continuous time, the FT of a complex sinusoid is
\emph{not} a train of impulses but just a single impulse.
\medskip \medskip
\itempar{The Convolution Theorem.}\index{convolution!theorem}
The convolution theorem for continuous-time signals exactly
mirrors the theorem in Section~\ref{convtheosec}; it states
that if\linebreak $h(t) = (f \ast g)(t)$ then the Fourier transforms
of the three signals are related by
$H(j\Omega) = F(j\Omega) G(j\Omega)$. In particular we can use the
convolution theorem to compute
\begin{equation}
(f \ast g)(t) = \frac{1}{2\pi}
\int_{-\infty}^{\infty} F(j\Omega)G(j\Omega)\, e^{j\Omega t} \, d\Omega
\end{equation}
\section{Bandlimited Signals}\label{SincProp}
\noindent
A signal whose Fourier transform is nonzero only over a
finite frequency interval is called \emph{bandlimited}. In
other words, the signal $x(t)$ is bandlimited if there
exists a frequency $\Omega_N$ such that:\footnote{The use of
$\geq$ instead of $>$ is a technicality which will be useful
in conjunction with the sampling theorem.}
\index{bandlimited signal|mie}
\[
X(j\Omega) = 0 \qquad \quad \mbox{for } |\Omega| \geq \Omega_N
\]
Such a signal will be called $\Omega_N$-bandlimited and
$\Omega_N$ is often called the\linebreak \emph{Nyquist frequency}.
It may be useful to mention that, symmetrically, a
continuous-time signal which is nonzero over a finite time
interval only is called a \emph{time-limited\/} signal (or
finite-support signal). A fundamental theorem states that a
bandlimited signal cannot be time-limited, and vice versa.
While this can be proved formally, here we
simply give the intuition behind the statement. The
time-scaling property of the Fourier transform states that:
\begin{equation}\label{specContrEq}
\mbox{FT} \bigl\{ f (at) \bigr\}
= \frac{1}{|a|}\, F \left(j\frac{\Omega}{a} \right)
\end{equation}
so that the more ``compact'' in time a signal is, the wider
its frequency support becomes.
\smallskip \medskip
\itempar{The Sinc Function.}
Let us now consider a prototypical $\Omega_N$-bandlimited
signal $\varphi(t)$ whose Fourier transform is a real
\emph{constant} over the interval\linebreak $[-\Omega_N, \Omega_N]$
and zero everywhere else. If we define the
rect function\index{rect} as
follows (see also Section~\ref{idealFilters}):
\[
\rect(x) =
%\left\{
\begin{cases} %{ll}
1 & \ |x| \leq 1/2 \\
0 & \ |x| > 1/2
\end{cases} %\right.
\]
we can express the Fourier transform of the prototypical
$\Omega_N$-bandlimited signal as
\begin{equation} \label{FreqProtBL}
\Phi(j\Omega) =
\frac{\pi}{\Omega_N}\, \rect\left(\frac{\Omega}{2\Omega_N}\right)
\end{equation}
where the leading factor is just a normalization term. The
time-domain expression for the signal is easily obtained
from the inverse Fourier transform as
\begin{equation}\label{TimeProtBL}
\varphi(t) = \frac{\sin \Omega_N t}{\Omega_N t}
= \sinc\left(\frac{t}{T_s}\right)
\end{equation}
where we have used $T_s = \pi/\Omega_N$ and defined the sinc\index{sinc}
function as
\[
\sinc(x) = %\left\{
\begin{cases} %{ll}
\displaystyle \frac{\sin(\pi x)}{\pi x} & \ x \neq 0 \\
1 & \ x = 0
\end{cases} % \right.
\]
The sinc function is plotted in
Figure~\ref{SincFunctionFig}. Note the following:
\begin{itemize}
\item The function is symmetric, $\sinc(x) = \sinc(-x)$.
\item The sinc function is zero for all integer values of
its argument, except at zero. This feature is called the
\emph{interpolation property} of the sinc, as we will shortly see
in more detail.
\item The sinc function is square integrable (it has finite
energy) but it is not absolutely integrable (hence the
discontinuity of its Fourier transform).
\item The decay is slow, asymptotic to $1/x$.
\item The scaled sinc function represents the impulse
response of an ideal, continuous-time lowpass filter with
cutoff frequency $\Omega_N$.
\end{itemize}
\smallskip \medskip \medskip
\section{Interpolation}\label{InterpSec}\index{interpolation|(}
\noindent
Interpolation is a procedure whereby we convert a
discrete-time sequence $x[n]$ to a continuous-time function $x(t)$.
Since this can be done in an arbitrary number of ways, we
have to start by formulating some requirements on the
resulting signal. At the heart of the interpolating
procedure, as we have mentioned, is the association of a
physical time duration $T_s$ to the interval between the
samples in the discrete-time sequence. An intuitive
requirement on the interpolated function is that its values
at multiples of $T_s$ be equal to the corresponding points
of the discrete-time sequence, i.e.
\[
x(t) \Bigr|_{t = nT_s} = x[n]
\]
The interpolation problem now reduces to ``filling the
gaps'' between these instants.
\subsection{Local Interpolation}\label{locInterpSec}%
\index{interpolation!local}
The simplest interpolation schemes create a continuous-time
function $x(t)$ from a discrete-time sequence $x[n]$, by
setting $x(t)$ to be equal to $x[n]$ for $t=nT_s$ and by
setting $x(t)$ to be some linear combination of neighboring
sequence values when $t$ lies in between interpolation
instants. In general, the local interpolation schemes can be
expressed by the following formula:
\begin{equation}\label{InterpEqGen}
x(t) = \sum_{n = -\infty}^{\infty}x[n]\,
I\!\left(\frac{t - nT_s}{T_s} \right)
\end{equation}
where $I(t)$ is called the interpolation function (for
local interpolation functions the notation $I_N(t)$ is used and the
subscript $N$ indicates how many discrete-time samples,
besides the current one, enter into the computation of the
interpolated values for $x(t)$). The interpolation function
must satisfy the fundamental \emph{interpolation
properties\/}:
\begin{equation}\label{interpProp}
\begin{cases}
I(0) = 1 \\
I(k) = 0 \quad \mbox{for } k \in \mathbb{Z} \setminus \{0\}
\end{cases}
\end{equation}
where the second requirement implies that, no matter what
the support of $I(t)$ is, its values should not affect other
interpolation instants. By changing the function $I(t)$, we
can change the type of interpolation and the properties of
the interpolated signal $x(t)$.
Note that~(\ref{InterpEqGen}) can be interpreted either
simply as a linear combination of shifted interpolation
functions or, more interestingly, as a ``mixed domain''
convolution product, where we are convolving a discrete-time
signal $x[n]$ with a continuous-time ``impulse response''
$I(t)$ scaled in time by the interpolation period $T_s$.
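\medskip
As a concrete (if informal) illustration, the following Python
sketch evaluates~(\ref{InterpEqGen}) numerically for a
finite-support sequence; the helper name \texttt{local\_interp}
and the use of NumPy are our own choices, not part of the text.
\begin{verbatim}
import numpy as np

def local_interp(x, n, I, Ts, t):
    # evaluate x(t) = sum_n x[n] I((t - n Ts)/Ts) on the grid t,
    # for a finite set of samples x with integer indices n
    xt = np.zeros_like(t, dtype=float)
    for xn, k in zip(x, n):
        xt += xn * I((t - k * Ts) / Ts)
    return xt
\end{verbatim}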
\itempar{Zero-Order Hold.}\index{zero-order hold}%
\index{interpolation!local!zero-order}
The simplest approach for the interpolating function is the
piecewise-constant interpolation; here the continuous-time
signal is kept constant between discrete sample values,
yielding
\[
x(t) = x[n], \qquad \quad \mbox{for }
\left(n-\frac{1}{2} \right)T_s \leq t < \left(n + \frac{1}{2} \right)T_s
\]
and an example is shown in Figure~\ref{intzerohold}; it is
apparent that the resulting function is far from smooth
since the interpolated function is discontinuous. The
interpolation function is simply:
\[
I_0(t) = \rect(t)
\]
and the values of $x(t)$ depend only on the current
discrete-time sample value.
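\medskip
With the sketch above, the zero-order hold corresponds to using
$I_0(t) = \rect(t)$; a minimal definition (with a half-open cell,
so that each instant is assigned to exactly one sample, a detail
the mathematical definition glosses over) is:
\begin{verbatim}
def I0(t):
    # zero-order hold kernel: rect(t), with a half-open cell
    return ((t >= -0.5) & (t < 0.5)).astype(float)

n = np.arange(-6, 7)
x = np.sin(2 * np.pi * n / 12.0)     # the sinusoid of the figure
t = np.linspace(-6, 6, 1201)         # fine grid, Ts = 1
x_zoh = local_interp(x, n, I0, 1.0, t)
\end{verbatim}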
%9.1%%%
\begin{figure}[h]
\center\small
\begin{psDTplot}[dx=1,dy=0.5,sidegap=0.5,labelnudge=2ex]{-6}{6}{-1.2}{1.2}
% 2pi/12
\psDTplotFunction[linestyle=dotted,linecolor=darkgray,linewidth=1.2pt]%
{x 0.5235 mul RadtoDeg sin}
\psDTplotFunction[linecolor=gray,linewidth=1.2pt]%
{x x 0 gt {-0.5} {0.5} ifelse sub truncate 0.5235 mul RadtoDeg sin}
\psDTplotSignal{x 0.5235 mul RadtoDeg sin}
\end{psDTplot}
\vskip-7mm
\caption{Interpolation of a discrete-time sinusoid with
the zero-order hold. Note the discontinuities introduced
by this simple scheme.}\label{intzerohold}
\vskip2mm
\end{figure}
\itempar{First-Order Hold.}\index{first-order hold}%
\index{interpolation!local!first-order}
A linear interpolator (sometimes called a first-order\linebreak hold)
simply connects the points corresponding to the samples with
straight lines. An example is shown in
%9.2
Figure~\ref{intfirsthold}; note that now $x(t)$ depends
on two consecutive discrete-time samples, across which a
connecting straight line is drawn. From the point of view of
smoothness, this interpolator already represents an
improvement over the zero-order hold: indeed the
interpolated function is now continuous, although its first
derivative is not. The first-order hold can be expressed in
the same notation as in~(\ref{InterpEqGen}) by defining the
following triangular function:
\[
I_1(t) = %\left\{
\begin{cases}
1- |t| & \mbox{ if } |t| < 1 \\
0 & \mbox{ otherwise}
\end{cases} %\right.
\]
%\noindent
which is shown in Figure~\ref{interpolatorFunctsFig}.%
\footnote{Note that $I_1(t) = (I_0 \ast I_0)(t)$.}
It is easy to verify that $I_1(t)$ satisfies the
interpolation properties~(\ref{interpProp}).
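\medskip
In the same vein, a sketch of the first-order hold simply
replaces the kernel in the previous snippet with the triangular
function:
\begin{verbatim}
def I1(t):
    # triangular kernel: 1 - |t| on [-1, 1], zero elsewhere
    return np.maximum(0.0, 1.0 - np.abs(t))

x_foh = local_interp(x, n, I1, 1.0, t)  # piecewise-linear result
\end{verbatim}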
%%9.2
\begin{figure}[H]
\vskip2mm
\center\small
\begin{psDTplot}[dx=1,dy=0.5,sidegap=0.5,labelnudge=2ex]{-6}{6}{-1.2}{1.2}
% 2pi/12
\psDTplotFunction[linecolor=gray,linewidth=1.2pt]%
{x x floor sub dup 1 exch sub %
x floor 0.5235 mul RadtoDeg sin mul exch %
x ceiling 0.5235 mul RadtoDeg sin mul add}
\psDTplotFunction[linestyle=dotted,linecolor=darkgray,linewidth=1pt]%
{x 0.5235 mul RadtoDeg sin}
\psDTplotSignal{x 0.5235 mul RadtoDeg sin}
\end{psDTplot}
\vskip-7mm
\caption{Interpolation of a discrete-time sinusoid with
the first-order hold. Note that the first derivative is
discontinuous.}\label{intfirsthold}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%
\eject
%%9.3%%
\begin{figure}[h]
\center\small
\begin{tabular}{cc}
\begin{psCTplot}[size=small,dx=0.5,dy=1]{-1.5}{1.5}{0}{1.2}
\psCTplotData[linewidth=1.2pt]{-1.5 0 -0.5 0 -0.5 1 0.5 1 0.5 0 1.5 0}
\end{psCTplot}
&
\begin{psCTplot}[size=small,dx=0.5,dy=1]{-1.5}{1.5}{0}{1.2}
\psCTplotData[linewidth=1.2pt]{-1.5 0 -1 0 0 1 1 0 1.5 0}
\end{psCTplot}
\end{tabular}
\vskip-4mm
\caption{Interpolation functions for the zero-order (left)
and first-order interpolators (right).}\label{interpolatorFunctsFig}
\vskip2mm
\end{figure}
%%%%%%%%%%%%%%%%%%
\itempar{Higher-Order Interpolators.}
The zero- and first-order interpolators are widely used in
practical circuits due to their extreme simplicity.
Note that the interpolating functions
$ I_{0} (t) $ and $ I_{1} (t) $
are also known as the B-spline functions of order zero and
one, respectively.
These
schemes can be extended to higher-order interpolation
functions and, in general, $I_N(t)$ is an $N$-th order
polynomial in $t$. The advantage of the local interpolation
schemes is that, for small~$N$, they can easily be
implemented in practice as \emph{causal} interpolation
schemes (locality is akin to FIR filtering); their
disadvantage is that, because of the locality, their $N$-th
derivative is discontinuous. This discontinuity
represents a lack of smoothness in the interpolated
function; from a spectral point of view this corresponds to
a high frequency energy content, which is usually
undesirable.
\medskip %\medskip
\subsection{Polynomial Interpolation}\label{lagrangeIntSec}
The lack of smoothness of local interpolations is easily
eliminated when we need to interpolate just a \emph{finite}
number of discrete-time samples. In fact, in this case the
task becomes a classic polynomial interpolation problem for
which the optimal solution has been known for a long time
under the name of \emph{Lagrange
interpolation}.\index{Lagrange interpolation|mie} Note that
a polynomial interpolating a finite set of samples is a
maximally smooth function in the sense that it is
continuous, together with all its derivatives.
Consider a length~$(2N + 1)$ discrete-time signal $x[n]$, with
$n = -N, \ldots, N$.
Associate to each sample an abscissa $t_n = nT_s$;
we know from basic algebra that there is one and only one
polynomial $P(t)$ of degree $2N$ which passes through all
the $2N+1$ pairs $\bigl(t_n, x[n] \bigr)$ and this polynomial is the
Lagrange interpolator. The coefficients of the polynomial
could be found by solving the set of $2N+1$ equations:
\begin{equation}\label{LagrangeEqSet}
\bigl\{ P(t_n) = x[n] \bigr\}_{n = -N, \ldots, N}
\end{equation}
\eject
\noindent
but a simpler way to determine the expression for $P(t)$ is
to use the set of~$2N+1$
\emph{Lagrange polynomials}\index{Lagrange polynomials}
of degree $2N$:
\begin{align}
L^{(N)}_n(t) &=
\prod_{{ \scriptstyle k = -N \atop \scriptstyle k\neq n }}^{N}
\frac{(t - t_k)}{(t_n - t_k)} \nonumber \\
&= %\mathop{
\prod_{{\scriptstyle k = -N \atop \scriptstyle k\neq n}}^{N}
\frac{t/T_s - k}{n - k}\,,
\qquad \quad n = -N, \ldots, N \label{LagPoly}
\end{align}
%\noindent
The polynomials $L^{(N)}_n(t)$ for $T_s = 1$ and $N = 2$
(i.e. interpolation of 5 points) are plotted in
Figure~\ref{lagrangePolyFig}. %9.4
By using this notation, the \emph{global\/} Lagrange
interpolator\index{Lagrange interpolation}\index{interpolation!local!Lagrange}
for a given set of abscissa/ordinate pairs can
now be written as a simple linear combination of Lagrange polynomials:
\begin{equation}\label{LagIntForm}
P(t) = \sum_{n = -N}^{N} x[n] L^{(N)}_n (t)
\end{equation}
%\endinput
and it is easy to verify that this is the unique
interpolating polynomial of degree $2N$ in the sense
of~(\ref{LagrangeEqSet}). Note that \emph{each} of the
$L^{(N)}_n(t)$ satisfies the interpolation
properties~(\ref{interpProp}) or, concisely (for $T_s = 1$):
\[
L^{(N)}_n(m) = \delta[n-m]
\]
%\noindent
The interpolation formula, however, cannot be written in the
form of~(\ref{InterpEqGen}) since the Lagrange polynomials
are not simply shifts of a single prototype function.
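\medskip
The Lagrange machinery is easy to translate into code; the
following Python sketch (again with NumPy, and with helper names
of our own choosing) implements~(\ref{LagPoly})
and~(\ref{LagIntForm}) directly.
\begin{verbatim}
import numpy as np

def lagrange_poly(n, N, t, Ts=1.0):
    # L_n^{(N)}(t): product over k = -N..N, k != n
    L = np.ones_like(t, dtype=float)
    for k in range(-N, N + 1):
        if k != n:
            L *= (t / Ts - k) / (n - k)
    return L

def lagrange_interp(x, N, t, Ts=1.0):
    # P(t) = sum_n x[n] L_n^{(N)}(t), with x indexed by n = -N..N
    return sum(x[n + N] * lagrange_poly(n, N, t, Ts)
               for n in range(-N, N + 1))
\end{verbatim}
One can check numerically that \texttt{lagrange\_poly(n, N, m)}
returns one for $m = n$ and zero for any other integer $m$ in the
range, as required by the interpolation properties.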
The continuous-time signal
$x(t) = P(t)$ is now a
\emph{global\/} interpo-\linebreak
% 9.4%%
\begin{figure}[H]
%\vskip3mm
\center\small
\begin{psCTplot}[dy=0.5,height=6cm]{-2.5}{2.5}{-0.8}{1.6}
\psclip{\psframe(-2.5,-0.8)(2.5,1.6)}
\psCTplotFunction[linewidth=1.2pt,linecolor=lightgray]%
{x dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul }
\psCTplotFunction[linewidth=1.2pt,linecolor=gray]%
{x dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul }
\psCTplotFunction[linewidth=1.2pt]%
{x dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul }
\psCTplotFunction[linewidth=1.2pt,linecolor=gray]%
{x dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul }
\psCTplotFunction[linewidth=1.2pt,linecolor=lightgray]%
{x dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul }
\endpsclip
\end{psCTplot}
\vskip-4mm
\caption{Lagrange interpolation polynomials $L_n^{(2)}(t)$
for $T_s = 1$ and $n=-2,\ldots,2$. Note that $L_n^{(N)}(t)$
is zero for integer $t$ except for $t = n$, where it is
$1$.}\label{lagrangePolyFig}
\end{figure}
\eject
\noindent
lating function for the finite-length
discrete-time signal $x[n]$, in the sense that it depends on
\emph{all\/} samples in the signal; as a consequence, $x(t)$
is maximally smooth ($x(t) \in C^{\infty}$). An example of
Lagrange interpolation for $N = 2$ is plotted in
Figure~\ref{intlagrangeFig}. %9.5
%%9.5%%
\begin{figure}[h]
\vskip2mm
\center\small
\begin{psDTplot}[dx=1,dy=1,sidegap=0.5]{-2}{2}{-3}{3}
%\psclip{\psframe(-2.5,-3)(2.5,3)}
\psset{linestyle=dashed}
\psCTplotFunction[linewidth=0.5pt,tmin=-2,tmax=2]%
{x dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul } %
\psCTplotFunction[linewidth=0.5pt,tmin=-2,tmax=2]%
{x dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul %
2 mul } %
\psCTplotFunction[linewidth=0.5pt,tmin=-2,tmax=2]%
{x dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul %
.7 mul} %
\psCTplotFunction[linewidth=0.5pt,tmin=-2,tmax=2]%
{x dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul %
2.4 mul } %
\psCTplotFunction[linewidth=0.5pt,tmin=-2,tmax=2]%
{x dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul %
-1.5 mul} %
\psset{linestyle=solid}
\psCTplotFunction[linewidth=1.2pt,linecolor=darkgray,tmin=-2,tmax=2]%
{x dup 1 add -1 div exch %
dup -2 div exch %
dup -1 add -3 div exch %
-2 add -4 div %
mul mul mul %
1 mul %
x dup 2 add exch %
dup -1 div exch %
dup -1 add -2 div exch %
-2 add -3 div %
mul mul mul %
2 mul %
x dup 2 add 2 div exch %
dup 1 add 1 div exch %
dup -1 add -1 div exch %
-2 add -2 div %
mul mul mul %
.7 mul %
x dup 2 add 3 div exch %
dup 1 add 2 div exch %
dup 1 div exch %
-2 add -1 div %
mul mul mul %
2.4 mul %
x dup 2 add 4 div exch %
dup 1 add 3 div exch %
dup 2 div exch %
-1 add 1 div %
mul mul mul %
-1.5 mul %
add add add add}
%\endpsclip
\psDTplotData{-2 1 -1 2 0 0.7 1 2.4 2 -1.5}
\end{psDTplot}
\vskip-7mm
\caption{Maximally smooth Lagrange interpolation of a
length-$5$ signal.}\label{intlagrangeFig}
\vskip2mm
\end{figure}
%%%%%%%%%%%%%%%%%%%%%
\medskip
\subsection{Sinc Interpolation}
The beauty of local interpolation schemes lies in the fact
that the interpolated function is simply a linear
combination of shifted versions of the \emph{same} prototype
interpolation function $I(t)$; unfortunately, this has the
disadvantage of creating a continuous-time function which
lacks smoothness. Polynomial interpolation, on the other
hand, is perfectly smooth but it only works in the
finite-length case and it requires different interpolation
functions with different signal lengths. Yet, both
approaches can come together in a convenient mathematical way and
we are now ready to introduce the maximally smooth
interpolation scheme for infinite discrete-time signals.
Let us take the expression for the Lagrange polynomials
in~(\ref{LagPoly}) and consider their limit for~$N$
going to infinity. We have
\begin{align}
\lim_{N\rightarrow\infty} L^{(N)}_n(t)
&= %\mathop{
\prod_{ {\scriptstyle k = -\infty \atop \scriptstyle k\neq n }}%_{}
^{\infty} \frac{t/T_s - k}{n - k}
%\nonumber \\&
= %\mathop{
\prod_{{ \scriptstyle m = -\infty \atop \scriptstyle m\neq 0}}%_{}
^{\infty} \frac{t/T_s - n+m}{m} \nonumber
\\ %\end{align}\begin{align}%\lim_{N\rightarrow\infty} L^{(N)}_n(t)
& = %\mathop{
\prod_{{\scriptstyle m = -\infty \atop\scriptstyle m\neq 0 }}%_{}
^{\infty}\left(1+\frac{t/T_s - n}{m}\right) \nonumber \\
& = \prod_{m=1}^{\infty}\left(1-\left(\frac{t/T_s - n}{m}\right)^{\!2}\right) \label{sincLagAp}
\end{align}
%where
Here, we have used the change of variable $m = n-k$. We can
now invoke Euler's infinite product expansion for the sine
function
\[
\sin (\pi\tau) = (\pi\tau) \prod_{k = 1}^{\infty}
\left(1 - \frac{\tau^2}{k^2}\right)
\]
(whose derivation is in the appendix) to finally obtain
\begin{equation}
\lim_{N\rightarrow\infty} L^{(N)}_n(t)
= \sinc\left(\frac{t - nT_s}{T_s}\right)
\end{equation}
The convergence of the Lagrange polynomial $L^{(N)}_0(t)$ to
the sinc function is illustrated in
Figure~\ref{SincFunctionFig}. %9.6
Note that, now, as the number
of points becomes infinite, the Lagrange polynomials
converge to shifts of the \emph{same} prototype function,
i.e.\ the sinc; therefore, the interpolation formula can be
expressed as in~(\ref{InterpEqGen}) with $I(t) = \sinc(t)$;
indeed, if we consider an infinite sequence $x[n]$ and apply
the Lagrange interpolation
formula~(\ref{LagIntForm})\index{sinc interpolation|(}%
\index{interpolation!sinc}, we
obtain:
\begin{equation}\label{sincInterpoOne}
x(t) = \sum_{n = -\infty}^{\infty}x[n]
\, \sinc\left(\frac{t - nT_s}{T_s}\right)
\end{equation}
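\medskip
In code, the sinc interpolator is the local interpolator of
Section~\ref{locInterpSec} with the sinc as kernel; note that the
sum in~(\ref{sincInterpoOne}) must be truncated in any practical
sketch. NumPy's \texttt{np.sinc} already implements
$\sin(\pi x)/(\pi x)$, i.e.\ exactly the normalized sinc defined
above.
\begin{verbatim}
import numpy as np

def sinc_interp(x, n, Ts, t):
    # truncated version of the sinc interpolation sum
    xt = np.zeros_like(t, dtype=float)
    for xn, k in zip(x, n):
        xt += xn * np.sinc((t - k * Ts) / Ts)
    return xt
\end{verbatim}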
%%9.6%%%
\begin{figure}[h]
\center\small
\begin{psCTplot}[dy=0.5,dx=5]{-13}{13}{-0.3}{1.1}
\psCTplotFile[linecolor=lightgray,linewidth=1.2pt]%
{data/sincapp.txt}
\def\sincFun{x 3.14 mul dup RadtoDeg sin exch div}
\psCTplotFunction[tmin=-11,tmax=11]{\sincFun}
\psCTplotFunction[linestyle=dashed,tmin=10]{\sincFun}
\psCTplotFunction[linestyle=dashed,tmax=-11]{\sincFun}
\end{psCTplot}
\vskip-5mm
\caption{A portion of the sinc function and its Lagrange
approximation $L_{0}^{(100)}(t)$ (light
gray).}\label{SincFunctionFig}
\end{figure}\index{sinc interpolation|)}\index{interpolation|)}
%%%%%
\itempar{Spectral Properties of the Sinc Interpolation.}
The sinc interpolation of a discrete-time sequence gives
rise to a strictly bandlimited continuous-time function. If
the DTFT $X(e^{j\omega})$ of the discrete-time sequence
exists, the spectrum of the interpolated function
$X(j\Omega)$ can be obtained as follows:
\begin{align*}
X(j\Omega) & = \int_{-\infty}^{\infty}
\sum_{n = -\infty}^{\infty}x[n]\,
\sinc\left(\frac{t - nT_s}{T_s}\right) e^{-j\Omega t} \, dt \\
& = \sum_{n = -\infty}^{\infty} x[n] \int_{-\infty}^{\infty}
\sinc\left(\frac{t - nT_s}{T_s}\right) e^{-j\Omega t} \,dt
\end{align*}
Now we use (\ref{FreqProtBL}) to obtain the Fourier transform
of the scaled and shifted sinc:
\begin{equation*}
X(j\Omega) =
\sum_{n = -\infty}^{\infty} x[n]
\left(\frac{\pi}{\Omega_N}\right) \rect\left(\frac{\Omega}{2\Omega_N}\right)
e^{-jnT_s \Omega}
\end{equation*}
and use the fact that, as usual, $T_s = \pi/\Omega_N$:
\begin{align*} %\mbox{\hspace{3em}}
X(j\Omega) & =
\left(\frac{\pi}{\Omega_N}\right)
\rect\left(\frac{\Omega}{2\Omega_N}\right)
\sum_{n = -\infty}^{\infty} x[n] \, e^{-j\pi(\Omega/\Omega_N)n}
\\
& = %\left\{
\begin{cases}
\ds \frac{\pi}{\Omega_N} \, X(e^{j\pi\Omega/\Omega_N})
& \mbox{ for $ |\Omega| \leq \Omega_N$} \\[2mm]
0 & \mbox{ otherwise} \end{cases} %\right.
\end{align*}
In other words, the continuous-time spectrum is just a
scaled and stretched version of the DTFT of the
discrete-time sequence between $-\pi$ and $\pi$.
The duration of the interpolation interval $T_s$ is
inversely proportional to the resulting bandwidth of the
interpolated signal. Intuitively, a slow interpolation
($T_s$ large) results in a spectrum concentrated around
the low frequencies; conversely, a fast interpolation ($T_s$
small) results in a spread-out spectrum (more high
frequencies are present).\footnote{To find a simple everyday
(yesterday's?) analogy, think of a $45$ rpm vinyl\index{vinyl} record played
at either $33$~rpm (slow interpolation) or at
$78$ rpm (fast interpolation) and remember the acoustic effect on the
sounds.}
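\medskip
The inverse relation between $T_s$ and bandwidth is easy to
observe numerically. The rough sketch below (reusing
\texttt{sinc\_interp}; the bandwidth estimate via a $1\%$
threshold is a crude choice of ours) interpolates the same
half-band sequence with two different periods and prints the
resulting approximate bandwidths.
\begin{verbatim}
n = np.arange(-64, 65)
x = np.sinc(n / 2.0)              # half-band discrete-time signal
for Ts in (1.0, 2.0):
    t  = np.arange(-64, 64, 1.0 / 16) * Ts  # 16 points per sample
    xt = sinc_interp(x, n, Ts, t)
    X  = np.abs(np.fft.rfft(xt))
    f  = np.fft.rfftfreq(len(xt), d=Ts / 16)
    bw = f[np.nonzero(X > 0.01 * X.max())[0].max()]
    print(Ts, bw)  # doubling Ts roughly halves the bandwidth
\end{verbatim}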
%%ch9b
\medskip \medskip
\section{The Sampling Theorem}
\noindent
We have seen in the previous Section that the ``natural''
polynomial interpolation scheme leads to the so-called sinc
interpolation for infinite discrete-time sequences. Another
way to look at the previous result is that any square
summable discrete-time signal can be interpolated into a
continuous-time signal which is smooth in time and strictly
bandlimited in frequency. This suggests that the class of
bandlimited functions must play a special role in bridging
the gap between discrete and continuous time and this
deserves further investigation. In particular, since any
discrete-time signal can be interpolated exactly into a
bandlimited function, we now ask ourselves whether the
converse is true: can any bandlimited signal be transformed
into a discrete-time signal with no loss of information?
\eject
\itempar{The Space of Bandlimited Signals.}
The class of $\Omega_N$-bandlimited functions of finite
energy forms a Hilbert space, with the inner product defined
in~(\ref{ctinner}).
An orthogonal basis for the space of $\Omega_N$-bandlimited
functions can easily be obtained from the prototypical
bandlimited function, the sinc; indeed, consider the
family\index{basis!sinc}:
\begin{equation}
\varphi^{(n)}(t) =
\sinc\left(\frac{t-nT_s}{T_s}\right), \qquad\quad n \in \mathbb{Z}
\end{equation}
where, once again, $T_s = \pi/\Omega_N$.
Note that we have $\varphi^{(n)}(t) = \varphi^{(0)}(t -
nT_s)$ so that each basis function is simply a translated
version of the prototype basis function $\varphi^{(0)}$.
Orthogonality can easily be proved as follows: first of
all, because of the symmetry of the sinc function and the
time-invariance of the convolution, we can write
\begin{align*}
\bigl\langle \varphi^{(n)}(t), \varphi^{(m)}(t) \bigr\rangle
& = \bigl\langle \varphi^{(0)}(t - nT_s), \varphi^{(0)}(t -
mT_s) \bigr\rangle \\[1mm]
& = \bigl\langle \varphi^{(0)}(nT_s - t), \varphi^{(0)}(mT_s - t) \bigr\rangle \\[1mm]
& = (\varphi^{(0)} \ast \varphi^{(0)}) \bigl((n-m)T_s
\bigr)
\end{align*}
We can now apply the convolution theorem
and~(\ref{FreqProtBL}) to obtain:
\begin{align*}
\bigl\langle \varphi^{(n)}(t), \varphi^{(m)}(t) \bigr\rangle
& =
\frac{1}{2\pi} \int_{-\infty}^{\infty}
\left(\frac{\pi}{\Omega_N} \, \rect \left(\frac{\Omega}{2\Omega_N}\right)
\right)^{\!2} e^{j\Omega (n-m)T_s}\, d\Omega \\[3mm]
& = \frac{\pi}{2\Omega_N^2} \int_{-\Omega_N}^{\Omega_N} e^{j\Omega (n-m)T_s}
\, d\Omega \\[3mm]
& = % \left\{
\begin{cases}
\displaystyle \frac{\pi}{\Omega_N} = T_s & \mbox{ if $n = m$} \\
0 & \mbox{ if $n \neq m$} \end{cases}
%\right.
\end{align*}
so that
$\bigl\{\varphi^{(n)}(t) \bigr\}_{n\in \mathbb{Z}}$
is orthogonal with normalization factor $\Omega_N/\pi$
(or, equivalently, $1/T_s$).
In order to show that the space of
$\Omega_N$-bandlimited functions is indeed a Hilbert space,
we should also prove that the space is complete. This is a
more delicate notion to show\footnote{Completeness of the
sinc basis can be proven as a consequence of the
completeness of the Fourier series in the continuous-time
domain.}
and here it will simply be assumed.
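\medskip
Orthogonality itself can also be checked numerically (a crude
check, since the slow decay of the sinc makes the truncated
integral converge slowly); with $T_s = 1$:
\begin{verbatim}
import numpy as np

t = np.arange(-200.0, 200.0, 0.01)
for m in (0, 1, 2):
    ip = np.trapz(np.sinc(t) * np.sinc(t - m), t)
    print(m, ip)  # approximately 1, 0, 0  (i.e., Ts * delta[n-m])
\end{verbatim}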
\eject
\itempar{Sampling as a Basis Expansion.}
Now that we have an orthogonal basis, we can compute
coefficients in the basis expansion of an arbitrary
$\Omega_N$-bandlimited function $x(t)$. We have
\begin{align}
\bigl\langle \varphi^{(n)} (t), x(t) \bigr\rangle
& = \bigl\langle \varphi^{(0)}(t - nT_s), x(t) \bigr\rangle \\
& = \bigl(\varphi^{(0)} \ast x \bigr)(nT_s) \\
& = \frac{1}{2\pi}\int_{-\infty}^{\infty}
\frac{\pi}{\Omega_N}\, \rect
\left(\frac{\Omega}{2\Omega_N}\right) X(j\Omega)\, e^{j\Omega nT_s}\, d\Omega\\
& = \frac{\pi}{\Omega_N}\, \frac{1}{2\pi}
\int_{-\Omega_N}^{\Omega_N} X(j\Omega) \, e^{j\Omega nT_s}\, d\Omega
\label{NBLProj}\\
& = T_s \, x(nT_s)
\end{align}
In the derivation, we first rewrote the inner product
as a convolution operation, then applied the
convolution theorem, and finally recognized the penultimate
line as simply the inverse FT of $X(j\Omega)$ computed at
$t = nT_s$. We therefore have the remarkable result that the
$n$-th basis expansion coefficient is \emph{proportional to the
sampled value of} $x(t)$ at $t = nT_s$. For this reason, the
sinc basis expansion is also called \emph{sinc sampling}.
Reconstruction of $x(t)$ from its projections can now be
achieved via the orthonormal basis reconstruction
formula~(\ref{eq:mb.38}); since the sinc basis is just
orthogonal, rather than orthonormal, (\ref{eq:mb.38})~needs
to take into account the normalization factor and we have
\begin{align}
x(t) & = \frac{1}{T_s}\sum_{n = -\infty}^{\infty}
\bigl\langle \varphi^{(n)}(t), x(t) \bigr\rangle \, \varphi^{(n)}(t) \nonumber \\
&= \sum_{n = -\infty}^{\infty} x(nT_s)
\, \sinc\!\left(\frac{t-nT_s}{T_s}\right)
\end{align}
which corresponds to the interpolation formula~(\ref{sincInterpoOne}).
\medskip
\itempar{The Sampling Theorem.}\index{sampling theorem|mie}%
\index{sampling!theorem}
{\em If $x(t)$ is an $\Omega_N$-bandlimited continuous-time
signal, a \emph{sufficient} representation of $x(t)$ is
given by the discrete-time signal $x[n] = x(nT_s)$, with
$T_s = \pi/\Omega_N$. The continuous-time signal $x(t)$ can
be exactly reconstructed from the discrete-time signal
$x[n]$ as}
\[
x(t) = \sum_{n = -\infty}^{\infty} x[n]
\, \mbox{\rm sinc}\left(\frac{t-nT_s}{T_s}\right)
\]
\medskip
The proof of the theorem is inherent to the properties of
the Hilbert space of bandlimited functions, and is
\pagebreak
trivial once the existence of an orthogonal basis
has been proved. Now, call $\Omega_{\max}$ the largest nonzero
frequency of a bandlimited signal.\footnote{For real signals,
whose spectrum is symmetric, $X(j\Omega) = 0$ for $|\Omega|
> \Omega_{\max}$ so that $\Omega_{\max}$ is the largest
{\em positive} nonzero frequency. For complex signals,
$\Omega_{\max}$ is the largest nonzero frequency in
magnitude.}
Such a signal is obviously $\Omega_N$-bandlimited
for all $\Omega_N > \Omega_{\max}$.
Therefore, a bandlimited signal $x(t)$ is uniquely
represented by all sequences $x[n] = x(nT)$ for which $T
\leq T_s = \pi/\Omega_{\max}$; $T_s$ is the largest sampling
period which guarantees perfect reconstruction (i.e. we
cannot take fewer than $1/T_s$ samples per second to
perfectly capture the signal; we will see in the next
Section what happens if we do). Another way to state the
above point is to say that the \emph{minimum} sampling
frequency $\Omega_s$ for \index{sampling!frequency} perfect
reconstruction is exactly twice the signal's maximum
frequency, or that the Nyquist frequency must coincide with
the highest frequency of the bandlimited signal; the
sampling frequency $\Omega$ must therefore satisfy the
following
relationship:
\[
\Omega \geq \Omega_s = 2\Omega_{\max}
\]
or, in hertz,
\[
F \geq F_s = 2F_{\max}
\]
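\medskip
A quick numerical check of the theorem (a sketch under our own
parameter choices, with the sum truncated to a finite number of
samples): take the $2\pi f_{\max}$-bandlimited signal
$x(t) = \sinc(2 f_{\max} t)$, sample it above the minimum rate
and reconstruct it with \texttt{sinc\_interp} from the previous
Section.
\begin{verbatim}
f_max = 1.0                   # bandlimited to 1 Hz
Fs    = 4.0                   # above the 2 Hz minimum
Ts    = 1.0 / Fs
n     = np.arange(-400, 401)  # truncation of the infinite sum
x_n   = np.sinc(2 * f_max * n * Ts)
t     = np.linspace(-1.0, 1.0, 201)
x_rec = sinc_interp(x_n, n, Ts, t)
err   = np.max(np.abs(x_rec - np.sinc(2 * f_max * t)))
print(err)  # small; limited only by the truncation of the sum
\end{verbatim}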
\medskip
\section{Aliasing}\index{aliasing|(}
The ``naive'' notion of sampling, as we have seen, is
associated with the very practical idea of measuring the
instantaneous value of a continuous-time signal at uniformly
spaced instants in time. For bandlimited signals, we have
seen that this is actually equivalent to an orthogonal
decomposition in the space of bandlimited functions, which
guarantees that the set of samples $x(nT_s)$ uniquely
determines the signal and allows its perfect reconstruction.
We now want to address the following question: what happens
if we simply sample an \emph{arbitrary} continuous time
signal in the ``naive'' sense (i.e.\ in the sense of simply
taking $x[n] = x(nT_s)$) and what can we reconstruct from
the set of samples thus obtained?
\subsection{Non-Bandlimited Signals}
Given a sampling period of $T_s$ seconds, the sampling
theorem ensures that there is no loss of information by
sampling the class of $\Omega_N$-bandlimited signals, where
as usual $\Omega_N = \pi/T_s$.
If a signal $x(t)$ is not $\Omega_N$-bandlimited
(i.e.\ its spectrum is nonzero at least somewhere outside of
$[-\Omega_N,\Omega_N]$)
\pagebreak%
then the approximation properties of
orthogonal bases state that its \emph{best} approximation in
terms of uniform samples $T_s$ seconds apart is given by the
samples \emph{of its projection over the space of
$\Omega_N$-bandlimited signals\/} (Sect. \ref{ssecOrtBas}). %3.3.2
This can be seen
in~(\ref{NBLProj}), where the projection is easily
recognizable as an ideal lowpass filtering operation on
$x(t)$ (with gain $T_s$) which truncates its spectrum
outside of the $[-\Omega_N, \Omega_N]$ interval.
Sampling as the result of a sinc basis expansion
automatically includes this lowpass filtering operation; for
an $\Omega_N$-bandlimited signal, obviously, the filtering is
just a scaling by $T_s$. For an arbitrary signal, however,
we can now decompose the sinc sampling as in
Figure~\ref{BLSampling}, where the first block is a
%9.7
continuous-time lowpass filter with cutoff frequency
$\Omega_N$ and gain $T_s = \pi/\Omega_N$. The discrete-time
sequence $x[n]$ thus obtained is the best discrete-time
approximation of the original signal when the sampling is
uniform.
\medskip
%%9.7 %%%%%%%%%%%%%
\vskip1mm
\begin{figure}[h]
\center \small
\begin{psdrawBDmatrix}{1.3}{0.4}
$x(t)$~ &
\raisebox{-1.6em}{\psframebox[linewidth=0.6pt]{%
\psset{xunit=1em,yunit=1em}%
\pspicture(-3,-2)(3,2)%
\psline{->}(-2.8,-1)(2.8,-1)%
\psline{->}(0,-1.8)(0,1.8)%
\psline[linewidth=1pt]%
(-1,-1)(-1,0.8)(1,0.8)(1,-1)%
\endpspicture}} &
&
\raisebox{-1.4em}{\psframebox[linewidth=0.6pt]{%
\psset{xunit=1em,yunit=1em,linewidth=1pt}%
\pspicture(-3,-1.8)(2,1.8)%
\psline(-2.8,0)(-1.6,0)(1.2,1.4)
\psline(1.1,0)(1.8,0)
\psarc[linewidth=0.6pt]{<-}(-1.6,0){2em}{-10}{55}
\endpspicture}} &
$x[n]$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2}
\ncline{1,2}{1,4}^{$x_{LP}(t)$}
\ncline{->}{1,4}{1,5}
\end{psdrawBDmatrix}
%\vskip-2mm
\caption{Bandlimited sampling (sinc basis expansion) as a
combination of lowpass filtering (in the continuous-time
domain) and sampling; $x_{LP}(t)$ is the projection of
$x(t)$ over the space of $\Omega_N$-bandlimited functions.}\label{BLSampling}
\vskip1mm
\end{figure}
\medskip
\subsection{Aliasing: Intuition}
Now let us go back to the naive sampling scheme in which
simply $x[n] = x(nT_s)$, with $F_s = 1/T_s$, the sampling
frequency of the system; what is the error we incur if
$x(t)$ is not bandlimited or if the sampling frequency is
less than twice the maximum frequency? We can develop the
intuition by starting with the simple case of a single
sinusoid before moving on to a formal demonstration of
the aliasing phenomenon.
\medskip
\itempar{Sampling of Sinusoids.}\index{complex exponential!aliasing}
Consider a simple continuous-time signal\footnote{In the
following examples we will express the frequencies of
sinusoids in hertz, both out of practicality and to give an
example of a different form of notation.}
such as $x(t) = e^{j2\pi f_0 t}$ and its sampled version
$x[n] = e^{j2\pi (f_0/F_s)n} = e^{j\omega_0 n}$
with
\begin{equation}\label{InvFreqCD}
\omega_0 = 2\pi \frac{f_0}{F_s}
\end{equation}
Clearly, since $x(t)$ contains only one frequency, it is
$\Omega$-bandlimited for all $\Omega > 2\pi|f_0|$. If the
frequency
\pagebreak%
of the sinusoid satisfies $|f_0| < F_s/2 = F_N$,
then $\omega_0 \in (-\pi, \pi)$ and the frequency of the
original sinusoid can be unambiguously determined from the
sampled signal. Now assume that $f_0 = F_N = F_s/2$; we have
\[
x[n] = e^{j\pi n} = e^{-j\pi n}
\]
In other words, we encounter a first ambiguity with respect
to the direction of rotation of the complex exponential:
from the sampled signal we cannot determine whether the
original frequency was $f_0 = F_N$ or $f_0 = -F_N$. If we
increase the frequency further, say $f_0 = (1+\alpha)F_N$,
we have
\[
x[n] = e^{j(1+\alpha)\pi n} = e^{-j\alpha\pi n}
\]
Now the ambiguity is both on the direction and on the
frequency value: if we try to infer the original frequency
from the sampled sinusoid from~(\ref{InvFreqCD}), we cannot
discriminate between $f_0 = (1+\alpha)F_N$ or $f_0 = -\alpha
F_N$. Matters get even worse if $f_0 > F_s$. Suppose we can
write $f_0 = F_s + f_b$ with $f_b < F_s/2$; we have
\[
x[n] = e^{j(2\pi F_sT_s + 2\pi f_bT_s) n}
= e^{j(2\pi + \omega_b) n}
= e^{j\omega_b n}
\]
so that the sinusoid is completely indistinguishable from a
sinusoid of frequency $f_b$ sampled at $F_s$; the fact that
two continuous-time frequencies are mapped to the same
discrete-time frequency is called \emph{aliasing}. An
example of aliasing is depicted in Figure~\ref{AliasingFig}.
In general, because of the $2\pi$-periodicity of the
discrete-time complex exponential, we can always write
\[
\omega_b = (2\pi f_0 T_s) + 2k\pi
\]
%%9.8%%%
\begin{figure}[h]
\vspace{2mm}
\center
\begin{psCTplot}[dx=1,dy=0.5]{0}{3}{-1.2}{1.2}
\psCTplotFunction[plotstyle=dots,dotsize=3pt,%dotstyle=oplus,
linewidth=1pt,plotpoints=25]%
{x 52.78 mul RadtoDeg cos}
\psCTplotFunction[linewidth=0.8pt]{x 52.78 mul RadtoDeg cos}
\end{psCTplot}
\vskip-4mm
\caption{Example of aliasing: a sinusoid at 8400~Hz, $x(t)
= \cos(2\pi \cdot 8400 t)$ (solid line), is sampled at $F_s
= 8000$~Hz. The sampled values (dots) are
indistinguishable from those of a 400~Hz sinusoid sampled
at $F_s$.}\label{AliasingFig}
\end{figure}
\eject
%%%%%%%%%%%%
\noindent
and choose $k \in \mathbb{Z}$ so that $\omega_b$ falls in
the $[-\pi, \pi]$ interval. Seen the other way, all
continuous-time frequencies of the form
\[
f = f_b + kF_s
\]
with $f_b < F_N$ are aliased to the same discrete-time
frequency $\omega_b$.
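\medskip
The sinusoidal aliasing above is immediate to verify
numerically; the following sketch reproduces the situation of
Figure~\ref{AliasingFig}.
\begin{verbatim}
import numpy as np

Fs = 8000.0
Ts = 1.0 / Fs
n  = np.arange(25)
x1 = np.cos(2 * np.pi * 8400 * n * Ts)  # 8400 Hz sampled at 8000 Hz
x2 = np.cos(2 * np.pi * 400  * n * Ts)  #  400 Hz sampled at 8000 Hz
print(np.max(np.abs(x1 - x2)))          # zero up to rounding error
\end{verbatim}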
Consider now the signal $y(t) = Ae ^{j2\pi f_b t} + Be
^{j2\pi(f_b + F_s) t}$, with $f_b < F_N$. If we sample this
signal with sampling frequency $F_s$ we obtain:
\begin{align*}
y[n] & = A\, e ^{j2\pi(f_b/F_s)n} + B\, e ^{j2\pi(f_b/F_s+1) n}\\
& = A\, e ^{j\omega_b n} + B\, e ^{j\omega_b n}e^{j2\pi n} \\
&= (A+B) \,e ^{j\omega_b n}
\end{align*}
In other words, two continuous-time exponentials which are
$F_s$~Hz apart give rise to a single discrete-time
complex exponential whose amplitude is equal to the sum of
the amplitudes of the two original sinusoids.
\medskip
\itempar{Energy Folding of the Fourier Transform.}
To understand what happens to a general signal, consider the
interpretation of the Fourier transform as a bank of
(infinitely many) complex oscillators initialized with phase
and amplitude, each contributing to the energy content of
the signal at their respective frequency. Since, in the
sampled version, any two frequencies $F_s$ apart are
indistinguishable, their contributions to the discrete-time
Fourier transform of the sampled signal add up. This
aliasing can be represented as a spectral
\emph{superposition}: the continuous-time spectrum above
$F_N$ is cut, shifted back to $-F_N$, summed over $[-F_N,
F_N]$, and the process is repeated again and again; the
same applies
for the spectrum below $-F_N$. This process is nothing but
the familiar periodization of a signal:
\[
\sum_{k = -\infty}^{\infty} X(j2\pi f + j2k\pi F_s)
\]
as we will prove formally in the next Section.
\medskip
\subsection{Aliasing: Proof}
In the following, we consider the relationship between
the DTFT of a sampled signal $x[n]$ and the FT of the
originating continuous-time signal $x_c(t)$. For clarity, we
add the subscript ``$c$'' to all continuous-time
quantities so that, for instance, we write
$x[n] = x_c(nT_s)$.
Moreover, we use the usual tilde notation for
periodic functions.
Consider $X(e^{j\omega})$, the DTFT of the sampled sequence
(with, as usual, $T_s = (1/F_s) = (\pi/\Omega_N)$). The
inversion formula states:
\begin{equation}\label{PSFstart}
x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi}
X(e^{j\omega}) \, e^{j\omega n} \, d\omega
\end{equation}
%\eject\noindent
We will use this result later.
We can also arrive at an expression for $x[n]$ from
$X_c(j\Omega)$, the Fourier transform of the continuous-time
function $x_c(t)$; indeed, by writing the formula for the
inverse continuous-time Fourier transform computed in $nT_s$
we can state that:
\begin{equation}
x[n] = x_c(nT_s)
= \frac{1}{2\pi} \int_{-\infty}^{\infty} X_c(j\Omega)
\, e^{j\Omega\, nT_s} \, d\Omega
\end{equation}
The idea is to split the integration interval in the above
expression as the sum of non-overlapping intervals whose
width is equal to the sampling bandwidth $\Omega_s =
2\Omega_N$; this stems from the realization that, in the
inversion process, all frequencies $\Omega_s$ apart
give indistinguishable contributions to the discrete-time
spectrum. We have
\begin{equation*}
x[n] = \frac{1}{2\pi}
\sum_{k = -\infty}^{\infty}
\int_{(2k-1)\Omega_N}^{(2k+1)\Omega_N} X_c(j\Omega)
\, e^{j\Omega\, nT_s}
\, d\Omega
\end{equation*}
which, by exploiting the $\Omega_s$-periodicity of $e^{j\Omega\, nT_s}$
(i.e.\ $e^{j(\Omega + k\Omega_s) nT_s} = e^{j\Omega nT_s}$), becomes
\begin{equation}\label{PSFpass1}
x[n] = \frac{1}{2\pi} \sum_{k = -\infty}^{\infty}
\int_{-\Omega_N}^{\Omega_N} X_c(j\Omega - jk\Omega_s)
\, e^{j\Omega\, nT_s}\, d\Omega
\end{equation}
Now we interchange the order of integration and summation
(this can be done under fairly broad conditions on
$x_c(t)$):
\begin{equation}\label{PSFpass2}
x[n] = \frac{1}{2\pi} \int_{-\Omega_N}^{\Omega_N}
\left\{ \sum_{k = -\infty}^{\infty} X_c(j\Omega -
jk\Omega_s)\right\}
e^{j\Omega\, nT_s} \, d\Omega
\end{equation}
and if we define the periodized function
\[
\tilde{X}_c(j\Omega) = \sum_{k = -\infty}^{\infty} X_c(j\Omega - jk\Omega_s)
\]
we can write
\begin{equation}\label{PSFpass3}
x[n] =
\frac{1}{2\pi} \int_{-\Omega_N}^{\Omega_N} \tilde{X}_c(j\Omega)
\, e^{j\Omega\, nT_s} \, d\Omega
\end{equation}
after which, finally,
we apply the change of variable $\theta = \Omega T_s$:
\begin{equation}\label{PSFpass4}
x[n] =
\frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{1}{T_s}
\, \tilde{X}_c \left(j\frac{\theta}{T_s} \right)
e^{j\theta n}\, d\theta
\end{equation}
It is easy to verify that
$\tilde{X}_c \bigl( j(\theta/T_s) \bigr)$
is now $2\pi$-periodic in $\theta$.
If we now compare~(\ref{PSFpass4}) to (\ref{PSFstart}),
we can easily see that~(\ref{PSFpass4}) is nothing but the
DTFT inversion formula for the $2\pi$-periodic function
$(1/T_s)\, \tilde{X}_c(j\theta/T_s)$; since the inversion
formulas (\ref{PSFpass4}) and (\ref{PSFstart}) yield the
same result (namely, $x[n]$) we can conclude that:
\begin{equation}\label{aliasedCTspecEq}
X (e^{j\omega}) = \frac{1}{T_s}
\sum_{k = -\infty}^{\infty} X_c
\left( j\frac{\omega}{T_s} - j\frac{2\pi k}{T_s} \right)
\end{equation}
which is the relationship between the Fourier transform of a
continuous-time function and the DTFT of its sampled
version, with $T_s$ being the sampling period. The above
result is a particular version of a more general result in
Fourier theory called the \emph{Poisson sum formula}. In
particular, when $x_c(t)$ is $\Omega_N$-bandlimited, the
copies in the periodized spectrum do not overlap and the
(periodic) discrete-time spectrum between $-\pi$ and $\pi$
is simply\index{aliasing|mie}
\begin{equation}\label{noAliasingSpecEq}
X(e^{j\omega}) = \frac{1}{T_s}\, X_c
\left(j\frac{\omega}{T_s} \right)
\end{equation}
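\medskip
Relation~(\ref{aliasedCTspecEq}) can be verified numerically on
a signal whose FT is known in closed form; the sketch below uses
the Gaussian $x_c(t) = e^{-t^2/2}$, for which
$X_c(j\Omega) = \sqrt{2\pi}\, e^{-\Omega^2/2}$, and compares a
truncated DTFT of the samples with the truncated periodization
(the truncation limits are our own choices).
\begin{verbatim}
import numpy as np

Ts = 1.0
w  = np.linspace(-np.pi, np.pi, 101)
n  = np.arange(-50, 51)
k  = np.arange(-5, 6)

# left side: DTFT of the samples x[n] = exp(-(n Ts)^2 / 2)
lhs = np.array([np.sum(np.exp(-(n * Ts)**2 / 2)
                       * np.exp(-1j * wk * n)) for wk in w])
# right side: (1/Ts) sum_k X_c(j(w - 2 pi k)/Ts)
rhs = np.array([np.sum(np.sqrt(2 * np.pi)
                       * np.exp(-((wk - 2 * np.pi * k) / Ts)**2 / 2))
                / Ts for wk in w])
print(np.max(np.abs(lhs - rhs)))  # essentially zero
\end{verbatim}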
\subsection{Aliasing: Examples}
Figures \ref{SamplingExFig1} to \ref{SamplingExFig4}
illustrate several examples of the relationship between the
continuous-time spectrum and the discrete-time spectrum. For
all figures, the top panel shows the continuous-time
spectrum $X_c(j\Omega)$, with labels indicating the Nyquist
frequency and the sampling frequency. The middle panel shows
the periodized function $\tilde{X}_c(j\Omega)$; the individual
copies are plotted with a dashed line (they are not visible
if there is no overlap) and their sum is plotted in gray,
with the main period highlighted in black. Finally, the last
panel shows the DTFT after sampling over the $[-\pi,\pi]$
interval. %9.9-12
\smallskip
\itempar{Oversampling.}\index{oversampling}
Figure~\ref{SamplingExFig1}
%9.9
shows the result of sampling a
bandlimited signal with a sampling frequency in excess of
the minimum (in this case, $\Omega_N = 3 \Omega_{\max}/2$);
in this case we say that the signal has been
\emph{oversampled}. The result is that, in the periodized
spectrum, the copies do not overlap and the discrete-time
spectrum is just a scaled version of the original spectrum
(with a support even narrower than the full $[-\pi, \pi]$
range because of the oversampling; in this case
$\omega_{\max} = 2\pi / 3$).
%%%%% Parabolic shape
\def\specFun{abs \specHW div dup mul 1 exch sub }
\def\oneShotSpec{3.14 div dup dup -\specHW lt {pop pop 0} {%
\specHW gt {pop 0} {\specFun } ifelse} ifelse }
\def\aliasedSpec{%
x -18.84 sub \oneShotSpec %
x -12.56 sub \oneShotSpec %
x -6.28 sub \oneShotSpec %
x \oneShotSpec %
x 6.28 sub \oneShotSpec %
x 12.56 sub \oneShotSpec %
x 18.84 sub \oneShotSpec %
add add add add add add}
\def\doOmegaLabels#1{%
\psset{xunit=3.14}
\uput[180]{90}(-5,0.6){#1}
\selectPlotFont
\psticklab{0}{0}
\psticklab{1}{$\Omega_N$} \psticklab{-1}{$-\Omega_N$}
\psticklab{2}{$\Omega_s$} \psticklab{-2}{$-\Omega_s$}
\psticklab{4}{$2\Omega_s$} \psticklab{-4}{$-2\Omega_s$}}
%%%%%%
%%%%9.9 %%%%%%%%%%
\begin{figure}[H]
\center\small
\def\specHW{0.666 }
\noindent\kern-6mm\begin{tabular}{c}
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec}
\doOmegaLabels{$X(j\Omega)$}
\end{psDFplot}
\\
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt,linecolor=gray]{\aliasedSpec }
\psDFplotFunction[linewidth=1.2pt,fmin=-3.14,fmax=3.14]{x \oneShotSpec }
\doOmegaLabels{$\tilde{X}_c(j\Omega)$}
\end{psDFplot}
\\
\\
\begin{psDFplot}[dy=1,pifrac=3]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec }
\psset{xunit=3.14}
\uput[180]{90}(-1,0.5){$X(e^{j\omega})$}
\end{psDFplot}
\end{tabular}
\vskip-5mm
\caption{Sampling of a bandlimited signal with
$\Omega_N > \Omega_{\max}$ (in this case
$\Omega_{\max} = 2\Omega_N /3$).
Original continuous-time spectrum $X_c(j\Omega)$
(top panel); periodized spectrum $\tilde{X}_c(j\Omega)$
(middle panel, with repetitions in gray); discrete-time
spectrum $X(e^{j\omega})$ over the $[-\pi, \pi]$ interval
(bottom panel).}\label{SamplingExFig1}
\vskip-3mm
\end{figure}
\itempar{Critical Sampling.}\index{critical sampling}
Figure~\ref{SamplingExFig2}
%9.10
shows the result of sampling a bandlimited signal with a
sampling frequency exactly equal to twice the maximum
frequency; in this case we say that the signal has been
\emph{critically sampled}. In the periodized spectrum,
again the copies do not overlap and the discrete-time spectrum
is a scaled version of the original spectrum occupying the
whole $[-\pi, \pi]$ range.
\eject
%%9.10 %%
\begin{figure}[H]
\center\small
\def\specHW{1 }
\noindent\kern-6mm\begin{tabular}{c}
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec}
\doOmegaLabels{$X(j\Omega)$}
\end{psDFplot}
\\
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt,linecolor=gray]{\aliasedSpec }
\psDFplotFunction[fmin=-3.14,fmax=3.14,linewidth=1.2pt]{x \oneShotSpec }
\doOmegaLabels{$\tilde{X}_c(j\Omega)$}
\end{psDFplot}
\\
\\
\begin{psDFplot}[dy=1,pifrac=3]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec }
\psset{xunit=3.14}
\uput[180]{90}(-1,0.5){$X(e^{j\omega})$}
\end{psDFplot}
\end{tabular}
\vskip-5mm
\caption{Sampling of a bandlimited signal with
$\Omega_{\max} = \Omega_N$.}\label{SamplingExFig2}
\end{figure}
%%%%%%%%%%%%%
%\endinput
\itempar{Undersampling (Aliasing).}
Figure~\ref{SamplingExFig3}
%9.11
shows the result of sampling a bandlimited signal with a
sampling frequency less than twice the maximum frequency.
In this case, the copies do overlap in the periodized spectrum and the
resulting discrete-time spectrum is an aliased version of
the original; the original spectrum can\emph{not} be
reconstructed from the sampled signal. Note, in particular,
that the original lowpass shape is now a highpass shape in
the sampled domain (energy at $\omega=\pi$ is larger than at
$\omega=0$).
%%9.11 %%%%%
\begin{figure}[H]
\center\small
\def\specHW{1.5 }
\noindent\kern-6mm\begin{tabular}{c}
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec }
\doOmegaLabels{$X(j\Omega)$}
\end{psDFplot}
\\[2mm]
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\multido{\n=-18.84+6.28}{7}{%
\psDFplotFunction[linewidth=1.2pt,linecolor=lightgray,linestyle=dashed]%
{x \n 0 sub \oneShotSpec }}
\psDFplotFunction[linecolor=gray,linewidth=1.2pt]{\aliasedSpec }
\psDFplotFunction[fmin=-3.14,fmax=3.14,linewidth=1.2pt]{\aliasedSpec }
\doOmegaLabels{$\tilde{X}_c(j\Omega)$}
\end{psDFplot}
\\
\\
\begin{psDFplot}[dy=1,pifrac=3]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{\aliasedSpec }
\psset{xunit=3.14}
\uput[180]{90}(-1,0.5){$X(e^{j\omega})$}
\end{psDFplot}
\end{tabular}
\vskip-5mm
\caption{Sampling of a bandlimited signal with
$\Omega_{\max} > \Omega_N$ (in this case $\Omega_{\max} =
3\Omega_N /2$). Original continuous-time spectrum
$X_c(j\Omega)$ (top panel); periodized spectrum
$\tilde{X}_c(j\Omega)$ (middle panel, with overlapping
repetitions dashed gray); aliased discrete-time spectrum
(bottom panel).}\label{SamplingExFig3}
\end{figure}
\itempar{Sampling of Non-Bandlimited Signals.}
Finally, Figure~\ref{SamplingExFig4}
%9.12
shows the result of sampling a
non-bandlimited signal with a sampling frequency\linebreak
which is
chosen as a tradeoff between aliasing and the number of samples
per second. The idea is to disregard the low-energy ``tails'' of
the original spectrum so that their alias does not
exceedingly corrupt the discrete-time spectrum. In
the periodized spectrum, the copies do overlap and the
resulting discrete-time spectrum is an aliased version of
the original; while the two spectra are similar, the
original cannot be exactly reconstructed from the sampled
signal. In a practical sampling scenario, the correct design
choice would have been to lowpass filter the original signal
(in the continuous-time domain) so as to eliminate the
spectral tails beyond ${}\pm\Omega_N$.\index{aliasing|)}
%%9.12 %%%%%%%%
\begin{figure}[h]
\center\small
\def\oneShotSpec{abs dup mul 0.5 mul 1 add 1 exch div }
\noindent\kern-6mm\begin{tabular}{c}
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{x \oneShotSpec }
\doOmegaLabels{$X_c(j\Omega)$}
\end{psDFplot}
\\[2mm]
\begin{psDFplot}[dy=1,reps=5,labelx=false]{0}{1.2}
\multido{\n=-18.84+6.28}{7}{%
\psDFplotFunction[linewidth=1.2pt,linecolor=lightgray,linestyle=dashed]%
{x \n 0 sub \oneShotSpec }}
\psDFplotFunction[linecolor=gray,linewidth=1.2pt]{\aliasedSpec }
\psDFplotFunction[fmin=-3.14,fmax=3.14,linewidth=1.2pt]{\aliasedSpec }
\doOmegaLabels{$\tilde{X}_c(j\Omega)$}
\end{psDFplot}
\\
\\
\begin{psDFplot}[dy=1,pifrac=3]{0}{1.2}
\psDFplotFunction[linewidth=1.2pt]{\aliasedSpec }
\psset{xunit=3.14}
\uput[180]{90}(-1,0.5){$X(e^{j\omega})$}
\end{psDFplot}
\end{tabular}
\vskip-5mm
\caption{Sampling of a non-bandlimited signal.}
\label{SamplingExFig4}
\end{figure}
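\medskip
The anti-alias prefiltering strategy just described is easy
to simulate; in the following Python sketch (a simulation
only: ``continuous time'' is approximated by a very fine
grid, and all rates and filter parameters are arbitrary
choices) a non-bandlimited signal is lowpass filtered at the
Nyquist frequency before being subsampled:
\begin{verbatim}
import numpy as np
from scipy import signal

fs_fine = 1000000   # fine grid, stands in for continuous time
Fs = 8000           # target sampling rate
t = np.arange(-0.05, 0.05, 1.0 / fs_fine)
x = np.exp(-2000.0 * np.abs(t))   # non-bandlimited signal

# "continuous-time" anti-alias lowpass with cutoff Fs/2
sos = signal.butter(8, (Fs / 2) / (fs_fine / 2), output='sos')
x_aa = signal.sosfiltfilt(sos, x)

step = fs_fine // Fs
x_naive = x[::step]      # direct sampling: tails fold back
x_clean = x_aa[::step]   # prefiltered: tails removed first
\end{verbatim}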
%%%%%%%%%%%
\section{Discrete-Time Processing of Analog Signals}\label{dtProcCtSig}
\noindent
Sampling and interpolation (or, more precisely, the A/D and
D/A conversion strategies which we will see in the next
Chapter) represent the entry and exit points of the powerful
processing paradigm for which discrete-time signal
processing is ``famous''. Samplers and interpolators are the
only interfaces with the physical world, while all the
processing and analysis are performed in the abstract,
dimensionless and timeless world of a general-purpose
microprocessor.
The generic setup of a real-world processing device is as
shown in Figure~\ref{DTSofCTS}.
%9.13
In most cases, the sampler's
and the interpolator's frequencies are the same, and they
are chosen as a function of the bandwidth of the class of
signals for which the device is conceived; let us assume
that the input is bandlimited to $\Omega_N = \pi/T_s$ (or to
$F_s/2$, if we reason in hertz). For the case in which the
processing block is a linear filter $H(z)$, the overall
processing chain implements an analog transfer function;
from the relations:
\begin{align}
X(e^{j\omega}) &= \frac{1}{T_s}\, X_c \left(j\frac{\omega}{T_s} \right) \\
Y(e^{j\omega}) &= H(e^{j\omega}) \, X(e^{j\omega}) \\
Y_c(j\Omega) &= T_s Y(e^{j\Omega T_s})
\end{align}
we have
\begin{equation}\label{dtpctsEq}
Y_c(j\Omega) = H(e^{j\Omega T_s})\, X_c(j\Omega)
\end{equation}
So, for instance, if $H(z)$ is a lowpass filter with cutoff
frequency $\pi/3$, the processing chain in
Figure~\ref{DTSofCTS} implements the transfer function of an
analog lowpass filter with cutoff frequency $\Omega_N / 3$
(or, in hertz, $F_s/6$).
\medskip
%9.13 %%%%%%%%%%
\begin{figure}[h]
\vskip3mm
\center\small
\begin{psdrawBDmatrix}{1.3}{0.2}
$x_c(t)$~
&
\raisebox{-.5em}{\psframebox[linewidth=\BDwidth]{%
\psset{xunit=0.5em,yunit=0.5em,linewidth=1pt}%
\pspicture(-3,-1.8)(2,1.8)%
\psline(-2.8,0)(-1.6,0)(1.2,1.4)
\psline(1.1,0)(1.8,0)
\psarc[linewidth=0.6pt]{<-}(-1.6,0){1em}{-10}{55}
\endpspicture}}
&
\BDfilter{$H(z)$}
&
\BDfilter{$I(t)$}
&
$y_c(t)$ \\
& $1/T_s$ & & $1/T_s$
\psset{linewidth=1pt}
\ncline{->}{1,1}{1,2}
\ncline{1,2}{1,3}^{$x[n]$}
\ncline{->}{1,3}{1,4}^{$y[n]$}
\ncline{->}{1,4}{1,5}
\end{psdrawBDmatrix}
\vskip-3mm
\caption{Discrete-time processing of analog signals.}\label{DTSofCTS}
\end{figure}
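\medskip
The lowpass example above can be checked numerically; in the
following Python sketch (sampling rate, filter length and
test frequencies are arbitrary choices) a discrete-time FIR
lowpass with cutoff $\pi/3$, applied to samples taken at
$F_s = 6$~kHz, passes a $500$~Hz component and suppresses a
$2$~kHz one, exactly as an analog lowpass with cutoff
$F_s/6 = 1$~kHz would:
\begin{verbatim}
import numpy as np
from scipy import signal

Fs = 6000.0   # sampling rate; analog cutoff = Fs/6 = 1 kHz
n = np.arange(8192)
x1 = np.cos(2 * np.pi * 500 * n / Fs)    # in the passband
x2 = np.cos(2 * np.pi * 2000 * n / Fs)   # in the stopband

# FIR lowpass, cutoff pi/3 (firwin maps pi to 1)
h = signal.firwin(255, 1.0 / 3.0)
y1 = signal.lfilter(h, 1.0, x1)
y2 = signal.lfilter(h, 1.0, x2)

print("500 Hz amplitude: %.3f" % np.abs(y1[1000:]).max())
print("2 kHz amplitude:  %.3f" % np.abs(y2[1000:]).max())
\end{verbatim}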
%%%
\medskip
\subsection{A Digital Differentiator}\index{differentiator!exact}
In Section~\ref{elopsSec} we introduced an approximation to
the differentiator in discrete time as the first-order
difference between neighboring samples. The processing
paradigm that we have just introduced will now allow us to find the
exact differentiator for a discrete-time signal. We start
from a classical result of Fourier theory stating that,
under broad conditions, if $x(t)$ is a differentiable
function and if $x(t) \stackrel{\textrm{FT}}{\longleftrightarrow}
X(j\Omega)$, then
\[
x'(t) \; \stackrel{\textrm{FT}}{\longleftrightarrow}\;
j\Omega \, X(j\Omega)
\]
This is actually easy to prove by applying integration by
parts to the expression for the Fourier transform of $x'(t)$.
Now, if we set $H(e^{j\omega}) = j\omega$
in~(\ref{dtpctsEq}), it is immediate to verify that the
system in Figure~\ref{DTSofCTS} implements exactly the
transfer function of a continuous-time differentiator; in
other words, we can safely state that
\[
y[n] = x'[n]
\]
where the derivative is clearly interpreted as the
derivative of the underlying continuous-time signal. From
the frequency response of the digital differentiator, it is
easy to determine its impulse response via an inverse DTFT
(and an integration by parts); we have
\[
h[n] = %\left\{
\begin{cases}
0 & \mbox{ for $n=0$} \\
\displaystyle \frac{(-1)^n}{n} & \mbox{ otherwise}
\end{cases} % \right.
\]
From its infinite, two-sided impulse response (a portion of
which is shown in Figure~\ref{IdealDiffIR}), it is readily
seen that the digital differentiator is an ideal filter;
good FIR approximations can be obtained using the
Parks-McClellan algorithm.
%%9.14%
\begin{figure}[h]
\center\small
\begin{psDTplotb}[dx=10,dy=0.5]{-15}{15}{-1.2}{1.2}
\psDTplotSignal{x 0 eq {0} {x 180 mul cos x div} ifelse }
\end{psDTplotb}
\vskip-5mm
\caption{Impulse response (portion) of the ideal
discrete-time differentiator.}\label{IdealDiffIR}
\end{figure}
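\medskip
A crude FIR approximation (simpler than a Parks-McClellan
design) can be obtained by truncating and windowing the
two-sided impulse response; the following Python sketch
(filter length, window and test frequency are arbitrary
choices) checks that, with time measured in sampling
intervals, the input $\sin(\omega_0 n)$ produces
approximately $\omega_0\cos(\omega_0 n)$:
\begin{verbatim}
import numpy as np

M = 31   # half-length of the truncated filter
n = np.arange(-M, M + 1)
h = np.zeros(2 * M + 1)
nz = n != 0
h[nz] = (-1.0) ** np.abs(n[nz]) / n[nz]   # h[n] = (-1)^n / n
h *= np.hamming(2 * M + 1)   # taper the truncation

omega0 = 0.4   # test frequency (rad/sample)
m = np.arange(400)
x = np.sin(omega0 * m)
y = np.convolve(x, h, mode='same')

err = np.abs(y[M:-M] - omega0 * np.cos(omega0 * m[M:-M])).max()
print("max error: %.2e" % err)   # small, away from the borders
\end{verbatim}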
%%%%%%%%%%%%%%%%
\subsection{Fractional Delays}\label{fracDelSec}%
\index{fractional delay|mie}\index{delay!fractional|mie}
We know that a discrete-time delay of $D$ samples is a
linear system with transfer function $H_d(z) = z^{-D}$ (or,
alternatively, $H_d(e^{j\omega}) = e^{-j\omega D}$). If $D$
is {\em not} an integer, the transfer function is still
formally valid and we have stated that it implements a
so-called {\em fractional delay,} i.e.\ a delay which lies
``in between'' two integer delays. The processing paradigm
above now allows us to understand exactly the inner workings
of a fractional delay. Assume $D \in \mathbb{R}$ and
$0 < D < 1$; for all other values of $D$ we can always split
the delay into an integer part (which poses no problems) and
a fractional part between zero and one. If we use
$H(z) = z^{-D}$ in~(\ref{dtpctsEq}) we have
\[
Y_c(j\Omega) = e^{-j\Omega D T_s} X_c(j\Omega)
\]
which in the time domain becomes
\[
y_c(t) = x_c(t - DT_s)
\]
that is, the output continuous-time signal is just the
input continuous-time signal delayed by a fraction $D$ of
the sampling interval $T_s$. In discrete time, therefore, we
have
\[
y[n] = x_c(nT_s - DT_s)
\]
so that the action of a fractional delay is to ``resample''
the signal at points lying between the original sampling
instants. We can also appreciate this by considering the
impulse response; it is immediate to show that
\[
h_d[n] = \sinc(n - D)
\]
Now, if we write out the convolution sum explicitly, we have
\begin{align*}
y[n] &= x[n] \ast h_d[n] \\
&= \sum_{k=-\infty}^{\infty} x[k] \,\sinc(n - D - k) \\
&= \sum_{k=-\infty}^{\infty} x[k] \,
\sinc\left(\frac{(nT - DT) - kT}{T}\right)
\end{align*}
which is valid for any $T$ and which clearly shows the
convolution sum as the interpolation
formula~(\ref{sincInterpoOne}) sampled at the instants
$t = nT - DT$.
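\medskip
The resampling interpretation suggests a direct numerical
implementation: truncate and window $h_d[n] = \sinc(n-D)$
and filter. The following Python sketch (delay, filter
length and test signal are arbitrary choices) compares the
output with the analytically delayed signal:
\begin{verbatim}
import numpy as np

D = 0.3   # fractional delay, 0 < D < 1
M = 32    # half-length of the truncated filter
n = np.arange(-M, M + 1)
h = np.sinc(n - D) * np.hamming(2 * M + 1)

m = np.arange(300)
x = np.cos(0.2 * np.pi * m)   # bandlimited test signal
y = np.convolve(x, h, mode='same')

ref = np.cos(0.2 * np.pi * (m - D))   # ideal delayed signal
print("max error: %.2e" % np.abs(y[M:-M] - ref[M:-M]).max())
\end{verbatim}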
%ch9c
\sectionsn{Examples}
\begin{example}{Another way to aliasing}
Consider a real function $f(t)$ for which the Fourier
transform is well defined:
\begin{equation}\label{exSampeq1}
F(j\Omega) = \int_{-\infty}^{\infty}f(t)\, e^{-j\Omega t}\, dt
\end{equation}
Suppose that we only possess a sampled version of $f(t)$,
that is, we only know the numeric values of $f(t)$ at
multiples of a sampling interval $T_s$, and that we want to
obtain an approximation of the Fourier transform above.
Assume we do not know about the DTFT; an intuitive (and
standard) place to start is to write out the Fourier
integral as a Riemann sum:
\begin{equation}\label{exSampeq2}
F(j\Omega) \approx \hat{F}(j\Omega) = \sum_{n=-\infty}^{\infty}
T_s f(nT_s) \, e^{-j T_s n \Omega }
\end{equation}
indeed, this expression only uses the known sampled values of $f(t)$.
In order to understand whether~(\ref{exSampeq2}) is a good
approximation consider the periodization of $F(j\Omega)$:
\begin{equation}\label{exSampeq3}
\tilde{F}(j\Omega) = \sum_{n=-\infty}^{\infty}
F \left( j \left( \Omega + \frac{2\pi}{T_s}\, n \right) \right)
\end{equation}
in which $F(j\Omega)$ is repeated, possibly with overlap,
with period $2\pi/T_s$. We will show that:
\[
\hat{F}(j\Omega)=\tilde{F}(j\Omega)
\]
that is, the Riemann approximation is equivalent to a
periodization \index{periodization} of the original Fourier
transform; in mathematics this is known as a particular form
of the
{\em Poisson sum formula}\index{Poisson sum formula}.
To see this, consider the periodic nature of
$\tilde{F}(j\Omega)$ and remember that any periodic function
$f(t)$ of period $L$ admits a
\emph{Fourier series}\index{Fourier series} expansion:
\begin{equation}\label{fseEx}
f(t) = \sum_{n=-\infty}^{\infty} A_n\, e^{j\frac{2\pi}{L}nt}
\end{equation}
where
\begin{equation}\label{fsecEx}
A_n = \frac{1}{L} \int_{-L/2}^{L/2}
f(t) \, e^{-j\frac{2\pi}{L}nt} \, dt
\end{equation}
Here's the trick: we regard $\tilde{F}(j\Omega)$ as an
anonymous periodic complex function and we compute its
Fourier series expansion coefficients. If we replace $L$ by
$2\pi / T_s$ in~(\ref{fsecEx}), with the frequency $\Omega$
playing the role of the time variable, we can write
\begin{align*}
A_n &= \frac{T_s}{2\pi} \int_{-\pi/T_s}^{\pi/T_s}
\tilde{F}(j\Omega) \, e^{-jnT_s \Omega}\, d\Omega \\
& =\frac{T_s}{2\pi} \int_{-\pi/T_s}^{\pi/T_s}
\sum_{k=-\infty}^{+\infty}
F\left( j \left(\Omega+\frac{2\pi}{T_s}\, k \right) \right)
e^{-j nT_s \Omega} \, d\Omega
\end{align*}
By interchanging integral and summation, which we can do if
the Fourier transform~(\ref{exSampeq1}) is well defined, we
obtain
\begin{equation*}
A_n =\frac{T_s}{2\pi}
\sum_k \int_{-\pi/T_s}^{\pi/T_s}
F\left(j \left(\Omega+\frac{2\pi}{T_s}\, k\right) \right)
e^{-j nT_s \Omega} \, d\Omega
\end{equation*}
and, with the change of variable
$\Omega' = \Omega + (2\pi/T_s)\, k$,
\begin{equation*}
A_n =
\frac{T_s}{2\pi} \sum_k \int_{(2k-1)\pi/T_s}^{(2k+1)\pi/T_s}
F \bigl(j\Omega' \bigr)
\, e^{-j nT_s \Omega'} \, e^{j T_s \frac{2\pi}{T_s}nk}
\,d\Omega' %\\
\end{equation*}
where $e^{jT_s\frac{2\pi}{T_s}nk} = e^{j2\pi nk} = 1$. The
integrals in the sum are over contiguous, non-overlapping
intervals and therefore merge into a single integral over
the whole real line:
\begin{align*}
A_n & = T_s\frac{1}{2\pi}
\int_{-\infty}^{+\infty}
F\bigl(j\Omega'\bigr) \, e^{-j nT_s \Omega'} \, d\Omega' \\
& = T_s\, f (-nT_s)
\end{align*}
so that by replacing the values for all the $A_n$
in~(\ref{fseEx}) we obtain
$\tilde{F}(j\Omega) = \hat{F}(j\Omega)$.
What we just found is another derivation of the
aliasing\index{aliasing} formula. Intuitively, there is a
duality between the time domain and the frequency domain in
that a discretization of the time domain leads to a
periodization of the frequency domain; similarly, a
discretization of the frequency domain leads to a
periodization of the time domain (think of the DFS and see
also Exercise~\ref{aliasTimeEx}).
\end{example}
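The identity $\hat{F}(j\Omega) = \tilde{F}(j\Omega)$ can
also be verified numerically for a signal whose Fourier
transform is known in closed form, such as the Gaussian
$f(t) = e^{-t^2/2}$, for which
$F(j\Omega) = \sqrt{2\pi}\, e^{-\Omega^2/2}$. In the
following Python sketch (the truncation limits are arbitrary
but safe, given the fast decay of the Gaussian) the Riemann
sum and the periodization agree to machine precision:
\begin{verbatim}
import numpy as np

Ts = 1.0
n = np.arange(-50, 51)   # f(t) is negligible beyond |t| = 50
W = np.linspace(-np.pi / Ts, np.pi / Ts, 201)

# Riemann-sum approximation of the Fourier integral
F_hat = (Ts * np.exp(-(n * Ts) ** 2 / 2) *
         np.exp(-1j * Ts * np.outer(W, n))).sum(axis=1)

# periodization of the true transform
k = np.arange(-10, 11)
F_tilde = (np.sqrt(2 * np.pi) *
           np.exp(-np.add.outer(W, 2 * np.pi * k / Ts) ** 2
                  / 2)).sum(axis=1)

print("max difference: %.2e" % np.abs(F_hat - F_tilde).max())
\end{verbatim}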
%% 2 %%%%%%%%%%%%
\begin{example}{Time-limited vs.\ bandlimited functions}\index{bandlimited signal}
The trick of periodizing a function and then computing its
Fourier series expansion comes in very handy in proving that
a function cannot be bandlimited and have finite support at
the same time. The proof is by contradiction and goes as
follows: assume that $f(t)$ has finite support,
\[
f(t) = 0, \qquad \quad \mbox{for } |t| > T_0
\]
and that it has a well-defined Fourier transform:
\[
F(j\Omega) = \int_{-\infty}^{\infty}f(t)
\, e^{-j\Omega t}\,dt =
\int_{-T_0}^{T_0}f(t) \, e^{-j\Omega t} \, dt
\]
and that it is {\em also} bandlimited so that:
\[
F(j\Omega) = 0, \qquad \quad \mbox{for } |\Omega| > \Omega_0
\]
Now consider the periodized\index{periodization} version:
\[
\tilde{f}(t) = \sum_{n=-\infty}^{\infty} f(t - 2nS)
\]
since $f(t) = 0$ for $|t| > T_0$, if we choose $S>T_0$ the
copies in the sum do not overlap, as shown in
Figure~\ref{tlvsblFig}. If we compute the Fourier series
expansion~(\ref{fsecEx}) for the $2S$-periodic
$\tilde{f}(t)$ we have
\begin{align*}
A_n &= \frac{1}{2S}\int_{-S}^{S}\tilde{f}(t)\, e^{-j(\pi/S)nt} \, dt \\
&= \frac{1}{2S}\int_{-T_0}^{T_0} f(t) \, e^{-j(\pi/S)nt} \, dt \\
&= F\left(j\frac{\pi}{S}\, n \right)
\end{align*}
Since we assumed that $f(t)$ is bandlimited, we have
\[
A_n = 0, \qquad \quad \mbox{for }
|n| > \left\lfloor \frac{\Omega_0 S}{\pi} \right\rfloor = N_0
\]
and therefore we can write the reconstruction
formula~(\ref{fseEx}):
\[
\tilde{f}(t) = \sum_{n = -N_0}^{N_0} A_n\, e^{j(\pi/S)nt}
\]
Now consider
\[
P(z) = \sum_{n = -N_0}^{N_0} A_n z^n
\]
which, up to a factor of $z^{N_0}$, is a complex-valued
polynomial of degree $2N_0$; obviously
$P\bigl(e^{j(\pi/S)t} \bigr) = \tilde{f}(t)$,
but we also know that $\tilde{f}(t)$ is identically zero over
the $[T_0\,,\, 2S-T_0]$ interval (Fig.~\ref{tlvsblFig}).
%9.16
Now, a finite-degree polynomial $P(z)$ has only a finite
number of roots\index{roots!of complex polynomial} and
therefore it cannot be identically zero over an interval
unless it is zero everywhere (see also
Example~\ref{impossIdealProof}). Hence, either $f(t) = 0$
everywhere or $f(t)$ cannot be both bandlimited and
time-limited.
%%9.15 %%%%%%%%%%
\def\spectrumShape{2 mul
dup abs 3.10 gt
{pop 0}
{0.18 mul RadtoDeg %
dup cos exch %
dup 3 mul cos 2 mul exch %
0 mul cos -0.7 mul %
add add 0.31 mul 0.017 add }
ifelse }
%\vskip3mm
\begin{figure}[H]
\center\small
\def\cf{0.7 }
\noindent\kern-5mm\begin{psCTplot}[dy=10,labelx=false,height=\MRspecHeight]{-10}{10}{0}{1.5}
\psCTplotFunction[linewidth=1.2pt,linecolor=gray,tmin=2 ]{x 7 sub \spectrumShape}
\psCTplotFunction[linewidth=1.2pt,linecolor=gray,tmax=-2]{x 7 add \spectrumShape}
\psCTplotFunction[linewidth=1.2pt]{x \spectrumShape}
\selectPlotFont
\psticklab{1.6}{$T_0$}
\psticklab{7}{$2S$}
\psticklab{-7}{$-2S$}
\psbrace[rot=-90,ref=tC,nodesepB=-11pt](5.4,.5)(1.6,.5){$\tilde{f}(t) = 0$}
\end{psCTplot}
\vskip-6mm
\caption{Finite support function $f(t)$ (black) and non-overlapping periodization (gray).}\label{tlvsblFig}
\end{figure}
\vskip-3mm
\end{example}
%%%%%%%%%%%% Appendix %%%%%%
\input{t/ch9d}
%%%%%%%%%%%%%%%
\sectionsn{Further Reading}
\input{t/fur9}
%%%%%%%%%%%%%%%%%%
\sectionsn{Exercises}
\begin{exercise}{Zero-order hold}
Consider a discrete-time sequence $x[n]$ with DTFT
$X(e^{j\omega})$. Next, consider the continuous-time interpolated
signal
\[
x_0(t) = \sum_{n=-\infty}^{\infty}x[n] \,\mbox{rect}\,(t-n)
\]
i.e.\ the signal interpolated with a zero-centered zero-order hold
and\linebreak $T = 1$~sec.
\begin{enumerate}
\item Express $X_0(j\Omega)$ (the spectrum of $x_0(t)$) in terms
of $X(e^{j\omega})$.
\item Compare $X_0(j\Omega)$ to $X(j\Omega)$.
We can look at $X(j\Omega)$ as the Fourier transform of the signal
obtained from the sinc interpolation of $x[n]$ (always with
$T=1$):
\[
x(t) = \sum_{n\in\mathbb{Z}} x[n]\, \textrm{sinc} (t-n)
\]
Comment on the result: you should point out two major
problems.
\end{enumerate}
So, as it appears, interpolating with a zero-order hold
introduces a distortion in the interpolated signal, with
respect to the sinc interpolation, in the region $-\pi \leq
\Omega \leq \pi$; furthermore, it leaves spurious spectral
content outside that region, so that the interpolated signal
is no longer bandlimited.
The signal $x(t)$ can be obtained from
the zero-order hold interpolation $x_0(t)$ as $x(t)=x_0(t)\ast
g(t)$ for some filter $g(t)$.
\begin{enumerate}
\setcounter{enumi}{2}
\item Sketch the frequency response of
$g(t)$.
\item Propose two solutions (one in the continuous-time
domain, and another in the discrete-time domain) to eliminate or
attenuate the distortion due to the zero-order hold. Discuss the
advantages and disadvantages of each.
\end{enumerate}
\end{exercise}
%exe2%%%%%%%%
\begin{exercise}{A bizarre interpolator}
Consider the local interpolation scheme of the previous
exercise but assume that the characteristic of the
interpolator is the following:
\[
I(t) = %\left\{
\begin{cases}
1 - 2|t| & \mbox{ for } |t| \leq 1/2 \\
0 & \mbox{ otherwise}
\end{cases} % \right.
\]
This is a triangular characteristic with the same support as
the zero-order hold. If we pick an interpolation interval
\pagebreak
$T_s$ and interpolate a given discrete-time signal $x[n]$
with $I(t)$, we obtain a continuous-time signal:
\[
x(t) = \sum_n x[n]\, I\! \left( \frac{t-nT_s}{T_s} \right)
\]
which looks like this:
\begin{figure}[H]
\vskip3mm
\begin{center}\small
\begin{psDTplot}[size=medium,%
dx=1,dy=0.5,sidegap=0.9,labelnudge=3ex]{0}{5}{-1.2}{1.2}
\psset{linewidth=1pt,linecolor=gray}
\psline(-.5, 0)(0, 1)(.5, 0)
\psline(.5, 0)(1, 0.7)(1.5, 0)
\psline(1.5, 0)(2, -0.2)(2.5, 0)
\psline(2.5, 0)(3, -1)(3.5, 0)
\psline(3.5, 0)(4, 0.5)(4.5, 0)
\psline(4.5, 0)(5, -0.5)(5.5, 0)
\psset{linecolor=black}
\psDTplotData{0 1 1 0.7 2 -0.2 3 -1 4 0.5 5 -0.5}%
\end{psDTplot}
\end{center}
\vskip-5mm
\end{figure}
Assume that the spectrum of $x[n]$ between $-\pi$ and $\pi$ is
\[
X(e^{j\omega}) =
% \left\{
\begin{cases}
1 & \mbox{ for } |\omega| \leq 2\pi/3 \\
0 & \mbox{ otherwise}
\end{cases} % \right.
\]
(with the obvious $2\pi$-periodicity over the entire frequency axis).
\begin{enumerate}
\item Compute and sketch the Fourier transform $I(j\Omega)$
of the interpolating function $I(t)$. (Recall that the
triangular function can be expressed as the convolution of
$\mbox{rect}(t/2)$ with itself).
\item Sketch the Fourier transform $X(j\Omega)$ of the
interpolated signal $x(t)$; in particular, clearly mark the
Nyquist frequency $\Omega_N = \pi/T_s$.
\item The use of $I(t)$ instead of a sinc interpolator
introduces two types of errors: briefly describe them.
\item To eliminate the error \emph{in the baseband} $[-
\Omega_N, \Omega_N]$ we can pre-filter the signal $x[n]$
with a filter $h[n]$ \emph{before} interpolating with
$I(t)$. Write the frequency response of the discrete-time
filter $H(e^{j\omega})$.
\end{enumerate}
\end{exercise}
\medskip \medskip
%%exe3%%%%%%%%%
\begin{exercise}{Another view of sampling}
One of the standard ways of describing the sampling
operation relies on the concept of ``modulation by a pulse
train''. Choose a sampling interval $T_s$ and define a
continuous-time pulse train $p(t)$ as
\[
p(t) = \sum_{k=-\infty}^{\infty} \delta (t-kT_s)
\]
The Fourier Transform of the pulse train is
\[
P(j\Omega) = \frac{2\pi}{T_s}
\sum_{k=-\infty}^{\infty}
\delta \! \left( \Omega- k \frac{2\pi}{T_s} \right)
\]
This is tricky to show, so just take the result as is. The
``sampled'' signal is simply the modulation of an arbitrary
continuous-time signal $x(t)$ by the pulse train:
\[
x_s(t) = p(t)\,x(t)
\]
Note that this sampled signal is still a continuous-time
signal but, by the properties of the delta function, it is
non-zero only at multiples of $T_s$; in a sense, $x_s(t)$ is
a discrete-time signal brutally embedded in the
continuous-time world.
Here is the question: derive the Fourier transform of $x_s(t)$ and
show that if $x(t)$ is bandlimited to $\pi/T_s$ then we can
reconstruct $x(t)$ from $x_s(t)$.
\end{exercise}
\medskip
%%exe4 %%%%%%%%%%%%%%
\def\point{!}\begin{exercise}{Aliasing can be good}
Consider a real, continuous-time signal $x_c(t)$ with the
following spectrum $X_c(j\Omega)$:
\begin{figure}[H]
\vskip3mm
\begin{center}\small
\begin{psDFplot}[size=medium,dy=1,reps=3,labelx=false]{0}{1.2}
\psset{xunit=3.14}
\psline[linewidth=1pt](-3,0)(-2,0)(-2,1)(-1,0)(1,0)(2,1)(2,0)(3,0)
\selectPlotFont
\psticklab{0}{0}
\psticklab{1}{$\Omega_0$} \psticklab{-1}{$-\Omega_0$}
\psticklab{2}{$2\Omega_0$} \psticklab{-2}{$-2\Omega_0$}
\end{psDFplot}
\end{center}
\end{figure}
\begin{enumerate}
\item What is the bandwidth of the signal? What is the minimum
sampling period in order to satisfy the sampling theorem?
\item Take a sampling period $T_s = \pi/\Omega_0$; clearly, with this
sampling period, there will be aliasing. Plot the DTFT of the
discrete-time signal $x_a[n] = x_c(nT_s)$.
\item Suggest a block diagram to reconstruct $x_c(t)$ from $x_a[n]$.
\item With such a scheme available, we can therefore exploit aliasing to reduce the
sampling frequency necessary to sample a bandpass signal. In
general, what is the minimum sampling frequency to be able to
reconstruct, with the above strategy, a real signal whose frequency
support on the positive axis is $[\Omega_0, \Omega_1]$ (with the
usual symmetry around zero, of course)?
\end{enumerate}
\end{exercise}
%%%exe5%%%
\begin{exercise}{Digital processing of continuous-time signals}
For your birthday, you receive an unexpected present: a
$4$~MHz A/D converter, complete with anti-aliasing filter.
This means you can safely sample signals up to a frequency
of $2$~MHz; since this frequency is above the AM radio
frequency band, you decide to hook up the A/D to your
favorite signal processing system and build an entirely
digital radio receiver. In this exercise we will explore how
to do so.
To simplify things a bit, assume that the AM radio spectrum
extends from $1$~MHz to $1.2$~MHz and that in this band you
have ten channels side by side, each one of which occupies
$20$~kHz.
\begin{enumerate}
\item Sketch the digital spectrum at the output of the A/D
converter, and show the bands occupied by the channels,
numbered from $1$ to $10$, with their beginning and end
frequencies.
\end{enumerate}
The first thing that you need to do is to find a way to isolate
the channel you want to listen to and to eliminate the rest.
For this, you need a bandpass filter centered on the band of
interest. Of course, this filter must be {\em tunable} in
the sense that you must be able to change its spectral
location when you want to change station. An easy way to
obtain a tunable bandpass filter is by modulating a lowpass
filter with a sinusoidal oscillator whose frequency is
controllable by the user:
\begin{enumerate}\setcounter{enumi}{1}
\item As an example of a tunable filter, assume $h[n]$ is an
ideal lowpass filter with cutoff frequency $\pi/8$. Plot the
magnitude response of the filter $h_m[n] = \cos(\omega_m
n)h[n]$, where $\omega_m = \pi/2$; $\omega_m$ is called the
{\em tuning frequency}.
\item Specify the cutoff frequency of a lowpass filter which
can be used to select one of the AM channels above.
\item Specify the tuning frequencies for channels~$1$, $5$,
and~$10$.
\end{enumerate}
Now that you know how to select a channel, all that is left
to do is to demodulate the signal and feed it to a D/A
converter and to a loudspeaker.
\begin{enumerate}\setcounter{enumi}{4}
\item Sketch the complete block diagram of the radio
receiver, from the antenna going into the A/D converter to
the final loudspeaker. Use only one sinusoidal oscillator.
Do not forget the filter before the D/A (specify its
bandwidth).
\end{enumerate}
The whole receiver now works at a rate of $4$~MHz; since it
outputs audio signals, this is clearly a waste.
\begin{enumerate}
\setcounter{enumi}{5}
\item What is the minimum D/A frequency you can use?
Modify the receiver's block diagram with the necessary
elements to use a low frequency D/A.
\end{enumerate}
\end{exercise}
%% exe5 %%%%%%%%%%%%%
\begin{exercise}{Acoustic aliasing}
Assume $x(t)$ is a continuous-time pure sinusoid at $10$~kHz.
It is sampled at $8$~kHz and then interpolated back to a
continuous-time signal with an interpolator at $8$~kHz. What
is the perceived frequency of the interpolated sinusoid?
\end{exercise}
% exe6 %%%%%%%%%%
\begin{exercise}{Interpolation subtleties}
We have seen that any discrete-time sequence can be
sinc-interpolated into a continuous-time signal which is
$\Omega_N$-bandlimited; $\Omega_N$ depends on the
interpolation interval $T_s$ via the relation $\Omega_N =
\pi/T_s$.
Consider the continuous-time signal $x_c(t) = e^{-t/T_s}$
and the discrete-time sequence $x[n] = e^{-n}$. Clearly,
$x_c(nT_s) = x[n]$; but, can we also say that $x_c(t)$ is
the signal we obtain if we apply sinc interpolation to the
sequence $x[n] = e^{-n}$ with interpolation interval $T_s$?
Explain in detail.
\end{exercise}
%% exe7 %%%%%%%%%%%%%
\begin{exercise}{Time and frequency}
Consider a real continuous-time signal $x(t)$. All you know
about the signal is that $x(t) = 0$ for $|t| > t_0$. Can you
determine a sampling frequency $F_s$ so that when you sample
$x(t)$, there is no aliasing? Explain.
\end{exercise}
%%% exe8 %%%%%%%%%
\def\point{?}\begin{exercise}{Aliasing in time}\label{aliasTimeEx}
Consider an $N$-periodic discrete-time signal
$\tilde{x}[n]$, with $N$ an \emph{even} number, and let
$\tilde{X}[k]$ be its DFS:
\[
\tilde{X}[k] = \sum_{n=0}^{N-1}\tilde{x}[n]
\, e^{-j\frac{2\pi}{N}nk}\, ,\qquad \quad k\in \mathbb{Z}
\]
Let $\tilde{Y}[m] = \tilde{X}[2m]$, i.e.\ a ``subsampled''
version of the DFS coefficients; clearly this defines a
$(N/2)$-periodic sequence of DFS coefficients. Now consider
the $(N/2)$-point inverse DFS of $\tilde{Y}[m]$ and call
this $(N/2)$-periodic signal $\tilde{y}[n]$:
\[
\tilde{y}[n] = \frac{2}{N}\sum_{k=0}^{N/2-1}\tilde{Y}[k]
\, e^{j\frac{2\pi}{N/2}nk},\qquad \quad n \in \mathbb{Z}
\]
Express $\tilde{y}[n]$ in terms of $\tilde{x}[n]$ and
describe in a few words what has happened to $\tilde{x}[n]$
and why.
\end{exercise}
\sectionsn{Appendix}
\subsection*{The Sinc Product Expansion Formula}
The goal is to prove the product expansion\index{sinc}
\begin{equation}\label{appSinExp}
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\end{equation}
We present two proofs; the first was proposed by Euler
in~1748 and, while it certainly lacks rigor by modern
standards, it has the irresistible charm of elegance and
simplicity in that it relies only on basic algebra. The
second proof is more rigorous, and is based on the theory of
Fourier series for periodic functions; relying on Fourier
theory, however, hides most of the convergence issues.
\itempar{Euler's Proof.}
Consider the $N$ roots of unity for $N$ odd. They comprise $z
= 1$ plus $N-1$ complex conjugate roots of the form $z =
e^{\pm j\omega_N k}$ for $k = 1, \ldots$,\linebreak
$ (N-1)/2$ and
$\omega_N = 2\pi/N$. If we group the complex conjugate roots
pairwise we can factor the polynomial $z^N-1$ as
\[
z^N-1 = (z-1)\prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2z\cos(\omega_N k) + 1 \bigr)
\]
The above expression can immediately be generalized to
\[
z^N-a^N = (z-a) \prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2az \cos(\omega_N k) + a^2 \bigr)
\]
Now replace $z$ and $a$ in the above formula by $z =
(1+x/N)$ and $a = (1-x/N)$; we obtain the following:
\begin{align*}
&\left(1+\frac{x}{N}\right)^{\!N} - \left(1-\frac{x}{N}\right)^{\!N} = \nonumber \\
&\qquad
= \frac{4x}{N} \prod_{k=1}^{(N-1)/2}
\left(1-\cos(\omega_N k) + \frac{x^2}{N^2} \,
\bigl(1+\cos(\omega_N k) \bigr)\right) \\
&\qquad =
\frac{4x}{N}\prod_{k=1}^{(N-1)/2}
\bigl( 1-\cos(\omega_N k) \bigr)
\left(1 + \frac{x^2}{N^2}\cdot\frac{1+\cos(\omega_N k)}{1-\cos(\omega_N k)}\right) \\
&\qquad = A\, x \prod_{k=1}^{(N-1)/2}
\left(1 + \frac{x^2 \bigl(1+\cos(\omega_N k) \bigr)}{N^2
\bigl(1-\cos(\omega_N k) \bigr)}\right)
\end{align*}
where $A$ is just the finite product
$(4/N)\prod_{k=1}^{(N-1)/2} \bigl(1-\cos(\omega_N k) \bigr)$.
The value $A$ is also the coefficient of the degree-one
term $x$ on the right-hand side, and it can easily be seen
from the expansion of the left-hand side that $A=2$ for all
$N$; actually, this is an application of Pascal's triangle
and it was proven by Pascal in the general case in 1654. As
$N$ grows larger we have that:
\[
\left(1 \pm \frac{x}{N}\right)^{\!N} \approx e^{\pm x}
\]
and
at the same time, if $N$ is large, then $\omega_N = 2\pi/N$
is small and, for small values of the angle, the cosine can
be approximated as
\[
\cos(\omega) \approx 1 - \frac{\omega^2}{2}
\]
so that the denominator in the general product term can, in
turn, be approximated as
\[
N^2
\left(1-\cos \left( \frac{ 2\pi }{ N} \, k \right) \right)
\approx N^2 \cdot \frac{4k^2\pi^2}{2N^2} = 2k^2 \pi^2
\]
By the same token, for large $N$, the numerator can be
approximated as $1+\cos((2\pi/N)k) \approx 2$ and therefore
(by bringing $A=2$ over to the left-hand side)
the above expansion becomes
\[
\frac{e^x - e^{-x}}{2}
= x \! \left( 1 + \frac{x^2}{\pi^2} \right)
\left(1 + \frac{x^2}{4\pi^2}\right) \left( 1 + \frac{x^2}{9\pi^2}\right)
\cdots
\]
Finally, we replace $x$ by $j\pi t$ to obtain:
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)
\]
\itempar{Rigorous Proof.}
Consider the Fourier series expansion of the \emph{even}
function $f(x) = \cos(\tau x)$ periodized over the interval
$[-\pi, \pi]$. We have
\[
f(x) = \frac{1}{2}a_0 + \sum_{n = 1}^{\infty} a_n \cos(nx)
\]
with
\begin{align*}
a_n & = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(\tau x)\cos(n x)\, dx \\
& = \frac{2}{\pi}\int_{0}^{\pi} \frac{1}{2}
\, \left( \cos \bigl( (\tau+n)x \bigr) + \cos \bigl((\tau-n)x \bigr)
\right) \, dx \\
& =
\frac{1}{\pi} \left( \frac{\sin \bigl((\tau+n)\pi \bigr)}{\tau + n}
+ \frac{\sin \bigl((\tau-n)\pi \bigr)}{\tau - n}\right) \\
& = \frac{2\sin (\tau\pi)}{\pi}\, \frac{(-1)^n \tau}{\tau^2 -n^2}
\end{align*}
so that
\[
\cos(\tau x) = \frac{2\tau \sin(\tau\pi)}{\pi}
\left( \frac{1}{2\tau^2} - \frac{\cos( x)}{\tau^2 - 1}
+ \frac{\cos(2x)}{\tau^2 - 2^2} -
\frac{\cos(3x)}{\tau^2 - 3^2}
+ \cdots \right)
\]
In particular, for $x = \pi$ we have
\[
\cot (\pi\tau) = \frac{2\tau}{\pi}
\left( \frac{1}{2\tau^2} + \frac{1}{\tau^2 - 1} +
\frac{1}{\tau^2 - 2^2} + \frac{1}{\tau^2 - 3^2} + \cdots\right)
\]
which we can rewrite as
\[
\pi\left(\cot(\pi\tau) -
\frac{1}{\pi\tau}\right)
= \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}
\]
If we now integrate both sides of the equation between $0$
and $t$, we have
\[
\int_{0}^{t} \pi \left(\cot(\pi\tau) - \frac{1}{\pi\tau}\right)
d\tau =
\ln\frac{\sin(\pi\tau)}{\pi \tau} \biggr|_0^t
= \ln\left[ \frac{\sin(\pi t)}{\pi t}\right]
\]
and
\[
\int_{0}^{t} \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}\, d\tau
= \sum_{n = 1}^{\infty} \ln
\left[ 1 - \frac{t^2}{n^2}\right]
= \ln \left[ \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)\right]
\]
from which, finally,
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\]
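As a final sanity check, the partial products
of~(\ref{appSinExp}) can be evaluated numerically; they
converge (slowly, since the neglected tail of the product
behaves like $e^{-t^2/N}$) to $\sin(\pi t)/(\pi t)$. A short
Python sketch, with an arbitrarily chosen test point:
\begin{verbatim}
import numpy as np

t = 0.7   # arbitrary test point
for N in (10, 100, 1000, 10000):
    n = np.arange(1, N + 1)
    print("N = %5d : %.8f" % (N, np.prod(1 - t**2 / n**2)))

# np.sinc(t) computes sin(pi t) / (pi t)
print("sinc(0.7) : %.8f" % np.sinc(t))
\end{verbatim}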
