\section{Examples}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{example}[Another route to aliasing]
Consider a real function $x(t)$ for which the Fourier transform is well defined:
\begin{equation}\label{eq:is:exSampeq1}
X(f) = \int_{-\infty}^{\infty}x(t)\, e^{-j2\pi f t}\, dt
\end{equation}
Suppose that we only possess a sampled version of $x(t)$, that is, we only know the numeric values of $x(t)$ at integer multiples of a sampling interval $T_s$, and that we want to obtain an approximation of the Fourier transform above.
Assume we do not know about the DTFT; an intuitive (and standard) place to start is to express the Fourier integral as a Riemann sum:
\begin{equation}\label{eq:is:exSampeq2}
X(f) \approx \hat{X}(f) = \sum_{n=-\infty}^{\infty} T_s x(nT_s) \, e^{-j 2\pi f n T_s }
\end{equation}
an expression that only uses the known sampled values of $x(t)$. In order to understand whether~(\ref{eq:is:exSampeq2}) is a good approximation consider the periodization of $X(f)$:
\begin{equation}\label{eq:is:exSampeq3}
\tilde{X}(f) = \sum_{k=-\infty}^{\infty} X\left( f + kF_s \right)
\end{equation}
in which $X(f)$ is repeated (possibly with overlap) with period $F_s$. We will show that:
\[
\hat{X}(f) = \tilde{X}(f)
\]
that is, the Riemann approximation is equivalent to a periodization \index{periodization} of the original Fourier transform; in mathematics this is known as a particular form of the {\em Poisson sum formula}\index{Poisson sum formula}.
Consider the periodic nature of $\tilde{X}(f)$ and remember that any periodic function $s(\tau)$ of period $L$ admits a \emph{Fourier series}\index{Fourier series} expansion:
\begin{equation}\label{eq:is:fseEx}
s(\tau) = \sum_{n=-\infty}^{\infty} A_n\, e^{j\frac{2\pi}{L}n\tau}
\end{equation}
where
\begin{equation}\label{eq:is:fsecEx}
A_n = \frac{1}{L} \int_{-L/2}^{L/2} s(\tau) \, e^{-j\frac{2\pi}{L}n\tau} \, d\tau
\end{equation}
To prove our result we will consider the periodic nature of $\tilde{X}(f)$ and compute its Fourier \textit{series} expansion coefficients (that is, we take a Fourier transform of a Fourier transform). Replacing $L$ by $F_s = 1 / T_s$ in~(\ref{eq:is:fseEx}) we can write
\begin{align*}
A_n &= \frac{1}{F_s} \int_{-F_s/2}^{F_s/2} \tilde{X}(f) \, e^{-j(2\pi/F_s) f n}\, df \\[3mm]
&= T_s \int_{-F_s/2}^{F_s/2} \sum_{k=-\infty}^{+\infty} X\left( f + kF_s \right) e^{-j2\pi f nT_s } \, df
\end{align*}
By inverting integral and summation, which we can do as long as the Fourier transform~(\ref{eq:is:exSampeq1}) is well defined, we obtain
\begin{equation*}
A_n =T_s \sum_k \int_{-F_s/2}^{F_s/2} X\left(f + kF_s \right) e^{-j2\pi f nT_s} \, df
\end{equation*}
and, with the change of variable $f \rightarrow f + kF_s$ in each term,
\begin{align*}
A_n &= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, e^{j2\pi n k F_s T_s} \,df \\
&= T_s \sum_k \int_{(2k-1)(F_s/2)}^{(2k+1)(F_s/2)} X(f)\, e^{-j2\pi f nT_s} \, df
\end{align*}
since $F_s T_s = 1$ and therefore the rightmost exponential is always equal to one.
The integrals in the sum are over contiguous and non-overlapping intervals, therefore:
\begin{align*}
A_n & = T_s \int_{-\infty}^{+\infty} X(f) \, e^{-j 2\pi f nT_s} \, df \\
& = T_s\, x (-nT_s)
\end{align*}
where we have recognized the inverse Fourier transform of $X(f)$ evaluated at $t = -nT_s$; by replacing these values for all the $A_n$ in~(\ref{eq:is:fseEx}) (and flipping the sign of the summation index) we obtain $\tilde{X}(f) = \hat{X}(f)$.
What we just found is another derivation of the aliasing\index{aliasing} formula. Intuitively, there is a duality between the time domain and the frequency domain in that a discretization of the time domain leads to a periodization of the frequency domain; similarly, a discretization of the frequency domain leads to a periodization of the time domain (think of the DFS and see also Exercise~\ref{ex:is:aliasTimeEx}).
\end{example}
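The Poisson sum identity $\hat{X}(f) = \tilde{X}(f)$ is also easy to verify numerically. The following Python sketch (an illustration, not part of the text) uses the Gaussian $x(t) = e^{-\pi t^2}$, whose Fourier transform is the self-dual $X(f) = e^{-\pi f^2}$, and compares the Riemann-sum approximation~(\ref{eq:is:exSampeq2}) with the periodization~(\ref{eq:is:exSampeq3}); the truncation limits $N$ and $K$ are arbitrary (both sums converge extremely fast for a Gaussian):

```python
import numpy as np

# x(t) = exp(-pi t^2) has the self-dual Fourier transform X(f) = exp(-pi f^2)
Ts = 0.5            # sampling interval
Fs = 1.0 / Ts       # sampling rate

def X(f):
    # closed-form Fourier transform of the Gaussian
    return np.exp(-np.pi * f**2)

def X_hat(f, N=60):
    # Riemann-sum approximation: sum of Ts * x(n Ts) * exp(-j 2 pi f n Ts)
    n = np.arange(-N, N + 1)
    return np.sum(Ts * np.exp(-np.pi * (n * Ts)**2) * np.exp(-2j * np.pi * f * n * Ts))

def X_tilde(f, K=20):
    # periodization: sum of X(f + k Fs)
    k = np.arange(-K, K + 1)
    return np.sum(X(f + k * Fs))

# the two quantities agree to floating-point accuracy at every frequency
for f in np.linspace(-1.3, 1.3, 7):
    assert abs(X_hat(f) - X_tilde(f)) < 1e-9
```

The agreement holds for any choice of $T_s$; only the amount of overlap between the spectral copies changes.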
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{example}[Time-limited vs.\ bandlimited functions]\index{bandlimited signal}
A fundamental result of spectral analysis states that a function cannot have finite support both in time and in frequency; in other words, a signal cannot be both time-limited and bandlimited. This can be easily shown by contradiction using the sampling theorem and the properties of the $z$-transform. Assume that the continuous-time signal $x_c(t)$ is $2f_0$-bandlimited (that is, $X_c(f) = 0$ for $|f| > f_0$) and that there also exists a value $t_0 > 0$ so that
\[
x_c(t) = 0 \quad \mbox{for } |t| > t_0.
\]
Since the signal is bandlimited, we know that it can be perfectly represented by a sequence of equally spaced samples, provided that the sampling rate satisfies $F_s \ge 2f_0$. Let's for instance pick $F_s = 4f_0$ and call $x[n] = x_c(nT_s)$ the resulting discrete-time signal for $T_s = 1/(4f_0)$. Using~(\ref{eq:is:DTFTsampled}), the DTFT of the sampled sequence over $[-\pi, \pi]$ is simply the rescaled continuous-time spectrum between $[-2f_0, 2f_0]$:
\[
X(e^{j\omega}) = 4f_0\, X_c\left(\frac{\omega}{\pi} 2f_0\right);
\]
since by hypothesis $X_c(f)$ is zero outside of the $[-f_0, f_0]$ interval, as illustrated in Figure~\ref{fig:is:tlvsblFig}, we have
\begin{equation}\label{eq:is:timefreq}
X(e^{j\omega}) = 0 \quad \mbox{for } \frac{\pi}{2} < |\omega| \le \pi.
\end{equation}
On the other hand, we assumed that $x_c(t)$ is also time-limited so the sequence $x[n]$ is going to have a finite support and its $z$-transform will contain only a finite number of terms:
\[
X(z) = \sum_{n=-M}^{M} x[n] z^{-n}
\]
where
\[
M = \bigg\lfloor \frac{t_0}{T_s} \bigg\rfloor.
\]
Since the DTFT is $X(z)$ for $z=e^{j\omega}$, because of~(\ref{eq:is:timefreq}) we have that $X(z)$ vanishes over an entire arc of the unit circle, that is, at infinitely many points; but,
since the $z$-transform of a finite-support sequence is a polynomial with a finite number of terms, and hence with at most a finite number of roots, it must necessarily be zero everywhere (see also Example~\ref{ex:fil:impossIdealProof}). And so the only signal that can be both time-limited and bandlimited is the null signal.
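The incompatibility between finite support and finite bandwidth can also be observed numerically: truncating the samples of a bandlimited signal in time always spreads some energy outside the original band. The following Python sketch (an illustration, not part of the text) samples the bandlimited signal $x_c(t) = \mathrm{sinc}(t)$ at four times the Nyquist rate and measures the out-of-band energy of the DTFT after truncating the sequence to $|n| \le M$; the grid sizes and truncation points are arbitrary:

```python
import numpy as np

# x_c(t) = sinc(t) is bandlimited to |f| <= f0 = 1/2; sampled at Fs = 4*f0 = 2
# its spectrum occupies only |omega| <= pi/2
Ts = 0.5
n = np.arange(-400, 401)
x = np.sinc(n * Ts)                       # x[n] = x_c(n Ts)
w = np.linspace(-np.pi, np.pi, 2049)      # fine frequency grid
E = np.exp(-1j * np.outer(w, n))          # DTFT matrix on that grid

def out_of_band_fraction(M):
    """Energy fraction of the DTFT outside |omega| <= pi/2 after
    truncating the sequence to |n| <= M (i.e. forcing finite support)."""
    X = E @ (x * (np.abs(n) <= M))
    P = np.abs(X)**2
    return P[np.abs(w) > np.pi / 2].sum() / P.sum()

e_short, e_long = out_of_band_fraction(10), out_of_band_fraction(400)
assert e_long < e_short   # leakage shrinks as the support grows...
assert e_long > 0         # ...but a finite support always leaks out of band
```

No matter how generous the truncation, the out-of-band energy never reaches zero, in agreement with the polynomial argument above.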
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[b!]
\center
\begin{dspPlot}[height=\dspHeightCol,sidegap=0,xticks=custom,yticks=none,ylabel={$X(f)$}]{-3,3}{0,1.8}
\dspFunc[linecolor=ColorCF]{x \dspPorkpie{0}{1}}
\dspCustomTicks[axis=x]{%
0 0
1 $f_0$ -1 $-f_0$
2 $2f_0=F_s/2$ -2 $-2f_0$}
\end{dspPlot}
\begin{dspPlot}[height=\dspHeightCol,xtype=freq,xticks=2,yticks=none,ylabel={$X(e^{j\omega})$}]{-1,1}{0,1.8}
\dspFunc[linecolor=ColorDF]{x \dspPorkpie{0}{0.5}}
\end{dspPlot}
\caption{Bandlimited signal and its discrete-time counterpart.}\label{fig:is:tlvsblFig}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}
The trick of periodizing a function and then computing its Fourier series expansion comes in very handy also in proving that a function cannot be both bandlimited and time-limited (that is, have finite support both in time and in frequency). The proof is by contradiction: assume $x(t)$ has finite time support, i.e.\ there exists a time $T_0$ such that
\[
x(t) = 0 \quad \mbox{for } |t| > T_0;
\]
assume that $x(t)$ has a well-defined Fourier transform $X(f)$ and that it is {\em also} bandlimited so that we can find a frequency $f_0$ for which
\[
X(f) = 0 \quad \mbox{for } |f| > f_0.
\]
Consider now the periodization\index{periodization} of the function in time with period $S$:
\[
\tilde{x}(t) = \sum_{k=-\infty}^{\infty} x(t - kS);
\]
since $x(t) = 0$ for $|t| > T_0$, if we choose $S > 2T_0$ the copies in the sum do not overlap, as shown in Figure~\ref{fig:is:tlvsblFig}. If we compute the Fourier series expansion~(\ref{eq:is:fsecEx}) for the $S$-periodic function $\tilde{x}(t)$ we have
\begin{align*}
A_n &= \frac{1}{S}\int_{-S/2}^{S/2} \tilde{x}(t)\, e^{-j(2\pi/S)nt} \, dt \\
&= \frac{1}{S}\int_{-T_0}^{T_0} x(t) \, e^{-j(2\pi/S)nt} \, dt \\
&= \frac{1}{S}\, X\left(\frac{n}{S}\right);
\end{align*}
this indicates that the Fourier series coefficients of the periodized function are (scaled) samples of the Fourier transform of the original function (another duality between periodization and sampling). Since we assumed that $x(t)$ is bandlimited, there will be only a finite number of nonzero $A_n$ coefficients; indeed
\[
A_n = 0 \quad \mbox{for } |n| > \lfloor f_0 S \rfloor = N_0
\]
and therefore we can write the reconstruction formula~(\ref{eq:is:fseEx}) as:
\[
\tilde{x}(t) = \sum_{n = -N_0}^{N_0} A_n\, e^{j(2\pi/S)nt}
\]
Now consider the complex-valued polynomial with $2N_0 +1$ terms
\[
P(z) = \sum_{n = -N_0}^{N_0} A_n z^n
\]
obviously $P\bigl(e^{j(2\pi/S)t} \bigr) = \tilde{x}(t)$ but we also know that $\tilde{x}(t)$ is identically zero over the $[T_0\,,\, S-T_0]$ interval, as shown in Figure~\ref{fig:is:tlvsblFig}. However, a finite-degree polynomial $P(z)$ has only a finite number of roots\index{roots!of complex polynomial} and
therefore it cannot be identically zero over an interval unless it is zero everywhere (see also Example~\ref{ex:fil:impossIdealProof}). Hence, either $x(t) = 0$ everywhere or $x(t)$ cannot be both bandlimited and time-limited.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t!]
\def\spectrumShape{2 mul
dup abs 3.10 gt
{pop 0}
{0.18 mul RadtoDeg %
dup cos exch %
dup 3 mul cos 2 mul exch %
0 mul cos -0.7 mul %
add add 0.31 mul 0.017 add }
ifelse }
%
\center
\begin{dspPlot}[height=\dspHeightCol,xticks=custom,yticks=none,sidegap=0]{-10,10}{0,1.5}
\dspCustomTicks[]{1.6 $T_0$ 7 $S$ -7 $-S$}
\dspFunc[linecolor=ColorCT!30,xmin=2 ]{x 7 sub \spectrumShape}
\dspFunc[linecolor=ColorCT!30,xmax=-2]{x 7 add \spectrumShape}
\dspFunc[linecolor=ColorCT]{x \spectrumShape}
\psbrace[rot=-90,ref=tC,nodesepB=-11pt](5.4,.5)(1.6,.5){$\tilde{x}(t) = 0$}
\end{dspPlot}
\caption{Finite support function $x(t)$ (dark) and non-overlapping periodization (light).}\label{xxfig:is:tlvsblFig}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{comment}
\end{example}
\section{Appendix}
\subsection*{The Sinc Product Expansion Formula}
The goal is to prove the product expansion\index{sinc}
\begin{equation}\label{eq:is:appSinExp}
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\end{equation}
We present two proofs; the first was proposed by Euler
in~1748 and, while it certainly lacks rigor by modern
standards, it has the irresistible charm of elegance and
simplicity in that it relies only on basic algebra. The
second proof is more rigorous, and is based on the theory of
Fourier series for periodic functions; relying on Fourier
theory, however, hides most of the convergence issues.
\itempar{Euler's Proof.}
Consider the $N$ roots of unity for $N$ odd. They
comprise $z = 1$ plus $N-1$ complex conjugate roots of the
form $z = e^{\pm j\omega_N k}$ for $k = 1, \ldots, (N-1)/2$ and
$\omega_N = 2\pi/N$. If we group the complex conjugate roots
pairwise we can factor the polynomial $z^N-1$ as
\[
z^N-1 = (z-1)\prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2z\cos(\omega_N k) + 1 \bigr)
\]
The above expression can immediately be generalized to
\[
z^N-a^N = (z-a) \prod_{k=1}^{(N-1)/2}
\bigl( z^2 - 2az \cos(\omega_N k) + a^2 \bigr)
\]
Now replace $z$ and $a$ in the above formula by $z =
(1+x/N)$ and $a = (1-x/N)$; we obtain the following:
\begin{align*}
&\left(1+\frac{x}{N}\right)^{\!N} - \left(1-\frac{x}{N}\right)^{\!N} \nonumber \\
&\qquad
= \frac{4x}{N} \prod_{k=1}^{(N-1)/2}
\left(1-\cos(\omega_N k) + \frac{x^2}{N^2} \,
\bigl(1+\cos(\omega_N k) \bigr)\right) \\
&\qquad =
\frac{4x}{N}\prod_{k=1}^{(N-1)/2}
\bigl( 1-\cos(\omega_N k) \bigr)
\left(1 + \frac{x^2}{N^2}\cdot\frac{1+\cos(\omega_N k)}{1-\cos(\omega_N k)}\right) \\
&\qquad = A\, x \prod_{k=1}^{(N-1)/2}
\left(1 + \frac{x^2 \bigl(1+\cos(\omega_N k) \bigr)}{N^2
\bigl(1-\cos(\omega_N k) \bigr)}\right)
\end{align*}
where $A$ is just the finite product
$(4/N)\prod_{k=1}^{(N-1)/2} \bigl(1-\cos(\omega_N k) \bigr)$.
The value $A$ is also the coefficient of the degree-one
term $x$ on the right-hand side, and it can easily be seen
from the expansion of the left-hand side that $A=2$ for all
$N$; this is an application of Pascal's triangle
and it was proven by Pascal in the general case in 1654. As
$N$ grows larger we have that:
\[
\left(1 \pm \frac{x}{N}\right)^{\!N} \approx e^{\pm x}
\]
and
at the same time, if $N$ is large, then $\omega_N = 2\pi/N$
is small and, for small values of the angle, the cosine can
be approximated as
\[
\cos(\omega) \approx 1 - \frac{\omega^2}{2}
\]
so that the denominator in the general product term can, in
turn, be approximated as
\[
N^2
\left(1-\cos \left( \frac{ 2\pi }{ N} \, k \right) \right)
\approx N^2 \cdot \frac{4k^2\pi^2}{2N^2} = 2k^2 \pi^2
\]
By the same token, for large $N$, the numerator can be
approximated as $1+\cos \left( (2\pi/N)\, k \right) \approx 2$ and therefore
(by bringing $A=2$ over to the left-hand side)
the above expansion becomes
\[
\frac{e^x - e^{-x}}{2}
= x \! \left( 1 + \frac{x^2}{\pi^2} \right)
\left(1 + \frac{x^2}{4\pi^2}\right) \left( 1 + \frac{x^2}{9\pi^2}\right)
\cdots
\]
Finally, we replace $x$ by $j\pi t$ to obtain:
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)
\]
\itempar{Rigorous Proof.}
Consider the Fourier series expansion of the \emph{even}
function $f(x) = \cos(\tau x)$ periodized over the interval
$[-\pi, \pi]$. We have
\[
f(x) = \frac{1}{2}a_0 + \sum_{n = 1}^{\infty} a_n \cos(nx)
\]
with
\begin{align*}
a_n & = \frac{1}{\pi}\int_{-\pi}^{\pi}\cos(\tau x)\cos(n x)\, dx \\
& = \frac{2}{\pi}\int_{0}^{\pi} \frac{1}{2}
\, \left( \cos \bigl( (\tau+n)x \bigr) + \cos \bigl((\tau-n)x \bigr)
\right) \, dx \\
& =
\frac{1}{\pi} \left( \frac{\sin \bigl((\tau+n)\pi \bigr)}{\tau + n}
+ \frac{\sin \bigl((\tau-n)\pi \bigr)}{\tau - n}\right) \\
& = \frac{2\sin (\tau\pi)}{\pi}\, \frac{(-1)^n \tau}{\tau^2 -n^2}
\end{align*}
so that
\[
\cos(\tau x) = \frac{2\tau \sin(\tau\pi)}{\pi}
\left( \frac{1}{2\tau^2} - \frac{\cos( x)}{\tau^2 - 1}
+ \frac{\cos(2x)}{\tau^2 - 2^2} -
\frac{\cos(3x)}{\tau^2 - 3^2}
+ \cdots \right)
\]
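The expansion above can be spot-checked numerically before proceeding. The following Python sketch (an illustration, not part of the text) evaluates the Fourier series with the coefficients $a_n$ computed above at arbitrarily chosen values of $\tau$ and $x$, truncating the summation at an arbitrary $N$:

```python
import numpy as np

tau, x = 0.3, 1.0
n = np.arange(1, 100001)
# a_0 follows from the general formula with n = 0: a_0 = 2 sin(tau pi)/(pi tau)
a0 = 2 * np.sin(tau * np.pi) / (np.pi * tau)
# a_n = (2 sin(tau pi)/pi) * (-1)^n tau / (tau^2 - n^2), as derived above
a_n = (2 * np.sin(tau * np.pi) / np.pi) * ((-1)**n * tau / (tau**2 - n**2))
series = a0 / 2 + np.sum(a_n * np.cos(n * x))
assert abs(series - np.cos(tau * x)) < 1e-4
```

The coefficients decay like $1/n^2$, so the truncated series already matches $\cos(\tau x)$ to several decimal places.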
In particular, for $x = \pi$ we have
\[
\cot (\pi\tau) = \frac{2\tau}{\pi}
\left( \frac{1}{2\tau^2} + \frac{1}{\tau^2 - 1} +
\frac{1}{\tau^2 - 2^2} + \frac{1}{\tau^2 - 3^2} + \cdots\right)
\]
which we can rewrite as
\[
\pi\left(\cot(\pi\tau) -
\frac{1}{\pi\tau}\right)
= \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}
\]
If we now integrate both sides between $0$ and $t$, for the
left-hand side we have
\[
\int_{0}^{t} \pi \left(\cot(\pi\tau) - \frac{1}{\pi\tau}\right)
d\tau =
\ln\frac{\sin(\pi\tau)}{\pi \tau} \biggr|_0^t
= \ln\left[ \frac{\sin(\pi t)}{\pi t}\right]
\]
(where we have used the fact that $\sin(\pi\tau)/(\pi\tau) \rightarrow 1$ as $\tau \rightarrow 0$)
and
\[
\int_{0}^{t} \sum_{n = 1}^{\infty} \frac{-2\tau}{n^2 - \tau^2}\, d\tau
= \sum_{n = 1}^{\infty} \ln
\left[ 1 - \frac{t^2}{n^2}\right]
= \ln \left[ \prod_{n = 1}^{\infty}
\left(1 - \frac{t^2}{n^2}\right)\right]
\]
from which, finally,
\[
\frac{\sin(\pi t)}{\pi t} = \prod_{n = 1}^{\infty} \left(1 -
\frac{t^2}{n^2}\right)
\]
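The key intermediate step of the rigorous proof, the partial-fraction expansion of the cotangent, can itself be verified numerically. The following Python sketch (an illustration, not part of the text) compares the two sides at an arbitrarily chosen $\tau$, truncating the series at an arbitrary $N$:

```python
import numpy as np

tau = 0.3
n = np.arange(1, 200001)
# right-hand side: truncated sum of -2 tau / (n^2 - tau^2)
series = np.sum(-2 * tau / (n**2 - tau**2))
# left-hand side: pi cot(pi tau) - 1/tau
lhs = np.pi / np.tan(np.pi * tau) - 1 / tau
assert abs(lhs - series) < 1e-4
```

The series terms decay like $1/n^2$, so the tail beyond $N$ contributes only on the order of $2\tau/N$.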
\section{Further Reading}
The sampling theorem is often credited to C.\ Shannon, and indeed it appears with an embryonic proof in his foundational 1948 paper ``A Mathematical Theory of Communication'', \textit{Bell System Technical Journal\/}, Vol.~27, 1948, pp.~379--423 and pp.~623--656.
Contemporary treatments can be found in all signal processing books, but also in more mathematical texts, such as S.\ Mallat's \textit{A Wavelet Tour of Signal Processing\/} (Academic Press, 1998). These more modern treatments take a Hilbert space point of view, which allows the extension of sampling theorems to more general spaces than just bandlimited functions. More recently, a renewed interest in sampling theory has been spurred by applications such as nonuniform sampling or the sampling of signals that, although not bandlimited, possess a finite rate of innovation (FRI sampling).