Show that its DTFT $X(e^{j\omega})$ exists [5p] and that the real part of $X(e^{j\omega})$ is equal to zero [5p]. [\textit{Note: you do not need to compute the DTFT explicitly.}]
\begin{sol}
\Solution
To show that the DTFT exists, it suffices to show that the signal has finite energy, that is, $\sum_{n}|x[n]|^2 < \infty$. We have
\[
S =\sum_{n=-\infty}^{\infty}|x[n]|^2=2\sum_{n=1}^{\infty}\frac{1}{n^6} =2\sum_{n=1}^{\infty}\left(\frac{1}{n^2}\right)^3
\]
Since $(1/x)^3\le 1/x$ for all $x \ge1$ (here with $x = n^2$), we can write
\[
S =2\sum_{n=1}^{\infty}\left(\frac{1}{n^2}\right)^3\le2\sum_{n=1}^{\infty}\frac{1}{n^2} =\frac{\pi^2}{3} < \infty.
\]
Since $x[-n]=-x[n]$, the signal is antisymmetric (and real-valued, with $x[0]=0$) and therefore its DTFT is purely imaginary; to show the result formally, pair up the terms for $n$ and $-n$:
\[
X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\,e^{-j\omega n} = \sum_{n=1}^{\infty} x[n]\left(e^{-j\omega n} - e^{j\omega n}\right) = -2j\sum_{n=1}^{\infty} x[n]\sin(\omega n),
\]
which is purely imaginary, so that $\mathrm{Re}\{X(e^{j\omega})\} = 0$.
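As a quick numerical sanity check (a sketch only, not part of the required proof; it assumes $x[n]=1/n^3$ for $n\neq 0$, consistent with the energy computation above, and uses an arbitrary truncation length):
\begin{verbatim}
# Sanity check: the truncated energy stays below pi^2/3 and the real part of
# the truncated DTFT is zero up to round-off (x[n] = 1/n^3, n != 0; x[0] = 0).
import numpy as np

N = 10000                          # truncation length (arbitrary choice)
n = np.arange(1, N + 1)
x_pos = 1.0 / n**3                 # x[n] for n > 0; x[-n] = -x[n]

energy = 2 * np.sum(x_pos**2)
print(energy, np.pi**2 / 3)        # ~2.03 < ~3.29

for w in np.linspace(-np.pi, np.pi, 7):
    X = np.sum(x_pos * (np.exp(-1j * w * n) - np.exp(1j * w * n)))
    print(w, X.real)               # real part ~ 0
\end{verbatim}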
Determine the stability of $H_A(z)$ [5p], $H_B(z)$ [5p], and $H_C(z)$ [20p]. Please justify your answers.
\begin{sol}
\Solution
From the block diagrams we can immediately see that:
\begin{itemize}
\item$H_1(z)$ is a first-order feedback loop (i.e. a leaky integrator) with transfer function
\[
H_1(z)=\frac{3}{1-(1/2)z^{-1}};
\]
the single pole of the filter is at $z=1/2$, inside the unit circle, so $H_1(z)$ is stable.
\item$H_2(z)$ is a pure feedforward structure yielding the transfer function $H_2(z)=1-(7/6) z^{-1} +(1/3)z^{-2}$ which, being FIR, is also stable. Since this will be useful later, we find the zeros of the transfer function and factor $H_2(z)$ as
\[
H_2(z)=(1-(2/3)z^{-1})(1-(1/2)z^{-1}).
\]
\end{itemize}
With this:
\begin{enumerate}
\item$H_A(z)= H_1(z)\,H_2(z)$, which is stable because it is a cascade of stable subsystems;
\item$H_B(z)= H_1(z)+ H_2(z)$, which is stable because the parallel connection (sum) of two stable subsystems is stable;
\item$H_C(z)$ uses the two subsystems in a feedback loop; the overall transfer function can be computed from the input-output relation in the $z$-domain:
By finding the roots of the expression in brackets we can determine that the poles of $H_C(z)$ are at $z =1/2$, $z =1$, and $z =2$; since the poles at $z=1$ and $z=2$ are not strictly inside the unit circle, the system is not stable.
\end{enumerate}
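As an aside, the pole and zero locations used above can be double-checked numerically; here is a minimal sketch with numpy, using only the transfer-function coefficients given in the solution:
\begin{verbatim}
# Check the factorization of H_2(z) and the stability conclusions.
import numpy as np

# Zeros of H_2(z) = 1 - (7/6) z^-1 + (1/3) z^-2: roots of z^2 - (7/6) z + 1/3.
print(np.roots([1, -7/6, 1/3]))      # -> [0.6667, 0.5], i.e. z = 2/3 and z = 1/2

# H_1(z) = 3 / (1 - (1/2) z^-1) has a single pole at the root of z - 1/2.
print(np.roots([1, -1/2]))           # -> [0.5], inside the unit circle

# Poles of H_C(z) found above; stability requires all of them strictly inside
# the unit circle.
poles_C = np.array([0.5, 1.0, 2.0])
print(np.all(np.abs(poles_C) < 1))   # -> False, hence H_C is not stable
\end{verbatim}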
Most of the small, battery-operated remote controls that we use to unlock a car or to open a garage door employ a data communication technique called On-Off-Keying (OOK). When you push the button, the transmitter sends a string of binary digits (the secret password) by switching on or off an analog sinusoidal oscillator according to whether the bit to transmit is one or zero. As an example, here is a sketch of what an idealized OOK signal looks like for the binary sequence $[1, 0, 1, 1, 0, 1]$:
In the figure, $T_b$ is the time (in seconds) used to transmit one bit, so that the data rate is $1/T_b$ bits per second; in most commercial systems the data rate is low, about $1000$~bps, while the oscillator's frequency is usually around 430~MHz. The main advantages of OOK are the simplicity of the modulation and demodulation circuits, together with the inherent low power consumption of the transmitter: since the oscillator is switched off roughly half of the time, battery life is prolonged.
The implementation for the transmitter can be modeled by the following block diagram:
\begin{center}
\begin{dspBlocks}{1.3}{0.4}
&$T_b$&$\cos(2\pi f_c t)$&
{%
\psset{xunit=1em,yunit=1em}%
\pspicture(-2,-2)(2,0)%
\psline[linewidth=1pt](0,-2)(-1.5,0)(1.5,0)(0,-2)
\endpspicture}\\
$b[n]$~ &\BDfilter{$I_0$}&\BDmul&\\
\ncline{->}{1,3}{2,3}
\ncline{2,1}{2,2}
\ncline{->}{2,2}{2,3}\tbput{\color{gray}$p(t)$}
\ncline{2,3}{2,4}\tbput{\color{gray}$x(t)$}
\ncline{2,4}{1,4}
\end{dspBlocks}
\end{center}
where $b[n]$ is the discrete-time sequence of binary digits to transmit, $I_0$ is a zero-order-hold D/A interpolator working at a rate $T_b$, and $f_c$ is the oscillator's frequency; the signal sent to the antenna is thus the continuous-time signal $x(t)= p(t)\cos(2\pi f_c t)$ where
\[
p(t) = b[n] \quad\mbox{for } nT_b \le t < (n+1)T_b,
\]
that is, $p(t)$ is the zero-order-hold interpolation of $b[n]$.
In the following exercise questions, assume the following:
\begin{itemize}
\item$b[n]$, the binary sequence to transmit, is an infinite-length signal whose DTFT looks as in the picture here on the right; formally, the DTFT can be expressed as $B(e^{j\omega})= a + b\tilde{\delta}(\omega)$ for some real-valued constants $a$ and $b$ but note that the actual values of $a$ and $b$ are not relevant to the solution and that you do not need to use the explicit expression for $B(e^{j\omega})$ anywhere in your derivations;
\item the frequency of the oscillator in the transmitter is $f_c =3/T_b$.
\end{itemize}
With these assumptions:
\begin{enumerate}
\item{[5p]} Sketch the magnitude of $X(f)$, the spectrum of the transmitted analog signal; the exact amplitude of the spectrum is not important but indicate \textit{precisely} the frequencies at which it is equal to zero and try to be accurate with the \textit{relative}\/ magnitudes of the peaks. [\textit{Advice: start by working out the shape of $P(f)$, the spectrum of the baseband signal produced by the interpolator\footnote{...and, if you've forgotten to write the relevant formula in your notes, here it is: given a discrete-time signal $x[n]$ whose DTFT is $X(e^{j\omega})$, and given an interpolator with kernel $i(t)$ and Fourier transform $I(f)$, the spectrum of the continuous-time interpolation of $x[n]$ at a rate of $F_s$ samples per second is $X(f)=(1/F_s)\,I\left( f / F_s \right)\, X(e^{j2\pi f / F_s})$}}.]
\item{[5p]} Do you see any problems with this implementation of the OOK transmitter? How would you fix them?
\end{enumerate}
In spite of the issues inherent to this elementary modulation scheme, the design illustrated above is in fact the one used in practical realizations, especially because the simplicity of the transmitter is matched by that of the receiver. Indeed, an estimate of the signal $p(t)$ can be easily obtained using a simple squaring nonlinearity followed by an analog lowpass filter $H(f)$:
\item{[10p]} Determine the analog lowpass filter's cutoff frequency $f_L$ that minimizes the mean square error between $\hat{p}(t)$ and $p(t)$.
\end{enumerate}
\begin{sol}
\Solution
The continuous-time signal $p(t)$ is obtained by interpolating $b[n]$ with a zero-order hold at a frequency $F_s =1/T_b$; the resulting spectrum is
\[
P(f)=(1/F_s)\,I_0\left( f / F_s \right)\, B(e^{j2\pi f / F_s})
\]
where $I_0(f)$ is the Fourier transform of the zero-order interpolation kernel. The last term in the equation is a rescaled version of the $2\pi$-periodic $B(e^{j\omega})$; since $B(e^{j\omega})$ is the sum of a constant and a periodic Dirac delta, the rescaled spectrum is a constant value plus a train of Dirac deltas at all multiples of $F_s$:
\begin{center}
\begin{dspPlot}[height=2cm,width=14cm,xticks=custom,yticks=none,sidegap=0,xout=true,ylabel={$|B(e^{j2\pi f / F_s})|$}]{-5.5,5.5}{0,4.2}
\psline(-0.5, 0)(-0.5, 1)(0.5, 1)(0.5, 0)
\multido{\i=-5+1}{11}{%
\psline[linecolor=gray](! \i\space 0.5 sub 0)(! \i\space 0.5 sub 1)(! \i\space 0.5 add 1)(! \i\space 0.5 add 0)
\psline[linecolor=gray]{->}(\i, 0)(\i, 3)}
\def\i{0}
\psline(! \i\space 0.5 sub 0)(! \i\space 0.5 sub 1)(! \i\space 0.5 add 1)(! \i\space 0.5 add 0)
\end{dspPlot}
\end{center}
The interpolation kernel is $i_0(t)=\rect(t)$ and therefore its Fourier transform is $I_0(f)=\sinc(f)$; rescaling with respect to $F_s$ we obtain the following plot:
Finally, the magnitude spectrum $|P(f)|$ is obtained from the product of the two spectra above; note that the zeros of $I_0(f / F_s)$, located at all nonzero multiples of $F_s$, cancel out the Dirac deltas except the one at $f=0$:
\item The problems are due to the fact that $P(f)$ is not bandlimited because of the zero-order interpolation; as a consequence:
\begin{enumerate}
\item the two spectral copies created by the modulation overlap with each other, which leads to distortion
\item the tails of the modulated copies spill out on neighboring frequency bands, which could create interference with other radio devices
\end{enumerate}
These problems could be reduced by using a higher-order interpolator and, in the limit, a sinc interpolator\footnote{This is not done in practice because the resulting design would be much more complex and the energy savings obtained by switching off the oscillator during a zero bit would no longer be there.}.
%
\item First of all notice that the signal $p(t)$ is equal to either one or zero for all values of $t$ and therefore $p^2(t)= p(t)$. Next, recall the basic trigonometric identity $\cos^2\alpha=(1+\cos2\alpha)/2$. With this, the signal entering the lowpass filter is
\[
y(t) = x^2(t) = p^2(t)\cos^2(2\pi f_c t) = \frac{1}{2}\,p(t) + \frac{1}{2}\,p(t)\cos(2\pi(2f_c)t),
\]
that is, up to an overall factor of one half, the original $p(t)$ plus a copy of $p(t)$ modulated at twice the carrier frequency. Neglecting this irrelevant overall factor, in the frequency domain we thus have
\[
Y(f)= P(f)+(1/2)P(f -2f_c)+(1/2)P(f +2f_c)
\]
If $P(f)$ were bandlimited so that $P(f)=0$ for $|f| > f_N$ (with $f_N < f_c$), then we could exactly recover $p(t)$ via a lowpass filter with cutoff $f_N \le f_L \le 2f_c-f_N$. Since $P(f)$ is not bandlimited, the tails of the cross-modulation copies $P(f \pm2f_c)/2$ will ``spill over'' into the baseband and this will cause a certain amount of distortion. Since usually $f_c$ is much larger than $F_s =1/T_b$, this distortion will be very small in practical implementations.
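To make the discussion concrete, here is a minimal numerical sketch of the whole chain (transmitter, squarer, ideal lowpass); the bit pattern, the simulation sampling rate and the cutoff are illustrative assumptions, with the cutoff set to the value $f_L = 4/T_b$ derived in the next item:
\begin{verbatim}
# Minimal simulation of the OOK chain: ZOH interpolation, carrier, squaring,
# ideal lowpass. All numerical values are arbitrary illustrative choices.
import numpy as np

Tb = 1.0                                   # bit period (normalized units)
fc = 3.0 / Tb                              # carrier frequency, as in the exercise
fs = 200.0 / Tb                            # simulation sampling rate (assumption)
bits = np.array([1, 0, 1, 1, 0, 1])

t = np.arange(0, len(bits) * Tb, 1.0 / fs)
p = bits[(t / Tb).astype(int)]             # zero-order-hold interpolation of b[n]
x = p * np.cos(2 * np.pi * fc * t)         # transmitted OOK signal
y = x**2                                   # squaring nonlinearity at the receiver

# Ideal lowpass with cutoff f_L = 4/Tb, applied in the frequency domain.
fL = 4.0 / Tb
f = np.fft.fftfreq(len(y), d=1.0 / fs)
Y = np.fft.fft(y)
Y[np.abs(f) > fL] = 0.0
p_hat = 2 * np.real(np.fft.ifft(Y))        # factor 2 undoes the 1/2 from cos^2

print(np.max(np.abs(p_hat - p)))           # residual error: p(t) is not bandlimited
\end{verbatim}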
%
\item Because of the spectral overlap between $P(f)$ and $P(f \pm2f_c)/2$ we want a cutoff frequency that preserves most of the energy in $P(f)$ and eliminates most of the energy from the cross-modulated component. To find the optimal point, remember that $P(f)$ has the shape of a sinc function, whose amplitude is upper bounded by $|\pi f|^{-1}$, while the overlapping $P(f -2f_c)/2$ decays as $|2\pi(f-2f_c)|^{-1}$, as illustrated in the following figure:
The ideal cutoff is therefore the crossover frequency after which the magnitude of $P(f)$ becomes asymptotically smaller than the magnitude of $P(f -2f_c)/2$; this happens for
\[
\frac{1}{f} =\frac{1/2}{2f_c - f}
\]
that is for
\[
f_L =(4/3)f_c;
\]
For our particular choice of $f_c =3F_s$, this corresponds to $f_L =4F_s =4/T_b$.
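A quick numerical check of the crossover frequency (a sketch in normalized units with $F_s = 1$, so that $f_c = 3$ and the expected answer is $f_L = 4$):
\begin{verbatim}
# Find where the baseband envelope 1/(pi*f) meets the cross-modulated envelope
# 1/(2*pi*(2*fc - f)); the crossover should be at f = (4/3)*fc = 4*Fs.
import numpy as np

Fs = 1.0
fc = 3 * Fs
f = np.linspace(0.1 * Fs, 2 * fc - 0.1 * Fs, 100001)
baseband = 1.0 / (np.pi * f)
crossmod = 1.0 / (2 * np.pi * (2 * fc - f))
print(f[np.argmin(np.abs(baseband - crossmod))])   # -> ~4.0
\end{verbatim}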
Given a real-valued constant $\beta > 1$, consider the FIR filter with impulse response
\[
h[n]=\begin{cases}
1 & n =-1\\
\beta & n =1\\
0 & \mathrm{otherwise.}
\end{cases}
\]
Let $[-\omega_0, \omega_0]$ be the frequency interval over which, for our practical purposes, the small-angle approximation $\tan\omega\approx\omega$ is sufficiently accurate. Show that the filter's phase response is approximately linear over $[-\omega_0, \omega_0]$, independently of the value of $\beta$.
\begin{sol}
\Solution
The frequency response of the filter is
\[
H(e^{j\omega}) = e^{j\omega} + \beta\, e^{-j\omega} = (1+\beta)\cos\omega + j(1-\beta)\sin\omega,
\]
so that, for $\omega$ in a neighborhood of zero (where the real part is positive), its phase response is
\[
\angle H(e^{j\omega}) = \arctan\left(\lambda\tan\omega\right), \qquad \lambda = \frac{1-\beta}{1+\beta}.
\]
Since $\beta > 1$ by definition, then $-1 < \lambda < 0$ so, if $\omega\in[-\omega_0, \omega_0]$, then $\lambda\omega\in[-\omega_0, \omega_0]$ as well. Consequently, over $[-\omega_0, \omega_0]$ we can apply the small-angle approximation twice to obtain
\[
\angle H(e^{j\omega}) = \arctan(\lambda\tan\omega) \approx \arctan(\lambda\omega) \approx \lambda\omega,
\]
which is a linear function of $\omega$ (with slope $\lambda$) for any value of $\beta > 1$.
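The approximation can also be checked numerically; here is a minimal sketch (the values of $\beta$ and $\omega_0$ are arbitrary illustrative choices):
\begin{verbatim}
# Compare the exact phase of H(e^{jw}) = e^{jw} + beta*e^{-jw} with the linear
# approximation lambda*w over a small frequency interval.
import numpy as np

beta = 2.5                                # arbitrary value with beta > 1
lam = (1 - beta) / (1 + beta)             # -1 < lam < 0
w0 = 0.2                                  # interval where tan(w) ~ w holds well
w = np.linspace(-w0, w0, 9)
H = np.exp(1j * w) + beta * np.exp(-1j * w)
print(np.angle(H))                        # exact phase response
print(lam * w)                            # linear approximation
\end{verbatim}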
From simple inspection, the CCDE describing the system is
\[
y[n]= x[n]+ e^{j\omega_0}\, y[n-1]
\]
so that the impulse response is the sequence
\[
h[n]= e^{j\omega_0 n}u[n].
\]
Since $|h[n]| =1$ for $n \ge0$, the impulse response is not absolutely summable and therefore the system is not BIBO stable. For a generic causal input $x[n]$ the output is
\[
y[n] = \sum_{k=0}^{n} e^{j\omega_0 (n-k)}\, x[k], \qquad n \ge 0.
\]
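To illustrate the lack of BIBO stability, here is a minimal numerical sketch (the value of $\omega_0$ is an arbitrary choice): the bounded input $x[n]=e^{j\omega_0 n}u[n]$ produces an output whose magnitude grows without bound.
\begin{verbatim}
# The recursion y[n] = x[n] + e^{j*w0} * y[n-1] driven by the bounded input
# x[n] = e^{j*w0*n} u[n] yields |y[n]| = n + 1, which grows without bound.
import numpy as np

w0 = 0.3                                  # arbitrary oscillator frequency
N = 50
x = np.exp(1j * w0 * np.arange(N))        # bounded input, |x[n]| = 1
y = np.zeros(N, dtype=complex)
for n in range(N):
    y[n] = x[n] + (np.exp(1j * w0) * y[n - 1] if n > 0 else 0)
print(np.abs(y[:5]))                      # -> [1, 2, 3, 4, 5]
print(np.abs(y[-1]))                      # -> 50
\end{verbatim}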
Although the harmonic series is divergent, i.e., $\sum_{k=1}^{\infty} 1/k =\infty$, several closely related series are convergent, sometimes with surprising limits. The so-called Leibniz formula, for instance, uses only the odd-indexed terms with alternating signs to yield
\[
1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \ldots = \sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1} = \frac{\pi}{4}.
\]
Consider now a continuous-time square wave $x(t)$ with period $P$, as shown in the figure below. Knowing that the signal can be expressed as the Fourier series
As a signal processing expert, re-derive Euler's result by using the fact that the Hilbert filter, whose frequency response is allpass, has impulse response
\[
h[n]=\begin{cases}
0 & \mbox{$n$ even} \\
\frac{2}{\pi n} & \mbox{$n$ odd}.
\end{cases}
\]
\vspace{1em}
{\footnotesize\it Hint: at some point in the proof you may find it useful to split a series into even and odd terms: $\sum_{n=1}^{\infty} a_n =\sum_{n=1}^{\infty} a_{2n} +\sum_{n=1}^{\infty} a_{2n -1}$.}
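As a numerical aside (a sketch only; it uses nothing beyond the impulse response above and the stated fact that the filter is allpass), Parseval's relation predicts that the energy of $h[n]$ equals one, which can be checked directly:
\begin{verbatim}
# The Hilbert filter is allpass (|H(e^{jw})| = 1), so by Parseval's relation
# sum_n |h[n]|^2 = 1; check this on a (large) truncated sum over odd n.
import numpy as np

N = 10**6
n = np.arange(1, N + 1, 2)                   # positive odd indices
energy = 2 * np.sum((2.0 / (np.pi * n))**2)  # factor 2 adds the negative indices
print(energy)                                # -> ~1 (the tail decays like 1/N)
\end{verbatim}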