Usually, discrete-time signals are introduced as the discretized version of real-world signals and this is because, traditionally, the mathematical models for natural phenomena are provided by functions of a continuous-time variable. Nowadays, however, the vast majority of signals are de facto ``born'' in discrete time and the actual act of sampling an analog measurement is made transparent by the ubiquity of digital acquisition devices. Furthermore, especially in the context of communication systems, we are just as concerned with the \emph{synthesis} of discrete-time signals as with their acquisition. With digital processing at the heart of virtually every professional and consumer device today, the discrete-time paradigm has largely supplanted its analog ancestors and discrete-time signals can therefore be introduced on their own merit from an abstract and self-contained point of view.
A discrete-time signal\index{discrete-time signal|mie} is a complex-valued \emph{sequence}\index{sequence|see{discrete-time signal}}, that is, a complex-valued function of an \textit{integer} index $n \in \mathbb{Z}$; in its most general form, it is a two-sided, infinite and countable collection of values. As explained in Section~\ref{sec:in:notation}, we will use boldface characters to denote the entire signal and square brackets to indicate a specific element. For instance, we can define a sequence $\mathbf{x}$ analytically as:
which is a complex exponential of period~$40$ samples, plotted in Figure~\ref{fig:dt:compEx}. Another example, this time of a random sequence whose samples are uniformly distributed between $-1$ and $1$, is
\begin{equation}\label{eq:dt:rand}
x[n] = \mbox{the $n$-th output of a random source } \mathcal{U}(-1,1)
\end{equation}
a realization of which is plotted in Figure~\ref{fig:dt:rand}. Finally, an example of a sequence drawn from the real world and for which, therefore, no analytical expression exists, is
\begin{equation}\label{eq:dt:dow}
x[n] = \mbox{The average Dow-Jones index in year }n
\end{equation}
plotted in Figure~\ref{fig:dt:dow} from year~1900 to~2007.
\item The sequence index $n$ is best thought of as a measure of \emph{dimensionless time}; while it has no physical unit of measure, it imposes a chronological order on the values of the sequence.
\item We consider complex-valued discrete-time signals; while physical signals can be expressed by real quantities, the generality offered by the complex domain is particularly useful in designing systems which \emph{synthesize} signals, such as data communication systems.
\item In graphical representations, when we need to emphasize the discrete-time nature of the signal, we resort to stem (or {}``lollipop''{}\index{lollipop plot}) plots as in Figure~\ref{fig:dt:triangle}; this works well for plots that show a small number of data points. For larger data sets, when the discrete-time domain is implicitly understood, we will often use a function-like representation as in Figure~\ref{fig:dt:dow}. In the latter case, each ordinate of the sequence is graphically connected to its neighboring data points, giving the illusion of a smooth line; while this makes the plot easier on the eye, it must be remembered that the signal is defined only over a \emph{discrete} set of abscissas.
The examples described by~(\ref{eq:dt:triangle}) or~(\ref{eq:dt:compEx}) represent two-sided, infinite sequences. Of course, in the practice of signal processing, it is impossible to deal with an infinite
amount of data: for a processing algorithm to execute in a finite amount of time and to use a finite amount of storage, the input must be of finite length; even for algorithms that operate on the fly, i.e.\ algorithms that produce an output sample for each new input sample, an implicit limitation on the input data size is imposed by the necessarily limited life span of the processing device.\footnote{Or, in the extreme limit, of the supervising engineer $\ldots$} This limitation was clearly implicit in our attempts to plot these sequences, as we did in Figures~\ref{fig:dt:triangle} and~\ref{fig:dt:compEx}: what the diagrams show, in fact, is only a representative, meaningful portion of the signals, and we rely on the original formulas for their full description.
Signals sampled from the real world, for which no analytical description exists, always possess a finite time support because of the finite time spent recording the underlying phenomena; for the Dow Jones index, for instance, the data does not exist for years before 1884, when the index was introduced, and future values are certainly not known. More importantly (and more often), the finiteness of a discrete-time signal is explicitly imposed by design since we are interested in concentrating our processing efforts on a small portion of an otherwise longer signal; in a speech recognition system, for instance, the practice is to cut a speech signal into small segments and try to identify the phonemes associated with each one of them.\footnote{
Note that, in the end, phonemes are pasted together into words and words into sentences; therefore, for a complete speech recognition system, long-range dependencies become important again.}
These finite sets of data are not, strictly speaking, sequences in the mathematical sense, although they can be formally extended into one, as we will see.
Another case is that of \textit{periodic} signals, as in~(\ref{eq:dt:compEx}); even though these are indeed infinite sequences, it is clear that all the relevant information is contained in just a single period. By
describing one period (graphically or otherwise), we are, in fact, providing a full description of the entire sequence. This variety of signal types demands a little taxonomic effort that we will now undertake.
A finite-length discrete-time signal of length~$N$ is simply a collection (or a \textit{tuple}) of $N$ complex values. It is both natural and convenient to consider finite-length signals as Euclidean vectors in the $N$-dimensional space $\mathbb{C}^N$, and we will explore this viewpoint in detail in the next chapter. For now, simply note that the notation to represent an $N$-point, finite-length signal is the standard vector notation
Note the transpose operator, which indicates the convention that signal vectors are \emph{column} vectors. The individual element of the signal is denoted by the expression
\[
x[n], \qquad n = 0,\, \ldots,\, N-1.
\]
Note that, for length-$N$ signals, the notation $x[n]$ is \emph{not defined} for values of $n$ outside of the $[0, N-1]$ range.
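In code, a minimal way to represent a finite-length signal is as an array of $N$ complex values indexed from $0$ to $N-1$; the sketch below uses Python and NumPy, which are not part of the text and serve only as an illustration.
\begin{verbatim}
import numpy as np

N = 8
x = np.zeros(N, dtype=complex)   # an N-point signal, indices 0 .. N-1
x[3] = 1 + 2j                    # assign a single element
# accessing x[8] raises an IndexError: x[n] is undefined outside [0, N-1]
\end{verbatim}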
As we said, finite-length signals are the only actual entities that we can manipulate in practical signal processing applications; it would be extremely awkward, however, to develop the whole theory of signal processing only in terms of finite-length signals. In fact, it is more convenient, when possible, to develop theoretical proofs using infinite sequences since such general results will hold for all finite-size signals as well.
Infinite-length signals are proper sequences in the mathematical sense; although these kinds of signals generally lie beyond our processing and storage capabilities, they are useful from a theoretical point of view since they allow us to prove general theorems that apply to all other signal classes. We can define three subclasses as follows.
\itempar{Aperiodic Signals.} The most general type of discrete-time signal is represented by an aperiodic, two-sided complex sequence. In the theory of signal processing, we tend to like sequences that are in some way \textit{summable}, since summability is associated with desirable mathematical properties. In particular, \emph{absolute summability} (a strong condition) is associated with system stability, while \emph{square summability} (a weaker condition) is associated with finite energy. In general, theoretical proofs will be easier when the signals involved are assumed to be absolutely summable\footnote{
Mostly because of Fubini's theorem, thanks to which we can freely reorder summations and integrals.}
whereas extending the proofs to square summable sequences is often quite a bit of work.
An $N$-periodic sequence $\tilde{\mathbf{x}}$ is an infinite, two-sided sequence for which
\begin{equation}
\tilde{x}[n] = \tilde{x}[n + kN], \quad\qquad k \in \mathbb{Z}.
\end{equation}
The tilde notation will be used whenever we want to explicitly stress a periodic behavior. Clearly an $N$-periodic sequence is completely defined by its $N$ values over a period; that is, a periodic sequence {}``carries no more information''{} than a finite-length signal of length $N$, in spite of its infinite support.
\index{periodic extension}
In this sense, periodic signals represent a first possible bridge between finite-length signals and infinite sequences. Indeed, we can always {}``convert''{} a finite-length signal $\mathbf{x} \in \mathbb{C}^N$ into a sequence via its periodic extension $\tilde{\mathbf{x}}$, built as
\begin{equation}
\tilde{x}[n] = x[n \mod N], \qquad \quad n \in \mathbb{Z}.
\end{equation}
We will soon see that this type of embedding for a finite-length signal is the {}``natural''{} one in most signal processing contexts.
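A minimal numerical sketch of the periodic extension (in Python, used here only for illustration; the function name is ours):
\begin{verbatim}
x = [1.0, 2.0, 3.0, 4.0]          # a finite-length signal, N = 4

def periodic_extension(x, n):
    """Value of the periodic extension x~[n] = x[n mod N], for any integer n."""
    return x[n % len(x)]

print([periodic_extension(x, n) for n in range(-4, 8)])
# [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0]
\end{verbatim}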
A discrete-time sequence $\bar{x}[n]$ is said to have \emph{finite support} if its values are zero for all
indexes outside of a given range; that is, there exist two values~$N \in \mathbb{N}^+$ and $M \in \mathbb{Z}$ such that
\[
\bar{x}[n] = 0 \qquad \mbox{for } n < M \mbox{ or } n > M + N - 1.
\]
Note that, although $\bar{x}[n]$ is an infinite sequence, knowledge of its $N$~nonzero values (and of the start time $M$) completely determines the entire signal. This suggests another way to embed a finite-length signal into a sequence $\bar{\mathbf{x}}$, by simply extending the original data with zeros to the left and to the right:
\begin{equation}\label{FiniteSupportExt}
\bar{x}[n] = \begin{cases}
x[n] & \mbox{ if } 0 \leq n \leq N - 1 \\
0 & \mbox{ otherwise}
\end{cases}
\qquad \quad n \in \mathbb{Z}.
\end{equation}
In general, we will use the bar notation $\bar{x}[n]$ for sequences defined as the finite support extension of a finite-length signal.
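Similarly, a minimal sketch of the finite-support extension in~(\ref{FiniteSupportExt}), again in Python for illustration only:
\begin{verbatim}
x = [1.0, 2.0, 3.0, 4.0]          # a finite-length signal, N = 4

def finite_support_extension(x, n):
    """Zero-padded extension: x[n] inside [0, N-1], zero elsewhere."""
    return x[n] if 0 <= n < len(x) else 0.0

print([finite_support_extension(x, n) for n in range(-3, 7)])
# [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0]
\end{verbatim}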
The following discrete-time signals are basic building blocks that will appear repeatedly throughout the rest of the book.
\itempar{Impulse.}\index{impulse}
The discrete-time impulse $\boldsymbol{\delta}$ (also known, from its symbol, as the discrete-time \emph{delta}\footnote{Not to be confused with the Dirac delta functional, which we will encounter later.}) is the simplest discrete-time signal and it is defined as:
an example is shown in Figure~\ref{fig:dt:expDecay}. The exponential decay is an important signal since it models the response of a discrete-time first-order recursive filter, as we will study in detail in Chapter~\ref{ch:lti}. Exponential sequences are ``well-behaved'' only for values of $a$ less than one in magnitude; sequences for which $|a| > 1$ grow unbounded and represent an unstable behavior.
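As a quick numerical illustration (assuming the usual one-sided form $x[n] = a^n$ for $n \geq 0$, which may differ in detail from the definition above), a few lines of Python show the contrast between a decaying and an unstable exponential sequence:
\begin{verbatim}
a_stable, a_unstable = 0.9, 1.1

x_stable   = [a_stable ** n   for n in range(32)]   # |a| < 1: decays to zero
x_unstable = [a_unstable ** n for n in range(32)]   # |a| > 1: grows unbounded

print(round(x_stable[-1], 3), round(x_unstable[-1], 3))   # 0.038 19.194
\end{verbatim}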
A complex exponential signal is defined by the expression:
\begin{equation}
x[n] = e^{j(\omega n + \phi)}
\end{equation}
where $\omega$ is the \textit{frequency} and $\phi$ is the initial \textit{phase}; both phase and frequency are dimensionless quantities expressed in radians. The complex exponential describes the prototypical oscillatory behavior in signal processing; using a complex-valued formulation has multiple advantages that will be apparent in Chapter~\ref{ch:ft} and in the rest of the book. The standard real-valued oscillations can always be obtained via Euler's formula, that is:
\begin{equation}
e^{j(\omega n + \phi)} = \cos(\omega n + \phi) + j \sin(\omega n + \phi)
\end{equation}
as shown by the example in Figure~\ref{fig:dt:compEx}.
The complex exponential (and, equivalently, its real-valued sinusoidal components) expresses an oscillatory behavior parametrized by a frequency and, possibly, an initial phase. In the continuous-time world, where time is measured in seconds ($s$), frequency has a physical dimension of $1/s$ and is measured in Hertz (Hz) so that a sinusoidal oscillation with frequency $f_0$~Hz is described by the function
\[
f(t) = \cos(2\pi f_0 t),
\]
where the $2\pi$ factor converts the argument of the trigonometric function to radians. In discrete time, where the index $n$ represents dimensionless time, the {}``digital''{} frequency is expressed in radians, itself a dimensionless quantity.\footnote{
A radian is dimensionless since it is the ratio of two measures of length, that of the arc on a circle
subtended by the measured angle and the radius of the circle itself.}
The best way to appreciate how a discrete-time oscillation works is to consider a generic algorithm to generate successive samples of a discrete-time sinusoid at a given digital frequency $\omega_0$; from the formula
\[
x[n] = \cos(\omega_0 n + \phi)
\]
we can see that the argument of the trigonometric function (that is, the instantaneous phase) is incremented by $\omega_0$ at each step, yielding:
With this in mind, it is easy to see that \emph{the highest frequency of a discrete-time oscillator is $\omega_{\max} = 2\pi$}; for any value larger than this, the inner $2\pi$-periodicity of the trigonometric functions {}``maps back''{} the output values to a frequency between $0$ and $2\pi$:
for all values of $k \in \mathbb{Z}$. This modulo-$2\pi$ equivalence of digital frequencies, also known as \textit{aliasing}, is a pervasive concept in digital signal processing and it has many important consequences which we will study in detail in the next chapters.
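A minimal sketch of such an incremental-phase oscillator (in Python with NumPy, used here only for illustration) also makes the modulo-$2\pi$ equivalence of digital frequencies easy to verify numerically:
\begin{verbatim}
import numpy as np

def oscillator(omega0, phi, num_samples):
    """Generate cos(omega0 * n + phi) by accumulating the instantaneous phase."""
    x = np.zeros(num_samples)
    phase = phi
    for n in range(num_samples):
        x[n] = np.cos(phase)
        phase = (phase + omega0) % (2 * np.pi)   # increment and wrap the phase
    return x

# frequencies that differ by a multiple of 2*pi yield identical samples
x1 = oscillator(0.3, 0.0, 16)
x2 = oscillator(0.3 + 2 * np.pi, 0.0, 16)
print(np.allclose(x1, x2))                        # True
\end{verbatim}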
Finally, please note that, contrary to what happens in continuous time, in discrete time \textit{not all sinusoidal sequences are periodic.} In fact, only a very small subset of digital frequencies give rise to periodic patterns, namely only those of the form
\begin{equation}
\omega = \frac{M}{N}\pi, \quad M, N \in \mathbb{Z}.
\end{equation}
For all frequencies that are not a rational multiple of $\pi$, sinusoidal signals are two-sided, non-periodic infinite sequences. We will analyze the properties of the complex exponential in more detail again in Section~\ref{sec:fa:cexp}.
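For instance, with $\omega = 2\pi \cdot \frac{3}{16} = \frac{3}{8}\pi$ we have
\[
x[n+16] = \cos\left(\tfrac{3}{8}\pi\,(n+16) + \phi\right) = \cos\left(\tfrac{3}{8}\pi\, n + \phi + 6\pi\right) = x[n],
\]
so the sinusoid repeats every $16$~samples; with $\omega = 1$, on the other hand, $x[n+N] = x[n]$ for all $n$ would require $N = 2\pi M$ for some nonzero integers $M$ and $N$, which is impossible since $\pi$ is irrational.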
Elementary signal manipulations include, among others, the classic operations used in linear combinations; note that the following definitions apply without changes to all classes of signals.
\itempar{Scaling.}\index{scaling}
We can scale a signal $\mathbf{x}$ by a scalar factor $\alpha \in \mathbb{C}$ simply by multiplying each sample by $\alpha$:
\begin{equation}
\bigl\{\alpha\mathbf{x} \bigr\}[n] = \alpha x[n]
\end{equation}
If $\alpha$ is real, then the scaling represents a simple amplification (for $\alpha > 1$) or an attenuation (for $\alpha < 1$) of the signal. If $\alpha$ is complex, amplification and attenuation are compounded with a phase shift, whose meaning will be clearer in the next chapter.
\itempar{Sum.}\index{sum!of signals}
The sum of two sequences is their term-by-term sum:
\begin{equation}
\bigl\{\mathbf{x + y} \bigr\}[n] = x[n]+y[n]
\end{equation}
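In code, both operations are simple term-by-term array operations; the following Python/NumPy sketch (illustrative only) also checks numerically the ``intuitive'' linear behavior discussed next:
\begin{verbatim}
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5,  0.5, 0.5])
alpha = 2.0

scaled = alpha * x                      # term-by-term scaling
summed = x + y                          # term-by-term sum

# scaling distributes over the sum of two signals
print(np.allclose(alpha * (x + y), alpha * x + alpha * y))   # True
\end{verbatim}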
Sum and scaling are linear operators; informally, this means that they behave {}``intuitively''{}:
Operators act on the entire signal and transform it into another signal. We will study several important signal operators in the rest of this book and we introduce here some elementary examples.
\itempar{Delay.}\index{shift}
The delay operator delays or advances a sequence by a given number of samples.
A sequence $x[n]$ shifted by an integer $k$ is simply:
If $k$ is positive, the signal is delayed (the value that used to occur at $n=0$ now occurs at $n=k$); graphically, this is equivalent to a shift to the right of the data plot\index{delay|mie}. Conversely, if $k$ is negative, the signal is advanced in time, which corresponds to a shift to the left of the data plot.
When dealing with finite-length signals, we need to adjust the definition of the delay operator since the notation $x[n-k]$ is not defined for all values of $n$ and $k$. There are two ways to do so, and they correspond to applying the shift operator to either a periodic extension or a finite-support extension of the original length-$N$ signal $\mathbf{x}$:
\begin{itemize}
\item if we use a finite-support extension, the delay operator $\mathcal{D}_k$ performs a \textit{logical shift} on the elements of the signal vector, that is, the elements are shifted $k$ times to the left or to the right and zeros are inserted as a replacement, as illustrated here:
\item if we use a periodic extension, the delay operator $\mathcal{D}_k$ performs a \textit{circular shift} on the elements of the signal vector, that is, the elements are shifted $k$ times to the left or to the right and the elements that are pushed out at one end of the vector re-enter it at the other end, as illustrated here:
As we said, the periodic extension is the natural extension of a finite-length signal so that, unless otherwise stated, in the rest of this book we will assume that all shifts applied to finite-length signals are circular shifts. Mathematically, we can formulate the circular shift via a modulo operation:
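The difference between the two shifts is easy to see in code; a minimal Python/NumPy sketch (illustrative only), with $k$ positive so that the signal is delayed:
\begin{verbatim}
import numpy as np

x = np.arange(1, 7)                            # [1 2 3 4 5 6], a length-6 signal
k = 2                                          # delay by two samples

# logical shift (finite-support extension): zeros enter from the left
logical = np.concatenate((np.zeros(k), x[:-k]))

# circular shift (periodic extension): the last k samples wrap around,
# i.e. the result is x[(n - k) mod N]
circular = np.roll(x, k)

print(logical)     # [0. 0. 1. 2. 3. 4.]
print(circular)    # [5 6 1 2 3 4]
\end{verbatim}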
for finite-length signals, the lower limit in the sum is replaced by zero. It is easy to verify that the unit step can be obtained by integrating the impulse:
The discrete-time differentiation operator is the first-order difference:\footnote{
We will see in Chapter~\ref{is} how to obtain a more precise approximation to actual differentiation. For many applications, however, the first-order difference is enough.}
where the sum is taken over all valid values for $n$. The formula states that any signal can be expressed as a \textit{linear combination of shifted impulses}. The weights are simply the signal values themselves and, while self-evident, this formula is important because it introduces the idea that a signal can be expressed as a linear combination of elementary building blocks. We will see in the next chapter that this is an instance of basis decomposition for a signal; while shifted impulses represent an intuitive canonical set of building blocks, by changing the basis set we can decompose a signal in many other ways, in order to highlight specific properties.
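A short numerical check of this decomposition (Python, illustrative only), rebuilding a signal from its weighted, shifted impulses:
\begin{verbatim}
x = [3.0, -1.0, 2.0, 5.0]                 # an arbitrary length-4 signal
N = len(x)

def delta(n):
    """Discrete-time impulse: 1 at n = 0, 0 elsewhere."""
    return 1.0 if n == 0 else 0.0

# x[n] = sum over k of x[k] * delta[n - k]
reconstructed = [sum(x[k] * delta(n - k) for k in range(N)) for n in range(N)]
print(reconstructed == x)                  # True
\end{verbatim}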
\subsection{Energy and Power}\label{ErgPowSec}
We define the \emph{energy}\index{energy!of a signal} of a discrete-time signal as
the squared-norm notation will be clearer after the next chapter but here it means that the sum is taken over all legal values for the index $n$, that is, over $\mathbb{Z}$ for infinite sequences and over the range $[0, N-1]$ for finite-length signals. This definition is consistent with the idea that, if the values of the sequence represent a time-varying voltage, the above sum would be proportional to the total energy (in joules) dissipated over a $1\,\Omega$~resistor. Obviously, the energy is finite only if the above sum converges; for finite-length signals this is always the case but, for infinite sequences, $\mathbf{x}$ must be \emph{square-summable}. A signal with this property is sometimes referred to as a \emph{finite-energy signal}. For a simple example of the converse, any periodic signal which is not identically zero is \emph{not} square-summable.
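For example (a Python sketch, illustrative only), the energy of a finite-length signal is just the sum of the squared magnitudes of its samples:
\begin{verbatim}
x = [1.0, -2.0, 2.0]                       # a finite-length signal
energy = sum(abs(v) ** 2 for v in x)       # |1|^2 + |-2|^2 + |2|^2
print(energy)                              # 9.0
\end{verbatim}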
We define the \emph{power} \index{power!of a signal} of an infinite-length signal as the limit of the ratio of energy over time, taking the limit over the entire number of samples:
Clearly, signals whose energy is finite have zero total power since their energy dilutes to zero over an infinite time duration. Conversely, unstable sequences such as diverging exponential sequences (i.e.\ $a^n$ with $|a| > 1$) possess infinite power. Constant signals, on the other hand, whose energy is infinite, do have finite power; the same holds for periodic signals: although, formally, the limit in~(\ref{eq:dt:power}) is undefined for periodic sequences, we simply define their power to be their
\emph{average energy over a period}. Assuming that the period is $N$~samples, we have