The success of digital signal processing originates primarily in the versatility of the digital representation of information, in which both time and amplitude are discrete quantities. We have seen in great detail how the discrete-time paradigm allows us to encode any processing goal as a step-by-step algorithmic procedure that operates on countable sequences of data points; in so doing, the actual physical nature of the processing device becomes irrelevant, as long as it can carry out the standard arithmetic operations. But the discretization of time is only half of the story: in order to exploit the power of general-purpose digital processing units, we need to be able to store the discrete-time samples in a format that is compatible with digital storage, that is, as a sequence of integer numbers. The operation that achieves this discretization of amplitude is called \textit{quantization} and the cascade of a sampler followed by a quantizer is called an analog-to-digital converter (or ADC for short). An ADC, as the name implies, lies at the boundary between the analog and the digital worlds and, as such, it is in fact a physical electronic device -- the only physical device we need before we are safely operating in the familiar world of numerical processing.
Quantization, as opposed to sampling, is a lossy operation in the sense that, in general, the original amplitude value cannot be recovered exactly from its quantized counterpart. While we will not be able to eliminate this loss of information, we will see that its effects can be modeled precisely and kept within acceptable bounds.
Modern digital memory consists of a large number of addressable binary cells called \textit{bits}, usually organized in groups called \textit{words} containing $N$ bits each. Irrespective of the strategy used to encode information into words, each word can take on at most $2^N$ distinct values, that is, it can represent an integer in the interval from zero to $2^N-1$; a byte ($N=8$), for example, encodes one of $256$ values.
In order to store the value of a sample into digital memory, this value must first be mapped onto a finite set of pre-determined reference levels; if the representation uses $N$ bits per sample, the cardinality of this set cannot exceed $2^N$. The mapping operation, called \textit{quantization}, returns the index of the element in the set that best represents the input value; this index is the integer between zero and $2^N-1$ that is actually stored in digital memory. In other words, the mapping operation is a many-to-one function from the set of real numbers onto the finite set of indices $\{0, 1, \ldots, 2^N-1\}$.
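As an illustration, the following Python sketch implements such a mapping for a uniform quantizer; the input range $[-1, 1)$, the function name, and the use of NumPy are illustrative assumptions rather than a prescribed implementation:
\begin{verbatim}
import numpy as np

def quantize(x, N=8):
    """Map a real value x in [-1, 1) to the index (0 .. 2**N - 1)
    of the quantization cell that contains it; the cell midpoints
    serve as the 2**N reference levels."""
    levels = 2 ** N
    # scale [-1, 1) onto [0, levels), then truncate to an integer index
    index = np.floor((x + 1.0) / 2.0 * levels).astype(int)
    # out-of-range inputs are clipped to the extreme indices
    return np.clip(index, 0, levels - 1)

# a few samples quantized with 3 bits (8 levels, indices 0 .. 7)
print(quantize(np.array([-1.0, -0.3, 0.0, 0.99]), N=3))
\end{verbatim}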
The conversion from the real-world (or analog) value of a signal to its discretized digital counterpart is called analog-to-digital (A/D) conversion.
The discrete-time paradigm that we have used from the start originates in the need to use general-purpose digital hardware; indeed, the word ``digital'' in ``digital signal processing'' indicates that the representation of a signal is discrete \emph{both} in time and in amplitude.
The necessity to discretize the {\em amplitude} values of a discrete-time signal comes from the fact that, in the digital world, all variables are necessarily represented with a finite precision. Specifically, general-purpose signal processors are nothing but streamlined processing units that address memory locations whose granularity is an integer number of bits.
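As a small, concrete illustration of this finite precision, consider the following Python sketch (the choice of NumPy and of a 32-bit floating-point word is purely illustrative):
\begin{verbatim}
import numpy as np

x = 1.0 / 3.0          # already a finite (64-bit) approximation of 1/3
x32 = np.float32(x)    # the same value stored in a 32-bit word
print(float(x32) - x)  # nonzero: bits are discarded in the narrower word
\end{verbatim}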
Analogously, a transition in the opposite direction is referred to as a digital-to-analog (D/A) conversion; in this case, we are associating a physical analog value with the digital internal representation of a signal sample.
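To make both directions concrete, here is a minimal Python sketch that pairs the A/D mapping shown earlier with its D/A counterpart; as before, it assumes, purely for illustration, a uniform quantizer over $[-1, 1)$ whose reference levels are the midpoints of the quantization cells:
\begin{verbatim}
import numpy as np

def quantize(x, N=8):
    # A/D: map a real value in [-1, 1) to an integer index in 0 .. 2**N - 1
    index = np.floor((x + 1.0) / 2.0 * 2 ** N).astype(int)
    return np.clip(index, 0, 2 ** N - 1)

def dequantize(index, N=8):
    # D/A: map an index back to the midpoint of its quantization cell
    step = 2.0 / 2 ** N
    return -1.0 + (index + 0.5) * step

x = np.random.uniform(-1, 1, 1000)    # arbitrary "analog" amplitudes
x_hat = dequantize(quantize(x))       # A/D followed by D/A
print(np.max(np.abs(x - x_hat)))      # error never exceeds 2**-8
\end{verbatim}
Note that the round trip is lossy: the original amplitudes cannot be recovered exactly, but the error never exceeds half a quantization step.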
Note that, just as was the case with sampling, quantization and its inverse lie at the boundary between the analog and the digital worlds and, as such, they are performed by actual physical electronic devices: the A/D and D/A converters.