
Amplitude Quantization


The Sampling Theorem says that if we sample a bandlimited signal s(t) fast enough, it can be recovered without error from its samples s(nTs), n ∈ {..., −1, 0, 1, ...}. Sampling is only the first phase of acquiring data into a computer: computational processing further requires that the samples be quantized, meaning that analog values are converted into digital (Section 1.2.2: Digital Signals) form. In short, we will have performed analog-to-digital (A/D) conversion.

Figure 5.5 A three-bit A/D converter
A three-bit A/D converter assigns voltages in the range [−1, 1] to one of eight integers between 0 and 7. For example, all inputs having values lying between 0.5 and 0.75 are assigned the integer value six and, upon conversion back to an analog value, they all become 0.625. The width of a single quantization interval Δ equals \frac{2}{2^B}, where B is the number of bits used in the A/D conversion process (3 in the case depicted here). The bottom panel shows a signal going through the analog-to-digital converter: first it is sampled, then amplitude-quantized to three bits. Note how the sampled signal waveform becomes distorted after amplitude quantization. For example, the two signal values between 0.5 and 0.75 become 0.625. This distortion is irreversible; it can be reduced (but not eliminated) by using more bits in the A/D converter.

A phenomenon reminiscent of the errors incurred in representing numbers on a computer prevents signal amplitudes from being converted with no error into a binary number representation. In analog-to-digital conversion, the signal is assumed to lie within a predefined range. Assuming we can scale the signal without affecting the information it expresses, we'll define this range to be [−1, 1]. Furthermore, the A/D converter assigns amplitude values in this range to a set of integers. A B-bit converter produces one of the integers \{0, 1, \ldots, 2^B-1\} for each sampled input. Figure 5.5 shows how a three-bit A/D converter assigns input values to the integers. We define a quantization interval to be the range of values assigned to the same integer. Thus, for our example three-bit A/D converter, the quantization interval Δ is 0.25; in general, it is \frac{2}{2^B}.
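
To make the mapping concrete, here is a minimal sketch in Python with NumPy; the function name quantize and the floor-style assignment of [−1, 1] onto the integers are illustrative assumptions, not the converter circuit itself. It reproduces the three-bit behavior described above, where inputs between 0.5 and 0.75 map to the integer 6.

import numpy as np

def quantize(s, B=3):
    """Map amplitudes in [-1, 1] to the integers 0 .. 2**B - 1 (illustrative sketch).

    Each quantization interval has width delta = 2 / 2**B; inputs at the
    top edge (+1) are clipped into the highest interval.
    """
    delta = 2 / 2**B
    codes = np.floor((np.asarray(s, dtype=float) + 1) / delta).astype(int)
    return np.clip(codes, 0, 2**B - 1)

# Inputs lying between 0.5 and 0.75 all receive the integer value 6.
print(quantize([0.55, 0.6, 0.74]))    # [6 6 6]
print(quantize(1.0), quantize(-1.0))  # 7 0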

Exercise 5.4.1

Recalling the plot of average daily highs in the frequency-domain problem (Problem 4.5), why is this plot so jagged? Interpret this effect in terms of analog-to-digital conversion.

Because values lying anywhere within a quantization interval are assigned the same value for computer processing, the original amplitude value cannot be recovered without error. Typically, the D/A converter, the device that converts integers to amplitudes, assigns an amplitude equal to the value lying halfway in the quantization interval. The integer 6 would be assigned to the amplitude 0.625 in this scheme. The error introduced by converting a signal from analog to digital form by sampling and amplitude quantization then back again would be at most half the quantization interval for each amplitude value. Thus, the so-called A/D error equals at most half the width of a quantization interval: \frac{1}{2^B}. As we have fixed the input-amplitude range, the more bits available in the A/D converter, the smaller the quantization error.
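
This round trip can be checked numerically. The sketch below (assuming the same floor-style mapping as the earlier sketch, NumPy, and a three-bit converter) quantizes a sweep of amplitudes, reconstructs each one at the midpoint of its interval, and confirms that the error never exceeds 1/2^B.

import numpy as np

B = 3
delta = 2 / 2**B                     # quantization interval width, 0.25 for B = 3

# Forward conversion (floor-style mapping of [-1, 1] onto 0 .. 2**B - 1),
# followed by midpoint D/A reconstruction.
s = np.linspace(-1, 0.999, 1001)
codes = np.clip(np.floor((s + 1) / delta), 0, 2**B - 1)
reconstructed = -1 + (codes + 0.5) * delta

# Every input in [0.5, 0.75) comes back as 0.625, and the error is bounded
# by half a quantization interval, i.e. 1 / 2**B = 0.125.
print(np.unique(reconstructed[(s >= 0.5) & (s < 0.75)]))  # [0.625]
print(np.abs(reconstructed - s).max() <= 1 / 2**B)        # True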

To analyze the amplitude quantization error more deeply, we need to compute the signal-to-noise ratio, which equals the ratio of the signal power and the quantization error power. Assuming the signal is a sinusoid, the signal power is the square of the rms amplitude: power(s)=\left ( \frac{1}{\sqrt{2}} \right )^2=\frac{1}{2}. The illustration (Figure 5.6) details a single quantization interval.

Figure 5.6 A single quantization interval
A single quantization interval is shown, along with a typical signal's value before amplitude quantization s(nTs) and after Q(s(nTs)). ε denotes the error thus incurred.

Its width is Δ and the quantization error is denoted by ε. To find the power in the quantization error, we note that no matter into which quantization interval the signal's value falls, the error will have the same characteristics. To calculate the rms value, we must square the error and average it over the interval.

\begin{align*} rms(\epsilon )&=\sqrt{\frac{1}{\Delta}\int_{-\frac{\Delta}{2}}^{\frac{\Delta}{2}}\epsilon^2d\epsilon }\\ &=\left ( \frac{\Delta^2}{12} \right )^{\frac{1}{2}} \end{align*}
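
As a quick numerical sanity check of this formula (a sketch only; it assumes the error is uniformly distributed over one interval and uses NumPy's random generator with an arbitrary seed), the mean squared value of uniformly drawn errors matches Δ²/12:

import numpy as np

# Mean squared quantization error for errors drawn uniformly over one
# interval of width delta (B = 3 bits, so delta = 0.25).
rng = np.random.default_rng(0)
B = 3
delta = 2 / 2**B

eps = rng.uniform(-delta / 2, delta / 2, size=1_000_000)
print(np.mean(eps**2))  # close to delta**2 / 12
print(delta**2 / 12)    # 0.0052083...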

Since the quantization interval width for a B-bit converter equals \frac{2}{2^{B}}=2^{-\left ( B-1 \right )}, we find that the signal-to-noise ratio for the analog-to-digital conversion process equals

SNR=\frac{\frac{1}{2}}{\frac{2^{-\left ( 2(B-1) \right )}}{12}}=\frac{3}{2}2^{2B}=\left ( 6B+10log_{10}\ 1.5 \right )\ dB

Thus, every bit increase in the A/D converter yields a 6 dB increase in the signal-to-noise ratio. The constant term 10log_{10}\ 1.5 equals approximately 1.76 dB.
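
To see the 6-dB-per-bit rule numerically, the short sketch below (Python with NumPy; the choice of B values is for illustration only) evaluates 10log_{10}\left ( \frac{3}{2}2^{2B} \right ) for a few converters; successive values differ by about 6.02 dB.

import numpy as np

# SNR of an ideal B-bit converter for a full-scale sinusoid:
# 10*log10(1.5 * 2**(2*B)) = 6.02*B + 1.76 dB.
for B in (3, 4, 5):
    snr_db = 10 * np.log10(1.5 * 2**(2 * B))
    print(B, round(snr_db, 2))
# 3 19.82
# 4 25.84
# 5 31.86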

Exercise 5.4.2

This derivation assumed the signal's amplitude lay in the range [−1, 1]. What would the amplitude quantization signal-to-noise ratio be if it lay in the range [−A, A]?

Exercise 5.4.3

How many bits would be required in the A/D converter to ensure that the maximum amplitude quantization error was less than 60 dB smaller than the signal's peak value?

Exercise 5.4.4

Music on a CD is stored to 16-bit accuracy. To what signal-to-noise ratio does this correspond?

Once we have acquired signals with an A/D converter, we can process them using digital hardware or software. It can be shown that if the computer processing is linear, the result of sampling, computer processing, and unsampling is equivalent to some analog linear system. Why go to all the bother if the same function can be accomplished using analog techniques? Knowing when digital processing excels and when it does not is an important issue.