oversampling

Penio Penev (penev@venezia.rockefeller.edu)
Sun, 4 Jan 1998 15:56:21 -0500


On Sun, 4 Jan 1998, Kragen wrote:

> On Sat, 3 Jan 1998, Penio Penev wrote:
> > All modern CD players have 1-bit D/A converters. The key word is
> > oversampling (with analog low-pass filtering at the end).
> 
> So how, exactly, does this work? Do you emulate a sixteen-bit D/A
> converter by running your 1-bit D/A converter 65536 times as fast?

192X. Or at least I've seen such specs in a Philips data sheet or
something. There is a whole slew of issues with audio-quality D/A
converters which, I guess, are described in the literature. I've seen
layman's descriptions on the web as well, but I'd guess any sound engineer
is quite familiar with what goes on. 
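Here is a rough Python sketch of the basic idea (mine, not anything out of
a Philips part -- just a plain first-order delta-sigma modulator, with the
192x figure reused from above): the 1-bit stream, once the analog low-pass
filter averages it out, tracks the multi-bit input.

import numpy as np

def delta_sigma_1bit(samples, oversample=192):
    """Turn multi-bit samples (scaled to [-1, 1]) into a +/-1 bit stream."""
    bits = []
    integrator = 0.0
    out = -1.0                            # previous 1-bit output
    for x in samples:
        for _ in range(oversample):
            integrator += x - out         # accumulate error vs. last output
            out = 1.0 if integrator >= 0 else -1.0
            bits.append(out)
    return np.array(bits)

# A 1 kHz tone at 44.1 ksamples/s; a boxcar average of the bit stream
# (a crude stand-in for the analog low-pass filter) tracks the input.
t = np.arange(441) / 44100.0
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
stream = delta_sigma_1bit(tone, oversample=192)
recovered = stream.reshape(-1, 192).mean(axis=1)
print("max error after the low-pass:", np.max(np.abs(recovered - tone)))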

On Sat, 3 Jan 1998, KC5TJA wrote:

> > All modern CD players have 1-bit D/A converters. The key word is
> > oversampling (with analog low-pass filtering at the end).
> 
> I'm sorry, but you cannot use a 1-bit A/D converter for this application.

For which application?

POTS quality signaling? One can definitely use it for CD-DA quality
music, which is much harder. 

> The cost of the circuitry and software development time would make it
> completely unworthy of design for high throughput and error-free
> operation.

Well, one can definitely record the POTS signal on a CD and then play it
back to the line with a consumer 1-bit D/A converter, and one shouldn't
lose much (if anything).

> Besides, you have different requirements for signal input than signal
> output. A 1-bit D/A converter is NEVER used in commercial-grade
> telecommunications equipment, and for good reason.

For what?

If we are talking about MHz bandwidth, I can imagine that it's expensive
to make good analog bandpass filters at those frequencies. 

If we are talking about low-passing 8 kHz (or 22 kHz for that matter),
this shouldn't be too expensive. 

> > As to the A/D, remember that the spectrum of the phone signal is band
> > limited. So oversampling is again the key word.
> 
> Irrelavent -- you need something that is sensitive enough to detect phase
> errors and similar effects of long-haul audio transmission. :)

I'll try to present a _rough_ argument as to why 1-bit A/D with
oversampling may be enough (given the necessary DSP power). 

Whatever you get at your end is the result of 1) D/A conversion at your
telco central office of the 64 kbps (or 56 kbps in some cases) representation
of a signal band-passed to 300--3600 Hz, sampled at 8 ksamples/s, and u-law
encoded to 8 (or 7) bits, 2) line noise between your home and the
telco CO, and 3) local echo.

This signal has _very well_ determined spectral characteristics. If we
forget about the local line noise for a while, a second of this (analog)
signal lives in a 2x(3600-300) = 6600-dimensional subspace of the space of
all analog functions one second long, where 3300 sines and 3300 cosines
serve as the basis. So the signal is determined by 6600 numbers, which
may be the projections onto the sines and cosines, but can equally well be
determined by _any_ 6600 linearly independent projections onto _that
subspace_. 

This is usually achieved with regular-interval sampling, but in principle
it can be achieved by _any_ sampling -- in the spatial, frequency, or time
domain, or a combination thereof. 

The trick is to know how one is sampling and to calculate the
transformation matrix correctly. Of course, one has to do the error
propagation analysis -- what error in the sampling space corresponds to
what error in the target subspace. 

One can improve the quality of the reconstruction if one has more samples
-- in that case the problem is called "least-squares fitting," or "what is
the signal in the target subspace closest to the measured one, which lives
in a bigger space?"

In all cases the oversampled measurements give additional information, so
that the original signal can be recovered with better accuracy. 
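Here is a toy numerical sketch of that argument (mine, and shrunk way down
-- a 10-dimensional subspace instead of 6600 -- but the algebra is the
same): build the sine/cosine basis, take more irregularly spaced samples
than there are unknowns, and recover the coefficients by least squares.

import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                 # one second of signal
freqs = np.arange(1, 6)                 # 5 allowed harmonics -> 10 basis functions

def basis(t):
    """Columns: a sine and a cosine at each allowed frequency, at times t."""
    cols = [np.sin(2 * np.pi * f * t) for f in freqs]
    cols += [np.cos(2 * np.pi * f * t) for f in freqs]
    return np.stack(cols, axis=1)

true_coef = rng.normal(size=2 * len(freqs))

# Irregular, oversampled measurement times: 4x more samples than unknowns.
t_meas = np.sort(rng.uniform(0.0, T, size=40))
measured = basis(t_meas) @ true_coef + 0.01 * rng.normal(size=t_meas.size)

# Least-squares fit: the signal in the band-limited subspace closest to
# the noisy, irregularly sampled measurements.
est_coef, *_ = np.linalg.lstsq(basis(t_meas), measured, rcond=None)
print("worst coefficient error:", np.max(np.abs(est_coef - true_coef)))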


> I don't
> care how much you over-sample; you still need resolution!

Well, think of it this way.

With a 1-bit A/D one is measuring the zero crossings of some function. 
Note that although we get only one bit of information, that bit can be
arbitrarily accurate -- i.e., when we read out that f(t)>=0, we are actually
quite sure that f(t)>-2^(-20), for example. And when we read out that
f(t)<0, we are quite sure that f(t)<+2^(-20). Then the only uncertainty is
in the actual times of the zero crossings, which can be made arbitrarily
small with arbitrarily large oversampling.

So, by noting the exact time of a zero crossing, we have actually
constrained, extremely well, a point through which the waveform passes.

In other words, there aren't _that many_ possible waveforms in the
_band-passed subspace_ that conform to those requirements. 
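A toy sketch of what I mean (mine; the tone, the phase, and the 5 MHz rate
are just picked for illustration): the A/D keeps only the sign of the
waveform, yet the crossing times recover the phase of a band-limited tone
to within roughly one sample interval.

import numpy as np

fs = 5_000_000                      # 5 MHz, 1-bit sampling
f0 = 1000.0                         # 1 kHz tone, well inside the passband
true_phase = 1.2345

t = np.arange(int(0.002 * fs)) / fs                      # 2 ms of signal
bits = np.sign(np.sin(2 * np.pi * f0 * t + true_phase))  # all we record

# Brute-force fit: which candidate phase makes the predicted signs agree
# best with the recorded 1-bit stream?
candidates = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
def disagreements(ph):
    return np.count_nonzero(np.sign(np.sin(2 * np.pi * f0 * t + ph)) != bits)

best = min(candidates, key=disagreements)
print("recovered phase error (radians):", abs(best - true_phase))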

So, if all one is after is the 6600 coefficients in the Fourier expansion
of the signal with 13-bit accuracy, one is really searching for 13 x 6600
bits/second, or about 86 kbits/s. With a 1-bit A/D at 5 MHz one has 5
Mbits/s, and with a 6-bit A/D at 5 MHz, 30 Mbits/s. Granted, they are not
decorrelated -- long strings of zeroes and ones are expected more
frequently than in a coin-tossing experiment -- but still, there is a lot
of information about the signal there. 
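Spelling that budget out:

needed   = 13 * 6600      # ~86 kbits/s of signal description wanted
raw_1bit = 1 * 5_000_000  # 1-bit A/D at 5 MHz: 5 Mbits/s of raw samples
raw_6bit = 6 * 5_000_000  # 6-bit A/D at 5 MHz: 30 Mbits/s
print(needed, raw_1bit / needed)  # 85800 bits/s needed, ~58x that available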

As I said, one needs a helluva DSP to sort things out :-)

The whole point is that there is some functional dependency between the
number of bits in the A/D and the DSP power needed to do V.34. When this
is weighted by the price of achieving both, one can find the point of
minimum cost. 

MISC chips have a smaller weight coefficient for the cost of the DSP, so
their optimal N-bit A/D point _may_ actually be shifted to the left of the
TI-based one.
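To illustrate (with completely made-up cost curves -- the shape of the real
ones is exactly the question): fewer A/D bits means a cheaper converter but
more DSP work, and where the total bottoms out depends on how expensive the
DSP cycles are.

def total_cost(adc_bits, dsp_weight):
    adc_cost = 2 ** adc_bits                 # assumed: converter price grows fast with bits
    dsp_cost = dsp_weight * (16 - adc_bits)  # assumed: fewer bits -> more MIPS to make up
    return adc_cost + dsp_cost

for label, weight in [("TI-style DSP", 40.0), ("MISC chip", 5.0)]:
    best = min(range(1, 13), key=lambda b: total_cost(b, weight))
    print(label, "-> cheapest at", best, "A/D bits")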

--
Penio Penev <Penev@pisa.Rockefeller.edu> 1-212-327-7423


