From: Andor on
On 6 Jan., 18:40, Clay <c...(a)claysturner.com> wrote:
> On Jan 6, 7:36 am, "all4dsp" <all4...(a)comcast.net> wrote:
>
> > Hello all,
>
> > I'd like to know what methods are available, if any, for upsampling a
> > large array. I have an application where I need to sinc interpolate an
> > array of 1E9 (billion) double precision data points output by an ADC by 4x.
> > The array captures a clock waveform centered at zero volts and is
> > oversampled (meets Nyquist). Don't know if it matters, but my goal is
> > simply to very accurately find the zero-point crossings. By upsampling a
> > factor of 4 times, I'll then use linear interpolation to get the final
> > number. The point is, I really only need very accurate data during the
> > rise/fall times (not sure if this opens a new degree of freedom). Sample
> > spacing is periodic. I can throw out the initial and final 5% or so of the
> > array if needed to eliminate artifacts, but I need the majority of data in
> > the middle.
>
> > My current system takes 10 hours to compute a 1E9 FFT (actually, size is
> > 2^30, or 1.074G points). The CPU only works ~1%, while virtual memory is
> > 57GB. Not sure I could stuff 3G pts into the FFT then IFFT it. Would like
> > to get it down to as short as possible (10 minutes?).
>
> > I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
> > arrays, but I'm not sure how to use it for upsampling.
>
> > Rick Lyons' excellent blog on Computing Large DFTs Using Small FFTs
> > (http://www.dsprelated.com/showarticle/63.php) shows how to get a DFT
> > using smaller FFTs, but I'm not sure how to do the upsampling.
>
> > Could someone help me understand what the conventional wisdom is to do
> > this? I'm not a DSP expert, although I'm a fast study, if you could
> > reference literature I could access (no research papers where someone
> > created an algorithm no one uses though) I'd be very much appreciative.
> > Thanks in advance.
>
> Here is a way.
>
> http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> This can be coded up and runs reasonably well. If the "spectral
> occupancy" is a fair bit less than Nyquist, then the sinc kernel can
> be replaced by a quicker-decaying interpolation function.

Hi Clay

The problem with your method is finding the rough location of the
zero-crossings in the sampled data in the first place. It is quite
possible that zero-crossings can't be found by multiplying two
consecutive samples and checking the sign of the result. This effect
does not depend on the "spectral occupancy". To see this, consider
sampling

x(t) = sin(pi f t) + sin(pi (1-f) t)

for arbitrary f (the effect is most striking for f close to 0 or 1)
with sampling period T = 1. Your algorithm will not find any
zero-crossings. An easy modification of your search criterion can be
circumvented by adding a small offset to x (in that case, only about
half of the zero-crossings will be found).
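A quick numerical check of this counterexample (a minimal sketch in
Python; the identity sin(pi (1-f) n) = -(-1)^n sin(pi f n) is used so
that the analytically-zero even samples come out exactly zero rather
than as floating-point residue):

```python
import math

f = 0.1  # any f in (0, 1); the effect is most striking near 0 or 1

# Samples of x(t) = sin(pi f t) + sin(pi (1-f) t) at t = n with T = 1.
# Every even sample is exactly zero, every odd sample is 2 sin(pi f n).
x = [(1 - (-1) ** n) * math.sin(math.pi * f * n) for n in range(200)]

# Naive detector: a crossing lies between n and n+1 iff x[n]*x[n+1] < 0.
found = sum(1 for n in range(len(x) - 1) if x[n] * x[n + 1] < 0)
print(found)  # 0 -- every product involves a zero sample, so nothing is found
```

Yet the underlying x(t) does cross zero many times, as the sign changes
among the odd samples alone already show.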

Regards,
Andor
From: Clay on
On Jan 7, 3:31 am, Andor <andor.bari...(a)gmail.com> wrote:
> On 6 Jan., 18:40, Clay <c...(a)claysturner.com> wrote:
>
> > On Jan 6, 7:36 am, "all4dsp" <all4...(a)comcast.net> wrote:
>
> > > [original post snipped]
>
> > Here is a way.
>
> >http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> > This can be coded up and runs reasonably well. If the "spectral
> > occupancy" is a fair bit less than Nyquist, then the sinc kernel can
> > be replaced by a quicker-decaying interpolation function.
>
> Hi Clay
>
> The problem with your method is finding the rough location of the
> zero-crossings in the sampled data in the first place. It is quite
> possible that zero-crossings can't be found by multiplying two
> consecutive samples and checking the sign of the result. This effect
> does not depend on the "spectral occupancy". To see this, consider
> sampling
>
> x(t) = sin(pi f t) + sin(pi (1-f) t)
>
> for arbitrary f (the effect is most striking for f close to 0 or 1)
> with sampling period T = 1. Your algorithm will not find any
> zero-crossings. An easy modification of your search criterion can be
> circumvented by adding a small offset to x (in that case, only about
> half of the zero-crossings will be found).
>
> Regards,
> Andor

Certainly true. I did this for an application years ago where the
received data had been substantially band-pass filtered. Zero-crossing
location is used for ranging lightning. If I had to do this today, I
would use better methods involving optimized interpolation kernels.

Thanks for the input,

Clay

From: Andor on
On 7 Jan., 16:42, Clay <c...(a)claysturner.com> wrote:
> On Jan 7, 3:31 am, Andor <andor.bari...(a)gmail.com> wrote:
>
> > On 6 Jan., 18:40, Clay <c...(a)claysturner.com> wrote:
>
> > > On Jan 6, 7:36 am, "all4dsp" <all4...(a)comcast.net> wrote:
>
> > > > [original post snipped]
>
> > > Here is a way.
>
> > >http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> > > This can be coded up and runs reasonably well. If the "spectral
> > > occupancy" is a fair bit less than Nyquist, then the sinc kernel can
> > > be replaced by a quicker-decaying interpolation function.
>
> > Hi Clay
>
> > [earlier reply snipped]
>
> > Regards,
> > Andor
>
> Certainly true. I did this for an application years ago where the
> received data had been substantially band-pass filtered. Zero-crossing
> location is used for ranging lightning. If I had to do this today, I
> would use better methods involving optimized interpolation kernels.

I don't think the interpolation kernels are the problem (although they
certainly can be).

I think the main problem is to find the rough location of the zero-
crossing in the sampled data. As my example shows, a zero-crossing of
the underlying sampled function will not necessarily show up as a
"zero-crossing" in the samples. I would opt for Ron's idea to use
quick polynomial interpolation with order >= 3 (can be implemented
with very small kernel FIR filters) to oversample the data by 4 or so
to find "zero-crossings" in the samples and then use the method of
sinc-interpolation outlined in your paper to find the exact location.

This reminds me of the old comp.dsp debate on the maximum number of
zero-crossings that a bandlimited function may have on any given
interval. As it turned out (I think Matt Timmermans came up with the
example of the prolate spheroidal wave functions), the number of zero-
crossings is not limited ...

>
> Thanks for the input,

Thanks for the paper :-).

Regards,
Andor
From: Andor on
On 7 Jan., 16:48, "all4dsp" <all4...(a)comcast.net> wrote:
> Thanks everyone for your replies, you've given me a lot to think about.
> There are obviously a lot of tradeoffs in selecting a method.
>
> >I'm assuming that the initial FFT samples, X(m), are *VERY*
> >low in magnitude in the vicinity of Fs/2 Hz (Fs is the
> >original sample rate in Hz).
>
> This is true.
>
> >In those FFTs you're expecting the software to accurately
> >estimate the sine and cosine of angles in the range
> >of 360/2^30 degrees
>
> Wow, good point! It sounds like polyphase filtering might be the best
> choice.

Since you are upsampling by 4 (an integer factor) you don't even need
to implement a polyphase resampler. All you need is a filterbank with
three filters. As I wrote in another post, I would suggest a two step
procedure:

1. Interpolate by 4 with polynomial FIR filters (very small kernels,
e.g. 4 taps for third-order polynomial interpolation):

x(n) ---+-----------------> x(n)
        |
        |   +---------+
        |-->|  H1(z)  |---> x1(n)
        |   +---------+
        |
        |   +---------+
        |-->|  H2(z)  |---> x2(n)
        |   +---------+
        |
        |   +---------+
        +-->|  H3(z)  |---> x3(n)
            +---------+

H1, H2 and H3 are the interpolation filters (polynomial filters are
simple to design from scratch), your upsampled sequence will be:

(... x(n),x1(n),x2(n),x3(n),x(n+1),x1(n+1),x2(n+1),x3(n+1), ... )

Using a third-order polynomial filter requires 3*4*n MACs to resample
by 4. For n = 1e9 that is 12e9 MACs, which should take a couple of
seconds on a normal PC.
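As an illustration of step 1, here is a minimal sketch in Python (the
helper names `lagrange_taps` and `upsample4` are my own; the three
filters are 4-tap third-order Lagrange interpolators for the
fractional delays 1/4, 1/2 and 3/4):

```python
import math

def lagrange_taps(d):
    """4-tap third-order Lagrange interpolator for fractional delay d,
    applied to the window x[n-1], x[n], x[n+1], x[n+2]."""
    return [-d * (d - 1) * (d - 2) / 6,
            (d + 1) * (d - 1) * (d - 2) / 2,
            -(d + 1) * d * (d - 2) / 2,
            (d + 1) * d * (d - 1) / 6]

def upsample4(x):
    """Interleave x(n) with the outputs of H1, H2, H3 (delays 1/4, 1/2,
    3/4). Edge samples are skipped for simplicity."""
    banks = [lagrange_taps(k / 4) for k in (1, 2, 3)]
    y = []
    for n in range(1, len(x) - 2):
        y.append(x[n])
        window = x[n - 1:n + 3]
        for h in banks:
            y.append(sum(c * s for c, s in zip(h, window)))
    return y

# Quick check on an oversampled sine: the interpolants track the signal.
x = [math.sin(2 * math.pi * 0.02 * n) for n in range(100)]
y = upsample4(x)  # y[i] approximates x(t) at t = 1 + i/4
```

Each output sample costs 4 MACs per filter, matching the 3*4*n count
above.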

2. Use a simple method to find zero-crossings in the upsampled data
and then "polish" the zero-crossing location with more accurate
interpolation - this is outlined here: http://www.claysturner.com/dsp/DSPinterpolation.pdf

Regards,
Andor
From: Andor on
On 7 Jan., 17:18, Andor <andor.bari...(a)gmail.com> wrote:
> On 7 Jan., 16:48, "all4dsp" <all4...(a)comcast.net> wrote:
>
> > Thanks everyone for your replies, you've given me a lot to think about.
> > There are obviously a lot of tradeoffs in selecting a method.
>
> > >I'm assuming that the initial FFT samples, X(m), are *VERY*
> > >low in magnitude in the vicinity of Fs/2 Hz (Fs is the
> > >original sample rate in Hz).
>
> > This is true.
>
> > >In those FFTs you're expecting the software to accurately
> > >estimate the sine and cosine of angles in the range
> > >of 360/2^30 degrees
>
> > Wow, good point! It sounds like polyphase filtering might be the best
> > choice.
>
> Since you are upsampling by 4 (an integer factor) you don't even need
> to implement a polyphase resampler. All you need is a filterbank with
> three filters. As I wrote in another post, I would suggest a two step
> procedure:
>
> 1. Interpolate by 4 with polynomial FIR filters (very small kernels,
> e.g. 4 taps for third-order polynomial interpolation):
>
> x(n) ---+-----------------> x(n)
>         |
>         |   +---------+
>         |-->|  H1(z)  |---> x1(n)
>         |   +---------+
>         |
>         |   +---------+
>         |-->|  H2(z)  |---> x2(n)
>         |   +---------+
>         |
>         |   +---------+
>         +-->|  H3(z)  |---> x3(n)
>             +---------+
>
> H1, H2 and H3 are the interpolation filters (polynomial filters are
> simple to design from scratch), your upsampled sequence will be:
>
> (... x(n),x1(n),x2(n),x3(n),x(n+1),x1(n+1),x2(n+1),x3(n+1), ... )
>
> Using a third-order polynomial filter requires 3*4*n MACs to resample
> by 4. For n = 1e9 that is 12e9 MACs, which should take a couple of
> seconds on a normal PC.
>
> 2. Use a simple method to find zero-crossings in the upsampled data
> and then "polish" the zero-crossing location with more accurate
> interpolation - this is outlined here: http://www.claysturner.com/dsp/DSPinterpolation.pdf

hmm. For fun I implemented this scheme and applied it to the samples
of the signal

x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)

for 0.5 < f < 1.0, eps > 0 and s(t) the square wave function with
period 1, sampled with sampling interval T = 1. The sample sequence
x[n] = x(n) has the property that all samples from the time interval

0 <= t < 1/(1-f)

are positive, and all samples from the interval

1/(1-f) <= t < 2/(1-f)

are negative. For small eps, the number of zero-crossings of x(t) in
each of the two intervals is roughly 1/((1-f) f).

Unfortunately, upsampling by a factor of 4 with third-order polynomial
FIR did not change the fact that the first half of the samples are all
positive and the second half are all negative (meaning we can't find
the zero-crossings from the sampled signal using the "multiply-two-
consecutive-samples-and-check-the-sign" trick).

So I think the OP may not get around using high-order
sinc-interpolation filters for the factor-4 upsampling on the whole
data set to find the zero-crossings.

Regards,
Andor