From: John on
On Jan 6, 11:34 am, "all4dsp" <all4...(a)comcast.net> wrote:
> Thanks Vladimir, John,
>
> Are Vladimir's points 1 and 2 related? That is, does polyphase filtering
> not interpolate the whole array? Otherwise, I'm not sure where to go with
> point 2 (can you explain)?
>
> Does polyphase method return equally accurate data as sinc interpolation
> (e.g. by zero-padding in frequency domain in middle of 1xFFT spectrum, and
> then doing an IFFT to get interpolated points, thereby avoiding non-ideal
> filter characteristics in time domain if one were to use conventional
> time-domain low pass filter method), or does the error increase (dependent
> on the filter size, etc?)? Where do the interpolated errors come from? Best
> regards

If done properly, an FFT-based filter should produce identical results
to a time-domain filter. In both cases you will not be able to realize
an ideal sinc, because those go on forever.

John
From: Clay on
On Jan 6, 7:36 am, "all4dsp" <all4...(a)comcast.net> wrote:
> Hello all,
>
> I'd like to know what methods are available, if any, for upsampling a
> large array. I have an application where I need to sinc interpolate an
> array of 1E9 (billion) double precision data points output by an ADC by 4x.
> The array captures a clock waveform centered at zero volts and is
> oversampled (meets Nyquist). Don't know if it matters, but my goal is
> simply to very accurately find the zero-point crossings. By upsampling a
> factor of 4 times, I'll then use linear interpolation to get the final
> number. The point is, I really only need very accurate data during the
> rise/fall times (not sure if this opens a new degree of freedom). Sample
> spacing is periodic. I can throw out the initial and final 5% or so of the
> array if needed to eliminate artifacts, but I need the majority of data in
> the middle.
>
> My current system takes 10 hours to compute a 1E9 FFT (actually, size is
> 2^30, or 1.074G points). The CPU only works ~1%, while virtual memory is
> 57GB. Not sure I could stuff 3G pts into the FFT then IFFT it. Would like
> to get it down to as short as possible (10 minutes?).
>
> I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
> arrays, but I'm not sure how to use it for upsampling.
>
> Rick Lyons' excellent blog on Computing Large DFTs Using Small FFTs
> (http://www.dsprelated.com/showarticle/63.php) shows how to get a DFT
> using smaller FFTs, but I'm not sure how to do the upsampling.
>
> Could someone help me understand what the conventional wisdom is to do
> this? I'm not a DSP expert, although I'm a fast study, if you could
> reference literature I could access (no research papers where someone
> created an algorithm no one uses though) I'd be very much appreciative.
> Thanks in advance.

Here is a way.

http://www.claysturner.com/dsp/DSPinterpolation.pdf


This can be coded up and runs reasonably well. If the "spectral
occupancy" is a fair bit less than Nyquist, then the sinc kernel can be
replaced by a quicker-decaying interpolation function.
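As one illustration of the coded-up time-domain route, here is a minimal polyphase upsample-by-4 sketch. The Hann-windowed sinc kernel and the 33-tap length are my own illustrative choices, not necessarily what Clay's paper uses.

```python
import numpy as np

def upsample4(x, half_width=16):
    """Upsample by 4 with a Hann-windowed-sinc FIR, evaluated as four
    polyphase branches (one 33-tap FIR per output phase)."""
    m = np.arange(-half_width, half_width + 1)
    y = np.empty(4 * len(x))
    for p in range(4):
        d = m + p / 4.0                    # tap offsets for this phase
        h = np.sinc(d) * (0.5 + 0.5 * np.cos(np.pi * d / (half_width + 1)))
        # 'same'-mode convolution keeps each branch aligned with the input grid
        y[p::4] = np.convolve(x, h, mode='same')
    return y
```

Each branch is an ordinary FIR convolution, so streaming or overlap-add evaluation applies directly to arrays far too big for one FFT. Note the phase-0 branch reduces to a unit impulse, so every 4th output sample reproduces the input exactly.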

Clay



From: Rick Lyons on
On Wed, 06 Jan 2010 06:36:03 -0600, "all4dsp" <all4dsp(a)comcast.net>
wrote:

>Hello all,
>
>I'd like to know what methods are available, if any, for upsampling a
>large array. I have an application where I need to sinc interpolate an
>array of 1E9 (billion) double precision data points output by an ADC by 4x.
>The array captures a clock waveform centered at zero volts and is
>oversampled (meets Nyquist). Don't know if it matters, but my goal is
>simply to very accurately find the zero-point crossings. By upsampling a
>factor of 4 times, I'll then use linear interpolation to get the final
>number. The point is, I really only need very accurate data during the
>rise/fall times (not sure if this opens a new degree of freedom). Sample
>spacing is periodic. I can throw out the initial and final 5% or so of the
>array if needed to eliminate artifacts, but I need the majority of data in
>the middle.
>
>My current system takes 10 hours to compute a 1E9 FFT (actually, size is
>2^30, or 1.074G points). The CPU only works ~1%, while virtual memory is
>57GB. Not sure I could stuff 3G pts into the FFT then IFFT it. Would like
>to get it down to as short as possible (10 minutes?).
>
>I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
>arrays, but I'm not sure how to use it for upsampling.
>
>Rick Lyons' excellent blog on Computing Large DFTs Using Small FFTs
>(http://www.dsprelated.com/showarticle/63.php) shows how to get a DFT
>using smaller FFTs, but I'm not sure how to do the upsampling.
>
>Could someone help me understand what the conventional wisdom is to do
>this? I'm not a DSP expert, although I'm a fast study, if you could
>reference literature I could access (no research papers where someone
>created an algorithm no one uses though) I'd be very much appreciative.
>Thanks in advance.

Hello all4dsp,
If your input signal (the signal that's to be
interpolated) is a "squarewave-like" signal, then maybe
the linear interpolation method in:

[1] R. Losada and R. Lyons, "Reducing CIC Filter Complexity",
IEEE Signal Processing Magazine, July 2006, pp. 124-126

would be of some help to you. That article describes a
linear time-domain interpolation scheme that requires no
multiplications regardless of the interpolation factor.

A slightly expanded version of the material in the above
article can be found in Chapter 6 in:

[2] R. Lyons, Editor, "Streamlining Digital Signal Processing,
A Tricks of the Trade Guidebook", IEEE-Wiley Interscience
Publishing, 2007.

all4dsp, if you're unable to gain access to the above article or
the above book chapter, then let me know.

Good Luck,
[-Rick-]


From: Rick Lyons on
On Wed, 06 Jan 2010 06:36:03 -0600, "all4dsp" <all4dsp(a)comcast.net>
wrote:

[Snipped by Lyons]
>
>I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
>arrays, but I'm not sure how to use it for upsampling.
>
>Rick Lyons' excellent blog on Computing Large DFTs Using Small FFTs
>(http://www.dsprelated.com/showarticle/63.php) shows how to get a DFT
>using smaller FFTs, but I'm not sure how to do the upsampling.

Hi all4dsp,
If your input data block length (and FFT size) is N,
then you can interpolate by four by:

(1) Perform your initial N-point FFT to compute the X(m)
freq samples. (Freq index m goes from 0 -to- N-1.)

(2) Create a new freq-domain sequence comprising:

the first N/2 X(m) samples, followed by 3*N
zero-valued samples, followed by the last
N/2 X(m) samples.

This step merely inserts 3*N zero-valued samples
right in the middle of your original FFT's X(m) samples.

(3) Perform a 4N-point inverse FFT on this "expanded"
freq-domain sequence to obtain your interpolated-by-four
time-domain sequence.

I'm assuming that the initial FFT samples, X(m), are *VERY*
low in magnitude in the vicinity of Fs/2 Hz (Fs is the
original sample rate in Hz).
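In NumPy, the three steps above look like the sketch below (mine, for a block that fits in memory). The factor of 4 restores amplitude after the longer inverse FFT, and the Nyquist bin simply rides along in the second half of X(m), which is harmless given the stated assumption that the spectrum is tiny near Fs/2.

```python
import numpy as np

def fft_interp4(x):
    """Steps (1)-(3): N-point FFT, insert 3*N zeros in the middle of
    the spectrum, 4N-point inverse FFT.  Assumes len(x) is even and
    the spectrum is negligible near Fs/2."""
    n = len(x)
    X = np.fft.fft(x)
    Xz = np.concatenate([X[:n // 2],
                         np.zeros(3 * n, dtype=complex),
                         X[n // 2:]])
    # Factor of 4 compensates for the inverse FFT's 1/(4N) scaling.
    return 4.0 * np.fft.ifft(Xz).real
```

For a band-limited input this reproduces the original samples exactly at every 4th output point, with the true band-limited reconstruction in between.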

Of course the above scheme's gonna take a super long time
to execute because you're working with such unbelievably
humongous (is that a word?) data block lengths.
Your problem gives a whole new meaning to the phrase
"number crunching".

all4dsp, thinking about this, I'm not at all sure that
the above freq-domain interpolation is the most
computationally-efficient thing to do. Time-domain
interpolation requires digital filtering, and if the
number of filter taps is not too large maybe time-domain
interpolation is the "way to go". If so, then polyphase
filtering methods should be used.

all4dsp, ya' know what I'd do if I were you? I'd carefully
double-check the need (the necessity) for working with
such large-sized blocks of data.

And another thing, as Veronica said in the movie "The Fly",
I'd "Be afraid. Be very afraid" of numerical errors in
performing such large-sized FFTs. In those FFTs you're
expecting the software to accurately estimate the sine and
cosine of angles in the range of 360/2^30 degrees!!!
Are sine and cosine approximation algorithms (in the FFT
software) accurate enough for such super-small angles???
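One way the concern shows up, in a toy experiment of my own (not a claim about any particular FFT package): an implementation that generates its twiddle factors by repeated multiplication drifts measurably away from computing each angle directly.

```python
import numpy as np

n = 2 ** 20
w1 = np.exp(-2j * np.pi / n)   # smallest twiddle angle for an n-point FFT
w = 1.0 + 0.0j
for _ in range(n // 4):        # recurrence: multiply up to the quarter point
    w *= w1
direct = np.exp(-2j * np.pi * (n // 4) / n)  # same twiddle, computed directly

drift_err = abs(w - (-1j))     # exact value at the quarter point is -j
direct_err = abs(direct - (-1j))
```

Here direct_err sits at the last-bit level while drift_err is orders of magnitude larger, and the gap only widens as n grows toward 2^30.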

Good Luck,
[-Rick-]
From: Ron N. on
On Jan 6, 4:36 am, "all4dsp" <all4...(a)comcast.net> wrote:
> I'd like to know what methods are available, if any, for upsampling a
> large array. I have an application where I need to sinc interpolate an
> array of 1E9 (billion) double precision data points output by an ADC by 4x.
> The array captures a clock waveform centered at zero volts and is
> oversampled (meets Nyquist). Don't know if it matters, but my goal is
> simply to very accurately find the zero-point crossings. By upsampling a
> factor of 4 times, I'll then use linear interpolation to get the final
> number. The point is, I really only need very accurate data during the
> rise/fall times (not sure if this opens a new degree of freedom). Sample
> spacing is periodic. I can throw out the initial and final 5% or so of the
> array if needed to eliminate artifacts, but I need the majority of data in
> the middle.
>
> My current system takes 10 hours to compute a 1E9 FFT ...

What is the ratio of zero crossings to samples? If it's low enough,
you can do an initial linear or quadratic interpolation, then a sinc
interpolation (polyphase, etc.) only around those first-estimate
points. Interpolate again to get a second estimate (and
rinse-and-repeat until you converge sufficiently, if needed).
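A sketch of that estimate-then-refine loop (the function names, the Hann-windowed truncated-sinc kernel, and the secant refinement are my illustrative choices, not Ron's specification):

```python
import numpy as np

def coarse_crossings(x):
    """Linear-interpolation first estimates of all zero crossings,
    in fractional-sample units."""
    k = np.nonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
    return k + x[k] / (x[k] - x[k + 1])

def recon(x, t, half_width=32):
    """Hann-windowed truncated-sinc reconstruction of x at fractional
    time t; assumes t is at least half_width samples from the edges."""
    k0 = int(np.floor(t))
    k = np.arange(k0 - half_width + 1, k0 + half_width + 1)
    d = t - k
    w = 0.5 + 0.5 * np.cos(np.pi * d / (half_width + 1))
    return np.dot(x[k], np.sinc(d) * w)

def refine_crossing(x, t0, iters=3):
    """Secant iterations on the reconstructed waveform around t0."""
    a, b = t0 - 0.5, t0 + 0.5
    fa, fb = recon(x, a), recon(x, b)
    for _ in range(iters):
        t = b - fb * (b - a) / (fb - fa)
        a, fa, b, fb = b, fb, t, recon(x, t)
    return b
```

The full-rate sinc work is then proportional to the number of crossings, not the array length, which is exactly the saving Ron describes for a sparsely-crossing clock waveform.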

YMMV.
--
rhn A.T nicholson d.0.t C-o-M
http://www.nicholson.com/dsp.html