From: Clay on
On Jan 8, 4:01 am, Andor <andor.bari...(a)gmail.com> wrote:
> On 7 Jan., 17:18, Andor <andor.bari...(a)gmail.com> wrote:
>
> > On 7 Jan., 16:48, "all4dsp" <all4...(a)comcast.net> wrote:
>
> > > Thanks everyone for your replies, you've given me a lot to think about.
> > > There are obviously a lot of tradeoffs in selecting a method.
>
> > > >I'm assuming that the initial FFT samples, X(m), are *VERY*
> > > >low in magnitude in the vicinity of Fs/2 Hz (Fs is the
> > > >original sample rate in Hz).
>
> > > This is true.
>
> > > >In those FFTs you're expecting the software to accurately
> > > >estimate the sine and cosine of angles in the range
> > > >of 360/2^30 degrees
>
> > > Wow, good point! It sounds like polyphase filtering might be the best
> > > choice.
>
> > Since you are upsampling by 4 (an integer factor) you don't even need
> > to implement a polyphase resampler. All you need is a filterbank with
> > three filters. As I wrote in another post, I would suggest a two step
> > procedure:
>
> > 1. Interpolate by 4 with polynomial FIR filters (very small kernels,
> > e.g. 4 taps for third-order polynomial interpolation):
>
> > x(n) ---+-----------------> x(n)
> >         |
> >         |   +---------+
> >         |-->|  H1(z)  |---> x1(n)
> >         |   +---------+
> >         |
> >         |   +---------+
> >         |-->|  H2(z)  |---> x2(n)
> >         |   +---------+
> >         |
> >         |   +---------+
> >         +-->|  H3(z)  |---> x3(n)
> >             +---------+
>
> > H1, H2 and H3 are the interpolation filters (polynomial filters are
> > simple to design from scratch), your upsampled sequence will be:
>
> > (... x(n),x1(n),x2(n),x3(n),x(n+1),x1(n+1),x2(n+1),x3(n+1), ... )
>
> > Using a third-order polynomial filter requires 3*4*n MACS to resample
> > by 4. For n = 1e9 you have 12e9 MACS, which should take a couple of
> > seconds on a normal PC.
>
> > 2. Use a simple method to find zero-crossings in the upsampled data
> > and then "polish" the zero-crossing location with more accurate
> > interpolation - this is outlined here: http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> hmm. For fun I implemented this scheme and applied it to the samples
> of the signal
>
> x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)
>
> for 0.5 < f < 1.0, eps > 0 and s(t) the square wave function with
> period 1, sampled with sampling interval T = 1. The sampling sequence
> x[n] = x(n) has the property that all samples from the time interval
>
> 0 <= t < 1/(1-f)
>
> are positive, and all samples from the interval
>
> 1/(1-f) <= t < 2/(1-f)
>
> are negative. For small eps, the number of zero-crossings of x(t) in
> both intervals is about 1/(1-f)/f.
>
> Unfortunately, upsampling by a factor of 4 with third-order polynomial
> FIR did not change the fact that the first half of the samples are all
> positive and the second half are all negative (meaning we can't find
> the zero-crossings from the sampled signal using the "multiply-two-
> consecutive-samples-and-check-the-sign" trick).
>
> So I think the OP might not be able to avoid using high-order
> sinc-interpolation filters for the factor-4 upsampling on the whole
> data set to find the zero-crossings.
>
> Regards,
> Andor

A question about pathological signals is whether they represent a
common or rare occurrence for what the OP is looking for.

Certainly things like this have to be tested.

Clay

From: Andor on
On 8 Jan., 22:08, Clay <c...(a)claysturner.com> wrote:
> On Jan 8, 4:01 am, Andor <andor.bari...(a)gmail.com> wrote:
>
> > [... quoted text snipped ...]
>
> A question about pathological signals is whether they represent a
> common or rare occurrence for what the OP is looking for.

Clearly. I was just trying to counter the widely accepted notion that
zero-crossings of sampled signals can easily be found from the
(non-upsampled) data.

>
> Certainly things like this have to be tested.

In the end I had to use 7th-order polynomial interpolation FIR filters
on my test function to actually catch the zero-crossings using
upsampling by a factor of 4 (5th order was not enough). This can still
be implemented efficiently (8*3*1e9 = 24e9 MACS should easily be
computable in under a minute on a normal PC).
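To make step 1 of the procedure concrete, here is a small Python
sketch (my own, not code from this thread), using cubic Lagrange
kernels as one standard choice of "polynomial FIR" for H1, H2, H3:

```python
import numpy as np

def lagrange_taps(d):
    """4-tap cubic Lagrange kernel interpolating at fractional offset
    d (0 < d < 1) from samples at relative positions -1, 0, 1, 2."""
    pos = [-1, 0, 1, 2]
    taps = []
    for k in pos:
        c = 1.0
        for m in pos:
            if m != k:
                c *= (d - m) / (k - m)
        taps.append(c)
    return np.array(taps)

def upsample4(x):
    """Upsample by 4: pass x(n) through, and interleave the outputs of
    the three interpolation filters H1, H2, H3 (offsets 1/4, 1/2, 3/4).
    Edge samples are dropped for simplicity."""
    banks = [lagrange_taps(d) for d in (0.25, 0.5, 0.75)]
    y = []
    for j in range(1, len(x) - 2):      # interior samples only
        y.append(float(x[j]))           # original sample x(n)
        for h in banks:                 # x1(n), x2(n), x3(n)
            y.append(float(np.dot(h, x[j - 1 : j + 3])))
    return np.array(y)

# Cubic Lagrange interpolation reproduces cubic polynomials exactly:
n = np.arange(10.0)
y = upsample4(n ** 3)
print(abs(y[1] - 1.25 ** 3) < 1e-9)     # interpolated value at t = 1.25
```

Each output sample costs one 4-tap dot product, matching the 3*4*n
MACS count quoted above.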

Regards,
Andor
From: dvsarwate on
On Jan 7, 9:59 am, Andor <andor.bari...(a)gmail.com> wrote:

> I think the main problem is to find the rough location of the zero-
> crossing in the sampled data. As my example shows, a zero-crossing of
> the underlying sampled function will not necessarily show up as a
> "zero-crossing" in the samples. I would opt for Ron's idea to use
> quick polynomial interpolation with order >= 3 (can be implemented
> with very small kernel FIR filters) to oversample the data by 4 or so
> to find "zero-crossings" in the samples and then use the method of
> sinc-interpolation outlined in your paper to find the exact location.
>
> This reminds me of the old comp.dsp debate on the maximum number of
> zero-crossings that a bandlimited function may have on any given
> interval. As it turned out (I think Matt Timmermans came up with the
> example of the prolate spheroidal wave functions), the number of zero-
> crossings is not limited ...

But if there could be infinitely many zero-crossings, then no matter
how much we oversample, we could never be sure that we have found them
all, could we?

If a (low-pass) signal is truly band-limited to W Hz, meaning that
X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, "The
optimization of bandlimited systems," Proc. IEEE, Feb. 1973). So,
depending on the sampling rate and the values of two successive
positive samples, we can be sure in *some* cases that there is no
zero-crossing between the two samples: the bound on how quickly x(t)
can change does not allow for the possibility that the signal could
dip below zero between the two positive samples.
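As a numerical sanity check of that derivative bound (my own
construction, not from the thread): a random sum of sinusoids with all
frequencies kept below W should satisfy max |x'(t)| <= 2 pi W max |x(t)|.

```python
import numpy as np

# Signal band-limited to W Hz: a random sum of sinusoids whose
# frequencies are all kept safely below W.
rng = np.random.default_rng(1)
W = 3.0
freqs = rng.uniform(0.0, 2.5, 8)              # components below W
amps = rng.uniform(-1.0, 1.0, 8)
phases = rng.uniform(0.0, 2.0 * np.pi, 8)

t = np.linspace(0.0, 10.0, 100001)
x = sum(a * np.sin(2 * np.pi * f * t + p)
        for a, f, p in zip(amps, freqs, phases))
dx = sum(2 * np.pi * f * a * np.cos(2 * np.pi * f * t + p)
         for a, f, p in zip(amps, freqs, phases))

# Bernstein-type bound: max |x'(t)| <= 2*pi*W * max |x(t)|
print(np.abs(dx).max() <= 2 * np.pi * W * np.abs(x).max())
```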

--Dilip Sarwate
From: dvsarwate on
On Jan 11, 12:37 pm, Jerry Avins <j...(a)ieee.org> wrote:
> dvsarwate wrote:
> > [... quoted text snipped ...]
>
> But doesn't that imply some frequency well above half the sample rate?
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.


I think Jerry and I are talking at cross-purposes. Suppose we have two
samples spaced T seconds apart from a signal x(t) that is strictly
band-limited to bandwidth W; X(f) = 0 for |f| > W. Suppose also (for
simplicity) that |x(t)| < 1, so that the magnitude of the derivative
is bounded by 2\pi W. If x(0) = a > 0, then the signal cannot decrease
to 0 earlier than t = a/(2\pi W), because the rate of change of x(t)
is bounded. Similarly, if x(T) = b > 0 is the next sample (note that
we are not assuming any relation between T and W), then the signal
must be above zero during the time interval (T - b/(2\pi W), T), since
otherwise it could not change fast enough to reach b at t = T. Thus,
if a/(2\pi W) > T - b/(2\pi W), then there is no zero-crossing between
0 and T. Note that the stated criterion is equivalent to a + b >
2\pi W T, and of course, since a + b = x(0) + x(T) < 2, the criterion
will never be satisfied if 2\pi W T > 2. But, assuming that
\pi W T < 1, it is possible that we can deduce for *some* adjacent
positive samples that x(t) does not cross zero between the samples. If
T is very small (some upsampling may be necessary to get this), then
maybe we can say that for *most* positive adjacent sample values a and
b, there is no zero-crossing between the samples.
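The criterion is easy to state in code. A minimal sketch (the helper
name is mine, and it assumes |x(t)| <= 1 as in the derivation above):

```python
import math

def no_zero_crossing_between(a, b, W, T):
    """Sufficient (not necessary) condition that a signal band-limited
    to W Hz with |x(t)| <= 1 has no zero-crossing in (0, T), given
    positive samples x(0) = a and x(T) = b. Equivalent to
    a/(2*pi*W) > T - b/(2*pi*W)."""
    if a <= 0 or b <= 0:
        raise ValueError("criterion applies to two positive samples")
    return a + b > 2 * math.pi * W * T

# With W = 1 Hz and T = 0.1 s, samples 0.9 and 0.8 rule out a crossing,
# since 1.7 > 2*pi*0.1 = 0.628:
print(no_zero_crossing_between(0.9, 0.8, 1.0, 0.1))
```

Note the condition is only sufficient: when it fails, a zero-crossing
may or may not be present between the samples.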

Hope this helps

--Dilip Sarwate
From: Andor on
On 11 Jan., 16:16, Jerry Avins <j...(a)ieee.org> wrote:
> Andor wrote:
>
>    ...
>
> > In the end I had to use 7th order polynomial interpolation FIR filters
> > on my test function to actually catch the zero-crossings using
> > upsampling by factor 4 (5th order was not enough). This can still be
> > implemented efficiently (8*3*1e9 = 24e9 MACS should easily be
> > computable in under a minute on a normal PC).
>
> I've been puzzling over this for a few days now. Set aside the search
> for actual locations of the zero crossings. If there are at least two
> samples in the period of the highest frequency present in the sampled
> signal, how can any zero crossing fail to be among those counted?

Like this:

http://www.zhaw.ch/~bara/files/zero-crossings_example.png

Plot of the function described above with f = 0.98. The function has 80
zero-crossings in the displayed time window, while the samples of the
function show exactly one zero-crossing.
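For anyone who wants to reproduce this, here is a sketch of the test
signal construction (f = 0.98; the value of eps is my own choice, since
the construction only requires it to be small):

```python
import numpy as np

f, eps = 0.98, 1e-3

def square(t):
    """Square wave s(t) with period 1: +1 on [0, 0.5), -1 on [0.5, 1)."""
    return np.where(np.mod(t, 1.0) < 0.5, 1.0, -1.0)

def x(t):
    return (np.sin(np.pi * f * t) + np.sin(np.pi * (1 - f) * t)
            + eps * square(0.5 * (1 - f) * t))

# One period of the sign pattern: 0 <= t < 2/(1-f) = 100, sampled at T = 1.
n = np.arange(100)
samples = x(n)

# The samples show exactly one sign change ...
print(int(np.sum(samples[:-1] * samples[1:] < 0)))     # 1
# ... although the underlying function crosses zero many times:
t_fine = np.arange(0.0, 100.0, 0.001)
x_fine = x(t_fine)
print(int(np.sum(x_fine[:-1] * x_fine[1:] < 0)))
```

The first 50 samples all come out positive and the last 50 all
negative, so the "multiply-two-consecutive-samples" test sees only the
single crossing at mid-window.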

Regards,
Andor