From: glen herrmannsfeldt on
Jerry Avins <jya(a)ieee.org> wrote:
> On 7/31/2010 3:05 AM, Rune Allnor wrote:
>> On 30 Jul, 23:27, glen herrmannsfeldt<g...(a)ugcs.caltech.edu> wrote:
(snipped comparison of sample rate and filter taps)

>>> Is this always true? For any cutoff frequency and filter order?

>> The comparison is all but impossible.

>> For a given filter type and order one has to balance the number
>> of flops per sample against the higher number of samples at the
>> higher sampling rates. But then, with lower sampling rates the
>> filter often enough has to be of higher order to satisfy the
>> real-life spec.
(snip)

> I think you will find that FIR filters need to be longer at higher
> sample rates, especially if a significant frequency is low. More samples
> are needed in the filter to account for the approximately fixed impulse
> response. When the only reason for a high sample rate is simplicity of
> the anti-alias filter, it is usually appropriate to decimate early in
> the processing chain.

Not having done an actual comparison, I was thinking about the case
where the cutoff frequency is getting close to the sample rate.

It seems to me that as the cutoff gets closer, more and more taps would
be needed. At higher sample rates that would be less true.

I think I agree with Rune, though. It is a complicated function
of the frequencies and precision needed.

-- glen
From: rickman on
On Jul 31, 3:05 am, Rune Allnor <all...(a)tele.ntnu.no> wrote:
> On 30 Jul, 23:27, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
>
> > rickman <gnu...(a)gmail.com> wrote:
>
> > (snip)
>
> > > I don't see an advantage to doing any of this.  In fact, there are
> > > processing disadvantages to upsampling.  For one, a low-pass digital
> > > filter requires more coefficients to get the same transition band if
> > > the sample rate is higher, not to mention that you have to process
> > > more samples, unless you are downsampling at the same time.  Other
> > > processing will take longer just because of the higher sample rate.
>
> > Is this always true?  For any cutoff frequency and filter order?
>
> The comparison is all but impossible.
>
> For a given filter type and order one has to balance the number
> of flops per sample against the higher number of samples at the
> higher sampling rates. But then, with lower sampling rates the
> filter often enough has to be of higher order to satisfy the
> real-life spec.
>
> The one argument I have seen that might be decisive in favour
> of the higher *sampling* rates, is that the analog anti-aliasing
> filter becomes significantly simpler.
>
> Rune

My experience has been the opposite. A digital filter doesn't care
about the absolute frequency; it only cares about frequencies relative
to the sample rate. When the cutoff is at a lower relative frequency,
getting the same "relative" performance requires a sharper transition
band in terms of absolute Hz, and so needs a higher order.

Getting close to the sample rate is not the issue. A filter with a
given transition band is harder to get at higher sample rates. At some
point you need the same filter regardless, unless you *never* convert
to the lower sample rate. Still, if you sample faster and use a wider
transition band, or sample slower and don't worry so much about the
aliased frequencies, you get the same effect: out-of-band noise in
your signal.
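
To put rough numbers on that trade-off, here is a minimal sketch using
the usual Kaiser tap-count estimate, N ~= (A - 7.95)/(2.285*dw); the
spec and sample rates below are made up purely for illustration:

/* Kaiser's FIR length estimate:
 *   N ~= (A - 7.95) / (2.285 * dw),   dw = 2*pi*df/fs,
 * where A is the stopband attenuation in dB and df is the transition
 * width in Hz.
 */
#include <stdio.h>

#define TWO_PI 6.283185307179586

static double kaiser_taps(double atten_db, double trans_hz, double fs_hz)
{
    double dw = TWO_PI * trans_hz / fs_hz;  /* transition width, rad/sample */
    return (atten_db - 7.95) / (2.285 * dw);
}

int main(void)
{
    /* Same spec in absolute Hz: 60 dB stopband, 500 Hz transition band. */
    printf("fs =  8 kHz: about %.0f taps\n", kaiser_taps(60.0, 500.0, 8000.0));
    printf("fs = 64 kHz: about %.0f taps\n", kaiser_taps(60.0, 500.0, 64000.0));
    return 0;
}

The 64 kHz filter needs roughly eight times the taps and also has to
run eight times as often, which is the double penalty described above.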

Rick
From: Tim Wescott on
On 07/31/2010 12:05 AM, Rune Allnor wrote:
> On 30 Jul, 23:27, glen herrmannsfeldt<g...(a)ugcs.caltech.edu> wrote:
>> rickman<gnu...(a)gmail.com> wrote:
>>
>> (snip)
>>
>>> I don't see an advantage to doing any of this. In fact, there are
>>> processing disadvantages to upsampling. For one, a low-pass digital
>>> filter requires more coefficients to get the same transition band if
>>> the sample rate is higher, not to mention that you have to process
>>> more samples, unless you are downsampling at the same time. Other
>>> processing will take longer just because of the higher sample rate.
>>
>> Is this always true? For any cutoff frequency and filter order?
>
> The comparison is all but impossible.
>
> For a given filter type and order one has to balance the number
> of flops per sample against the higher number of samples at the
> higher sampling rates. But then, with lower sampling rates the
> filter often enough has to be of higher order to satisfy the
> real-life spec.
>
> The one argument I have seen that might be decisive in favour
> of the higher *sampling* rates, is that the analog anti-aliasing
> filter becomes significantly simpler.

Not to mention the reconstruction filter.

One of my favorite tricks when implementing a digital control loop is to
sample the ADC as fast as I can, with as simple an analog anti-aliasing
filter as I can get away with, followed by some prefiltering and
decimation in the digital world.

In reality "as fast as I can sample" usually means "whatever won't load
the processor too much", and doesn't come closer than a factor of two
away from what the ADC is capable of. Usually the prefiltering is
nothing more than an average of all the samples in one control loop
sampling interval; i.e. I might sample the ADC at 64kHz, then decimate
by a factor of eight to sample the control loop at 8kHz. Each of the
control loop's samples will just be a sum of the ADC samples. This cuts
down on the math that the processor has to do, and gives a filter with
nulls at all the harmonics of the control loop's sample rate up to the
ADC's Nyquist rate. With a high enough ratio between the ADC sampling
rate and the characteristics of the plant, the analog anti-aliasing
filter often becomes a simple RC filter, or nothing at all.
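
A minimal sketch of that arrangement (the rates, names and ISR
structure here are illustrative assumptions, not Tim's actual code):

#include <stdint.h>

#define DECIM 8                     /* 64 kHz ADC rate / 8 kHz loop rate */

static volatile int32_t acc;        /* running sum of ADC samples        */
static volatile int32_t loop_input; /* latest decimated sample           */
static volatile uint8_t count;

void adc_isr(int16_t adc_sample)    /* called at the 64 kHz ADC rate */
{
    acc += adc_sample;
    if (++count == DECIM) {
        loop_input = acc;           /* boxcar sum; divide by DECIM for an average */
        acc = 0;
        count = 0;
        /* flag the 8 kHz control task that a fresh sample is ready */
    }
}

The eight-sample boxcar is the prefilter: it puts nulls at 8, 16, 24
and 32 kHz, i.e. at the harmonics of the loop rate up to the ADC's
Nyquist rate, at essentially zero cost in processor time.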

I believe this sort of thing is common in the audio world, too -- the
sampling rate in a CD is fixed, but that doesn't keep you from sampling
your ADCs at 4x or 8x (and probably even mastering at the higher
sampling rate), and it doesn't keep you from digitally upsampling in the
player, just to simplify the reconstruction filter.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
From: Manny on
On Jul 31, 2:58 pm, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
> robert bristow-johnson wrote:
> > we begin with the L most current samples, x[n] to x[n-L+1].  let's
> > define the sample interval, T, to be 1.
>
> > so, what is the algorithm?  do we begin with an (L-1)th-order
> > polynomial that hits all of those L samples?  is that correct?  if so,
> > let's call that polynomial
>
> >             L-1
> >    p_n(t) = SUM{ a_(k,n) * t^k }
> >             k=0
>
> > and that p(i) = x[n+i]   for integers  -L < i <= 0 .
> > and getting the coefficients a_(k,n) is no problem.
> > now what do we do with it?  what is the next step?
>
> Describe the nonlinearity as a polynomial F(x).
> Then we have q_n(t) = F(p_n(t)), whose order is the product of the
> orders of F(x) and p_n(t).
>
> Then do a continuous Fourier transform of q_n(t). As q_n(t) is a
> polynomial, the Fourier transform would look like:
>
> Q_n(W) ~ exp(-iWt) P_n(W)
>
> where P_n(W) is a complex polynomial.
>
> Now do the inverse Fourier transform; this also has an analytic
> solution, Fn(W,t), in closed form. So drop frequencies higher than
> Nyquist and sample.
This is actually neat. But I think you can't do this in fixed-point,
because a frequency-domain round trip in fixed-point is bad.

I like to quote a reconfigurable computing professor I once met: "if
they send robots to Mars in fixed-point, everything else is damn
doable in fixed-point too."
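
In floating point, at least, the first couple of steps are easy enough
to sketch. Here is a minimal example assuming L = 4 samples and the
arbitrary nonlinearity F(x) = x^2 (the sample values are made up, and
the band-limiting/resampling step is not shown):

#include <stdio.h>

#define L 4                       /* samples used; p(t) has degree L-1 */

/* Solve the Vandermonde system sum_k a[k]*t_i^k = x_i for the
 * interpolating polynomial's coefficients (naive Gaussian elimination). */
static void fit_poly(const double t[L], const double x[L], double a[L])
{
    double m[L][L + 1];
    for (int i = 0; i < L; i++) {
        double p = 1.0;
        for (int k = 0; k < L; k++) { m[i][k] = p; p *= t[i]; }
        m[i][L] = x[i];
    }
    for (int c = 0; c < L; c++) {                 /* forward elimination */
        for (int r = c + 1; r < L; r++) {
            double f = m[r][c] / m[c][c];
            for (int k = c; k <= L; k++) m[r][k] -= f * m[c][k];
        }
    }
    for (int r = L - 1; r >= 0; r--) {            /* back substitution */
        double s = m[r][L];
        for (int k = r + 1; k < L; k++) s -= m[r][k] * a[k];
        a[r] = s / m[r][r];
    }
}

int main(void)
{
    double t[L] = { -3.0, -2.0, -1.0, 0.0 };  /* p(i) = x[n+i], -L < i <= 0 */
    double x[L] = { 0.2, 0.9, -0.4, 0.7 };    /* made-up recent samples     */
    double a[L], q[2 * (L - 1) + 1] = { 0.0 };

    fit_poly(t, x, a);

    /* q(t) = F(p(t)) = p(t)^2: the coefficient convolution doubles the
     * degree, i.e. the product of the orders in the general case.      */
    for (int i = 0; i < L; i++)
        for (int j = 0; j < L; j++)
            q[i + j] += a[i] * a[j];

    for (int k = 0; k <= 2 * (L - 1); k++)
        printf("q[%d] = %g\n", k, q[k]);
    return 0;
}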

-Momo
From: Manny on
On Aug 3, 12:57 am, Manny <mlou...(a)hotmail.com> wrote:
> On Jul 31, 2:58 pm, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
>
> > robert bristow-johnson wrote:
> > > we begin with the L most current samples, x[n] to x[n-L+1].  let's
> > > define the sample interval, T, to be 1.
>
> > > so, what is the algorithm?  do we begin with an (L-1)th-order
> > > polynomial that hits all of those L samples?  is that correct?  if so,
> > > let's call that polynomial
>
> > >             L-1
> > >    p_n(t) = SUM{ a_(k,n) * t^k }
> > >             k=0
>
> > > and that p(i) = x[n+i]   for integers  -L < i <= 0 .
> > > and getting the coefficients a_(k,n) is no problem.
> > > now what do we do with it?  what is the next step?
>
> > Describe the nonlinearity as a polynomial F(x).
> > Then we have q_n(t) = F(p_n(t)), whose order is the product of the
> > orders of F(x) and p_n(t).
>
> > Then do a continuous Fourier transform of q_n(t). As q_n(t) is a
> > polynomial, the Fourier transform would look like:
>
> > Q_n(W) ~ exp(-iWt) P_n(W)
>
> > where P_n(W) is a complex polynomial.
>
> > Now do the inverse Fourier transform; this also has an analytic
> > solution, Fn(W,t), in closed form. So drop frequencies higher than
> > Nyquist and sample.
>
> This is actually neat. But I think you can't do this in fixed-point,
> because a frequency-domain round trip in fixed-point is bad.
>
> I like to quote a reconfigurable computing professor I once met: "if
> they send robots to Mars in fixed-point, everything else is damn
> doable in fixed-point too."
Ah, I think I get it now.

I'd like to see this done, Vlad. Can't pay, though :).

There's always going to be a tube fanatic to argue that this is not
the real thing!

-Momo