From: glen herrmannsfeldt on
rickman <gnuarm(a)gmail.com> wrote:
(snip)

> I don't see an advantage to doing any of this. In fact, there are
> processing disadvantages to upsampling. For one, a low pass digital
> filter requires more coefficients to get the same transition if the
> sample rate is higher, not to mention that you have to process more
> samples, unless you are downsampling at the same time. Other
> processing will take longer just because of the higher sample rate.

Is this always true? For any cutoff frequency and filter order?

-- glen
From: Rune Allnor on
On 30 Jul, 23:27, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
> rickman <gnu...(a)gmail.com> wrote:
>
> (snip)
>
> > I don't see an advantage to doing any of this.  In fact, there are
> > processing disadvantages to upsampling.  For one, a low pass digital
> > filter requires more coefficients to get the same transition if the
> > sample rate is higher, not to mention that you have to process more
> > samples, unless you are downsampling at the same time.  Other
> > processing will take longer just because of the higher sample rate.
>
> Is this always true?  For any cutoff frequency and filter order?

The comparison is all but impossible.

For a given filter type and order one has to balance the number
of flops per sample against the higher number of samples at the
higher sampling rates. But then, with lower sampling rates the
filter often enough has to be of higher order to satisfy the
real-life spec.

The one argument I have seen that might be decisive in favour
of the higher *sampling* rates, is that the analog anti-aliasing
filter becomes significantly simpler.
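For a rough feel of the numbers, the harris rule of thumb (taps N ~ (Fs/transition width) * (attenuation in dB)/22) can be sketched in a few lines of Python; the 1 kHz transition band and 60 dB stopband below are made-up figures for illustration:

```python
# harris rule of thumb: FIR taps N ~ (Fs / transition_width) * (atten_dB / 22)
def fir_taps(fs_hz, transition_hz, atten_db=60.0):
    return int(round(fs_hz / transition_hz * atten_db / 22.0))

# same 1 kHz transition band and 60 dB stopband at two sample rates:
n48 = fir_taps(48_000, 1_000)   # ~131 taps
n96 = fir_taps(96_000, 1_000)   # ~262 taps -- doubles with Fs
print(n48, n96)
```

So for a fixed spec in Hz, the tap count grows linearly with the sample rate, which is the cost side of the balance.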

Rune
From: Jerry Avins on
On 7/31/2010 3:05 AM, Rune Allnor wrote:
> On 30 Jul, 23:27, glen herrmannsfeldt<g...(a)ugcs.caltech.edu> wrote:
>> rickman<gnu...(a)gmail.com> wrote:
>>
>> (snip)
>>
>>> I don't see an advantage to doing any of this. In fact, there are
>>> processing disadvantages to upsampling. For one, a low pass digital
>>> filter requires more coefficients to get the same transition if the
>>> sample rate is higher, not to mention that you have to process more
>>> samples, unless you are downsampling at the same time. Other
>>> processing will take longer just because of the higher sample rate.
>>
>> Is this always true? For any cutoff frequency and filter order?
>
> The comparison is all but impossible.
>
> For a given filter type and order one has to balance the number
> of flops per sample against the higher number of samples at the
> higher sampling rates. But then, with lower sampling rates the
> filter often enough has to be of higher order to satisfy the
> real-life spec.
>
> The one argument I have seen that might be decisive in favour
> of the higher *sampling* rates, is that the analog anti-aliasing
> filter becomes significantly simpler.

I think you will find that FIR filters need to be longer at higher
sample rates, especially when a significant frequency of interest is
low. More samples are needed in the filter to span the approximately
fixed duration of the impulse response. When the only reason for a high
sample rate is simplicity of the anti-alias filter, it is usually
appropriate to decimate early in the processing chain.

Jerry
--
Engineering is the art of making what you want from things you can get.
From: Vladimir Vassilevsky on


robert bristow-johnson wrote:


> we begin with the L most current samples, x[n] to x[n-L+1]. let's
> define the sample interval, T, to be 1.
>
> so, what is the algorithm? do we begin with an (L-1)th-order
> polynomial that hits all of those L samples? is that correct? if so,
> let's call that polynomial
>
> L-1
> p_n(t) = SUM{ a_(k,n) * t^k }
> k=0
>
and that p_n(i) = x[n+i] for integers -L < i <= 0.
> and getting the coefficients a_(k,n) is no problem.
> now what do we do with it? what is the next step?


Describe nonlinearity as polynomial F(x).
Then we have q_n(t) = F(p_n(t)) of the order of the product of orders of
F(x) and p_n(t).

Then do continuous Fourier transform of q_n(t). As q_n(t) is polynomial,
the Fourier would look like:

Q_n(w) ~ exp(-iWt) P_n(W)

Where P_n(W) is a complex polynomial.

Now do inverse Fourier; this also has analytic solution as Fn(W,t) in
closed form. So drop frequencies higher than Nyquist and sample.


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

From: robert bristow-johnson on
On Jul 31, 9:58 am, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
> robert bristow-johnson wrote:
> > we begin with the L most current samples, x[n] to x[n-L+1].  let's
> > define the sample interval, T, to be 1.
>
> > so, what is the algorithm?  do we begin with an (L-1)th-order
> > polynomial that hits all of those L samples?  is that correct?  if so,
> > let's call that polynomial
>
> >             L-1
> >    p_n(t) = SUM{ a_(k,n) * t^k }
> >             k=0
>
> > and that p_n(i) = x[n+i]   for integers  -L < i <= 0 .
> > and getting the coefficients a_(k,n) is no problem.
> > now what do we do with it?  what is the next step?
>
> Describe nonlinearity as polynomial F(x).

okay, i'm with you here...

> Then we have q_n(t) = F(p_n(t)) of the order of the product of orders of
> F(x) and p_n(t).

... and here (we'll pretend it's no problem getting the coefficients
of q_n)...
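(for concreteness, these first two steps can be sketched numerically; the L=4 samples and the waveshaper F(x) = x - x^3/6 below are made-up examples:)

```python
import numpy as np

L = 4
x = np.array([0.1, 0.4, -0.3, -0.2])   # x[n-L+1] .. x[n], with T = 1
t = np.arange(-L + 1, 1)               # integer i in -L < i <= 0

# Vandermonde solve for the coefficients a_(k,n) of the
# (L-1)th-order polynomial p_n(t) that hits all L samples
a = np.linalg.solve(np.vander(t, L, increasing=True), x.astype(float))

def p(tt):
    return sum(a[k] * tt**k for k in range(L))

assert np.allclose(p(t), x)            # p_n(i) = x[n+i] on the grid

# compose with the nonlinearity: q_n(t) = F(p_n(t)), of order 3*(L-1) = 9
def q(tt):
    return p(tt) - p(tt)**3 / 6.0
```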

> Then do continuous Fourier transform of q_n(t).

... and here (almost). since q_n(t) is unbounded (i s'pose beyond the
region between -L < t <= 0) there are issues. the constant term of
q_n(t) (or is it a step function?) gets a delta(f), the linear term
(or is it a ramp function?) gets what?

> As q_n(t) is polynomial, the Fourier would look like:
>
> Q_n(w) ~ exp(-iWt) P_n(W)
>
> Where P_n(t) is a complex polynomial.

at this point, i am not with you at all. what's Q_n(w)? and what's
"t" doing in the Fourier Transform of q_n(t)? where did you get this?

> Now do inverse Fourier,

already? don't you "drop frequencies higher than Nyquist" before you
inverse Fourier?

> this also has analytic solution as Fn(W,t) in closed form.

does that capital W mean "omega"? isn't it F_n(t) the inverse
transform of

Q_n(w) * rect(w/(2*pi))

where Q_n(w) is the spectrum of q_n(t) which is what we got from F(x)
and p_n(t), the latter we get from the samples x[n-L+1] to x[n],
right?

> So drop frequencies higher than Nyquist

i presume you mean to multiply everything higher than Nyquist by 0
*before* you inverse Fourier transform, no?

now, isn't that the same as continuous-time convolving q_n(t) with
sinc(t) (remember we normalized the sampling time T to 1)? now, if it
can be done directly with "algebra" in the frequency domain, can't it
also directly in the time domain? you have these individual
polynomial terms each convolved with sinc(t).

> and sample.

sample it at one point (same place as x[n]), right? then we do all
this over again for the next sample?

or are we doing this for blocks of samples? do we overlap the blocks
or not? any funky things happening at the boundaries where the blocks
are spliced together?

i'm not sure you put together a well-defined algorithm here, yet,
Vlad. but even so, it saves *what* over upsampling by a factor of R
(a 16 or 20 tap FIR filter for each of the R fractional delays),
running those R samples through the Nth-order polynomial (same as your
F(x)) which will create images (but if you limit the polynomial order
to N<2R-1, none will fold back into the original baseband), LPF back
to the original Nyquist, and (trivially) downsampling?
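(a minimal sketch of that chain, leaning on scipy.signal.resample_poly for the rate changes; the R=4 factor, 1 kHz test tone, and the F(x) = x - x^3/6 shaper are just illustrative choices:)

```python
import numpy as np
from scipy.signal import resample_poly

R, fs = 4, 48_000
t = np.arange(4096) / fs
x = 0.9 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

up = resample_poly(x, R, 1)              # upsample by R (anti-image LPF built in)
shaped = up - up**3 / 6.0                # order-3 F(x): harmonics stay below R*Nyquist
y = resample_poly(shaped, 1, R)          # LPF to the original Nyquist, then downsample

print(len(y) == len(x))                  # True
```

here N = 3 < 2R-1 = 7, so none of the generated harmonics fold back into the original baseband.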

anyway, the way all of that relates to the OP's original issue is that i
don't think that R=2 (or Fs = 96 kHz) would be enough for a decent
polynomial distortion curve.

r b-j