From: Nils on
Darol Klawetter wrote:
> I'm currently trying to pack more FIR filters into my FPGA. I'm up
> against the limit of available hardware multipliers so I'm going to
> attempt to convert some number of coefficients into values that are
> powers of 2 so that I can perform the multiplies with bit shifts.

Take a look at the SPIRAL project.

They have (among other things) a nice online code generator that turns
a bunch of constant multiplications into the minimum number of shifts,
adds, and subtracts:

http://spiral.ece.cmu.edu/mcm/gen.html
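
As a toy illustration of the idea (my own example, not SPIRAL output):
a multiply by a constant such as 23 = 16 + 8 - 1 needs no hardware
multiplier at all. In MATLAB/Octave terms, for integer data:

% minimal sketch of shift-and-add constant multiplication
x = int32(417);                          % arbitrary test sample
y = bitshift(x,4) + bitshift(x,3) - x;   % 23*x without a multiply
isequal(y, 23*x)                         % returns true (logical 1)

The SPIRAL generator searches for such decompositions over the whole
coefficient set at once, which is how it can share intermediate
shift/add terms between coefficients.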

Cheers,
Nils
From: Vladimir Vassilevsky on


Darol Klawetter wrote:

> In my FPGA, which is being used to implement a 128
> channel, independently tunable, sub-band receiver, I have some multi-
> channel polyphase, CIC, and half-band filters. Currently, I'm just
> trying to convert my half-band filters to power-of-2 coefficients. I
> need linear phase across each sub-band, so I haven't used any IIR
> filters, which I already have from other projects I've done.

You can build pretty much any filter response using series and parallel
connections of moving averages and delays. Brute-force optimization is
probably the easiest way to do that, since the number of degrees of
freedom is limited. A filter bank built from +/-1 square-wave functions
with different periods could also be an option.
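
For instance (my sketch, not Vladimir's), a crude multiplier-free
lowpass can be built by cascading a couple of moving averages, since
each one needs only adders and a delay line:

% minimal sketch: cascade of two moving averages, no multipliers
% (the 1/(N1*N2) gain is a single bit shift when N1*N2 is a power of 2)
N1 = 8; N2 = 4;                        % illustrative power-of-2 lengths
h  = conv(ones(1,N1), ones(1,N2));     % impulse response of the cascade
h  = h / (N1*N2);                      % overall gain 1/32, i.e. shift by 5
freqz(h, 1, 1024);                     % inspect the composite response

Series/parallel combinations of such sections, plus delays, give a
small enough design space that a brute-force search is practical.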


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
From: suren on
On Feb 19, 9:24 pm, Darol Klawetter <darol.klawet...(a)l-3com.com>
wrote:
> I'm currently trying to pack more FIR filters into my FPGA. I'm up
> against the limit of available hardware multipliers so I'm going to
> attempt to convert some number of coefficients into values that are
> powers of 2 so that I can perform the multiplies with bit shifts. When
> I've done this before, I've used a trial and error approach until I
> converge on a solution. Is anyone aware of any filter software that
> will do this or maybe an analytic approach to finding the
> coefficients?

Hi,
You could implement efficient computational structures if the
coefficients are represented in canonical signed digits (CSD), as Eric
pointed out. In fact, many HDL synthesis tools ultimately convert
fixed-point coefficients to CSD for optimization.
I am pasting a function that I wrote long ago to compute the CSD
digits of a fraction, given the bit width. I implemented it from a
paper whose title I have forgotten, so you'll need to go through the
code to get the idea.

% function [a] = csd(frac_num, num_bits)
% Computes the canonical signed digit (CSD) representation of a
% fraction with |frac_num| < 1, using num_bits digits from {-1, 0, +1}.
function [a] = csd(frac_num, num_bits)

a = zeros(1, num_bits);
flag = 0;
temp_num = frac_num;

% Magnitudes in [2/3, 1) need an extra MSB beyond the num_bits digits
% returned here; fold the number back into range and flag it.
if ((temp_num >= 2.0/3) && (temp_num < 1.0))
    temp_num = temp_num - 1.0;
    flag = 1;
elseif ((temp_num > -1.0) && (temp_num < -2.0/3))
    temp_num = temp_num + 1.0;
    flag = -1;
end

% Emit digits MSB-first; a nonzero digit is always followed by a zero,
% which is what makes the representation canonical.
i = 1;
while (i <= num_bits)
    if ((temp_num >= 1.0/3) && (temp_num < 2.0/3))
        a(i)   = 1;
        a(i+1) = 0;
        i = i + 2;
        temp_num = 4*(temp_num - 0.5);
    elseif ((temp_num >= -1.0/3) && (temp_num < 1.0/3))
        a(i) = 0;
        i = i + 1;
        temp_num = 2*temp_num;
    elseif ((temp_num >= -2.0/3) && (temp_num < -1.0/3))
        a(i)   = -1;
        a(i+1) = 0;
        i = i + 2;
        temp_num = 4*(temp_num + 0.5);
    end
end

if (flag == 1)
    disp('Number greater than or equal to 2/3, extra msb needed = 1');
elseif (flag == -1)
    disp('Number less than -2/3, extra msb needed = -1');
end

% The last loop pass may have written one digit past num_bits.
a = a(1:num_bits);

% Sanity check: reconstruct the value from the CSD digits, where digit
% j has weight 2^(-j). Uncomment the last line to display it.
j = 1;
rep_num = 0;
while (j <= num_bits)
    if (a(j) == 1)
        rep_num = rep_num + 2^(-j);
    elseif (a(j) == -1)
        rep_num = rep_num - 2^(-j);
    end
    j = j + 1;
end
%rep_num
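
To show what the function produces (my own check, not part of suren's
post):

% quick illustration
a = csd(0.40625, 8)      % returns [1 0 -1 0 1 0 0 0]
% i.e. 0.40625 = 2^-1 - 2^-3 + 2^-5, so the multiply reduces to three
% shifted add/subtract terms instead of a full multiply.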




From: Fred Marshall on
Here is my dumb and, as yet, unpublished method for quantized
symmetrical FIR filters:

(It requires a modified Parks-McClellan program, which should be fairly
easy to write. I know the underlying math works because I've done it
with a slightly different program.)

You are going to have to decide how many bits to carry in each
coefficient - I don't know how to get around that. I don't recall
seeing many papers that deal with this particular issue, but there may
be some. The most prolific researchers on the subject that I know of
are Y.C. Lim and A.N. Willson.

Anyway, here is my suboptimum, guaranteed to converge approach:


First, design a FIR filter using P-M or similar.

Next, normalize the filter coefficients so that the largest coefficient
(or coefficient pair) is exactly a power of 2 (or a reciprocal power of
2), like 1.0. This one now need not be quantized at all! I arbitrarily
pick the largest one on the assumption that it would yield a large
error if quantized.

Next, find the one coefficient that will yield the *highest* error in
the filter ripple when it's quantized. Since you are going to quantize
it anyway, that is the best you can do for it. So, quantize it.
Finding this just requires a search - one coefficient at a time:
Quantize one; record the error;
Quantize another by itself; record the error;
etc.
Choose the one that by itself generated the largest error.
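
A rough MATLAB sketch of that search (mine, not Fred's; it uses a plain
fixed-point quantizer, measures error as the peak change in the
magnitude response, and skips the symmetric-pair bookkeeping for
brevity):

% which single coefficient hurts most when quantized?
h  = firpm(30, [0 0.4 0.5 1], [1 1 0 0]);   % example design
nb = 8;                                     % assumed bits per coefficient
q  = @(x) round(x*2^(nb-1)) / 2^(nb-1);     % simple fixed-point quantizer
f  = linspace(0, 1, 1024);
H  = @(hh) abs(freqz(hh, 1, pi*f));         % magnitude response on a grid
Hr = H(h);                                  % unquantized reference
err = zeros(size(h));
for k = 1:numel(h)
    ht     = h;
    ht(k)  = q(h(k));                       % quantize just this coefficient
    err(k) = max(abs(H(ht) - Hr));          % worst-case ripple change
end
[~, kworst] = max(err);                     % fix this coefficient first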

Now that there are two coefficients quantized, take them out of the
problem space like this:

For each coefficient pair there will be a sinusoid in the frequency
response. That sinusoid is a basis function in the P-M program. So,
fix the value of the basis function and subtract it from the desired
function to get a new desired function like this:

error = D - A

where D is the desired function and A is the approximant, which
eventually becomes the realizable "design". Generally,

A = sum over k = 1..N of (a_k * F_k)

where the a_k are the coefficients and the F_k are the basis functions.

So, let's say that you have picked a_8 by normalization and a_3 (or
a_3 and a_13 as a pair) by quantization as above. Now they are
fixed and you can write:

error = (D - a_8*F_8 - a_3*F_3) - A'

where A' = sum over k = 1..N, k ~= 3 and k ~= 8, of (a_k * F_k)

Then, run the P-M algorithm again using the smaller basis set that
remains to get a minimax solution for A'.
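
A small sketch of the bookkeeping for that step (my reading of Fred's
description; the lowpass D and the numbers are made up, and I assume
type-I basis functions F_k(f) = cos(2*pi*k*f) with any factor of 2
already folded into a_k):

% fold the already-fixed coefficients into the desired function
f    = linspace(0, 0.5, 1024);          % normalized frequency grid
Fk   = @(k) cos(2*pi*k*f);              % assumed basis functions
D    = double(f <= 0.2);                % made-up desired response (lowpass)
a8   = 1.0;                             % fixed by the normalization step
a3q  = 0.125;                           % fixed by quantization (power of 2)
Dnew = D - a8*Fk(8) - a3q*Fk(3);        % new desired function for A'
% Dnew now drives the (modified) P-M run over the remaining basis
% functions, i.e. all F_k with k ~= 3 and k ~= 8.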

Next, find the one coefficient (i.e. coefficient pair) that will yield
the *highest* error in the (new) filter ripple when it's quantized.
Then quantize it and move the result into the "new D".
At this stage there's a pretty good chance that the increase in the
error will be smaller than at the preceding stage because you already
picked the worst case back then.

Repeat this until you're done.

I'm sure it's not rigorously "optimum" because you keep fixing
variables that could otherwise still be changed. But I'll bet it's
pretty good, because one is selecting the worst possible quantization
effect each time. I haven't tried it. What I have done is the moving
of variables from A to D in a Remez exchange context, and that works -
it's pretty easy to prove.

Fred
From: Fred Marshall on
Fred Marshall wrote:
> Here is my dumb and, as yet, unpublished method for quantized
> symmetrical FIR filters:
>

Hey! I thought of a method that may work that would use something
closer to the standard P-M program. It goes like this:

Instead of writing a special program that allows the pruning of basis
functions, just use the standard program and do this:

Just leave the original basis functions and the number of variables in
the problem the same from beginning to end.

At each step in the manual iteration process, modify the Desired
function (what we normally call here the filter specification) according
to the coefficients / basis functions already determined.

Now the computation at the next step should figure out that the best
fit comes from setting the coefficient to the same value you already
chose, in order to minimize the new error.

For a moment I thought that maybe a side benefit of this might be that
all coefficients are under consideration every time. So, the concern
that maybe the search isn't global might be averted? But I've decided
"probably not" because of the modification of the Desired function. Ah
well.

I don't know if this might cause numerical problems but I doubt it.

Oh, I lied above: the "standard" P-M can't quite do this job because it
uses fixed band specs. This approach requires that the spec be
continuous, i.e. a "function" of frequency. That's a small change, and
I can mention this:

Instead of piecewise-constant specs with "don't care" zones as in P-M,
the Remez algorithm works fine with an end-to-end continuous spec. The
beauty of P-M (and something I didn't realize for a long time) is that
it guarantees no error peaks inside the transition bands (the "don't
care" zones); in general, it puts the peaks at the band edges going
into the transition bands. That means you really, really "don't care",
because the results will always be good. Alternatively, you can
approximate end to end and put very low weights on the transition
bands. The problem with doing that is that a peak may occur in the
transition band, so you have to be careful how you specify and weight
things. But, actually, I digress.
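
For what it's worth, a small MATLAB sketch of that "end-to-end with a
low-weight transition band" idea (my own example; order, band edges,
and weights are arbitrary):

% three bands: the middle one covers the transition with a tiny weight
N = 40;
f = [0 0.38 0.40 0.48 0.50 1];    % normalized band edges
a = [1 1    1    0    0    0];    % desired amplitude at each edge
w = [1 0.01 1];                   % nearly "don't care" in the middle band
b = firpm(N, f, a, w);
freqz(b, 1, 1024);                % check for a hump mid-transition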

All that's needed, and what would be best, is the ability to specify
the in-band desired values sample by sample in the normal P-M program.
That's a pretty simple change - a piecewise specification of a function
of frequency.

Fred