From: Jerry Avins on
On 7/24/2010 3:10 PM, Nasser M. Abbasi wrote:
> On 7/24/2010 11:08 AM, Steve Pope wrote:
>
>> I so often hear a question of the form, "How do I use an FFT
>> to do X", without the questioner having considered that
>> they can solve their problem in the time domain.
>>
>
> The teacher keeps telling our class that the most common operation in
> the digital world is convolution, and the FFT is a fast way to do it.
> So why would anyone do convolution in the time domain? Isn't that much
> slower? So, for convolution, the first choice is the FFT, right?
>
> I am just talking about convolution here. BTW, the overlap-save
> procedure (and there was another one like it?) was hard to understand
> for me, but luckily we did not get that on the exam, else I might have
> failed.

You didn't read the other branch of the thread and follow rbj's link.
Small convolutions are faster directly in the time domain. Defining
"small" is the topic of this thread.

Jerry
--
Engineering is the art of making what you want from things you can get.
From: Tim Wescott on
On 07/24/2010 12:10 PM, Nasser M. Abbasi wrote:
> On 7/24/2010 11:08 AM, Steve Pope wrote:
>
>> I so often hear a question of the form, "How do I use an FFT
>> to do X", without the questioner having considered that
>> they can solve their problem in the time domain.
>>
>
> The teacher keeps telling our class that the most common operation in
> the digital world is convolution, and the FFT is a fast way to do it.
> So why would anyone do convolution in the time domain? Isn't that much
> slower? So, for convolution, the first choice is the FFT, right?

One: I just went to a talk by a guy who had just designed his very first
radio transmitter professionally. He thought it was really cool,
because he has a PhD in RF design and had been teaching it for years.

Think about that with respect to the _practical_, _useful_, _real world_
knowledge you're going to get from your profs.

(Note to all the "PhD Doubters" out there: a guy with a PhD _and_
practical smarts can be unbelievably useful. Guys with PhDs get a bad
rap because a guy with a PhD and _no_ practical smarts is a menace, and
often a menace with credibility in the corner offices.)

Two: In order to do one cruddy convolution, after the fact, with a
fixed-length vector, an FFT is probably the fastest. But to implement a
filter that needs to convolve an input stream with a bunch of taps, on
an ongoing basis, with a low-delay output, an FFT won't necessarily
be the best or fastest way.
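
As a sketch of that streaming case (Python/NumPy, with arbitrary
example taps), a direct-form FIR produces one output per input sample,
so the latency is just the filter delay rather than a whole FFT block:

# Direct-form FIR filtering of a sample stream: one output per input,
# so the delay is the filter's group delay, not an FFT block length.
import numpy as np

class StreamingFIR:
    def __init__(self, taps):
        self.taps = np.asarray(taps, dtype=float)
        self.state = np.zeros(len(self.taps))   # recent inputs, newest first

    def push(self, sample):
        self.state = np.roll(self.state, 1)
        self.state[0] = sample
        return float(np.dot(self.taps, self.state))

# Example: arbitrary 5-tap filter, fed one sample at a time.
fir = StreamingFIR([0.1, 0.2, 0.4, 0.2, 0.1])
for x in [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]:
    print(fir.push(x))   # prints the impulse response, one tap per step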

> I am just talking about convolution here. BTW, the overlap-save
> procedure (and there was another one like it?) was hard to understand
> for me, but luckily we did not get that on the exam, else I might have
> failed.

Yet, if you must do "filtering with an FFT" on real-time data, you may
have to use it.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
From: Rune Allnor on
On 24 Jul, 21:10, "Nasser M. Abbasi" <n...(a)12000.org> wrote:
> On 7/24/2010 11:08 AM, Steve Pope wrote:
>
> > I so often hear a question of the form, "How do I use an FFT
> > to do X", without the questioner having considered that
> > they can solve their problem in the time domain.
>
> The teacher keeps telling our class that the most common operation in
> the digital world is convolution, and the FFT is a fast way to do it.
> So why would anyone do convolution in the time domain? Isn't that much
> slower? So, for convolution, the first choice is the FFT, right?

Vlad hinted at it, but I will elaborate somewhat: With time-domain
convolution you compute the next value of the result sequence after
every received value from the input sequence. So with time-domain
computations you obtain the result with minimum delay. The downside
to all this is that you might have to do a lot of computations to
produce each output value, but if you need to get the result fast
then you have no choice.

If, on the other hand, there is no time constraint - you already
have all the data you will do your computations on stored in memory -
then you can save some time by doing the FFT-based filtering. In this
case the *total* number of computations can be significantly smaller
than if you do a time domain computation, but the downside is that
you need to have *all* the data available *before* you start the
computations.
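
For instance (a Python/NumPy sketch with arbitrary lengths), one
zero-padded FFT over the whole stored record reproduces the
time-domain result exactly, but only once every sample is in memory:

# Offline filtering: one zero-padded FFT over the whole stored record
# gives the same result as time-domain convolution, but only once *all*
# of the data are available in memory.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)       # the complete recorded signal
h = rng.standard_normal(101)          # arbitrary FIR taps

n = len(x) + len(h) - 1               # length of the linear convolution
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
y_time = np.convolve(x, h)            # reference time-domain result

print(np.allclose(y_fft, y_time))     # True, up to round-off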

Before you ask - the overlap-save and overlap-add methods are
compromises that combine the two approaches, by doing FFT-based
filtering on sub-frames of a very long sequence. I suspect these
methods are remnants from the times when one could actually get
data sets that were larger than the available RAM, and so needed
both time- and memory-efficient methods for offline processing.

These days the rationale for using overlap-add or overlap-save
FFTs would probably be to minimize numerical artifacts that would
occur in long (tens of millions of data points) FFTs.
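
For reference, a bare-bones overlap-add sketch (Python/NumPy; the block
length is an arbitrary choice) that filters a long record in sub-frames
so only one block's FFT is needed at a time:

# Minimal overlap-add: filter a long record in fixed-size sub-frames with
# FFTs, adding the overlapping tails of successive blocks together.
import numpy as np

def overlap_add(x, h, block=1024):
    m = len(h)
    nfft = block + m - 1                  # room for one block's linear convolution
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        yseg = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        y[start:start + len(seg) + m - 1] += yseg[:len(seg) + m - 1]
    return y

rng = np.random.default_rng(2)
x = rng.standard_normal(50_000)           # a long input sequence
h = rng.standard_normal(64)               # arbitrary FIR taps
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True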

Rune
From: Steve Pope on
Rune Allnor <allnor(a)tele.ntnu.no> wrote:

>On 24 Jul, 21:10, "Nasser M. Abbasi" <n...(a)12000.org> wrote:

>> The teacher keeps telling our class that the most common operation in
>> the digital world is convolution, and the FFT is a fast way to do it.
>> So why would anyone do convolution in the time domain? Isn't that much
>> slower? So, for convolution, the first choice is the FFT, right?

>Vlad hinted at it, but I will elaborate somewhat: With time-domain
>convolution you compute the next value of the result sequence after
>every received value from the input sequence. So with time-domain
>computations you obtain the result with minimum delay. The downside
>to all this is that you might have to do a lot of computations to
>produce each output value, but if you need to get the result fast
>then you have no choice.

Two other points I would make are the following:

(1) In a time-domain convolution, you can design the sequence
with which you are convolving the signal quite freely; you are
not limited to the sequences defined for DFTs and other common
transforms.

(2) Not all time-domain operations are convolutions. For example, linear
prediction is not a convolution, but is an alternative to a DFT for
performing a frequency analysis. The statement "the most common
operations are convolutions" may reflect a bias among designers towards
using transform methods (and hence, convolutions) for analysis.
To me this is a self-fulfilling statement ("the most common operations
are convolutions, so use a convolution...").
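
To illustrate (2), here is a rough Python/NumPy sketch of the
autocorrelation method of linear prediction (the model order and test
signal are arbitrary choices); it produces a spectral estimate from an
all-pole model rather than from a transform of the data itself:

# Autocorrelation-method linear prediction: fit an all-pole (AR) model
# to the data and read the spectral estimate off the model, instead of
# taking a DFT of the data itself.
import numpy as np

def lpc_spectrum(x, order=8, nfft=1024):
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:]      # autocorrelation r[0], r[1], ...
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                # predictor coefficients
    err = np.concatenate(([1.0], -a))                     # prediction-error filter A(z)
    A = np.fft.rfft(err, nfft)
    return 1.0 / np.abs(A)**2                             # unnormalized all-pole spectral shape

# Example: two sinusoids in noise; the AR spectrum peaks near the
# normalized frequencies 0.10 and 0.23.
rng = np.random.default_rng(3)
n = np.arange(400)
x = np.sin(2*np.pi*0.10*n) + 0.5*np.sin(2*np.pi*0.23*n) + 0.1*rng.standard_normal(400)
spec = lpc_spectrum(x, order=8)
freqs = np.linspace(0.0, 0.5, len(spec))
print(freqs[np.argmax(spec)])      # close to 0.10 (the stronger component)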


Steve
From: Randy Yates on
Rune Allnor <allnor(a)tele.ntnu.no> writes:

> On 24 Jul, 21:10, "Nasser M. Abbasi" <n...(a)12000.org> wrote:
>> On 7/24/2010 11:08 AM, Steve Pope wrote:
>>
>> > I so often hear a question of the form, "How do I use an FFT
>> > to do X", without the questioner having considered that
>> > they can solve their problem in the time domain.
>>
>> The teacher keeps telling our class that the most common operation in
>> the digital world is convolution, and the FFT is a fast way to do it.
>> So why would anyone do convolution in the time domain? Isn't that much
>> slower? So, for convolution, the first choice is the FFT, right?
>
> Vlad hinted at it, but I will elaborate somewhat: With time-domain
> convolution you compute the next value of the result sequence after
> every received value from the input sequence. So with time-domain
> computations you obtain the result with minimum delay. The downside
> to all this is that you might have to do a lot of computations to
> produce each output value, but if you need to get the result fast
> then you have no choice.
>
> If, on the other hand, there is no time constraint - you already have
> all the data you will do your computations on stored in memory - then
> you can save some time by doing the FFT-based filtering. In this case
> the *total* number of computations can be significantly smaller than
> if you do a time domain computation, but the downside is that you need
> to have *all* the data available *before* you start the computations.

Not if you use the overlap-add or overlap-save method.

> Before you ask - the overlap-save and overlap-add methods are
> compromises

How so?

> that combine the two approaches, by doing FFT-based filtering on
> sub-frames of a very long sequence. I suspect these methods are
> remnants from the times when one could actually get data sets that
> were larger than the available RAM,

You mean like 2010? I didn't find anything in the OP's original query
that limited the platform to something like a PC. And even then, as you
yourself noted, you don't really want to do it all in one FFT.

So that opens the platform up to the possibility of just about anything,
from custom hardware to embedded to multicore.
--
Randy Yates % "Rollin' and riding and slippin' and
Digital Signal Labs % sliding, it's magic."
mailto://yates(a)ieee.org %
http://www.digitalsignallabs.com % "Livin' Thing", *A New World Record*, ELO