From: HardySpicer on
On Jan 21, 6:14 am, Greg Berchin <gberc...(a)comicast.net.invalid>
wrote:
> On Wed, 20 Jan 2010 10:29:37 -0500, Greg Berchin <gberc...(a)comicast.net.invalid>
> wrote:
>
> >I believe that Matlab uses the positive exponent form.
>
> I just checked, and currently Matlab does use the signal processing standard
> (negative exponent on the forward transform).  Seems to me that this is a change
> from early Matlab versions, but I could be wrong.
>
> Greg

Why do people in modern times leave out the 1/N scaling for the direct FFT
and put it in the inverse FFT?
It makes more sense, when going from time to freq, to have the 1/N in the
forward transform; e.g. for dc we get the average, so we must divide by N.
Don't say "it's just scaling" and that it doesn't matter - it does!


Hardy
From: Greg Berchin on
On Wed, 20 Jan 2010 16:31:25 -0800 (PST), HardySpicer <gyansorova(a)gmail.com>
wrote:

>Why do people in modern times leave out the 1/N scaling for the direct FFT
>and put it in the inverse FFT?
>It makes more sense, when going from time to freq, to have the 1/N in the
>forward transform; e.g. for dc we get the average, so we must divide by N.
>Don't say "it's just scaling" and that it doesn't matter - it does!

I agree, and this has been a topic of discussion here on comp.dsp many times in
the past. I think that the only answer is "tradition".

The issue becomes even more important when implementing convolution as
transform-domain multiplication. It's easy to accidentally end up with an extra
N or 1/N factor.
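To make the pitfall concrete, here is a quick sketch (a toy O(N^2) DFT in
Python, not an efficient FFT; the helper names and the scale parameter are
mine): if you put the 1/N on both forward transforms and drop it from the
inverse, the convolution result comes out a factor of 1/N too small.

```python
import cmath

def dft(x, scale=1.0):
    # forward DFT, negative exponent; 'scale' lets us try a 1/N-scaled forward
    N = len(x)
    return [scale * sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                        for n in range(N))
            for k in range(N)]

def idft(X, scale=None):
    # inverse DFT, positive exponent; default 1/N (the common convention)
    N = len(X)
    if scale is None:
        scale = 1.0 / N
    return [scale * sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N)
                        for k in range(N))
            for n in range(N)]

h = [1.0, 2.0, 0.0, 0.0]
x = [3.0, 4.0, 5.0, 6.0]
N = len(x)

# direct circular convolution, for reference
ref = [sum(h[i] * x[(n - i) % N] for i in range(N)) for n in range(N)]

# common convention (no 1/N forward, 1/N inverse): result matches ref
y = idft([H * X for H, X in zip(dft(h), dft(x))])

# 1/N on EACH forward transform, plain sum on the inverse:
# the result picks up a stray 1/N factor
y_wrong = idft([H * X for H, X in zip(dft(h, 1.0 / N), dft(x, 1.0 / N))],
               scale=1.0)
```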

Greg
From: robert bristow-johnson on
On Jan 20, 8:04 pm, Greg Berchin <gberc...(a)comicast.net.invalid>
wrote:
> On Wed, 20 Jan 2010 16:31:25 -0800 (PST), HardySpicer <gyansor...(a)gmail.com>
> wrote:
>
> >Why do people in modern times leave out the 1/N scaling for the direct FFT
> >and put it in the inverse FFT?
> >It makes more sense, when going from time to freq, to have the 1/N in the
> >forward transform; e.g. for dc we get the average, so we must divide by N.
> >Don't say "it's just scaling" and that it doesn't matter - it does!
>
> I agree, and this has been a topic of discussion here on comp.dsp many times in
> the past.  I think that the only answer is "tradition".

well, there are a couple of traditions.

to me, clearly the most natural fundamental definition would be

        N-1
x[n] =  SUM { X[k] * exp(+j*2*pi*n*k/N) }
        k=0

the orthogonal basis functions are exp(+j*2*pi*n*k/N) and the X[k] are
their coefficients.  X[0], unscaled, is the DC component.  then the
inverse of that (the forward DFT) is


             N-1
X[k] = 1/N * SUM { x[n] * exp(-j*2*pi*n*k/N) }
             n=0


the X[k] are now means rather than sums.
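to see that in code, here's a toy DFT sketch (my own helper, not a library
routine): with the 1/N up front on the forward transform, bin 0 is the plain
average of the input, no extra division needed.

```python
import cmath

def dft_mean(x):
    # forward DFT with the 1/N up front: each bin is a mean, not a sum
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) / N
            for k in range(N)]

x = [2.0, 4.0, 6.0, 8.0]
X = dft_mean(x)
# X[0] is the average of x: (2 + 4 + 6 + 8) / 4 = 5
```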

> The issue becomes even more important when implementing convolution as
> transform-domain multiplication.  It's easy to accidentally end up with an extra
> N or 1/N factor.

maybe it should be 1/sqrt(N) in both the forward and inverse DFT.
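that unitary convention can be sketched like so (again a toy O(N^2)
transform, names are mine): the same 1/sqrt(N) factor in both directions
preserves energy (Parseval), and the round trip recovers the input exactly.

```python
import cmath, math

def dft_unitary(x, sign=-1):
    # 1/sqrt(N) in front of BOTH directions: sign=-1 forward, sign=+1 inverse
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * n * k / N)
                for n in range(N)) / math.sqrt(N)
            for k in range(N)]

x = [1.0, -2.0, 3.0, 0.5]
X = dft_unitary(x, -1)

# energy is the same on both sides, and dft_unitary(X, +1) recovers x
energy_time = sum(abs(v) ** 2 for v in x)
energy_freq = sum(abs(v) ** 2 for v in X)
```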

i think, for the most prevalent convention,


        N-1
X[k] =  SUM { x[n] * exp(-j*2*pi*n*k/N) }
        n=0


             N-1
x[n] = 1/N * SUM { X[k] * exp(+j*2*pi*n*k/N) }
             k=0



then, defining

X[k] = DFT{ x[n] }
Y[k] = DFT{ y[n] }
H[k] = DFT{ h[n] }

it is true that

if Y[k] = H[k] * X[k] (multiplication, not convolution)

then

        N-1
y[n] =  SUM { h[i] * x[(n-i) mod N] }        (circular convolution)
        i=0

with no 1/N scaling.

if the DFT were defined the "natural" way, there would be a 1/N factor
in that convolution summation.
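a toy-DFT sketch of that last point (helper names are mine): with the 1/N
on the forward transform and a plain sum on the inverse, the product of two
spectra inverts to the circular convolution divided by N.

```python
import cmath

def dft_nat(x):
    # the "natural" forward DFT: 1/N up front, negative exponent
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) / N
            for k in range(N)]

def idft_nat(X):
    # its inverse: a plain sum with the positive exponent
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N)
                for k in range(N))
            for n in range(N)]

h = [1.0, 0.5, 0.0, 0.0]
x = [4.0, 3.0, 2.0, 1.0]
N = len(x)

# direct circular convolution, for reference
circ = [sum(h[i] * x[(n - i) % N] for i in range(N)) for n in range(N)]

y = idft_nat([H * X for H, X in zip(dft_nat(h), dft_nat(x))])
# y[n] equals circ[n]/N: the 1/N lands in the convolution identity
```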

r b-j