From: Andreas Waldenburger on
On Tue, 8 Sep 2009 11:12:18 -0700 (PDT) sturlamolden
<sturlamolden(a)yahoo.no> wrote:

> On 8 Sep, 15:08, pdpi <pdpinhe...(a)gmail.com> wrote:
>
> > Come, come. I think it's a good rule that, where available, a
> > vendor- supplied implementation is the preferable choice until
> > proven otherwise.
>
> Even for the simplest of equations?
>
Yes. It might be implemented in some clever way that you didn't think
of, and thereby work much faster or more precisely than your own
implementation.

Or it could come with a whole library that might help you with other
tasks related to what you're doing.

And just a matter of personal opinion: I think the phrase "you are not
competent to do any scientific programming" was overly harsh. Not that
the general sentiment of "this is actually easy" shouldn't be expressed
at all, but judging someone's competence from two sentences is liable
to hurt feelings that didn't need hurting.


/W

--
INVALID? DE!

From: Steven D'Aprano on
On Tue, 08 Sep 2009 11:12:18 -0700, sturlamolden wrote:

> On 8 Sep, 15:08, pdpi <pdpinhe...(a)gmail.com> wrote:
>
>> Come, come. I think it's a good rule that, where available, a vendor-
>> supplied implementation is the preferable choice until proven
>> otherwise.
>
> Even for the simplest of equations?

A decent vendor-supplied implementation will include error checking that
you otherwise would need to implement yourself, so yes.

Also, given the oddities of floating point, a decent vendor-supplied
implementation is likely to work successfully on all the corner cases
where floats act bizarrely, or at least fail less disastrously than a
naive implementation will.

Third, it's very easy to use the wrong formula, especially for something
like the Hann window function, which goes by two different names and is
commonly written in three different forms, two of which fail for a
window width of 1.

http://en.wikipedia.org/wiki/Window_function#Hann_window
http://en.wikipedia.org/wiki/Hann_function
http://mathworld.wolfram.com/HanningFunction.html
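To make that edge case concrete, here is a quick sketch (assuming NumPy is available) contrasting the common textbook form with np.hanning, which special-cases a window of width 1:

```python
import numpy as np

def hann_textbook(N):
    # Common textbook form w(n) = 0.5*(1 - cos(2*pi*n/(N-1)));
    # the denominator is zero when N == 1
    n = np.arange(N)
    return 0.5 * (1.0 - np.cos(2 * np.pi * n / (N - 1)))

# For N > 1 the two agree to floating-point precision
assert np.allclose(hann_textbook(101), np.hanning(101))

# For N == 1 the textbook form divides 0 by 0 and yields NaN,
# while np.hanning special-cases it
with np.errstate(invalid="ignore", divide="ignore"):
    print(hann_textbook(1))  # [nan]
print(np.hanning(1))         # [1.]
```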


And finally, no matter how simple the equation, why re-invent the wheel?


--
Steven
From: sturlamolden on
On 9 Sep, 00:24, Steven D'Aprano
<ste...(a)REMOVE.THIS.cybersource.com.au> wrote:

> A decent vendor-supplied implementation will include error checking that
> you otherwise would need to implement yourself, so yes.

Not for code like this:

>>> import numpy as np
>>> n = np.arange(101)
>>> w = 0.5*(1.0-np.cos(2*np.pi*n/(100.)))
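As a sanity check, the three-line version above does agree with NumPy's own routine at this window length:

```python
import numpy as np

n = np.arange(101)
w = 0.5 * (1.0 - np.cos(2 * np.pi * n / 100.))

# Same result as the vendor-supplied routine, to floating-point precision
assert np.allclose(w, np.hanning(101))
```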

From: pdpi on
On Sep 9, 3:27 am, sturlamolden <sturlamol...(a)yahoo.no> wrote:
> On 9 Sep, 00:24, Steven D'Aprano
>
> <ste...(a)REMOVE.THIS.cybersource.com.au> wrote:
> > A decent vendor-supplied implementation will include error checking that
> > you otherwise would need to implement yourself, so yes.
>
> Not for code like this:
>
>
>
> >>> import numpy as np
> >>> n = np.arange(101)
> >>> w = 0.5*(1.0-np.cos(2*np.pi*n/(100.)))

Well, I went and dug into NumPy. They write it as 0.5 - 0.5*cos(...),
special-case N = 1, and properly error-check N < 1. Still, probably
because of differences in dictionary lookups (namespace scoping),
np.hanning, on average, takes a wee bit over half as long as your
version, and yours is only a shade faster than

>>> window = [0.5 - 0.5 * math.cos(2 * x * math.pi / 100.) for x in range(101)]

(Yes, I know I should've used xrange instead of range)
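Based on that description, NumPy's routine boils down to roughly the following (a paraphrase of its structure, not the library source verbatim):

```python
import numpy as np

def hanning_sketch(M):
    # Rough paraphrase of np.hanning's structure: an empty window for
    # M < 1, a special case for M == 1, the 0.5 - 0.5*cos form otherwise
    if M < 1:
        return np.array([])
    if M == 1:
        return np.ones(1)
    n = np.arange(M)
    return 0.5 - 0.5 * np.cos(2.0 * np.pi * n / (M - 1))

assert hanning_sketch(1)[0] == 1.0
assert np.allclose(hanning_sketch(101), np.hanning(101))
```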
From: pdpi on
On Sep 9, 3:46 pm, pdpi <pdpinhe...(a)gmail.com> wrote:
> On Sep 9, 3:27 am, sturlamolden <sturlamol...(a)yahoo.no> wrote:
>
> > On 9 Sep, 00:24, Steven D'Aprano
>
> > <ste...(a)REMOVE.THIS.cybersource.com.au> wrote:
> > > A decent vendor-supplied implementation will include error checking that
> > > you otherwise would need to implement yourself, so yes.
>
> > Not for code like this:
>
> > >>> import numpy as np
> > >>> n = np.arange(101)
> > >>> w = 0.5*(1.0-np.cos(2*np.pi*n/(100.)))
>
> Well, I went and dug into NumPy. They write it as 0.5 - 0.5*cos(...),
> special-case N = 1, and properly error-check N < 1. Still, probably
> because of differences in dictionary lookups (namespace scoping),
> np.hanning, on average, takes a wee bit over half as long as your
> version, and yours is only a shade faster than
>
> >>> window = [0.5 - 0.5 * math.cos(2 * x * math.pi / 100.) for x in range(101)]
>
> (Yes, I know I should've used xrange instead of range)

Sorry, should've been smarter than this.

Raising this to 1 million nodes in the window, rather than 100, the
timing difference between your version and NumPy's is tiny (NumPy
still edges you out, but just barely), but both trounce my naive
version, running around 7 or 8 times faster than the list
comprehension I suggested. So implementing this in vanilla Python
instead of using NumPy would hurt performance a fair bit, and odds are
the OP is going to put this to use somewhere that involves more maths,
which makes learning about NumPy well worth having asked the question
here.
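A rough way to reproduce that comparison (window length and repeat count are arbitrary, and absolute timings depend entirely on the machine, so none are quoted here):

```python
import math
import timeit

import numpy as np

N = 1000000  # 1 million nodes, as above

def numpy_window():
    n = np.arange(N)
    return 0.5 * (1.0 - np.cos(2 * np.pi * n / (N - 1.0)))

def list_comp_window():
    # Pure-Python Hann window (note the 0.5 - 0.5*cos form)
    return [0.5 - 0.5 * math.cos(2 * math.pi * x / (N - 1.0))
            for x in range(N)]

for name, f in [("np.hanning  ", lambda: np.hanning(N)),
                ("numpy manual", numpy_window),
                ("list comp   ", list_comp_window)]:
    print(name, timeit.timeit(f, number=3))
```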