From: Alain Ketterlin on
"Bartc" <bartc(a)freeuk.com> writes:

>> def norm(V):
>>     L = math.sqrt( sum( [x**2 for x in V] ) )
>>     return [ x/L for x in V ]
>
> There's a cost involved in using those fancy constructions.

Sure. The above has three loops that take some time.

> I found the following to be about twice as fast, when vectors are
> known to have 3 elements:
>
> def norm3d(v):
>     L = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
>     return (v[0]/L, v[1]/L, v[2]/L)
>
> (Strangely, changing those divides to multiplies made it slower.)

You mean by setting L to 1.0 / math.sqrt(...) and using v[0]*L etc.?
I think * and / have about the same cost on floats, and the extra
division (the 1.0 / ...) adds some cost. But what you observe is probably
caused by the overloading of "*", which needs more type checks. You may
try operator.mul to see whether a plain function call is cheaper than the
type checking, but I doubt it.
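A quick way to check is timeit; a minimal sketch (the function names and
the sample vector are mine, purely for illustration):

```python
import math
import timeit

v = (1.0, 2.0, 3.0)  # illustrative 3-vector

def norm_div(v):
    # divide each component by the magnitude
    L = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/L, v[1]/L, v[2]/L)

def norm_mul(v):
    # multiply by the reciprocal of the magnitude instead
    r = 1.0 / math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]*r, v[1]*r, v[2]*r)

t_div = timeit.timeit(lambda: norm_div(v), number=100_000)
t_mul = timeit.timeit(lambda: norm_mul(v), number=100_000)
print(t_div, t_mul)  # compare the two totals on your machine
```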

-- Alain.
From: Bartc on

"Alain Ketterlin" <alain(a)dpt-info.u-strasbg.fr> wrote in message
news:87fwyxgvuv.fsf(a)dpt-info.u-strasbg.fr...
> "Bartc" <bartc(a)freeuk.com> writes:

>> def norm3d(v):
>>     L = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
>>     return (v[0]/L, v[1]/L, v[2]/L)
>>
>> (Strangely, changing those divides to multiplies made it slower.)
>
> You mean by setting L to 1.0 / math.sqrt(...) and using v[0]*L etc.?

Yes.

> I think * and / have about the same cost on floats, and the extra
> division (the 1.0 / ...) adds some cost.

I expected no measurable difference, at least not in Python (I tried it in
gcc, where using divides increased runtimes by 50%; scaled against Python's
interpreter overhead that would correspond to only about 1%).

I would naturally have written it using multiplies, and was just surprised
at a 3-4% slowdown.

> But what you observe is probably caused by the overloading of
> "*", which needs more type checks.

That sounds reasonable.

--
Bartc



From: Lawrence D'Oliveiro on
In message <6Dw5o.72330$Ds3.63060(a)hurricane>, Bartc wrote:

> There's a cost involved in using those fancy constructions.

Sure. But at the point that starts to matter, you have to ask yourself why
you're not rewriting the CPU-intensive part in C.
From: sturlamolden on
On 30 Jul, 13:46, Lawrence D'Oliveiro <l...(a)geek-
central.gen.new_zealand> wrote:

> Say a vector V is a tuple of 3 numbers, not all zero. You want to normalize
> it (scale all components by the same factor) so its magnitude is 1.
>
> The usual way is something like this:
>
>     L = math.sqrt(V[0] * V[0] + V[1] * V[1] + V[2] * V[2])
>     V = (V[0] / L, V[1] / L, V[2] / L)
>
> What I don’t like is having that intermediate variable L leftover after the
> computation.

L = math.sqrt(V[0] * V[0] + V[1] * V[1] + V[2] * V[2])
V = (V[0] / L, V[1] / L, V[2] / L)
del L

But this is the kind of programming task where NumPy is nice:

V[:] = V / np.sqrt((V**2).sum())
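Note that this requires V to be a NumPy array rather than a tuple (slice
assignment and elementwise ** don't work on tuples). A minimal,
self-contained sketch of the same idea:

```python
import numpy as np

V = np.array([1.0, 2.0, 3.0])   # must be an ndarray, not a tuple
V /= np.sqrt((V ** 2).sum())    # normalize in place, no leftover L
# equivalently: V /= np.linalg.norm(V)
print(V)
```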


Sturla