From: D'Arcy J.M. Cain on
On Sun, 4 Jul 2010 23:46:10 +0900
David Cournapeau <cournape(a)gmail.com> wrote:
> On Sun, Jul 4, 2010 at 11:23 PM, D'Arcy J.M. Cain <darcy(a)druid.net> wrote:
> > Which is 99% of the real-world applications if you factor out the code
> > already written in C or other compiled languages.
>
> This may be true, but there are areas where the percentage is much
> lower. Not everybody uses Python for web development. You can be a
> Python fan, be reasonably competent in the language, and have good
> reasons to wish for Python to be one order of magnitude faster.

I wish it were orders of magnitude faster for web development. I'm just
saying that in the places where we need compiled-language speed, Python
already has it in C.

But, as I said in the previous message, in the end it is up to you to
write your own benchmark based on the operations you need and the usage
patterns you expect. If your application needs to calculate Pi to 100
places but only needs to do it once, there is no need to run that
calculation a million times in your benchmark. A language that is
optimized for calculating Pi shouldn't carry much weight for you.
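
For instance, here is a rough sketch of what I mean, using timeit; the
statements and weights are invented placeholders for your own
operations and usage pattern:

import timeit

# (operation, how many times it runs in a typical request)
workload = [
    ("sum(range(1000))", 10000),                 # hot inner loop
    ("sorted(range(1000), reverse=True)", 100),  # occasional
    ("str(3.14159265358979)", 1),                # effectively one-off
]

total = 0.0
for stmt, weight in workload:
    per_call = timeit.timeit(stmt, number=1000) / 1000
    total += per_call * weight
print("estimated time per request: %.6f seconds" % total)

Weighting each operation by how often it actually runs keeps a fast
Pi routine from dominating a benchmark where Pi hardly matters.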

> I find Lua quite interesting: instead of providing a language simple
> to develop in, it focuses heavily on implementation simplicity. Maybe
> that's the reason why it could be done at all by a single person.

Is that really true about Lua? I haven't looked that closely at it, but
that paragraph probably turned most people on this list off Lua.

--
D'Arcy J.M. Cain <darcy(a)druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
From: D'Arcy J.M. Cain on
On Sat, 3 Jul 2010 20:30:30 -0700 (PDT)
sturlamolden <sturlamolden(a)yahoo.no> wrote:
> CPython 64.6

By the way, I assume that's Python 2.x. I wonder how Python 3.1
would fare.

--
D'Arcy J.M. Cain <darcy(a)druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
From: David Cournapeau on
On Mon, Jul 5, 2010 at 12:00 AM, D'Arcy J.M. Cain <darcy(a)druid.net> wrote:
> On Sun, 4 Jul 2010 23:46:10 +0900
> David Cournapeau <cournape(a)gmail.com> wrote:
>> On Sun, Jul 4, 2010 at 11:23 PM, D'Arcy J.M. Cain <darcy(a)druid.net> wrote:
>> > Which is 99% of the real-world applications if you factor out the code
>> > already written in C or other compiled languages.
>>
>> This may be true, but there are areas where the percentage is much
>> lower. Not everybody uses Python for web development. You can be a
>> Python fan, be reasonably competent in the language, and have good
>> reasons to wish for Python to be one order of magnitude faster.
>
> I wish it were orders of magnitude faster for web development. I'm just
> saying that in the places where we need compiled-language speed, Python
> already has it in C.

Well, I wish I did not have to use C, then :) For example, as a
contributor to numpy, it bothers me at a fundamental level that so
much of numpy is in C.

Also, there are some cases where using C for speed is very difficult,
because the marshalling cost almost entirely negates the speed
advantage - which means you end up writing more in C than you
anticipated. Granted, those cases may be quite specific to scientific
applications, and Cython already helps quite a bit there.
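
A crude illustration of the marshalling effect (the sizes and
operations here are invented for the example; both calls below do
their arithmetic in C):

import math
import timeit
import numpy as np

xs = range(100000)
arr = np.arange(100000, dtype=np.float64)

# one Python->C transition per element, boxing/unboxing each time
per_element = timeit.timeit(lambda: [math.sqrt(x) for x in xs], number=10)
# one Python->C transition for the whole array
per_array = timeit.timeit(lambda: np.sqrt(arr), number=10)

print("C call per element: %.3f s" % per_element)
print("C call per array:   %.3f s" % per_array)

The per-element version pays the marshalling cost a hundred thousand
times over, so most of C's speed advantage disappears.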

>
> But, as I said in the previous message, in the end it is up to you to
> write your own benchmark based on the operations you need and the usage
> patterns you expect. If your application needs to calculate Pi to 100
> places but only needs to do it once, there is no need to run that
> calculation a million times in your benchmark.

I would question the sanity of anyone choosing a language because it
can compute Pi to 100 places very quickly :) I am sure a Google search
would beat most languages anyway if you count implementation time plus
running time.

>
>> I find Lua quite interesting: instead of providing a language simple
>> to develop in, it focuses heavily on implementation simplicity. Maybe
>> that's the reason why it could be done at all by a single person.
>
> Is that really true about Lua? I haven't looked that closely at it, but
> that paragraph probably turned most people on this list off Lua.

I hope I did not turn anyone off - but Lua definitely makes a
different set of tradeoffs than Python. The Lua runtime is well under
1 MB, for example, which is one reason it is so popular for video
games. The following presentation, by Lua's creator, gives a good
overview:

http://www.stanford.edu/class/ee380/Abstracts/100310-slides.pdf

To go back to the original topic: a good example is numeric types. In
Python you have many different numerical types, each with its own
semantics. In Lua it is much simpler, which keeps the implementation
simple and makes some aggressive optimizations very effective. The
fact that a Lua interpreter can fit in the L1 cache is quite
impressive.
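
A contrived but concrete example of the dispatch a Python interpreter
has to cope with (Lua's single number type has none of this):

from fractions import Fraction
from decimal import Decimal

# five distinct numeric types, each with its own arithmetic rules
for v in (1, 1.0, 1 + 0j, Fraction(1, 3), Decimal("0.1")):
    print("%s %r" % (type(v).__name__, v))

print(1 + Fraction(1, 3))   # 4/3, exact rational arithmetic
print(1 + 0.1)              # 1.1, binary floating point rounding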

David
From: sturlamolden on
On 4 Jul, 16:47, "bart.c" <ba...(a)freeuk.com> wrote:

> I suspect also the Lua JIT compiler optimises some of the dynamicism out of
> the language (where it can see, for example, that something is always going
> to be a number, and Lua only has one numeric type with a fixed range), so
> that must be a big help.

Python could do the same: replace int and float with a single "long
double". It is 80 bits wide with a 64-bit mantissa, so it can in
theory do the job of all the floating point types and of integers up
to 64 bits (signed and unsigned). A long double can 'duck type' all
the floating point and integer types we use. There is really no need
for more than one number type; for an interpreted language, the extra
types are just a speed killer. The other number types belong in e.g.
the ctypes, array, struct and NumPy modules.

Speed-wise, the long double (80 bit) is the native floating point
type on x86 FPUs. There is no memory penalty either: wrapping an int
as a PyObject takes more space. For a dynamic language it can be
quite clever to have just one 'native' number type, observing that
the mantissa of a floating point number is an unsigned integer. That
takes a lot of the dynamicism out of the equation. Maybe you want
integer and floating point types in the 'language', but that does not
mean there should be two types in the 'implementation' (i.e.
internally in the VM). The implementation could duck type both with a
sufficiently long floating point type, and the user would not notice
in the syntax.
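
You can see the principle with an ordinary 64-bit double, which has a
53-bit mantissa (pure Python exposes no 80-bit long double, so read
2**53 here where the long double would give 2**64):

for n in (2**53 - 1, 2**53, 2**53 + 1):
    f = float(n)
    print(n, f == n)
# the first two round-trip exactly; 2**53 + 1 silently collapses to
# 2**53 because the mantissa is full. With a 64-bit mantissa the same
# scheme would cover every 64-bit integer exactly.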

MATLAB does the same as Lua: the native number type is always double,
and you have to explicitly create the others (previously they did not
even exist). Scientists have been doing numerical maths with MATLAB
for decades. MATLAB never prevented me from working with integers
mathematically, even though I only worked with double. Had I not
known, I would not have noticed:

a = 1; % a is a double
a = 1 + 1; % a is a double and exactly 2
a = int32(1); % only now is a a 32-bit integer

Sturla
From: sturlamolden on
On 4 Jul, 10:03, Stefan Behnel <stefan...(a)behnel.de> wrote:

> Sort of. One of the major differences is the "number" type, which is (by
> default) a floating point type - there is no other type for numbers. The
> main reason why Python is slow for arithmetic computations is its integer
> type (int in Py3, int/long in Py2), which has arbitrary size and is an
> immutable object. So it needs to be reallocated on each computation.

That is why Lua got it right. A floating point type has a mantissa and
can duck type an integer. MATLAB does the same.

Sturla

> If it
> was easily mappable to a CPU integer, Python implementations could just do
> that and be fast. But its arbitrary size makes this impossible (or requires
> a noticeable overhead, at least). The floating point type is less of a
> problem, e.g. Cython safely maps that to a C double already. But the
> integer type is.
>
> So it's not actually surprising that Lua beats CPython (and the other
> dynamic languages) in computational benchmarks.
>
> It's also not surprising to me that a JIT compiler beats a static compiler.
> A static compiler can only see static behaviour of the code, potentially
> with an artificially constructed idea about the target data. A JIT compiler
> can see the real data that flows through the code and can optimise for that.
>
> Stefan
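
Stefan's point about the integer type is easy to demonstrate on
CPython (a small sketch of my own, not from the thread):

a = 10**20                    # already wider than a 64-bit register
before = id(a)
a = a + 1                     # the result is a brand-new int object
print(id(a) != before)        # True: nothing is updated in place
print((2**64).bit_length())   # 65, so it cannot live in a CPU word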