From: Teemu Likonen on
* 2010-07-04 10:03 (+0200), Stefan Behnel wrote:

> The main reason why Python is slow for arithmetic computations is its
> integer type (int in Py3, int/long in Py2), which has arbitrary size
> and is an immutable object. So it needs to be reallocated on each
> computation. If it was easily mappable to a CPU integer, Python
> implementations could just do that and be fast. But its arbitrary size
> makes this impossible (or requires a noticeable overhead, at least).
> The floating point type is less of a problem, e.g. Cython safely maps
> that to a C double already. But the integer type is.

You may be right. I'll just add that Common Lisp's integers are of
arbitrary size too, but the programmer can declare them as fixnums. Such
a declaration is effectively a promise that the numbers really are
between most-negative-fixnum and most-positive-fixnum. The compiler can
then optimize the code into efficient machine instructions.
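Both properties Stefan mentions - arbitrary size and immutability - are easy to demonstrate in plain CPython (a quick sketch, nothing implementation-specific beyond documented behaviour):

```python
import sys

a = sys.maxsize         # largest machine-word-sized int
b = a + 1               # silently promotes past the word size
assert b > sys.maxsize  # no overflow: Python ints have arbitrary size

# ints are immutable, so every arithmetic result is a new object
x = 10**20
y = x
x += 1                  # rebinds x to a freshly allocated int
assert y == 10**20      # the original object is unchanged
```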

I guess Python might have use for some sort of

(defun foo (variable)
  (declare (type fixnum variable))
  ...)
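For illustration only, here is a rough Python sketch of what fixnum-style arithmetic would mean: results wrap around at a fixed word size instead of growing without bound. The function name fixnum_add is made up; this only emulates the semantics, it gains no speed:

```python
def fixnum_add(a, b):
    # Wrap the sum to a signed 64-bit two's-complement word,
    # the way a declared fixnum (or a C long long) would behave.
    mask = (1 << 64) - 1
    r = (a + b) & mask
    return r - (1 << 64) if r >= (1 << 63) else r

assert fixnum_add(1, 2) == 3
# Unlike Python's int, this wraps at the word boundary:
assert fixnum_add(2**63 - 1, 1) == -2**63
```

A compiler that could trust such a declaration is free to emit a single machine add; Python's int, which must be ready to overflow into arbitrary precision, cannot be compiled that way.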
From: David Cournapeau on
On Sun, Jul 4, 2010 at 5:03 PM, Stefan Behnel <stefan_ml(a)behnel.de> wrote:
> sturlamolden, 04.07.2010 05:30:
>>
>> I was just looking at Debian's benchmarks. It seems LuaJIT is now (on
>> median) beating Intel Fortran!
>>
>> C (gcc) is running the benchmarks faster by less than a factor of two.
>> Consider that Lua is a dynamically typed scripting language very
>> similar to Python.
>
> Sort of. One of the major differences is the "number" type, which is (by
> default) a floating point type - there is no other type for numbers. The
> main reason why Python is slow for arithmetic computations is its integer
> type (int in Py3, int/long in Py2), which has arbitrary size and is an
> immutable object. So it needs to be reallocated on each computation. If it
> was easily mappable to a CPU integer, Python implementations could just do
> that and be fast. But its arbitrary size makes this impossible (or requires
> a noticeable overhead, at least). The floating point type is less of a
> problem, e.g. Cython safely maps that to a C double already. But the integer
> type is.

Actually, I think the main reason why Lua is much faster than other
dynamic languages is its size. The language is small: you don't have
lists, dicts, tuples, etc. Making 50% of Python fast is "easy" (in the
sense that it has been done); I would not be surprised if it gets
exponentially harder the closer you get to 100%. Having a small
language means that the interpreter is small - small enough to be kept
in the L1 cache, which seems to matter a lot
(http://www.reddit.com/r/programming/comments/badl2/luajit_2_beta_3_is_out_support_both_x32_x64/c0lrus0).

If you are interested in facts and technical details (rather than mere
speculation), this thread is interesting:
http://lambda-the-ultimate.org/node/3851. It has participation from the
LuaJIT author, the PyPy authors and Brendan Eich :)


> It's also not surprising to me that a JIT compiler beats a static compiler.
> A static compiler can only see static behaviour of the code, potentially
> with an artificially constructed idea about the target data. A JIT compiler
> can see the real data that flows through the code and can optimise for that.

Although I agree that, in theory, it is rather obvious that a JIT
compiler can do many things that static analysis cannot, this is the
first time it has happened in practice, AFAIK. Even HotSpot was not
faster than Fortran and C, and it has received tons of work by people
who knew what they were doing. The only example of a dynamic language
being as fast as or faster than C that I am aware of so far is Stalin,
the aggressive compiler for Scheme (used in signal processing in
particular).

David
From: D'Arcy J.M. Cain on
On 04 Jul 2010 04:15:57 GMT
Steven D'Aprano <steve(a)REMOVE-THIS-cybersource.com.au> wrote:
> "Need" is a bit strong. There are plenty of applications where if your
> code takes 0.1 millisecond to run instead of 0.001, you won't even
> notice. Or applications that are limited by the speed of I/O rather than
> the CPU.

Which is 99% of real-world applications, if you factor out the code
already written in C or other compiled languages. That's the point of
Python, after all: you speed up programming rather than programs, but
allow for refactoring into C when necessary. And it's not called CPython
for nothing. Off-the-shelf benchmarks are fun but mostly useless for
choosing a language, program, OS or machine unless you know that they
check the actual things that you need, in the proportion that you need.

> But I'm nitpicking... this is a nice result, the Lua people should be
> proud, and I certainly wouldn't say no to a faster Python :)

Ditto, ditto, ditto and ditto.

> It's not like this is a race, and speed is not the only thing which a
> language is judged by. Otherwise you'd be programming in C, not Python,
> right?

Or assembler.

--
D'Arcy J.M. Cain <darcy(a)druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
From: David Cournapeau on
On Sun, Jul 4, 2010 at 11:23 PM, D'Arcy J.M. Cain <darcy(a)druid.net> wrote:
> On 04 Jul 2010 04:15:57 GMT
> Steven D'Aprano <steve(a)REMOVE-THIS-cybersource.com.au> wrote:
>> "Need" is a bit strong. There are plenty of applications where if your
>> code takes 0.1 millisecond to run instead of 0.001, you won't even
>> notice. Or applications that are limited by the speed of I/O rather than
>> the CPU.
>
> Which is 99% of the real-world applications if you factor out the code
> already written in C or other compiled languages.

This may be true, but there are areas where the percentage is much
lower. Not everybody uses Python for web development. You can be a
Python fan, be reasonably competent in the language, and still have good
reasons to wish for Python to be an order of magnitude faster.

I find Lua quite interesting: instead of providing a language that is
simple to develop in, it focuses heavily on implementation simplicity.
Maybe that's the reason why it could be done at all by a single person.

David
From: bart.c on

"sturlamolden" <sturlamolden(a)yahoo.no> wrote in message
news:daa07acb-d525-4e32-91f0-16490027cc42(a)w12g2000yqj.googlegroups.com...
>
> I was just looking at Debian's benchmarks. It seems LuaJIT is now (on
> median) beating Intel Fortran!
>
> C (gcc) is running the benchmarks faster by less than a factor of two.
> Consider that Lua is a dynamically typed scripting language very
> similar to Python.
>
> LuaJIT also runs the benchmarks faster than Java 6 server, OCaml, and
> SBCL.
>
> I know it's "just a benchmark" but this has to count as insanely
> impressive. Beating Intel Fortran with a dynamic scripting language,
> how is that even possible? And what about all those arguments that
> dynamic languages "have to be slow"?
>
> If this keeps up we'll need a Python to Lua bytecode compiler very
> soon. And LuaJIT 2 is rumoured to be much faster than the current...
>
> Looking at median runtimes, here is what I got:
>
> gcc 1.10
>
> LuaJIT 1.96
>
> Java 6 -server 2.13
> Intel Fortran 2.18
> OCaml 3.41
> SBCL 3.66
>
> JavaScript V8 7.57
>
> PyPy 31.5
> CPython 64.6
> Perl 67.2
> Ruby 1.9 71.1
>
> The only comfort for CPython is that Ruby and Perl did even worse.

I didn't see the same figures; LuaJIT seems to be 4-5 times as slow as
one of the C's, on average. Some benchmarks were slower than that.

But I've done my own brief tests and I was quite impressed with LuaJIT which
seemed to outperform C on some tests.

I'm developing my own language, and LuaJIT is a new standard to beat for
this type of language. However, Lua is quite a lightweight language with
minimalist data types, so it doesn't suit everybody.

I also suspect the Lua JIT compiler optimises some of the dynamism out
of the language (where it can see, for example, that something is always
going to be a number, and Lua only has one numeric type with a fixed
range), so that must be a big help.

--
Bartc