From: Javier on

Joe Marshall wrote:

> Javier wrote:
> >
> > Static inference always produces faster code compared with dynamic
> > inference, for an equivalent code base.
>
> http://www.hpl.hp.com/techreports/1999/HPL-1999-77.pdf
>
> ``Contrary to intuition, we demonstrate that it is possible to
> use a piece of software to improve the performance of a
> native, statically optimized program binary, while it is
> executing. Dynamo not only speeds up real application
> programs, its performance improvement is often quite
> significant. For example, the performance of many +O2
> optimized SPECint95 binaries running under Dynamo is
> comparable to the performance of their +O4 optimized
> version running without Dynamo.''

In practice, you can almost reach the speed of a statically compiled
program, but then the compiler must create multiple versions of each
function to accommodate the various types it may encounter. This is
what Psyco (for Python) does.
SBCL is not, as far as I know, able to do so, and you must declare
everything that cannot be inferred at compile time.
Even if some implementation were able to do so, you pay the penalty of
the compiler recompiling in real time for every new variable type it
encounters, so while it is compiling you are wasting CPU cycles. You
also waste CPU cache hits, because the code is more fragmented and
much bigger. And of course, you waste more memory too.
So I still believe that dynamic inference always performs worse for a
similar codebase and compilers of similar quality.
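
For instance, a minimal sketch (the function is hypothetical, just to
illustrate the kind of declaration SBCL wants before it will open-code
the arithmetic):

(defun sum-squares (a b)
  ;; Bounded integer declarations: from these the compiler can derive
  ;; that the result fits in a fixnum and emit inline machine code.
  (declare (type (integer 0 10000) a b)
           (optimize speed))
  (+ (* a a) (* b b)))

Without the DECLARE form you are back to generic arithmetic, or to
whatever the compiler manages to infer on its own.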

From: Rahul Jain on
"Javier" <javuchi(a)gmail.com> writes:

> One important thing I miss in Lisp compared to ML is, for example,
> the ability to test the type of something at compile time. For
> example, you can define a list type in the style of Lisp:
>
> type 'a lisp_list = Nil | List of 'a list
>
> so now every variable using this type can be either Nil or a list
> itself. Or even binary trees:
>
> type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
>
> I found this feature very useful.

How can you not declare types in Lisp? I don't see how this has anything
to do with "testing" anything at compile time.

>> Maybe he meant that you need to explicitly... um...
>> declare (?!?) a number to be a bignum? That would fit in with the lack
>> of a numeric tower.
>
> I don't understand what you're talking about, but you can always
> convert any int or string to big_int and vice versa.

Yes, but how do _I_ know when I need to use bignums or not?

> I think that if you are a good programmer, you must know the type of
> anything you are writing, even if it is written in Lisp. And ML does
> allow you to use "generic" or arbitrary types when you need. Just take
> the example of the binary tree above.

But you may not know what that type _means_ because that type is not
defined to mean anything. A machine word is a different size on
different processors. I can't tell ad-hoc whether a value will fit into
"the" machine word of "the" processor it might run on at any time in the
future. On the other hand, Lisp allows me to say what size the values
will have and the compiler figures out whether it will necessarily fit
into a machine word or not and emits the appropriate machine code for
that inference.
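
A minimal sketch of what I mean (the function is hypothetical; the
range is the only thing I have to state):

(defun scale (x)
  ;; I say what values X can take; the compiler decides whether that,
  ;; and the derived result type (INTEGER 0 765), fit in a machine
  ;; word on this platform.
  (declare (type (integer 0 255) x)
           (optimize speed))
  (* 3 x))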

> Static inference always produces faster code compared with dynamic
> inference, for an equivalent code base.

Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
system I know of. Are you talking about recompiling the codepath each
time it's run with different types... or what?

> But you usually need to declare things in dynamic languages to help
> the inferer, while in statically inferred ones you don't, so you save a
> lot of time on both writing and profiling.

Huh? No way. You need (theoretically) to declare just as much in Lisp as
you do in ML. Some compilers aren't as good about type inference, and
some programmers want to keep their codebase dynamic so they can load
fixes into a live system that might change the datatypes involved in
some codepaths.

> You don't need to maintain parallel versions of the codebase, or if you
> do, you would also have to in Lisp (this is not language-specific, but
> depends on the kind of algorithm/implementation you choose).

No. Because Lisp types are more closely related to the actual
mathematical types, whereas ML types are more closely related to the
things the computer happens to use at the moment.

I don't need to write one version with integer declarations and another
with fixnum declarations. I just define the ranges of the inputs and the
type inferer takes care of figuring out whether that architecture on
that implementation can use fixnums or needs to do generic arithmetic.
Please refer to the examples I gave. If I compiled the code on a 64-bit
machine, I wouldn't get the efficiency note. If I wanted to have that
code optimized in ML, I'd need to write one bignum version, one int
version, write code for what the CMUCL type inferer would compute, and
have that run at compile-time to determine whether I want to use the
bignum or int version given the processor's machine word size. In Lisp,
I just write the code in the way I think about it, mathematically.
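
A hypothetical sketch of that, with the bound chosen only so that it
straddles the two word sizes:

(defun checksum (a b)
  ;; 2^40 fits in a fixnum on a 64-bit build but not on a 32-bit one.
  ;; The source is the same either way: the 64-bit compiler emits
  ;; inline fixnum addition, while the 32-bit compiler notes that it
  ;; has to fall back to generic (potentially bignum) arithmetic.
  (declare (type (integer 0 1099511627776) a b)  ; 0 .. 2^40
           (optimize speed))
  (+ a b))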

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain on
Jon Harrop <jon(a)ffconsultancy.com> writes:

> Rahul Jain wrote:
>> Jon Harrop <jon(a)ffconsultancy.com> writes:
>>> Rahul Jain wrote:
>>>> Lisp has a much richer type system than any of the MLs.
>>>
>>> I'm not sure what you mean by that. ML does lots of things that Lisp does
>>> not and vice-versa.
>>
>> Hmm. I'm not sure about that. All I can think of is inference of
>> parametrized array types (non-upgraded) across functions. Lisp just
>> doesn't _require_ those things to be done. If someone cared, they could
>> augment a compiler to do it.
>
> Sure, you can augment a compiler to do anything.

In ML, you _can't_ augment the compiler to do this without creating a
different dialect, because it doesn't _have_ the types needed to express
what I wrote in Lisp. If I'm wrong, please show me SML code that will
automatically switch to using bignums when the type inferer determines
that the values won't fit in a machine word.

> In practice, I have never had a problem caused by fixed-precision integers.

You've never needed a 2 GB file?

> However, I have lots of problems caused by floating point precision. Does
> CMUCL's type inference do anything to help with that?

Um, no. How could it change your hardware to conform to your idea of how
FP should work? Maybe it could, if there were a language for describing
your precision requirements. Hmm... actually, it might be able to figure
out based on the ranges of values whether you'd hit one of the FP
lossage situations and rearrange the expression until it doesn't. You'd
need to be _very_ careful in defining the ranges of values, tho, to
avoid it seeing a problem that won't exist when run on real data.

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Pascal Costanza on
Javier wrote:

> So I still believe that dynamic inference always performs worse for a
> similar codebase and compilers of similar quality.

Perform a few benchmarks, and you'll be surprised.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Rahul Jain on
"Javier" <javuchi(a)gmail.com> writes:

> And of course, you also waste CPU cache hits, because the code is
> more fragmented and much bigger.

Not so. Dynamo dynamically defragments the code.

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist