From: Dr Jon D Harrop on
Rahul Jain wrote:
> Jon Harrop <jon(a)ffconsultancy.com> writes:
> > Sure, you can augment a compiler to do anything.
>
> In ML, you _can't_ augment the compiler to do this without creating a
> different dialect, because it doesn't _have_ the types needed to express
> what I wrote in Lisp.

Yes. Doesn't that sword cut both ways though? If you supplement Lisp
with features of ML then you've created a "different dialect" as well.
Doesn't Qi do something like that?

> If I'm wrong, please show me SML code that will
> automatically switch to using bignums when the type inferer determines
> that the values won't fit in a machine word.

I don't know how to do that. MLton may already try to do this. I doubt
it though. I can't imagine that being a bottleneck for many people. ;-)

> > In practice, I have never had a problem caused by fixed-precision integers.
>
> You've never needed a 2 GB file?

Yes. Maybe I will in the future, but I'm already on 64-bit. :-)

> > However, I have lots of problems caused by floating point precision. Does
> > CMUCL's type inference do anything to help with that?
>
> Um, no. How could it change your hardware to conform to your idea of how
> FP should work?

I just want the same functionality that CMUCL already provides for
ints: use machine-precision where possible and arbitrary-precision
otherwise.

> Maybe it could, if there were a language for describing
> your precision requirements. Hmm... actually, it might be able to figure
> out based on the ranges of values whether you'd hit one of the FP
> lossage situations and rearrange the expression until it doesn't. You'd
> need to be _very_ careful in defining the ranges of values, tho, to
> avoid it seeing a problem that won't exist when run on real data.

Yes. I haven't really thought about how you could do it statically...

Cheers,
Jon.

From: Javier on

Rahul Jain ha escrito:


> > type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
> >
> > I found this feature very useful.
>
> How can you not declare types in Lisp? I don't see how this has anything
> to do with "testing" anything at compile time.

Basically, in ML every type must be known at compile time. If you use
the binary tree definition above, which is an arbitrary one, then you
must provide a context for it. For example:

# let my_tree = Empty;;
=> val my_tree : 'a btree = Empty

but, if you assign a real value:

# let my_tree = Node (5, Empty, Empty);;
=> val my_tree : int btree = Node (5, Empty, Empty)

the compiler is able to create a new type called "int btree" from the
generic one.
The good thing about this is that, while being able to use a "dynamic"
type like a btree for storing any kind of data you want, you are also
prevented from mixing ints with strings or floats in that tree unless
you intentionally want that.
This means that your code will always be type safe, avoiding some
annoying bugs.
In Lisp, if you create a binary tree, it may contain data of several
types, which is not safe. Or you can start declaring things, but then
you end up with inelegant spaghetti code, much as if it were written
in C.
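For concreteness, here is the session above in file form as a small sketch; the `size` helper and the `t` binding are illustrative additions, not from the original post:

```ocaml
(* The declaration from the post above. *)
type 'a btree = Empty | Node of 'a * 'a btree * 'a btree

(* The type parameter is fixed at the first concrete use:
   this is an int btree. *)
let t : int btree = Node (5, Node (3, Empty, Empty), Empty)

(* Writing Node ("three", Empty, Empty) as a subtree of t would be
   rejected at compile time: string is not int. *)

(* An illustrative helper that works for any 'a btree. *)
let rec size = function
  | Empty -> 0
  | Node (_, l, r) -> 1 + size l + size r

let () = assert (size t = 2)
```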

> >> Maybe he meant that you need to explicitly... um...
> >> declare (?!?) a number to be a bignum? That would fit in with the lack
> >> of a numeric tower.
> >
> > I don't understand what you're talking about, but you can always
> > convert any int or string to big_int and vice versa.
>
> Yes, but how do _I_ know when I need to use bignums or not?

You must know; you are a programmer. Even in Lisp you must know the
type you are using for every variable.
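A minimal sketch of why the question is hard to answer in advance: OCaml's native `int` is word-sized (minus a tag bit) and wraps silently on overflow, so nothing tells you at the point of use that a bignum was needed. The wraparound check is my own illustration:

```ocaml
(* Native ints wrap silently; there is no automatic promotion to a
   bignum type such as Big_int.big_int -- you must opt in explicitly. *)
let () =
  assert (max_int + 1 = min_int);  (* silent wraparound, no error *)
  Printf.printf "max_int on this platform = %d\n" max_int
```

Note that `max_int` itself differs by platform (for example, 2^62 - 1 on a 64-bit system), which is exactly the portability problem under discussion.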

> > I think that if you are a good programmer, you must know the type of
> > anything you are writing, even if it is written in Lisp. And ML does
> > allow you to use "generic" or arbitrary types when you need. Just take
> > the example of the binary tree above.
>
> But you may not know what that type _means_ because that type is not
> defined to mean anything.

It does mean something. Even in mathematics it means something. Doing
arithmetic with natural numbers is not the same as doing it with real
or complex ones. On computers, this is important too.

> A machine word is a different size on
> different processors. I can't tell ad-hoc whether a value will fit into
> "the" machine word of "the" processor it might run on at any time in the
> future.
> On the other hand, Lisp allows me to say what size the values
> will have and the compiler figures out whether it will necessarily fit
> into a machine word or not and emits the appropriate machine code for
> that inference.

In respect to word size, you are right: dynamic languages allow you
to write less and more portable code. But the cost of this is that the
rest of the system may be unsafe if you don't declare things.

> > Static inference always produce faster code compared with dynamic
> > inference, for an equivalent code base.
>
> Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
> system I know of. Are you talking about recompiling the codepath each
> time it's run with different types... or what?

I meant dynamic languages in comparison with strong ones having static
inference. Sorry if I was not clear.

> > But you usually need to declare things on dynamic languages to help
> > the inferer, while in statically infered ones you don't, so you save a
> > lot of time on both writing and profiling.
>
> Huh? No way. You need (theoretically) to declare just as much in Lisp as
> you do in ML. Some compilers aren't as good about type inference, and
> some programmers want to keep their codebase dynamic so they can load
> fixes into a live system that might change the datatypes involved in
> some codepaths.

Normally, when a data type is changed, there are side effects
somewhere in your code. For example, you cannot change the type of an
important variable from an integer to a string and pretend that nothing
would happen in your code.
Your statement is only true for interchanging integers and floats of
different word sizes, where I recognise Lisp is more concise than ML.
But it is not much better: you can still use different types in one
single variable in ML:

# type number = Int of int | Float of float | Big of Big_int.big_int;;
type number = Int of int | Float of float | Big of Big_int.big_int
# let n = Float 9.4;;
val n : number = Float 9.4
# match n with
| Float n -> print_string "It is a float"
| Int n -> print_string "It is an integer"
| Big n -> print_string "It is a big number";;
=> It is a float
- : unit = ()
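In file form, consuming such a variant looks like the sketch below. It is trimmed to two constructors so it stays self-contained (the Big_int case from the session above would need the Num library); the `to_float` and `add` helpers are illustrative, not from the original post:

```ocaml
(* Trimmed version of the variant from the session above. *)
type number = Int of int | Float of float

(* Every operation dispatches on the constructor explicitly. *)
let describe = function
  | Float _ -> "It is a float"
  | Int _ -> "It is an integer"

(* Arithmetic likewise has to be defined case by case. *)
let to_float = function
  | Int n -> float_of_int n
  | Float f -> f

let add a b = Float (to_float a +. to_float b)

let () =
  assert (describe (Float 9.4) = "It is a float");
  assert (add (Int 1) (Float 2.5) = Float 3.5)
```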

From: Rahul Jain on
"Dr Jon D Harrop" <jon(a)ffconsultancy.com> writes:

> Rahul Jain wrote:
>> Jon Harrop <jon(a)ffconsultancy.com> writes:
>> > Sure, you can augment a compiler to do anything.
>>
>> In ML, you _can't_ augment the compiler to do this without creating a
>> different dialect, because it doesn't _have_ the types needed to express
>> what I wrote in Lisp.
>
> Yes. Doesn't that sword cut both ways though? If you supplement Lisp
> with features of ML then you've created a "different dialect" as well.
> Doesn't Qi do something like that?

Qi sounds familiar, but I don't know anything specific about it. What ML
features do you think you need to achieve what I just demonstrated? Type
inference? Nothing in the CL spec forbids that.

>> If I'm wrong, please show me SML code that will
>> automatically switch to using bignums when the type inferer determines
>> that the values won't fit in a machine word.
>
> I don't know how to do that. MLton may already try to do this. I doubt
> it though. I can't imagine that being a bottleneck for many people. ;-)
>
>> > In practice, I have never had a problem caused by fixed-precision integers.
>>
>> You've never needed a 2 GB file?
>
> Yes. Maybe I will in the future, but I'm already on 64-bit. :-)

Ah, so you don't want me to use your software because my processor is
too old. Fair enough. :)

>> > However, I have lots of problems caused by floating point precision. Does
>> > CMUCL's type inference do anything to help with that?
>>
>> Um, no. How could it change your hardware to conform to your idea of how
>> FP should work?
>
> I just want the same functionality that CMUCL already provides for
> ints: use machine-precision where possible and arbitrary-precision
> otherwise.

How would it know what precision you need? You _can_ introspect the
floating point types as far as their precision, etc. I don't know of
anyone that has come up with a language to express your precision
requirements that can then be used to compute which FP type is best for
you. Also, 80 bit floats are the fastest way to compute using an x87
FPU. Should that matter, too?

>> Maybe it could, if there were a language for describing
>> your precision requirements. Hmm... actually, it might be able to figure
>> out based on the ranges of values whether you'd hit one of the FP
>> lossage situations and rearrange the expression until it doesn't. You'd
>> need to be _very_ careful in defining the ranges of values, tho, to
>> avoid it seeing a problem that won't exist when run on real data.
>
> Yes. I haven't really thought about how you could do it statically...

Well, then what are you asking us for? :P

--
Rahul Jain
rjain(a)nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain on
"Javier" <javuchi(a)gmail.com> writes:

> Rahul Jain ha escrito:
>
>
>> > type 'a btree = Empty | Node of 'a * 'a btree * 'a btree
>> >
>> > I found this feature very useful.
>>
>> How can you not declare types in Lisp? I don't see how this has anything
>> to do with "testing" anything at compile time.
>
> Basically, in ML every type must be known at compile time. If you use
> the binary tree definition above, which is an arbitrary one, then you
> must provide a context for it. For example:
>
> # let my_tree = Empty;;
> => val my_tree : 'a btree = Empty
>
> but, if you assign a real value:
>
> # let my_tree = Node (5, Empty, Empty);;
> => val my_tree : int btree = Node (5, Empty, Empty)
>
> the compiler is able to create a new type called "int btree" from the
> generic one.

Lisp has parametrized types, too. You can't tell the compiler how to
propagate your parameters through various operators, because that's not
in the standard. But you can do it given a specific compiler.

> The good thing about this is that, while being able to use a "dynamic"
> type like a btree for storing any kind of data you want, you are also
> prevented from mixing ints with strings or floats in that tree unless
> you intentionally want that.

Yes, that can be done for arrays in Lisp. As I said before, custom data
types don't have their parameters inferred automatically because there's
no good way to tell what each parameter actually means.

> This means that your code will always be type safe, avoiding some
> annoying bugs.

Oh jeez. Shouldn't this kind of discussion be in comp.programming?
I believe that's where these kinds of pointless discussions abound.

> In Lisp, if you create a binary tree, it may contain data of several
> types, which is not safe.

Wrong. By your logic, the Lisp code is not a "safe" data structure.

> Or you can start declaring things, but then
> you end up with inelegant spaghetti code, much as if it were written
> in C.

I don't see where the term "spaghetti" comes from. The code would be
easy enough to follow, and type inference allows you to not need to
declare much.

>> > I don't understand what you're talking about, but you can always
>> > convert any int or string to big_int and vice versa.
>>
>> Yes, but how do _I_ know when I need to use bignums or not?
>
> You must know; you are a programmer. Even in Lisp you must know the
> type you are using for every variable.

That doesn't jibe. I know the type, but that's only _because_ I'm using
Lisp, and that's the reason why the _actual_ type exists. I don't care
about how that needs to be implemented on different hardware.

>> > I think that if you are a good programmer, you must know the type of
>> > anything you are writing, even if it is written in Lisp. And ML does
>> > allow you to use "generic" or arbitrary types when you need. Just take
>> > the example of the binary tree above.
>>
>> But you may not know what that type _means_ because that type is not
>> defined to mean anything.
>
> It does mean something. Even in mathematics it means something. Doing
> arithmetic with natural numbers is not the same as doing it with real
> or complex ones. On computers, this is important too.

Um, huh? The reals are an analytic continuation of the naturals (with
the integers as a waypoint). Same with complex and reals. That means
that arithmetic with the "complex" operators is the same as with the
"natural" operators, except for cases where you're using that analytic
continuation and the result is a complex number.

Give me the Lisp Integer type declaration that is the same as "int" in
SML.

> In respect to word size, you are right: dynamic languages allow you
> to write less and more portable code. But the cost of this is that the
> rest of the system may be unsafe if you don't declare things.

Or not. You seem to insist that CMUCL doesn't exist. You also seem to
insist that a Lisp compiler itself must be an unsafe program because it
can take arbitrary data.

>> > Static inference always produce faster code compared with dynamic
>> > inference, for an equivalent code base.
>>
>> Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
>> system I know of. Are you talking about recompiling the codepath each
>> time it's run with different types... or what?
>
> I meant dynamic languages in comparison with strong ones having static
> inference. Sorry if I was not clear.

Strong languages? You're not getting clearer. :)

Do you mean strongly-typed? That's Lisp. And it has static inference if
the compiler so chooses.

Being a dynamic language does nothing to preclude anything from being
in the type system.

>> > But you usually need to declare things on dynamic languages to help
>> > the inferer, while in statically infered ones you don't, so you save a
>> > lot of time on both writing and profiling.
>>
>> Huh? No way. You need (theoretically) to declare just as much in Lisp as
>> you do in ML. Some compilers aren't as good about type inference, and
>> some programmers want to keep their codebase dynamic so they can load
>> fixes into a live system that might change the datatypes involved in
>> some codepaths.
>
> Normally, when a data type is changed, there are side effects
> somewhere in your code. For example, you cannot change the type of an
> important variable from an integer to a string and pretend that nothing
> would happen in your code.

Sometimes I can. I could be dealing with generic operators that will
just adapt to the new types. I change what I need to change and don't
change what I don't need to. Lisp is not ML.

> Your statement is only true for interchanging integers and floats of
> different word sizes, where I recognise Lisp is more concise than ML.
> But it is not much better: you can still use different types in one
> single variable in ML:
>
> # type number = Int of int | Float of float | Big of Big_int.big_int;;
> type number = Int of int | Float of float | Big of Big_int.big_int
> # let n = Float 9.4;;
> val n : number = Float 9.4
> # match n with
> | Float n -> print_string "It is a float"
> | Int n -> print_string "It is an integer"
> | Big n -> print_string "It is a big number";;
> => It is a float
> - : unit = ()

But you still can't choose statically, for all architectures, which of
them your input arrives as. If you choose wrong, you get an in
From: Thomas A. Russ on
"Javier" <javuchi(a)gmail.com> writes:

> Rahul Jain ha escrito:
> >
> > Yes, but how do _I_ know when I need to use bignums or not?
>
> You must know; you are a programmer. Even in Lisp you must know the
> type you are using for every variable.

Actually, this is not true in the case of certain distinctions, such as
the (arbitrary) one between FIXNUMs and BIGNUMs. Since the point at
which these values roll over from one type to the other is
implementation-dependent, it is hardly reasonable to expect one to
know in advance exactly which implementation the code will be run on,
or to have to produce different code for each such implementation.

You should only need to know this if you are trying to do something very
specific where the exact type is important to you.

If all you care about is the more general type of INTEGER or NUMBER,
then it hardly seems convenient to need to declare, or even know,
exactly which type of number you are getting. If you want to, say,
write code to find the roots of an equation using Newton's method, why
do you need to care whether the equation returns fixnums, bignums,
single floats, or double floats?
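To make the contrast concrete, here is a sketch of a Newton's method root finder in OCaml; all the names are my own. Every operator in it (`-.`, `/.`, `*.`) is float-only, so running the same algorithm over another numeric type means abstracting the arithmetic yourself (a functor or a record of operations), whereas Lisp's generic `+`, `-`, `/` let one definition serve fixnums, bignums, ratios, and floats alike:

```ocaml
(* Newton iteration x' = x - f(x)/f'(x), pinned to float by the
   operators used. *)
let newton ~f ~f' ~x0 ~eps =
  let rec go x =
    let x' = x -. f x /. f' x in
    if abs_float (x' -. x) < eps then x' else go x'
  in
  go x0

(* Square root of 2 as the positive root of x^2 - 2. *)
let root2 =
  newton ~f:(fun x -> x *. x -. 2.0) ~f':(fun x -> 2.0 *. x)
    ~x0:1.0 ~eps:1e-12

let () = assert (abs_float (root2 -. sqrt 2.0) < 1e-9)
```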

> In respect to word size, you are right: dynamic languages allow you
> to write less and more portable code. But the cost of this is that the
> rest of the system may be unsafe if you don't declare things.

Well, safety is always relative. Compile-time type safety doesn't
help you when there is interactive input. Nor does it protect you
against algorithmic bugs or division by zero (unless you make rather
heroic assumptions about the cleverness of the type system and what it
can figure out).

> > > Static inference always produce faster code compared with dynamic
> > > inference, for an equivalent code base.
> >
> > Huh? Dynamic "inference" doesn't make sense in the context of any Lisp
> > system I know of. Are you talking about recompiling the codepath each
> > time it's run with different types... or what?
>
> I meant dynamic languages in comparison with strong ones having static
> inference. Sorry if I was not clear.

Well, if we need to argue generalizations, then how about this one:
dynamic languages always let you write your code faster than statically
typed languages?

Besides, a statically typed language is not always superior in execution
speed. If what you are doing requires type dispatch, the compiler,
especially if it has specially tweaked mechanisms for this, is probably a
lot more efficient than whatever you have to add to your program to get
the same effect and flexibility (Greenspun's Tenth Rule, etc.).

> # type number = Int of int | Float of float | Big of Big_int.big_int;;
> type number = Int of int | Float of float | Big of Big_int.big_int
> # let n = Float 9.4;;
> val n : number = Float 9.4
> # match n with
> | Float n -> print_string "It is a float"
> | Int n -> print_string "It is an integer"
> | Big n -> print_string "It is a big number";;
> => It is a float
> - : unit = ()

So why would this sort of construct be any more efficient than a dynamic
dispatch, when the runtime system knows it needs to handle this?




--
Thomas A. Russ, USC/Information Sciences Institute