From: MRAB on
Nobody wrote:
> On Sun, 31 Jan 2010 22:36:32 +0000, Steven D'Aprano wrote:
>
>>> for example, if you have a function 'f' which takes two parameters,
>>> then to call the function and get the result you use:
>>>
>>> f 2 3
>>>
>>> If you want the function itself you use:
>>>
>>> f
>> How do you call a function of no arguments?
>
> There's no such thing. All functions take one argument and return a value.
>
> As functions don't have side-effects, there is seldom much point in having
> a function with no arguments or which doesn't return a value. In cases
> where it is useful (i.e. a value must have function type), you can use the
> unit type "()" (essentially a zero-element tuple), e.g.:
>
> f () = 1
> or:
> f x = ()
>
A function with no arguments could be used as a lazy constant, generated
only on demand.
From: Chris Rebert on
On Mon, Feb 1, 2010 at 6:14 PM, MRAB <python(a)mrabarnett.plus.com> wrote:
> Nobody wrote:
>> On Sun, 31 Jan 2010 22:36:32 +0000, Steven D'Aprano wrote:
>>>> for example, if you have a function 'f' which takes two parameters,
>>>> then to call the function and get the result you use:
>>>>
>>>>  f 2 3
>>>>
>>>> If you want the function itself you use:
>>>>
>>>>   f
>>>
>>> How do you call a function of no arguments?
>>
>> There's no such thing. All functions take one argument and return a value.
>>
>> As functions don't have side-effects, there is seldom much point in having
>> a function with no arguments or which doesn't return a value. In cases
>> where it is useful (i.e. a value must have function type), you can use the
>> unit type "()" (essentially a zero-element tuple), e.g.:
>>
>>        f () = 1
>> or:
>>        f x = ()
>>
> A function with no arguments could be used as a lazy constant, generated
> only on demand.

The usefulness of that depends on the language's evaluation strategy.
Haskell, for instance, uses lazy evaluation by default, so that use
case doesn't arise there.
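
In an eagerly evaluated language like Python, the lazy constant MRAB
describes can be simulated with a zero-argument function (a thunk). A
rough sketch, with the name `expensive_constant` made up for
illustration:

```python
import functools

# A zero-argument function acting as a lazy, memoized constant:
# nothing is computed until the first call, and the result is cached.
@functools.lru_cache(maxsize=None)
def expensive_constant():
    # stand-in for an expensive computation
    return sum(range(1000))

value = expensive_constant()  # computed on first demand, cached after
```

In Haskell the memoization comes for free: a top-level binding is
already a thunk that is forced at most once.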

Cheers,
Chris
--
http://blog.rebertia.com
From: Nobody on
On Mon, 01 Feb 2010 14:35:57 -0800, Jonathan Gardner wrote:

>> If it was common-place to use Curried functions and partial application in
>> Python, you'd probably prefer "f a b c" to "f(a)(b)(c)" as well.
>
> That's just the point. It isn't common to play with curried functions
> or monads or anything like that in computer science today. Yes,
> Haskell exists, and is a great experiment in how such a language could
actually work. But at the same time, you have to have a brain the size
of the Titanic to hold all the little details of the language before
you can write any large-scale application.

No, not really. Haskell (and ML before it) is often used as an
introductory language in Comp.Sci. courses (at least in the UK).

You don't need to know the entire language before you can use any of it
(if you did, Python would be deader than a certain parrot; Python's dark
corners are *really* dark).

The lack of mutable state (or at least, the isolation of it within monads)
eliminates a lot of potential problems. How many Python novices get
tripped up by "x = y = [] ; x.append(...); # now y has changed"?
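
The gotcha is easy to reproduce; a minimal sketch:

```python
# Chained assignment binds both names to the *same* list object,
# so mutating through one name is visible through the other.
x = y = []
x.append(1)
print(y)          # y has "changed", because y is x
assert x is y

# Rebinding separate lists, by contrast, does not alias:
a = []
b = []
a.append(1)
assert b == []    # b is untouched
```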

And in spite of the category theory behind monads, Haskell's I/O system
really isn't any more complex than that of any other language, beyond
the constraint that you can only use it in "procedures" (i.e. anything
returning a value in the IO monad), not in pure functions. For the most
part, that is a net win, as it forces you to maintain a reasonable
degree of structure.

Now, if you actually want to use everything the language has to offer, you
can run into some fairly hairy error messages. But then I've found that to
be a common problem with generic programming in general. E.g. error
messages relating to the C++ STL can be quite insanely complex
(particularly when the actual error is "there are so many nested templates
that the mangled function name is longer than the linker can handle" and
it's trying to explain *where* the error is).

From: John Bokma on
Jonathan Gardner <jgardner(a)jonathangardner.net> writes:

> One of the bad things with languages like perl

FYI: the language is called Perl, the program that executes a Perl
program is called perl.

> without parentheses is that getting a function ref is not obvious. You
> need even more syntax to do so. In perl:
>
> foo(); # Call 'foo' with no args.
> $bar = foo; # Call 'foo' with no args, assign to '$bar'
> $bar = &foo; # Don't call 'foo', but assign a pointer to it to '$bar'
> # By the way, this '&' is not the bitwise-and '&'!!!!

It should be $bar = \&foo
Your example actually calls foo...

[..]

> One is simple, consistent, and easy to explain. The other one requires
> the introduction of advanced syntax and an entirely new syntax to make
> function calls with references.

The syntax follows that of referencing and dereferencing:

$bar = \@array; # bar contains now a reference to array
$bar->[ 0 ]; # first element of array referenced by bar
$bar = \%hash; # bar contains now a reference to a hash
$bar->{ key }; # value associated with key of hash ref. by bar
$bar = \&foo; # bar contains now a reference to a sub
$bar->( 45 ); # call sub ref. by bar with 45 as an argument

Consistent: yes. New syntax? No.
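
For contrast, Python needs no reference-taking syntax at all: the
mandatory parentheses distinguish the call from the function object
itself. A small sketch:

```python
# In Python the bare name *is* the function object;
# parentheses perform the call.
def foo():
    return 42

bar = foo            # no call: bar now refers to the function itself
assert bar is foo
result = bar()       # calling through the new name
```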

Also, it helps to think of

$ as a thing
@ as thingies indexed by numbers
% as thingies indexed by keys

--
John Bokma j3b

Hacking & Hiking in Mexico - http://johnbokma.com/
http://castleamber.com/ - Perl & Python Development
From: Nobody on
On Mon, 01 Feb 2010 14:13:38 -0800, Jonathan Gardner wrote:

> I judge a language's simplicity by how long it takes to explain the
> complete language. That is, what minimal set of documentation do you
> need to describe all of the language?

That's not a particularly good metric, IMHO.

A simple "core" language doesn't necessarily make a language simple to
use. You can explain the entirety of pure lambda calculus or combinators
in five minutes, but you wouldn't want to write real code in either (and
you certainly wouldn't want to read such code which was written by someone
else).

For a start, languages with a particularly simple "core" tend to delegate
too much to the library. One thing which puts a lot of people off Lisp
is the lack of infix operators; after all, (* 2 (+ 3 4)) works fine
and doesn't require any additional language syntax. As an alternative,
Tcl provides the "expr" command, which essentially embeds a sub-language
for arithmetic expressions.

A better metric is whether using N features has O(N) complexity, or O(N^2)
(where you have to understand how each feature relates to each other
feature) or even O(2^N) (where you have to understand every possible
combination of interactions).

> With a handful of statements,
> and a very short list of operators, Python beats out every language in
> the Algol family that I know of.

Not once you move beyond the 10-minute introduction, and have to start
thinking in terms of x + y is x.__add__(y) or maybe y.__radd__(x) and also
that x.__add__(y) is x.__getattribute__('__add__')(y) (but x + y *isn't*
equivalent to the latter due to __slots__), and maybe .__coerce__() gets
involved somewhere, and don't even get me started on __metaclass__ or
__init__ versus __new__ or ...
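
The `__add__`/`__radd__` dispatch can be observed directly; a minimal
sketch, with the class name `M` made up for illustration:

```python
# For x + y, Python tries type(x).__add__(x, y) first; if that
# returns NotImplemented (or doesn't exist), it tries
# type(y).__radd__(y, x).
class M:
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        if isinstance(other, int):
            return M(self.v + other)
        return NotImplemented

    def __radd__(self, other):
        # reached for int + M, since int.__add__ doesn't know about M
        if isinstance(other, int):
            return M(other + self.v)
        return NotImplemented

assert (M(1) + 2).v == 3   # dispatches to M.__add__
assert (2 + M(1)).v == 3   # falls back to M.__radd__
```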

Yes, the original concept was very nice and clean, but everything since
then has been wedged in there by sheer force with a bloody great hammer.