From: tchow on
In article <4c02f044$0$14123$703f8584(a)textnews.kpn.nl>,
Reinier Post <rp(a)raampje.lan> wrote:
>Tim Chow wrote:
>>Technically you're right, but only because modern mathematics and computer
>>science has been so thoroughly conquered by this point of view.
>
>Really? I doubt it. I think it's just sloppy wording. Long division is
>an algorithm. Nobody in his right mind would think of long division as a
>procedure for a computer to divide two natural numbers that approximates
>the ideal method which would not be subject to a computer's memory
>limitations - it *is* that method.
>
>To say that an algorithm is something a computer does, including memory
>limitations, is thoroughly stupid. Dijkstra (by his own assertion
>our country's first professional computer programmer) said: computer
>science is no more about computers than astronomy is about telescopes.
>What he meant was: it is about comput*ing*. Dijkstra was right.

Dijkstra was right, but what exactly is the long division algorithm?
Pre-theoretically, I think it's not at all clear whether an "algorithm"
is something that applies only to objects of *feasible* size or to
objects of *infeasibly large* size as well. The distinction just never
comes up. We only ever perform the algorithm on numbers of feasible
size. Once you explicitly raise the question of whether extremely large
numbers that can never be realized in practice even exist, you start
getting different answers from different people. It becomes a philosophical
question.
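
To fix what we're talking about: here is the long division I have in mind,
written out as a little program (a sketch of my own, nothing canonical; for
simplicity the divisor is an ordinary small integer, since a multi-digit
divisor only adds the familiar trial-quotient step).  Its text mentions no
bound on the number of digits, but that by itself doesn't settle whether we
*mean* it to apply to numbers too large ever to be written down.

  def long_division(dividend_digits, divisor):
      # Schoolbook division of a decimal digit string by a (small)
      # positive integer: bring down a digit, divide, keep the
      # remainder.  Nothing here mentions how long the input is.
      quotient, remainder = [], 0
      for d in dividend_digits:
          remainder = remainder * 10 + int(d)        # bring down a digit
          quotient.append(str(remainder // divisor))
          remainder %= divisor
      return ''.join(quotient).lstrip('0') or '0', remainder

  print(long_division('123456', 7))   # ('17636', 4)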

>>I was
>>trying to speak from the point of view of someone who is tempted by strict
>>finitism, and who might think of an "algorithm" as "something that runs on
>>an actual computer."
>
>That's not just strict finitism (Euclidean geometry is finitistic
>if you ask me): it's painting yourself into a corner.

O.K., let's use the term "painting yourself into a corner" instead of
"strict finitism." What are the advantages of using that terminology?

>>How do you explain to such a person why we insist on
>>defining the word "algorithm" in an intrinsically unbounded manner when we
>>only ever run them on bounded machines?
>
>We call this 'the difference between a procedure and its implementation'.
>If you can't accept this difference, then be consistent and define
>a triangle as 'a figure of three approximately straight lines'
>and the number seven as 'a quantity roughly equal to three plus four,
>disregarding taxes'.

O.K., I'll accept your definitions of a triangle and of seven. Where do
we go from there?
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences
From: Reinier Post on
Tim Chow wrote:

>In article <4c02f044$0$14123$703f8584(a)textnews.kpn.nl>,
>Reinier Post <rp(a)raampje.lan> wrote:
>>Tim Chow wrote:
>>>Technically you're right, but only because modern mathematics and computer
>>>science has been so thoroughly conquered by this point of view.
>>
>>Really? I doubt it. I think it's just sloppy wording. Long division is
>>an algorithm. Nobody in his right mind would think of long division as a
>>procedure for a computer to divide two natural numbers that approximates
>>the ideal method which would not be subject to a computer's memory
>>limitations - it *is* that method.
>>
>>To say that an algorithm is something a computer does, including memory
>>limitations, is thoroughly stupid. Dijkstra (by his own assertion
>>our country's first professional computer programmer) said: computer
>>science is no more about computers than astronomy is about telescopes.
>>What he meant was: it is about comput*ing*. Dijkstra was right.
>
>Dijkstra was right, but what exactly is the long division algorithm?
>Pre-theoretically, I think it's not at all clear whether an "algorithm"
>is something that applies only to objects of *feasible* size or to
>objects of *infeasibly large* size as well. The distinction just never
>comes up.

I think it does.  I remember a discussion I had with a boy of roughly
my own age (about 8) about which was larger, uncountable or infinite.
I think many of us have had discussions of this kind, where it dawns on
us that there are infinitely many natural numbers, and that operations
such as addition and multiplication apply equally to all of them.
This happened well before I was taught long division, so by the time that
came along, I was already well aware that division applies equally
to all numbers.  You're right, the issue of 'infeasibly large'
never came up, but it had no reason to.  It only complicates matters.

> We only ever perform the algorithm on numbers of feasible
>size. Once you explicitly raise the question of whether extremely large
>numbers that can never be realized in practice even exist, you start
>getting different answers from different people. It becomes a philosophical
>question.

I must concede that there is a question to be resolved here.
But not necessarily philosophical in nature.

I think it largely depends on whether those people have been taught
the paper-and-pencil algorithms for decimal multiplication and division.
True, it's quite possible that many pupils, having applied those algorithms
to numbers of 3, 4 or even 5 digits, never realize that they apply just as
well to numbers of 17, 170, or 10^17 digits.  My guess would be that
most of us do realize it.  In any case, this can be treated as an empirical
question, one in psychology or educational science (if there is such a thing).
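
For what it's worth, the schoolbook multiplication can be written down in a
way that is completely blind to size.  Here is a rough sketch of my own
(illustrative only, in Python); nothing in it depends on whether the factors
have 4 digits or 170, though whether it *finishes* in reasonable time on
10^17 digits is of course a separate question.

  def schoolbook_multiply(a_digits, b_digits):
      # Grade-school multiplication of two decimal digit strings:
      # multiply by one digit of b at a time, shift, and add the
      # partial products, exactly as done on paper.
      a = [int(d) for d in reversed(a_digits)]   # least significant first
      b = [int(d) for d in reversed(b_digits)]
      result = [0] * (len(a) + len(b))
      for i, bd in enumerate(b):
          carry = 0
          for j, ad in enumerate(a):
              total = result[i + j] + ad * bd + carry
              result[i + j] = total % 10
              carry = total // 10
          result[i + len(a)] += carry
      return ''.join(str(d) for d in reversed(result)).lstrip('0') or '0'

  print(schoolbook_multiply('1234', '5678'))   # 7006652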

>>>I was
>>>trying to speak from the point of view of someone who is tempted by strict
>>>finitism, and who might think of an "algorithm" as "something that runs on
>>>an actual computer."
>>
>>That's not just strict finitism (Euclidean geometry is finitistic
>>if you ask me): it's painting yourself into a corner.
>
>O.K., let's use the term "painting yourself into a corner" instead of
>"strict finitism." What are the advantages of using that terminology?

I only used the terminology to suggest that defining an algorithm as
something that can be executed using a fixed amount of resources only
complicates the concept.  The idea of being able to repeat a step,
or sequence of steps, *arbitrarily often*, or to work on objects of
*arbitrary size*, is fundamental to the notion of an algorithm;
and even if you don't agree it's fundamental, at least it's less complex
(descriptions of algorithms tend to have lower Kolmogorov complexity if
you can leave out the handling of overflow conditions).

For instance, long division is very easy to explain, while division
as executed by a computer isn't.
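
As a toy illustration (my own, and not any real machine's division routine):
the moment every intermediate value is required to fit in a fixed-size word,
the same kind of procedure grows preconditions and an error path that have
nothing to do with dividing.

  WORD_MAX = 2**32 - 1   # pretend these are the only registers we have

  def bounded_division(dividend, divisor):
      # The 'same' division, but obliged to stay within fixed-size
      # words; much of the text is now about the bound, not about
      # dividing.  Standard restoring division, one bit at a time,
      # most significant bit first.
      if not (0 < divisor <= WORD_MAX):
          raise OverflowError("divisor does not fit in a word")
      if not (0 <= dividend <= WORD_MAX):
          raise OverflowError("dividend does not fit in a word")
      quotient, remainder = 0, 0
      for bit in range(WORD_MAX.bit_length() - 1, -1, -1):
          remainder = (remainder << 1) | ((dividend >> bit) & 1)
          quotient <<= 1
          if remainder >= divisor:
              remainder -= divisor
              quotient |= 1
      return quotient, remainder

  print(bounded_division(123456, 7))   # (17636, 4)
  # bounded_division(10**40, 7) raises OverflowError, although the paper
  # algorithm handles a 41-digit dividend without comment.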

>>>How do you explain to such a person why we insist on
>>>defining the word "algorithm" in an intrinsically unbounded manner when we
>>>only ever run them on bounded machines?
>>
>>We call this 'the difference between a procedure and its implementation'.
>>If you can't accept this difference, then be consistent and define
>>a triangle as 'a figure of three approximately straight lines'
>>and the number seven as 'a quantity roughly equal to three plus four,
>>disregarding taxes'.
>
>O.K., I'll accept your definitions of a triangle and of seven. Where do
>we go from there?

We have just lost the ability to do mathematics,
or more broadly, to do abstraction and modeling in general.
My vote is for returning to the abstract but conceptually simple notions of
triangle, number, and algorithm.

>--
>Tim Chow tchow-at-alum-dot-mit-dot-edu

--
Reinier Post
http://www.win.tue.nl/~rp/