From: Tim Little on
On 2010-08-08, Shubee <e.shubee(a)gmail.com> wrote:
> Therefore the set of all indescribable random numbers must be
> uncountably infinite. But if the set of all nonrandom numbers is
> countable, we can use Cantor's diagonalization process, which is an
> algorithm, and find an unlisted nonrandom number. That's a
> contradiction. Did I discover a new paradox in set theory?

No, it's quite an old one. Look up "computable number".

The paradox is resolved by noting that the list of all computable
numbers cannot be generated by an algorithm (look up Turing's Halting
Problem), and therefore neither can its diagonal.
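[Editorial sketch, not in the original post: the diagonal step itself is trivially computable *once a listing is assumed*. In the Python sketch below, `listing` is a hypothetical stand-in for such a listing; the resolution above says precisely that no computable `listing` covering all computable sequences can exist.]

```python
# Sketch: IF a computable listing of digit sequences existed, the
# anti-diagonal would itself be computable. `listing` is a stand-in
# (hypothetical) -- here sequence i is the constant digit i mod 10.

def listing(i, n):
    """Stand-in: digit n of the i-th listed sequence."""
    return i % 10

def anti_diagonal(n):
    """Differs from the n-th listed sequence in its n-th digit."""
    return (listing(n, n) + 1) % 10

# The anti-diagonal disagrees with every listed sequence somewhere,
# so it cannot appear anywhere in the listing:
assert all(anti_diagonal(i) != listing(i, i) for i in range(100))
```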


- Tim
From: Mike Terry on
"Shubee" <e.shubee(a)gmail.com> wrote in message
news:78bf68f7-be2d-4d19-a724-8a67926a2990(a)v15g2000yqe.googlegroups.com...
> On Aug 8, 1:35 pm, "Mike Terry"
> <news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> > "Shubee" <e.shu...(a)gmail.com> wrote in message
> >
> >
news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...
> >
> > > I'm interested in the mathematical consequences of randomness if
> > > randomness is defined as follows. Let's represent an infinite sequence
> > > of numbers by the function f on the natural numbers N to the set
> > > {0,1,2,3,4,5,6,7,8,9}.
> >
> > > Suppose we say that the infinite sequence f is random if no algorithm
> > > exists that generates the sequence f(n). For example, the digits of pi
> > > seem random but there are many elementary formulas to choose from that
> > > represent the numerical value of pi perfectly. Thus, in theory, pi can
> > > be computed to arbitrary accuracy.
> >
> > Your definition corresponds to what most people would call "computable".
> >
> > > Question: Can it be proven that mathematically random sequences exist
> > > with this definition?
> >
> > Yes, e.g. as you do below (using a cardinality argument).
> >
> > > As John von Neumann humorously said, "Anyone who attempts to generate
> > > random numbers by deterministic means is, of course, living in a state
> > > of sin."
> >
> > > The method I have specified for defining nonrandom numbers is clearly
> > > deterministic. But how do we know that truly random numbers exist?
> >
> > > Those were my thoughts late last night.
> >
> > > When I woke up this morning it occurred to me that the set of all
> > > nonrandom numbers must be countable because the set of all algorithms
> > > that generate the nonrandom numbers can be put into a countable
> > > lexicographical order. Therefore the set of all indescribable random
> > > numbers must be uncountably infinite.
> >
> > Good thinking - so there must be non-computable sequences...
> >
> > > But if the set of all nonrandom
> > > numbers is countable, we can use Cantor's diagonalization process,
> > > which is an algorithm, and find an unlisted nonrandom number. That's a
> > > contradiction. Did I discover a new paradox in set theory?
> >
> > No. :)
> >
> > Cantor's diagonalisation process is not an algorithm in the sense most
> > people would accept, because it requires "input" consisting of an
> > infinite amount of data. (Namely, the infinite list.)
>
> I admit that I did not provide a precise definition of algorithm.
> Looking at the Wikipedia article on computable numbers, I see their
> definition: computable numbers are real numbers that can be computed
> to within any desired precision by a finite, terminating algorithm. I
> don't see anything infinite about determining the first n
> lexicographically ordered algorithms and their computable numbers,
> each to n decimal places.

You don't see anything infinite?

Consider: we can codify algorithms into numbers as you clearly realise,
because you use this to argue the algorithms are countable. E.g. the
algorithms correspond to Turing machines, and the state of a Turing machine
(with its finite input data) can be encoded as a series of numbers, and
then the whole lot merged into one single number that represents the whole
Turing machine.
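[Editorial sketch, not in the original post: the "merge the whole description into one number" step is easy to make concrete. The sketch below packs any finite description string into a single natural number and recovers it losslessly; the encoding chosen here is just one convenient illustration, not the one Mike had in mind.]

```python
# Any finite description (program text, Turing machine table) is a
# finite string of symbols, and a finite string can be packed into a
# single natural number -- here by reading its bytes as one base-256
# numeral.

def encode(description: str) -> int:
    """Pack a finite description into one natural number."""
    return int.from_bytes(description.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the description from its number."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

prog = "print(sum(range(10)))"
assert decode(encode(prog)) == prog   # the numbering is lossless
```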

But how could you use this numbering to "list" them in a computable manner?

I suspect that what you're thinking is some variation of:
1) Start with n=1
2) Decide: is n a valid Turing machine state number?
(This is certainly a computable function of n, so no problem here)
3) If so, add the computable sequence for algorithm n as the next
number in your list
4) Increment n and loop back to (2)

Or you're doing the same with a more direct "lexical ordering" for Turing
machines - it doesn't matter...

The problem is step (3), because n may correspond to a valid Turing machine
state, but may not correspond to a computable sequence - specifically it may
be a non-halting Turing machine. And there is no way to determine in a
computable manner from n (or equivalently from your lexical representation)
whether or not a given Turing machine is halting.

So if you try to compute a list using my steps (1) to (4) at some point it
will "hang" and never output the next computable number.
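[Editorial sketch, not in the original post: steps (1)-(4) can be rendered as toy code. Each "machine" below is a Python generator standing in for running a Turing machine; the names and the step budget are purely illustrative. A faithful step (3) has no budget and simply hangs on `never`; the finite budget only makes the demo terminate, and is unsound, because `slow` halts yet gets misjudged at a small budget.]

```python
# Toy machines: yield ('step', None) while working, ('done', d) when
# a digit d has been produced.

def fast():                 # halts immediately
    yield ('done', 3)

def slow():                 # halts, but only after many steps
    for _ in range(1000):
        yield ('step', None)
    yield ('done', 7)

def never():                # never halts
    while True:
        yield ('step', None)

def third_step(machine, budget):
    """Step (3) with a step budget: None means we gave up.
    Giving up does NOT tell us the machine will never halt."""
    for kind, value in machine():
        if kind == 'done':
            return value
        budget -= 1
        if budget <= 0:
            return None

assert third_step(fast, 10) == 3
assert third_step(slow, 10) is None      # wrong verdict: slow does halt
assert third_step(slow, 2000) == 7
assert third_step(never, 10000) is None  # right here, undecidable in general
```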

>
> I agree that computing pi to n decimal places is unbounded as n
> approaches infinity. But everything is finite for each finite n, even
> in diagonalization.

Sure, no problem here...

>
> > If you could come up with a finite algorithm that would generate the
> > infinite list of all computable sequences, you could combine this with
> > the anti-diagonalisation approach to obtain an algorithm which computed a
> > sequence not on the list, and this sequence would be computable. In this
> > case, you would have achieved a contradiction. However, you *cannot*
> > produce an algorithm that will generate the list of all computable
> > sequences, so mathematics survives another day - phew!
>
> I don't think that the problem is with the algorithm. I think the
> logic breaks down due to the fact that all indescribable real numbers
> may be thought of as beginning with any kind of arbitrarily long but
> finite sequence of numbers. So I guess I doubt the mathematical
> existence of randomness in the first half of my post.

The problem is more basic - you think you could list the computable numbers
through some algorithm, but that's just a mistake as explained above.

Mike.

>
> Thanks for your input.
>
> > This has been discussed at great length many times on sci.math, so I'm
> > sure if you search you will find plenty of previous posts to read about.
> > (Also search in Google etc. for "computable number" for more background.)
> >
> > Regards,
> > Mike.
>


From: Tim Little on
On 2010-08-08, Shubee <e.shubee(a)gmail.com> wrote:
> I don't see anything infinite about determining the first n
> lexicographically ordered algorithms

Part of the definition of "algorithm" is that it must terminate. You
can certainly list the first n programs or Turing machines, but many
of them do not embody algorithms because they never terminate.

There is no algorithm that, given a program or Turing machine
description, can say whether or not it terminates. In fact (at least
as of a few months ago) there are binary Turing machines with only 5
states for which nobody knows whether they terminate when started on
an empty tape.
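[Editorial sketch, not in the original post: Tim's first claim is the halting problem, and the standard self-reference argument can be written as code. `make_contrarian` below takes any claimed total halting tester and builds a program the tester must misjudge; the trivial testers shown are hypothetical stand-ins, since the whole point is that no correct tester can exist.]

```python
# Given a claimed halting tester, build a program that does the
# opposite of whatever the tester predicts about it.

def make_contrarian(halts):
    def contrarian():
        if halts(contrarian):     # tester predicts "halts"...
            while True:           # ...so loop forever
                pass
        return None               # tester predicts "loops", so halt
    return contrarian

def always_no(prog):              # a (wrong) tester: everything loops
    return False

c = make_contrarian(always_no)
assert c() is None                # c halts, contradicting the tester

# A tester that always answers "halts" is refuted too, but its
# contrarian cannot be run to completion: it loops forever. That
# asymmetry is why running candidate machines can never settle
# whether they terminate.
```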


- Tim
From: Shubee on
On Aug 8, 5:17 pm, "Mike Terry"
<news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> "Shubee" <e.shu...(a)gmail.com> wrote in message
>
> news:78bf68f7-be2d-4d19-a724-8a67926a2990(a)v15g2000yqe.googlegroups.com...
> > On Aug 8, 1:35 pm, "Mike Terry"
> > <news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> > > "Shubee" <e.shu...(a)gmail.com> wrote in message
>
> news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...

> > I admit that I did not provide a precise definition of algorithm.
> > Looking at the Wikipedia article on computable numbers, I see their
> > definition: computable numbers are real numbers that can be computed
> > to within any desired precision by a finite, terminating algorithm. I
> > don't see anything infinite about determining the first n
> > lexicographically ordered algorithms and their computable numbers,
> > each to n decimal places.
>
> You don't see anything infinite?
>
> Consider: we can codify algorithms into numbers as you clearly realise,
> because you use this to argue the algorithms are countable. E.g. the
> algorithms correspond to Turing machines, and the state of a Turing machine
> (with its finite input data) can be encoded as a series of numbers, and
> then the whole lot merged into one single number that represents the whole
> Turing machine.

I have not used Turing machines in my very elementary argument, so I'm
a little dismayed that the work of Turing, which I have never studied,
must be invoked to explain my mistake.

Don't you have a direct rebuttal to my argument?

> > > If you could come up with a finite algorithm that would generate the
> > > infinite list of all computable sequences, you could combine this with
> > > the anti-diagonalisation approach to obtain an algorithm which computed a
> > > sequence not on the list, and this sequence would be computable. In this
> > > case, you would have achieved a contradiction. However, you *cannot*
> > > produce an algorithm that will generate the list of all computable
> > > sequences, so mathematics survives another day - phew!

I still don't see why the entire infinite set of lexicographically
ordered algorithms must be used to determine the first n decimal
places of an unlisted nonrandom number.

> > I don't think that the problem is with the algorithm. I think the
> > logic breaks down due to the fact that all indescribable real numbers
> > may be thought of as beginning with any kind of arbitrarily long but
> > finite sequence of numbers. So I guess I doubt the mathematical
> > existence of randomness in the first half of my post.
>
> The problem is more basic - you think you could list the computable numbers
> through some algorithm, but that's just a mistake as explained above.

I agree that the algorithm is terribly inelegant, but I still don't
see why the first, second and third lexicographically arranged
algorithms cannot be determined.

From: Shubee on
On Aug 8, 5:28 pm, Tim Little <t...(a)little-possums.net> wrote:
> On 2010-08-08, Shubee <e.shu...(a)gmail.com> wrote:
>
> > I don't see anything infinite about determining the first n
> > lexicographically ordered algorithms
>
> Part of the definition of "algorithm" is that it must terminate.  You
> can certainly list the first n programs or Turing machines, but many
> of them do not embody algorithms because they never terminate.

Every one of my computable numbers has an associated well-behaved
algorithm. What's so disastrous about invoking Cantor's
diagonalization on n numbers, each written out to n decimal places?