From: Shubee on
I'm interested in the mathematical consequences of randomness if
randomness is defined as follows. Let's represent an infinite sequence
of numbers by the function f from the natural numbers N to the set
{0,1,2,3,4,5,6,7,8,9}.

Suppose we say that the infinite sequence f is random if no algorithm
exists that generates the sequence f(n). For example, the digits of pi
seem random but there are many elementary formulas to choose from that
represent the numerical value of pi perfectly. Thus, in theory, pi can
be computed to arbitrary accuracy.
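One such elementary recipe is Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239). A minimal Python sketch (illustrative only, not part of the original post) that computes pi to n decimal places using scaled integer arithmetic:

```python
def pi_digits(n):
    # Compute pi to n decimal places via Machin's formula:
    #   pi = 16*arctan(1/5) - 4*arctan(1/239)
    # All arithmetic is on integers scaled by 10**(n+10);
    # the ten guard digits absorb truncation error.
    scale = 10 ** (n + 10)

    def arctan_inv(x):
        # arctan(1/x) * scale via the alternating Taylor series
        #   1/x - 1/(3*x**3) + 1/(5*x**5) - ...
        total, term, k, sign = 0, scale // x, 1, 1
        while term:
            total += sign * (term // k)
            term //= x * x
            k += 2
            sign = -sign
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    digits = str(pi // 10 ** 10)       # drop the guard digits
    return digits[0] + "." + digits[1:]

print(pi_digits(10))   # 3.1415926535
```

The loop terminates because integer division by x*x eventually drives the term to zero, so the whole computation is finite for any fixed n.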

Question: Can it be proven that mathematically random sequences exist
with this definition?

As John von Neumann humorously said, "Anyone who attempts to generate
random numbers by deterministic means is, of course, living in a state
of sin."

The method I have specified for defining nonrandom numbers is clearly
deterministic. But how do we know that truly random numbers exist?

Those were my thoughts late last night.

When I woke up this morning it occurred to me that the set of all
nonrandom numbers must be countable because the set of all algorithms
that generate the nonrandom numbers can be put into a countable
lexicographical order. Therefore the set of all indescribable random
numbers must be uncountably infinite. But if the set of all nonrandom
numbers is countable, we can use Cantor's diagonalization process,
which is an algorithm, and find an unlisted nonrandom number. That's a
contradiction. Did I discover a new paradox in set theory?
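The diagonal step described above is easy to mechanize on any finite prefix of a list; a Python sketch (illustrative only, not a resolution of the question) of producing a digit string that differs from each listed row:

```python
def anti_diagonal(rows):
    # Given n digit strings, each at least n characters long (row i is a
    # prefix of the i-th listed sequence), return an n-digit string whose
    # i-th digit differs from the i-th digit of row i, so the result
    # cannot equal any row on the list.
    return "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(rows))

rows = ["3141", "2718", "1618", "1414"]
print(anti_diagonal(rows))   # "4825" -- differs from row i at position i
```

Whether this finite step extends to an *algorithm* over the infinite list is exactly the point at issue in the replies below.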
From: Mike Terry on
"Shubee" <e.shubee(a)gmail.com> wrote in message
news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...
> I'm interested in the mathematical consequences of randomness if
> randomness is defined as follows. Let's represent an infinite sequence
> of numbers by the function f from the natural numbers N to the set
> {0,1,2,3,4,5,6,7,8,9}.
>
> Suppose we say that the infinite sequence f is random if no algorithm
> exists that generates the sequence f(n). For example, the digits of pi
> seem random but there are many elementary formulas to choose from that
> represent the numerical value of pi perfectly. Thus, in theory, pi can
> be computed to arbitrary accuracy.

Your definition of a nonrandom sequence corresponds to what most people
would call "computable".

>
> Question: Can it be proven that mathematically random sequences exist
> with this definition?

Yes, e.g. as you do below (using a cardinality argument).

>
> As John von Neumann humorously said, "Anyone who attempts to generate
> random numbers by deterministic means is, of course, living in a state
> of sin."
>
> The method I have specified for defining nonrandom numbers is clearly
> deterministic. But how do we know that truly random numbers exist?
>
> Those were my thoughts late last night.
>
> When I woke up this morning it occurred to me that the set of all
> nonrandom numbers must be countable because the set of all algorithms
> that generate the nonrandom numbers can be put into a countable
> lexicographical order. Therefore the set of all indescribable random
> numbers must be uncountably infinite.

Good thinking - so there must be non-computable sequences...

> But if the set of all nonrandom
> numbers is countable, we can use Cantor's diagonalization process,
> which is an algorithm, and find an unlisted nonrandom number. That's a
> contradiction. Did I discover a new paradox in set theory?

No. :)

Cantor's diagonalisation process is not an algorithm in the sense most
people would accept, because it requires "input" consisting of an infinite
amount of data. (Namely, the infinite list.)

If you could come up with a finite algorithm that would generate the
infinite list of all computable sequences, you could combine this with the
anti-diagonalisation approach to obtain an algorithm which computed a
sequence not on the list, and this sequence would be computable. In this
case, you would have achieved a contradiction. However, you *cannot*
produce an algorithm that will generate the list of all computable
sequences, so mathematics survives another day - phew!
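Put in code, the anti-diagonalisation itself is trivially programmable; the impossible ingredient is the enumerator it would need. Here enumerate_digit(i, j) is a HYPOTHETICAL total program returning digit j of the i-th computable sequence (the thread's point is that no such program exists):

```python
def anti_diagonalize(enumerate_digit):
    # enumerate_digit(i, j): HYPOTHETICAL total program giving digit j of
    # the i-th computable sequence.  If it existed, g below would itself
    # be an algorithm computing a sequence differing from sequence n at
    # position n -- a computable sequence missing from the supposedly
    # complete list.  That contradiction is why no such enumerator exists.
    def g(n):
        return (enumerate_digit(n, n) + 1) % 10
    return g
```

Note that g is finitely described; the "infinite input" Mike mentions is hidden entirely inside enumerate_digit.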

This has been discussed at great length many times on sci.math, so I'm sure
if you search you will find plenty of previous posts to read. (Also
search in Google etc. for "computable number" for more background.)

Regards,
Mike.



From: Jacko on
On 8 Aug, 19:35, "Mike Terry"
<news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> "Shubee" <e.shu...(a)gmail.com> wrote in message
>
> news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...
>
> > I'm interested in the mathematical consequences of randomness if
> > randomness is defined as follows. Let's represent an infinite sequence
> > of numbers by the function f from the natural numbers N to the set
> > {0,1,2,3,4,5,6,7,8,9}.
>
> > Suppose we say that the infinite sequence f is random if no algorithm
> > exists that generates the sequence f(n). For example, the digits of pi
> > seem random but there are many elementary formulas to choose from that
> > represent the numerical value of pi perfectly. Thus, in theory, pi can
> > be computed to arbitrary accuracy.
>
> Your definition corresponds to what most people would call "computable".
>
>
>
> > Question: Can it be proven that mathematically random sequences exist
> > with this definition?
>
> Yes, e.g. as you do below (using a cardinality argument).
>
>
>
> > As John von Neumann humorously said, "Anyone who attempts to generate
> > random numbers by deterministic means is, of course, living in a state
> > of sin."
>
> > The method I have specified for defining nonrandom numbers is clearly
> > deterministic. But how do we know that truly random numbers exist?
>
> > Those were my thoughts late last night.
>
> > When I woke up this morning it occurred to me that the set of all
> > nonrandom numbers must be countable because the set of all algorithms
> > that generate the nonrandom numbers can be put into a countable
> > lexicographical order. Therefore the set of all indescribable random
> > numbers must be uncountably infinite.
>
> Good thinking - so there must be non-computable sequences...
>
> > But if the set of all nonrandom
> > numbers is countable, we can use Cantor's diagonalization process,
> > which is an algorithm, and find an unlisted nonrandom number. That's a
> > contradiction. Did I discover a new paradox in set theory?
>
> No.  :)
>
> Cantor's diagonalisation process is not an algorithm in the sense most
> people would accept, because it requires "input" consisting of an infinite
> amount of data.  (Namely, the infinite list.)
>
> If you could come up with a finite algorithm that would generate the
> infinite list of all computable sequences, you could combine this with the
> anti-diagonalisation approach to obtain an algorithm which computed a
> sequence not on the list, and this sequence would be computable.  In this
> case, you would have achieved a contradiction.  However, you *cannot*
> produce an algorithm that will generate the list of all computable
> sequences, so mathematics survives another day - phew!
>
> This has been discussed at great length many times on sci.math, so I'm sure
> if you search you will find plenty of previous posts to read about.  (Also
> search in Google etc. for "computable number" for more background.)
>
> Regards,
> Mike.

And the reals are countable due to an isomorphism between "/" and "."
as just being pen scribbles: the right-to-left digit order of the
fractional part becomes left-to-right digit order in the denominator
slot, leaving the numerator slot as the integer part of the real.

Fun, innit? An unordered table, diagonalized, generates an
indeterminate number of numbers due to be inserted somewhere, ruining
the countability. So a messy table is uncountable; so what? The reals
aren't.
From: Shubee on
On Aug 8, 1:35 pm, "Mike Terry"
<news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> "Shubee" <e.shu...(a)gmail.com> wrote in message
>
> news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...
>
> > I'm interested in the mathematical consequences of randomness if
> > randomness is defined as follows. Let's represent an infinite sequence
> > of numbers by the function f from the natural numbers N to the set
> > {0,1,2,3,4,5,6,7,8,9}.
>
> > Suppose we say that the infinite sequence f is random if no algorithm
> > exists that generates the sequence f(n). For example, the digits of pi
> > seem random but there are many elementary formulas to choose from that
> > represent the numerical value of pi perfectly. Thus, in theory, pi can
> > be computed to arbitrary accuracy.
>
> Your definition corresponds to what most people would call "computable".
>
> > Question: Can it be proven that mathematically random sequences exist
> > with this definition?
>
> Yes, e.g. as you do below (using a cardinality argument).
>
> > As John von Neumann humorously said, "Anyone who attempts to generate
> > random numbers by deterministic means is, of course, living in a state
> > of sin."
>
> > The method I have specified for defining nonrandom numbers is clearly
> > deterministic. But how do we know that truly random numbers exist?
>
> > Those were my thoughts late last night.
>
> > When I woke up this morning it occurred to me that the set of all
> > nonrandom numbers must be countable because the set of all algorithms
> > that generate the nonrandom numbers can be put into a countable
> > lexicographical order. Therefore the set of all indescribable random
> > numbers must be uncountably infinite.
>
> Good thinking - so there must be non-computable sequences...
>
> > But if the set of all nonrandom
> > numbers is countable, we can use Cantor's diagonalization process,
> > which is an algorithm, and find an unlisted nonrandom number. That's a
> > contradiction. Did I discover a new paradox in set theory?
>
> No. :)
>
> Cantor's diagonalisation process is not an algorithm in the sense most
> people would accept, because it requires "input" consisting of an infinite
> amount of data. (Namely, the infinite list.)

I admit that I did not provide a precise definition of algorithm.
Looking at the Wikipedia article on computable numbers, I see their
definition: computable numbers are real numbers that can be computed
to within any desired precision by a finite, terminating algorithm. I
don't see anything infinite about determining the first n
lexicographically ordered algorithms and their computable numbers,
each to n decimal places.

I agree that the work of computing pi to n decimal places grows
without bound as n approaches infinity. But everything is finite for
each finite n, even in diagonalization.
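The finite half of this is indeed unproblematic: candidate program texts of bounded length can be enumerated lexicographically by a terminating procedure. A sketch (illustrative; the alphabet and length bound are arbitrary choices, not anything from the thread):

```python
from itertools import product

def programs_up_to(n, alphabet="01"):
    # Enumerate every string of length 1..n over the alphabet, in
    # lexicographic order: a finite, purely mechanical step.
    out = []
    for length in range(1, n + 1):
        out.extend("".join(p) for p in product(alphabet, repeat=length))
    return out

print(len(programs_up_to(3)))   # 2 + 4 + 8 = 14
```

What no terminating procedure can do is decide which of these texts denote algorithms that actually output a digit for every n; that is where the halting problem enters, and why enumerating texts never yields a clean list of all computable sequences.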

> If you could come up with a finite algorithm that would generate the
> infinite list of all computable sequences, you could combine this with the
> anti-diagonalisation approach to obtain an algorithm which computed a
> sequence not on the list, and this sequence would be computable. In this
> case, you would have achieved a contradiction. However, you *cannot*
> produce an algorithm that will generate the list of all computable
> sequences, so mathematics survives another day - phew!

I don't think the problem is with the algorithm. I think the logic
breaks down because every indescribable real number may be thought of
as beginning with an arbitrarily long but finite sequence of digits.
So I guess I doubt the mathematical existence of randomness in the
first half of my post.

Thanks for your input.

> This has been discussed at great length many times on sci.math, so I'm sure
> if you search you will find plenty of previous posts to read about. (Also
> search in Google etc. for "computable number" for more background.)
>
> Regards,
> Mike.

From: master1729 on
all numbers computable with finite means have finite cardinality.

( e.g. binary polynomial of finite degree ( card clearly finite ) )

adding one with Cantor's diagonal still makes finite, well, finite.

the set of numbers computable with infinite means only has the cardinality of the reals, hence uncountable.

( e.g. binary taylor series : infinite analogue of binary polynomial ( card = 2^n = r ) )

regards

tommy1729