From: Mike Terry on
"Shubee" <e.shubee(a)gmail.com> wrote in message
news:333f6a61-21a5-43c6-8628-deec2047489b(a)z10g2000yqb.googlegroups.com...
> On Aug 8, 5:17 pm, "Mike Terry"
> <news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> > "Shubee" <e.shu...(a)gmail.com> wrote in message
> >
> >
news:78bf68f7-be2d-4d19-a724-8a67926a2990(a)v15g2000yqe.googlegroups.com...
> > > On Aug 8, 1:35 pm, "Mike Terry"
> > > <news.dead.person.sto...(a)darjeeling.plus.com> wrote:
> > > > "Shubee" <e.shu...(a)gmail.com> wrote in message
> >
> >
news:8086cabd-3a69-4ecc-bb84-064a52e5499a(a)d17g2000yqb.googlegroups.com...
>
> > > I admit that I did not provide a precise definition of algorithm.
> > > Looking at the Wikipedia article on computable numbers, I see their
> > > definition: computable numbers are real numbers that can be computed
> > > to within any desired precision by a finite, terminating algorithm. I
> > > don't see anything infinite about determining the first n
> > > lexicographically ordered algorithms and their computable numbers,
> > > each to n decimal places.
> >
> > You don't see anything infinite?
> >
> > Consider: we can codify algorithms into numbers as you clearly realise,
> > because you use this to argue the algorithms are countable. E.g. the
> > algorithms correspond to Turing machines, and the state of a Turing
> > machine (with its finite input data) can be encoded as a series of
> > numbers, and then the whole lot merged into one single number that
> > represents the whole Turing machine.
>
> I have not used Turing machines in my very elementary argument, so I'm
> a little dismayed that the work of Turing, which I have never studied,
> must be invoked to explain my mistake.

You need to know hardly anything about Turing machines - just think of them
as computer programs that accept a finite amount of input, run (doing
their sequential series of variable comparisons, loops, arithmetic
operations etc.), and stop when/if they execute the special STOP
instruction. The data the program has output at that point is the
program's output. The programs should be thought of as running on a
machine which has "as much memory as required", i.e. a program is not
going to fail because it runs out of memory.

It is conjectured that all algorithms can be represented as such programs.

So for example, if x is a computable number, there must be such a program
that takes a single number n (the "digit position") as input, runs and
eventually stops, having output the n'th digit of x.
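To make that concrete, here is a toy sketch of my own (not anything from
Turing, and sqrt(2) is just a convenient choice): a single terminating
program that, given any n, outputs the n'th decimal digit of sqrt(2),
using only exact integer arithmetic so it cannot be fooled by rounding.

```python
from math import isqrt

def nth_digit_of_sqrt2(n):
    """Return the n-th decimal digit of sqrt(2), where n = 1 is the
    first digit after the decimal point.  isqrt computes an exact
    integer square root, so the program always terminates and the
    digit is exactly correct for any n."""
    # floor(sqrt(2) * 10**n) == isqrt(2 * 10**(2*n)), computed exactly
    scaled = isqrt(2 * 10 ** (2 * n))
    return scaled % 10

# sqrt(2) = 1.41421356..., so the first eight digits after the point:
print([nth_digit_of_sqrt2(n) for n in range(1, 9)])  # [4, 1, 4, 2, 1, 3, 5, 6]
```

That one finite program covers *every* digit position, which is exactly
what the definition of a computable number demands.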

>
> Don't you have a direct rebuttal to my argument?

You say you don't see anything infinite about determining the first n
lexicographically ordered algorithms and their computable numbers, each to
n decimal places. I made an assumption about what you meant by this, but
now I'm doubting that I understood what you meant, because of what you say
below.

If you can explain what you mean by the phrase "the first n
lexicographically ordered algorithms", then I'll reply tomorrow. (It's
3:00am now, time for bed, yawn :-)

Mike.

>
> > > > If you could come up with a finite algorithm that would generate
> > > > the infinite list of all computable sequences, you could combine
> > > > this with the anti-diagonalisation approach to obtain an algorithm
> > > > which computed a sequence not on the list, and this sequence would
> > > > be computable. In this case, you would have achieved a
> > > > contradiction. However, you *cannot* produce an algorithm that
> > > > will generate the list of all computable sequences, so mathematics
> > > > survives another day - phew!
>
> I still don't see why the entire infinite set of lexicographically
> ordered algorithms must be used to determine the first n decimal
> places of an unlisted nonrandom number.
>
> > > I don't think that the problem is with the algorithm. I think the
> > > logic breaks down due to the fact that all indescribable real numbers
> > > may be thought of as beginning with any kind of arbitrarily long but
> > > finite sequence of numbers. So I guess I doubt the mathematical
> > > existence of randomness in the first half of my post.
> >
> > The problem is more basic - you think you could list the computable
> > numbers through some algorithm, but that's just a mistake as explained
> > above.
>
> I agree that the algorithm is terribly inelegant but still don't see
> why the first, second and third lexicographically arranged algorithms
> can not be determined.
>


From: Tim Little on
On 2010-08-09, Shubee <e.shubee(a)gmail.com> wrote:
> On Aug 8, 5:28 pm, Tim Little <t...(a)little-possums.net> wrote:
>> Part of the definition of "algorithm" is that it must terminate. You
>> can certainly list the first n programs or Turing machines, but many
>> of them do not embody algorithms because they never terminate.
>
> Every one of my computable numbers has an associated well-behaved
> algorithm.

Indeed so, but that's not enough. For the number to be computable,
there must exist a *single* finite algorithm that will accept a value
'n', and produce an approximation correct to n digits no matter how
large n is.

You have an infinite collection of algorithms. You can choose any
finite number of them to be combined into a single finite algorithm,
but that finite algorithm will only give you finite accuracy of
approximation.
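A toy sketch of that point (my own illustration, with three made-up
"computable numbers"): diagonalising against a finite list of k
digit-algorithms is itself a perfectly good finite algorithm, but it only
determines k digits - it never pins down a whole infinite expansion.

```python
def diagonal_prefix(digit_funcs):
    """Given a *finite* list of digit-producing algorithms, build the
    first len(digit_funcs) digits of an anti-diagonal number: digit k
    is chosen to differ from the k-th digit of the k-th number.  A
    finite algorithm, but it yields only finitely many digits."""
    return [(f(k) + 1) % 10 for k, f in enumerate(digit_funcs, start=1)]

# three toy computable numbers: 0.111..., 0.123123..., and 1/7
funcs = [
    lambda n: 1,                      # 0.111...
    lambda n: [1, 2, 3][(n - 1) % 3], # 0.123123...
    lambda n: (10 ** n // 7) % 10,    # 1/7 = 0.142857...
]
prefix = diagonal_prefix(funcs)
print(prefix)  # [2, 3, 3] - differs from each listed number, but only 3 digits
```

To get a full computable anti-diagonal number you would need a single
algorithm that, for *any* n, can produce the n-th listed algorithm - and
that is precisely what does not exist.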


> What's so disastrous about invoking Cantor's diagonalization on n
> numbers, each written out to n decimal places?

Nothing disastrous, it is simply that Cantor's diagonalization is not
a finite algorithm accepting a natural number as input, and so is
irrelevant to the definition of a computable number.


- Tim
From: Chip Eastham on
On Aug 8, 12:10 pm, Shubee <e.shu...(a)gmail.com> wrote:
> I'm interested in the mathematical consequences of randomness if
> randomness is defined as follows. Let's represent an infinite sequence
> of numbers by the function f on the natural numbers N to the set
> {0,1,2,3,4,5,6,7,8,9}.
>
> Suppose we say that the infinite sequence f is random if no algorithm
> exists that generates the sequence f(n). For example, the digits of pi
> seem random but there are many elementary formulas to choose from that
> represent the numerical value of pi perfectly. Thus, in theory, pi can
> be computed to arbitrary accuracy.
>
> Question: Can it be proven that mathematically random sequences exist
> with this definition?
>
> As John von Neumann humorously said, "Anyone who attempts to generate
> random numbers by deterministic means is, of course, living in a state
> of sin."
>
> The method I have specified for defining nonrandom numbers is clearly
> deterministic. But how do we know that truly random numbers exist?
>
> Those were my thoughts late last night.
>
> When I woke up this morning it occurred to me that the set of all
> nonrandom numbers must be countable because the set of all algorithms
> that generate the nonrandom numbers can be put into a countable
> lexicographical order. Therefore the set of all indescribable random
> numbers must be uncountably infinite. But if the set of all nonrandom
> numbers is countable, we can use Cantor's diagonalization process,
> which is an algorithm, and find an unlisted nonrandom number. That's a
> contradiction. Did I discover a new paradox in set theory?

You might be interested in this blog essay:

[Numbers that cannot be computed -- Igor Ostrovsky]
http://igoro.com/archive/numbers-that-cannot-be-computed/

One of the comments there points out the work of
Gregory Chaitin. See the Wikipedia article for
more information:

[Chaitin's constant -- Wikipedia]
http://en.wikipedia.org/wiki/Chaitin's_constant

Essentially the problem is how to define what makes
a number incompressible, in the sense that a program
to produce the number cannot be much shorter than
the number itself. Of course when we are talking
about infinitely long numbers this seems at first
to be something of a triviality. There are the
computable numbers, which can be described by finite
programs, and everything else (of which we know the
existence by cardinality, given countably many
programs and uncountably many reals). But the
discussion can be carried to much greater degrees
of complexity.
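For what it's worth, the "countably many programs" half of that counting
argument is easy to make concrete: program texts are finite strings over a
finite alphabet, and those can be enumerated in length-then-lexicographic
(shortlex) order. A sketch with a toy two-letter alphabet:

```python
from itertools import count, islice, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet` in shortlex order
    (shorter strings first, lexicographic within each length).
    Every finite program text appears at some finite position,
    which is what makes the set of programs countable."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

print(list(islice(all_strings("ab"), 8)))
# ['a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab']
```

The catch, as noted elsewhere in the thread, is that no algorithm can
decide which of these strings encode *terminating* programs - that is the
halting problem, and it is why this enumeration cannot be turned into an
algorithmic list of the computable numbers.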

regards, chip
From: Shubee on
On Aug 9, 10:41 pm, Tim Little <t...(a)little-possums.net> wrote:
> On 2010-08-09, Shubee <e.shu...(a)gmail.com> wrote:
>
> > On Aug 8, 5:28 pm, Tim Little <t...(a)little-possums.net> wrote:
> >> Part of the definition of "algorithm" is that it must terminate. You
> >> can certainly list the first n programs or Turing machines, but many
> >> of them do not embody algorithms because they never terminate.
>
> > Every one of my computable numbers has an associated well-behaved
> > algorithm.
>
> Indeed so, but that's not enough. For the number to be computable,
> there must exist a *single* finite algorithm that will accept a value
> 'n', and produce an approximation correct to n digits no matter how
> large n is.

I wonder then if a computable number is the same as my definition of a
nonrandom sequence. I don't see that my definition requires producing
an approximation correct to n digits in a finite number of steps, only
that the nonrandom number can be defined by a rule expressed with a
finite number of characters.

> You have an infinite collection of algorithms. You can choose any
> finite number of them to be combined into a single finite algorithm,
> but that finite algorithm will only give you finite accuracy of
> approximation.

Yes, I have an infinite collection of algorithms but they are all
numbered and defined by a single statement containing only a finite
number of characters.

> > What's so disastrous about invoking Cantor's diagonalization on n
> > numbers, each written out to n decimal places?
>
> Nothing disastrous, it is simply that Cantor's diagonalization is not
> a finite algorithm accepting a natural number as input, and so is
> irrelevant to the definition of a computable number.

I suppose then that my definition of a finite algorithm is different
from the definition canonized by Turing.

Sure, set theory can be so thoroughly sanitized that probably no
paradoxes can develop but where's the fun and excitement in doing
that?

From: Jacko on
> Sure, set theory can be so thoroughly sanitized that probably no
> paradoxes can develop but where's the fun and excitement in doing
> that?

It's not about the fun, it's about the lack of fraud.

Consider a non-Bernoulli pseudo-random reversible source (not a 50:50
source).

Then, given an odd number of these sources, majority selection of the
head/tail state can be used to create a more biased source: (h:t)^n ->
central limit as n moves to infinity -> 2h < t is possible.
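The "majority selection makes a more biased source" step can be checked
exactly for a small case. This is my own sketch (n = 3 voters, with an
assumed heads-probability p; none of the names come from the post):

```python
from itertools import product

def majority_bias(p, n=3):
    """Exact probability that the majority of n independent coins
    (each heads with probability p) shows heads, by summing over
    all 2**n outcomes.  For p > 0.5 the majority vote is more
    biased toward heads than any single coin."""
    total = 0.0
    for flips in product([True, False], repeat=n):
        prob = 1.0
        for f in flips:
            prob *= p if f else (1 - p)
        if sum(flips) > n // 2:  # heads wins the vote
            total += prob
    return total

print(majority_bias(0.6))  # 0.648 > 0.6: the bias is amplified
```

Iterating the construction (feeding majority outputs back in as sources)
pushes the bias further toward 0 or 1, which is the central-limit-flavoured
amplification the post gestures at.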

It takes 2 bits to turn a pseudo-random Bernoulli source into a
non-Bernoulli reversible source.

There exists a coding which can use the generated sequence with 2h < t
to store information choices, by treating the occasional h as an error
in need of correction (ignored by stepping over it). This implies a
slower-rate always-t stream, which can be written upon by turning a t
into an h.

Cheers Jacko