From: Charles A. Crayne on
On 28 Sep 2005 08:53:25 -0700
"randyhyde(a)earthlink.net" <randyhyde(a)earthlink.net> wrote:

:There is a big difference between "a" program and "any" program.

Only in the number of essentially identical items from which one may
choose. Perhaps you were thinking of "at least one program".

-- Chuck
From: Beth on
Herbert wrote:
> If you have a closed system
> with a finite number of states (N), then it is trivial to solve the
> halting problem for such a system: simulate N steps, then either the
> system has halted or it never will halt. So this is the proof that
> you can write a perfect disassembler for a closed finite computer
> system (which means for any real existing computer system).

By definition, the whole multiverse is a closed system...as if its
definition is "everything" then there cannot be anything that's not a
member of it...if closed by definition of there being no "outside" to
the system then it is finite within those bounds (even if those bounds
are infinite)...

Leading to the conclusion that if what you say above was true of any
closed finite system then the halting problem and undecidability would
not even exist in the first place...

You haven't been reading Phil's signature, have you? :)

"If a religion is defined to be a system of ideas that contains
unprovable statements, then Godel taught us that mathematics is not
only a religion, it is the only religion that can prove itself to be
one." -- John Barrow

Godel and Schroedinger meet Donald Rumsfeld: There are not only "known
knowns" and "known unknowns" but there are also "unknowable knowns" and
"unknowable unknowns" too :)

Beth :)


From: Beth on
Herbert wrote:
> > If an external
> > agency, such as an interrupt,
>
> It is a closed system, so no external interrupts.

But we _DO_ get external interrupts...

Thus, the execution of the program depends upon factors _EXTERNAL_ to
the program's code...it's _NOT_ a closed system (or, at least, not
closed within the boundaries of the machine alone)...

There's a variation of this in concurrent programming theory that
concurrent programs (under pre-emptive scheduling) are, in fact,
inherently non-deterministic...

I was trying to point out this fundamental difference between concurrent
and sequential programming to hutch with a simple example program I posted here...which
generates an inherently non-deterministic result...

The gist of the program was that it started 7 threads which all executed
the same procedure...and that procedure increments a global variable a
fixed number of times (let's say 1 million times)..._BUT_ no precautions
are taken to avoid the "multiple update" problem...

Hence, the result actually becomes completely non-deterministic...the
scheduler interrupts pre-emptively based on a real world timer that
bears no actual relation to anything in the program...this can be
affected by generation of other interrupts like moving the mouse
(generates an interrupt, runs interrupt code that wouldn't otherwise
execute, time ticks on by and the timer interrupt - based on real time -
triggers at a different instruction...the butterfly has flapped its
wings, so beware the ensuing tornadoes!)...
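The original posting isn't reproduced here, so the following is only a
sketch of that program's shape (in Python, for illustration...the thread
count, iteration count and names are mine, not the original code):

```python
import threading

THREADS = 7            # illustrative: matches the description above
ITERATIONS = 100_000   # illustrative: "a fixed number of times"

counter = 0            # shared global, deliberately unprotected

def worker():
    global counter
    for _ in range(ITERATIONS):
        # Read-modify-write with no lock: the scheduler can pre-empt
        # between the read and the write, so another thread's update
        # can be silently overwritten (the "multiple update" problem).
        counter += 1

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # at most THREADS * ITERATIONS; possibly less, run to run
```

Run it several times...whether you get the full total or some arbitrary
smaller number depends entirely on where the pre-emptions happen to
land...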

So, we have to include all the states that the external inputs - which
include a human being, so good luck even coming close to "mapping" all
the states a human being could be in at any arbitrary moment in time
(which itself is affected by the external stimuli of their
environment...if using the program whilst climbing Mount Everest then
the lower oxygen levels and the difficulty of the climb itself might
produce slower than normal reactions when sitting at a desk in the
Northern Hemisphere...which is different again to a desk nearer the
equator where there's inadequate air conditioning, as humans tend to get
relaxed - lazy, even - and slow down in hot weather) - could be in to
affect all the states the machine could be in...

And the whole thing rests on the butterfly effect?

Sorry, for such a program, even "practical" seems increasingly
impossible...or, to put that more precisely, "indeterminable"...

Okay, so you "prove" the program doesn't halt when the mouse is
still...what if I move it one mickey left 0.1 seconds into execution?
What if I move it one mickey left at 0.2 seconds into execution? What if
I move it two mickeys? Left, right, up, down, left up? How many possible
states are we getting ourselves into here? And the movement also has to
be placed against a "time" axis...

This is even before we consider that the OS software runs first to boot
up a system before we even start running the program...as this will
affect scheduling, the state of the cache and other factors that _WILL_
affect our determination of what the program will do then we have to
consider all the states the machine could be in _BEFORE_ it even begins
executing the first instruction of the program...

For example, if that first instruction accesses memory then the state of
the cache comes into it, as to how long the program takes...which
affects where it'll be when the real world timer interrupt kicks in to
pre-empt the program to run something else (and how many other things
are running affects that too...including affecting the cache state so
that we've got a "rebound" here...cache state affects when the program
is pre-empted...pre-emption affects the state of the cache...getting
very complex fast, eh? ;)...

Have you ever noticed that?

When you boot up Windows and you've got a whole bunch of programs that
are automatically started and add an icon to your "system tray"...your
firewall, your anti-virus, your "volume control", your "graphics card
configuration program", etc....

Have you ever noticed that as Windows allocates space in the system tray
for them on a "first come, first served" basis, then the order of the
icons is quite arbitrary and "indeterminable"?

The "volume control" is a small utility so usually appears near the
clock on the right...but sometimes it doesn't...sometimes the graphics
card utility icon loads first...or it notices that your "always on"
modem is connected first and the network icon appears there (but other
times, it's slow to notice and it's all the way on the left side
instead)...

From this behaviour, what seems to be happening is that Windows simply
triggers all the programs concurrently...and the order is completely
down to all those indeterminable factors like cache state and real world
time and mouse movement...
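That conjecture about the tray can be sketched in a few lines
(Python...the icon names are just labels for illustration, not what
Windows actually runs):

```python
import threading

order = []                 # completion order of the "tray programs"
lock = threading.Lock()    # protects the shared list itself

def startup_task(name):
    # Stand-in for a tray utility initialising; real start-up work
    # varies in length with cache state, disk, other interrupts...
    with lock:
        order.append(name)

names = ["volume", "firewall", "antivirus", "network", "graphics"]
threads = [threading.Thread(target=startup_task, args=(n,)) for n in names]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(order)  # always the same five names; the order is not guaranteed
```

Same five tasks every run, same code, same machine...yet nothing in
the program pins down which one finishes first...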

This facet is the prime source of confusion when people start to seriously
program concurrent applications...the applications are started on their
own threads in the exact order listed in Windows' registry (always the
same order)...the utilities that Windows loads do not change in size or
what gets loaded between reboots...the machine runs at a set speed...the
instructions are all deterministic...the cache behaviour follows known
rules...

But, yet, the order the icons appear is completely arbitrary...and this
is a pretty minor example of the butterfly only jittering its wings in a
minor way, yet produces obvious and clear "large scale results" that you
can see directly in the order of icons in that tray...

You make no changes...what the machine has to do booting up these tray
icons does not alter from reboot to reboot...yet, every single time, the
icons appear in some arbitrary and indeterminable order...

A practical variation on the theoretical theme...chaos theory,
butterflies, tornadoes and non-determinism...deterministic systems
following known fixed rules - even simple ones - which are the ultimate
acme of predictability...can still end up generating an inherently
unpredictable and non-deterministic result...

Even if Godel and Schroedinger allowed you to perform the
impossible, you'd find it practically impossible...and I'm _NOT_ even
approaching the "it'll take 7 Ice Ages to execute" argument (which is
also practically valid for all non-trivial systems)...

But if the program operation depends on any independent system - the
real world clock, the whim of an idiot wiggling the mouse around when
they are bored waiting for the program to load, etc. - which it does
then you'll find simple "chaos theory" screws up all practicalities
completely...it's not just every state of the machine, it's every state
of the human (and as humans are affected by their environment then it's
every state of the environment...as the Earth is affected by solar
flares then it's every state of the Sun...as the Sun must be affected by
the rest of the Milky Way, then it's every state of the galaxy...as
galaxies are capable of running near each other then it's every state of
the entire universe...if the "pinched off" multiverse theory is correct
then it's even every state of every universe in the whole
multiverse...and goodness knows if there could be something bigger again
to which they are all objects within)...

And as even a millisecond - a nanosecond - of difference is equivalent
to the butterfly's wings then you have no practical hope of
categorically proving - by execution - whether a tornado is going to hit
or not...

Because this "system of systems" runs each part of it _independently_
(in parallel)...and then it's not just how many states it can get into
but all the combinations of state where the independent systems interact
(which, in turn, affects how subsequent interactions will occur...and so
on...that "echo" from the butterfly's wings flapping ringing out into
the whole universe...minuscule, impossible to precisely measure but
existing and making a "large scale effect" take place eventually)...

The example program I posted up for hutch about concurrency illustrates
it (as does the order you can see of your system tray icons every time
you reboot)...sometimes you run the program and get 7
million...sometimes you get some arbitrary number less than that...it is
simply inherently unpredictable...non-deterministic...you cannot know
ahead of time (without all those "infinite precision" measurements which
aren't possible...note that "uncertainty principle" applies to the CPU
here too: If you read the exact clock cycle count from the CPU's
time-stamp counter with "RDTSC", that instruction itself takes time to
execute...and the observation itself changes what is observed)...what
result you're going to get...
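That observation cost is easy to demonstrate without touching RDTSC
directly (a Python sketch...timing the timer itself):

```python
import time

# Time N back-to-back calls of the clock itself: the elapsed time is
# (mostly) the cost of observing the time.  Reading the clock is never
# free, just as executing RDTSC itself costs cycles.
N = 100_000
start = time.perf_counter_ns()
for _ in range(N):
    time.perf_counter_ns()
end = time.perf_counter_ns()

per_call_ns = (end - start) / N
print(per_call_ns)  # strictly positive: the observation is never free
```

However fine-grained the clock, the act of reading it displaces the
very thing being measured...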

So, even ignoring "theory" and looking only at practicality, you're
still asking the indeterminable...because if the program - and we are
talking "any" program, right? So this class of program must be
considered also - has any concurrent aspect (directly or
indirectly...even DOS could suffer it by the implicit "concurrency" of
responding to external interrupts to deal with the mouse pointer)...then
it will exhibit non-deterministic execution...

And as there are classes of error situation such as "deadlock" and
"livelock" where a program can "halt" in a specific set of circumstances
where it might not in all other circumstances then this _IS_ relevant to
the basic "halting problem"...

That's why - even if hutch can't see it - this is highly relevant to his
approach...

I don't know if you've ever fiddled around with the "priorities" of
processes while they are running...but I once wasn't thinking straight
and thought of knocking a process to "real-time" priority to try to help
speed it along...I should know better...priorities _DON'T_ work that
way...turns out the program had hung and in handing it "real-time
priority", I locked - halted - the entire system...absolutely no way out
of it but to hit the big old reset button...

You could also imagine hutch's problem without a "yield" and the polling
process is given higher priority than the process it's waiting on
(maybe the program doesn't ask for higher priority but the user can mess
around with what they shouldn't be messing with using "Task Manager"
;)...and - BANG! - the polling process never yields long enough for the
other process to ever terminate...starved, deadlocked...the program
never comes to a halt...and, most importantly, it doesn't come to a halt
through no fault of the source code itself per se...
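hutch's actual code isn't quoted in this thread, so here is only a
sketch of the safe pattern being implied...a blocking wait instead of a
priority-sensitive busy poll (Python for illustration):

```python
import threading

done = threading.Event()

def worker():
    # Stand-in for the process the poller is waiting on.
    total = sum(range(1_000_000))
    done.set()   # announce completion instead of being polled

t = threading.Thread(target=worker)
t.start()

# A high-priority loop like `while not finished: pass` can starve the
# very worker it is waiting for; a blocking wait surrenders the CPU
# until the scheduler has something to report, so fiddling with
# priorities can't wedge it in the same way.
finished = done.wait(timeout=10.0)
t.join()
print(finished)
```

The design choice is the point: the waiter consumes no CPU while
blocked, so no priority setting can let it starve the worker...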

Any "execution"-based simulation would also fall foul of these exact
same problems in a practical sense, even ignoring the theoretical sense
already discussed by Randy...

That's another result _mathematically proved_ by computer science that
I've mentioned before...which is similar in nature but of a broader
statement: It's not possible, through any "execution testing", to prove
a correct program is correct...not unless every single possibility is
independently tested one by one...

And stacking up the program code, the concurrent aspects, the different
clockspeeds of the motherboard support chips and video card and sound
card (helping to make "wait states" indeterminable, as the pattern of
the chip's interactions with each other dances a merry chaotic dance),
the cache state, the pre-emption, the "boredness" of the user wiggling
the mouse, the temperature of the room (can affect both human and
hardware's running speeds...which means that the precise state of the
CPU fan - ever checked the variability in the BIOS reported "RPM" count
for the fan? - comes into it...the subtle imperfections in the materials
and construction...we can continue this right down to quantum
fluctuations and atomic vibration, especially because, inside the CPU,
things really are that small that this type of physics can exactly
become your "butterfly wings" ;), etc....

Just like you might not be able to see the "waves of chaos" that the
butterfly's wings have seeded into the universe...not be able to track
with precision the impossibly small "echoes" rebounding through all the
independent systems that interact with each other...but that tornado is
one of the consequences of the butterfly just trying to fly away...

Just reboot Windows...look at your system tray icons...hit reset and see
what they do next time...might end up the same order, might not...you
can try to predict it...but you cannot guarantee your prediction to ever
be always correct...

Some things are simply unknowable...the human ego might not like the
sound of that from time to time, as it thinks it's "tamed"
everything...but, tough cookies, it is given no choice in the matter...

After all, we could be knocked down by a bus tomorrow (all because the
number 7 fell out as the 5th Lottery ball...the bus driver does the
Lottery every week, you see ;)...

Beth :)


From: Herbert Kleebauer on
Beth wrote:
> Herbert wrote:

> > If you have a closed system
> > with a finite number of states (N), then it is trivial to solve the
> > halting problem for such a system: simulate N steps, then either the
> > system has halted or it never will halt.

> By definition, the whole multiverse is a closed system...as if its
> definition is "everything" then there cannot be anything that's not a
> member of it...if closed by definition of there being no "outside" to

My brain is too small to understand a "multiverse". If you simulate a
target system, then the simulator has to be outside of the simulated
target. If nothing is outside of the multiverse, then there can't be
a simulator which can simulate the multiverse.


> the system then it is finite within those bounds (even if those bounds
> are infinite)...

Even a closed system can have an infinite number of states. That's why
we are speaking about DIGITAL computers and not an ANALOG world.
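The N-step argument can be written out directly (a Python sketch; the
toy transition function is only an illustration of a closed
deterministic system, not any real machine):

```python
def halts(step, start, num_states):
    """Decide halting for a deterministic closed system with a known
    finite number of states.  `step` maps a state to the next state,
    or to None when the system halts.  After num_states steps a run
    that hasn't halted must have revisited some state (pigeonhole),
    and a deterministic system that revisits a state loops forever."""
    state = start
    for _ in range(num_states):
        state = step(state)
        if state is None:
            return True
    return False

# A 6-state toy system: states 0..4 step upward and halt after 4;
# state 5 feeds back into itself and never halts.
def step(s):
    if s == 4:
        return None   # halt
    if s == 5:
        return 5      # self-loop
    return s + 1

print(halts(step, 0, 6))  # True: reaches the halt within 6 steps
print(halts(step, 5, 6))  # False: 5 -> 5 forever
```

Note that the decider runs outside the system it simulates, which is
exactly why the argument can't be turned on the whole multiverse.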
From: Herbert Kleebauer on
Beth wrote:
> Herbert wrote:
> > > If an external
> > > agency, such as an interrupt,
> >
> > It is a closed system, so no external interrupts.
>
> But we _DO_ get external interrupts...

Is this female logic? If I say:

if a>b then a-b is positive

then you answer:

but a<b !

and then use thousands of words to show that a-b is not positive.