From: T.M. Sommers on
Evenbit wrote:
> T.M. Sommers wrote:
>>wolfgang kern wrote:
>>><randyhyde(a)earthlink.net> wrote:
>>>
>>>| > Mine is infinitive.
>>>|
>>>| Whatever that means. Certainly you don't live in a universe with an
>>>| infinite number of particles.
>>>
>>>I do. Regardless of your belief.
>>>.. Show me the border of the universe ...
>>
>>It is possible for a space to be finite yet unbounded. Consider
>>the surface of a sphere, for example.
>
> That would mean that if I could travel really, really fast (much faster
> than Beth's light-speed-running man :), and you see me leaving on your
> right side, then, eventually [after the passing of 'forever' amount of
> time], you would see me approaching again on your left side. This,
> however, seems to be in disagreement with a mountain of evidence
> supporting a geometrically flat universe. I don't understand how the
> universe can be both *flat* and *finite* at the same time.

All the theory and evidence I am aware of indicates that the
universe is most definitely *not* flat.

--
Thomas M. Sommers -- tms(a)nj.net -- AB2SB


From: T.M. Sommers on
wolfgang kern wrote:
> "T.M. Sommers" wrote:
>
> | > .. Show me the border of the universe ...
> |
> | It is possible for a space to be finite yet unbounded.
> | Consider the surface of a sphere, for example.
>
> I can't see 'space' on a 'surface' without a Y-direction ;)

What are you talking about? A surface is a space. There is no
requirement that a space have exactly three dimensions.
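
To pin that down with standard facts (textbook geometry, not anything from the thread): the sphere's surface is a 2-dimensional space that is finite in extent yet has no boundary, and the same holds one dimension up for the hypersphere.

\[
  S^2 = \{\, x \in \mathbb{R}^3 : \lVert x \rVert = r \,\}, \qquad
  \operatorname{Area}(S^2) = 4\pi r^2 < \infty, \qquad
  \partial S^2 = \varnothing
\]
\[
  S^3 = \{\, x \in \mathbb{R}^4 : \lVert x \rVert = r \,\}, \qquad
  \operatorname{Vol}(S^3) = 2\pi^2 r^3 < \infty, \qquad
  \partial S^3 = \varnothing
\]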

> 'flat folks' on a sphere will find a limited environment
> by just travelling straight and reaching the starting point again.
>
> Yes, limits are found easily, but as it is only our measurements
> that are limited by our capabilities, this does not necessarily mean
> that everything must have a limit or a border line.

You are the one who wanted to see the border of the universe.

As for the rest of what you wrote, it makes no sense to me at
all. What do you mean by 'limit', or 'capability'? Are you
suggesting that the uncertainty principle is not true? It puts a
fundamental limit on measurement quite independent of 'our
capabilities', whatever they are.

--
Thomas M. Sommers -- tms(a)nj.net -- AB2SB


From: randyhyde@earthlink.net on

Beth wrote:
> Randy wrote:
> > there is not a
> > one-to-one mapping of programs to binary object files, this is why
> it's
> > fair to say that writing a perfect disassembler is an impossibility.
>
> Excellent!
>
> How many different ways can the same point be proved over and over
> before some people cotton on?
>
> For those who haven't thought what point Randy is making:
>
> If there's not a 1:1 between program and binary then more than one
> program can produce the same binary...how can a "perfect disassembler"
> use _ONLY_ the information in the binary to differentiate between the
> many possible programs that could produce the same binary?


I'll even take it one step further:
It's not possible to write a "perfect disassembler" that generates a
*semantically* equivalent source file (that is, even ignoring the issue
that the mapping of instructions to binary opcodes is not one-to-one).
Consider the following process:

1. Disassemble some object code to produce a source file. Because of
the ambiguity of instruction encoding, different opcodes may map to
exactly the same instruction.

2. Compile the result through your favorite assembler. My only
requirement here is that the result be deterministic. That is, the
assembler *does* map each instruction to a *specific* opcode.

3. Disassemble the recompiled output.

4. Compile disassembled output from step three.

5. At this point, the object code for steps 2 and 4 *should* be the
same (no guarantees, of course, since perfect disassembly is not
possible in all cases).

The real question now is this: "Is the object code produced in step 4
*semantically* equivalent to the original object code?" The answer is:
there is no way we can guarantee this. For many programs it will be.
But for some programs, the object code produced in step 4 will be
(semantically) different from the original code. Hence, the disassembly
is not perfect. We can ask the same question of the code produced in
steps 2 and 4.
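
To make the round trip concrete, here is a minimal sketch in Python.
The disassemble() and assemble() parameters are hypothetical stand-ins
for whatever disassembler and (deterministic) assembler you are testing;
only the fixed-point check itself is the point of the sketch.

def round_trip_is_stable(original_object_code, disassemble, assemble):
    """Check whether a disassemble/assemble round trip reaches a fixed point.

    disassemble: bytes -> source text  (hypothetical stand-in)
    assemble:    source text -> bytes  (hypothetical stand-in; must be
                                        deterministic, as required in step 2)
    """
    src1 = disassemble(original_object_code)   # step 1
    obj1 = assemble(src1)                      # step 2
    src2 = disassemble(obj1)                   # step 3
    obj2 = assemble(src2)                      # step 4
    return obj1 == obj2                        # step 5

Note that even when this returns True, it only shows the tool chain is
self-consistent; it says nothing about whether the reassembled output is
*semantically* equivalent to the original object code, which is exactly
the question raised above.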

"Perfect disassembly", at least to me, means that the disassembler has
properly disassembled all machine instructions as instructions and has
not incorrectly interpreted any data in the original program as code;
that the disassembler has properly disassembled all data as data, and
as not misinterpreted any code as data; and that the disassembler has
properly associated the correct data type with the disassembled data
(e.g., relocatable pointers versus plain numeric data). IOW, if I
choose to reassemble the code at the exact same address as the original
program, I get *exactly* the same program, sans minor differences due
to the ambiguity of instruction encoding. If I assemble the program to
run at a *different* address in memory, all (and I do me *all*) the
pointers and other relocatable values in the object file are properly
adjusted to reflect the new run-time location of the program. The
object code certainly won't be the same (because of the relocation of
the addresses), but semantically, the programs will be equivalent. In
particular, I should be able to insert new statements, delete existing
statements, and recompile the program and expect it to work properly
(depending, of course, on the correctness of the changes made).

Because the problem of data/code differentiation is undecidable, it
is not possible to write an automatic disassembler that would allow
this.
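
As a tiny illustration of that ambiguity (my own sketch, assuming the
Capstone Python bindings are installed; it is not anything from Randy's
toolchain): the same four bytes decode equally well as a little-endian
dword and as two valid x86 instructions, and nothing in the bytes
themselves tells a disassembler which interpretation was intended.

import struct
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

# Four bytes that are simultaneously a plausible 32-bit dword (which
# might be a relocatable pointer) and a pair of valid x86 instructions.
blob = bytes([0x00, 0x40, 0x00, 0x40])

# Interpretation 1: a plain little-endian dword / pointer value.
print("as data: 0x%08X" % struct.unpack("<I", blob)[0])

# Interpretation 2: executable code.
md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(blob, 0x00401000):
    print("as code: %s %s" % (insn.mnemonic, insn.op_str))

# Expected output (roughly):
#   as data: 0x40004000
#   as code: add byte ptr [eax], al     ; 00 40 00
#   as code: inc eax                    ; 40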

Cheers,
Randy Hyde

From: wolfgang kern on

Hi Beth,

Chuck wrote:
| > To begin with, I can easily design an 'x86-like' architecture such
| > that, for all possible programs which run on that architecture,
| > it is trivial to determine if the program halts.

| Are you sure?

| Does your x86-like CPU have any external interrupts?

If you want to take into account all the functional behaviour of a piece
of code in a certain environment (CPU type, motherboard and the OS),
then you'll need to make everything known to the analysing tool.

But as we are talking about disassembling rather than dynamic function
analysis, it is not important to know whether an indeterminate
(externally dependent) branch will take one code path or the other,
as both paths are disassembled to their end anyway.
And if (Randy's preferred example) a table-branch uses a range of
undeterminable values (which won't happen too often in the real world),
then it's easy to decide whether this range is limited to the file's
bounds or not. If it falls outside, then I'd say this table branch is
non-deterministic, and I mark the branch and its code path as a
possible program end.
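
For what it's worth, here is a minimal sketch of that "follow both
paths" idea (in Python; decode_at() is a hypothetical helper supplied
by the caller, not a real library function):

def trace(code, entry, decode_at):
    """Static recursive-descent pass over code starting at entry.

    decode_at(code, off) -> (length, kind, target), where kind is one of
    "fallthrough", "cond_branch", "jump", "indirect", "ret".
    """
    seen, worklist, possible_ends = set(), [entry], []
    while worklist:
        off = worklist.pop()
        while 0 <= off < len(code) and off not in seen:
            seen.add(off)
            length, kind, target = decode_at(code, off)
            if kind == "cond_branch":
                # Don't care whether the branch is taken at run time:
                # queue the target and keep going on the fall-through.
                worklist.append(target)
                off += length
            elif kind == "jump":
                off = target
            elif kind == "indirect":
                # Table/register branch whose targets can't be bounded
                # to the file: mark a possible program end and stop.
                possible_ends.append(off)
                break
            elif kind == "ret":
                break
            else:
                off += length
    return seen, possible_ends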

But a disassembler may assume non-defective IRQ handlers and working
exception handlers in the OS.
(My DEMO-disassembler already says 'exception#..' on invalid and
obviously faulting instructions like 'D4 00' -> 'forced div0'.)

Good that you mentioned the RTCL; it would be a good idea to create a
library for legacy I/O, and perhaps also for the associated API functions
of a few OSes, to get a function analyser.

Externally referenced parts must either be made known, or at least
ignored and reported as unknown (even if the result is then imperfect).

| If the program to be tested does not correctly employ solid
| synchronisation in a pre-emptive concurrent environment then how can you
| determine that which is logically "indeterminable" and
| "non-deterministic"?

Analysing things like this means that all concurrently running code
(which may even include the OS) must be included in the analysis.
This may be the reason it hasn't been tried already.
But we are still just talking about disassembling here, aren't we? ;)

| Instructions take time to complete...

Yes, correct calculation of timing would be the biggest part of the
story (a complex formula for every CPU and the involved hardware).
But we are still just talking about disassembling here ;)


| ... then you logically cannot
| create a system where you can guarantee, for all possible programs, that
| you can always determine its final state...

| I've openly and explicitly given you all the necessary logic to see this
| conclusion for yourself...feel free to follow it and show me where
| there's any flaw in the logic, if such does exist...

Yes, correct, but we are talking about static analysis/disassembling,
so we don't need to care whether the program halts when it runs.

| What Randy has said about the Halting Problem has a direct effect on
| "concurrent theory" ...
| And, of course, what "non-deterministic" means is exactly contradictory
| to the claim of full and accurate "pre-determination" of the program's
| final state...

A program can have several final states; it may even end with, e.g., eax=?
Dynamic analysers will have a problem with this for sure.
A fully static analyser will see it and won't try the impossible,
but it may try it if the reported bits are masked, e.g. eax=0040???C.

[external events influence]

Right, if external/global/shared references are made known to an
analyser, it might uncover more (otherwise indeterminate) branch paths.
But a disassembler usually doesn't care at all whether a conditional
branch is taken or not; it will see and follow both anyway.

__
wolfgang


From: wolfgang kern on

Alex McDonald wrote:

| > | > .. Show me the border of the universe ...
| > | It is possible for a space to be finite yet unbounded.
| > | Consider the surface of a sphere, for example.

| > I can't see 'space' on a 'surface' without a Y-direction ;)
| Z direction.
:) yes.

| > 'flat folks' on a sphere will find a limited environment
| > by just travelling straight and reaching the starting point again.

| Yes. The surface of a sphere in 3 dimensions is a finite yet unbounded
| (edgeless) 2-dimensional space. It's difficult to imagine, but a
| hyper-sphere (a 4-dimensional sphere) is a finite yet unbounded
| 3-dimensional space for beings like us.

Wouldn't this mean that we could light up our own backs by pointing
an ideally sharp laser beam straight ahead? ;)

| > Yes, limits are found easily, but as it is only our measurements
| > that are limited by our capabilities, this does not necessarily mean
| > that everything must have a limit or a border line.

| Our measurements of what exactly?

Everything,
as it all can be improved by increasing accuracy and precision.
I can't see a limit, neither in the universe nor for the smallest quanta,
and I know T.M. Sommers will protest against this :)
but I just cannot imagine a universal mother-clock which ticks at some
maximum frequency, as this would mean we could replace all
infinite math results with given limits.
__
wolfgang