From: Paul Wallich on
Robert Myers wrote:
> On Jan 23, 4:29 pm, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:
>> Robert Myers wrote:
>>> On Jan 22, 10:07 am, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:
>>>> As long as the clocks don't move, you don't have problems ;-).
>>> How does a clock tell time without moving something?
>> No, it's not about parts inside the clock moving, it's about the clocks
>> themselves moving relative to each other. If they don't, you can
>> establish perfect synchronization (even within a static gravitational
>> field; all that requires is slowing down or speeding up clocks at
>> different distances from the gravitational center).
>>
> This is not the place to be debating this. Whether it is a prediction
> of special relativity or not, any realizable clock has the same
> problem in its own internal workings as do clocks separated by large
> distances. You have to make assumptions about what's happening far
> away, even if far away is only a few repeaters away. You can't ever
> know what time it is without making ancillary and unprovable
> metaphysical assumptions. Sometimes those assumptions will hold and
> sometimes they won't. They are never verifiable until it's too late
> to change any mistakes that were made because of a failure to verify.

Another way of saying this is that the high-precision answer to "what
time is it?" is a social/political/definitional matter rather than a
physical one. If you're UTC or IBM, you just proclaim the answer, and
people learn to work around it.

paul
From: nmm1 on
In article <hjka5u$qs5$1(a)reader1.panix.com>,
Paul Wallich <pw(a)panix.com> wrote:
>
>As machines get bigger and faster, it may be important to remember that
>the physical universe does not provide the monotonicity function
>either. Stationary observers located in different places will tell you
>different things about who did what when. So you're imposing an order
>that doesn't exist -- fine as long as the things you're ordering aren't
>supposed to be causally related (aka dependencies).

That's not actually the problem. I teach that parallel time is very
like relativistic time - i.e. it is causally consistent but not
sequentially consistent [*]. The problem is that there is a lot of
experience showing that few people can get their minds around that,
at least enough to program reliably, even when they accept it in
theory.
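
As a sketch of that distinction, consider the classic IRIW
("independent reads of independent writes") litmus test, here in C11
acquire/release atomics (my illustration; any real hardware outcome
depends on the machine). The two readers may disagree about which
write happened first: causally consistent, since no happens-before
edge is violated, but not sequentially consistent, which would forbid
that outcome.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y;            /* zero-initialized */
static int r1, r2, r3, r4;

static void *writer_x(void *a) { (void)a; atomic_store_explicit(&x, 1, memory_order_release); return NULL; }
static void *writer_y(void *a) { (void)a; atomic_store_explicit(&y, 1, memory_order_release); return NULL; }

static void *reader_xy(void *a) {  /* observes x, then y */
    (void)a;
    r1 = atomic_load_explicit(&x, memory_order_acquire);
    r2 = atomic_load_explicit(&y, memory_order_acquire);
    return NULL;
}
static void *reader_yx(void *a) {  /* observes y, then x */
    (void)a;
    r3 = atomic_load_explicit(&y, memory_order_acquire);
    r4 = atomic_load_explicit(&x, memory_order_acquire);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    void *(*fn[4])(void *) = { writer_x, writer_y, reader_xy, reader_yx };
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, fn[i], NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    /* r1=1,r2=0,r3=1,r4=0 means the two readers saw the independent
     * writes in opposite orders: permitted under acquire/release
     * (no causal edge is broken), forbidden under seq_cst. */
    printf("r1=%d r2=%d r3=%d r4=%d\n", r1, r2, r3, r4);
    return 0;
}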

I favour the approach of simplifying the model enough that people
can get their minds around the concepts (e.g. using a BSP-like
model), at the loss of some generality.
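
For flavour, a minimal sketch of that BSP-like discipline (pthreads;
the structure, not any particular library): work proceeds in
supersteps, and data written by one thread is only read by others
after the barrier that closes the step, so nobody ever reasons about
interleavings inside a step.

#include <pthread.h>

#define NTHREADS 4
#define NSTEPS   8

static pthread_barrier_t step_barrier;
static double mailbox[NTHREADS];   /* written in step k, read in step k+1 */

static void *bsp_worker(void *arg)
{
    int me = (int)(long)arg;
    for (int step = 0; step < NSTEPS; step++) {
        /* Compute phase: touch only local data. */
        mailbox[me] = me + step;   /* stand-in for real work */
        /* Superstep boundary: everyone's writes become visible. */
        pthread_barrier_wait(&step_barrier);
        /* Now it is safe to read what a neighbour wrote this step. */
        double left = mailbox[(me + NTHREADS - 1) % NTHREADS];
        (void)left;
        /* Second barrier so nobody overwrites before all have read. */
        pthread_barrier_wait(&step_barrier);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&step_barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, bsp_worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&step_barrier);
    return 0;
}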

[*] Let's ignore the more enthusiastically over-extrapolated theories
of some writers of speculative fiction, whether they be classed as
science fiction writers or eminent professors.


Regards,
Nick Maclaren.
From: Stefan Monnier on
> There is a simple solution to this problem. Assume that the time stamp is
> updated every microsecond, and that it is a hardware register within the
> chip. Further assume that the timer field has enough bits to allow for say
> nanoseconds, but these bits are not guaranteed to be accurate. Then the
> hardware can use those bits as a "request counter". That is, the value is
> incremented once every request and reset to zero every time the clock
>> increments the least significant bit (i.e. microseconds in our example.)

But that's a single shared "request counter": costly.
Cheaper is to return in the lower ("unused"/"constant") bits
a CPU index. I.e. (readclock() << N) | CPUID.
This should trivially turn "locally unique timestamps" into "globally
unique timestamps" without any need for communication when reading
the time.
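
A sketch of that composition in C, with readclock() and cpu_id() as
hypothetical stand-ins for whatever the hardware actually exposes:

#include <stdint.h>

#define CPUID_BITS 16   /* the "N" above; 16 bits covers a 64K-node system */

/* Hypothetical primitives, standing in for the real hardware interface. */
extern uint64_t readclock(void);   /* locally unique clock value */
extern uint16_t cpu_id(void);      /* index of the calling CPU */

/* Locally unique -> globally unique: shift the clock up and OR the CPU
 * index into the low ("unused"/"constant") bits.  Bitwise '|', not
 * logical '||'. */
static inline uint64_t unique_timestamp(void)
{
    return (readclock() << CPUID_BITS) | cpu_id();
}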


Stefan
From: Stephen Fuld on
On 1/25/2010 8:30 AM, Stefan Monnier wrote:
>> There is a simple solution to this problem. Assume that the time stamp is
>> updated every microsecond, and that it is a hardware register within the
>> chip. Further assume that the timer field has enough bits to allow for say
>> nanoseconds, but these bits are not guaranteed to be accurate. Then the
>> hardware can use those bits as a "request counter". That is, the value is
>> incremented once every request and reset to zero every time the clock
>> increments the least significant bit (i.e. microseconds in our example.)
>
> But that's a single shared "request counter": costly.
> Cheaper is to return in the lower ("unused"/"constant") bits
> a CPU index. I.e. (readclock() << N) | CPUID.
> This should trivially turn "locally unique timestamps" into "globally
> unique timestamps" without any need for communication when reading
> the time.

Perhaps I am missing something, but I don't think that, by itself,
works. If you have multiple timers, doesn't that require a much
finer-granularity timer, say 10 ns versus 1 us? If you stick with the
1 us granularity, nothing prevents two calls within the same us from
the same processor from getting the same value. But if you try to
maintain multiple clocks in different chips in sync with each other to
within 10 ns, you run into other problems which make that hard.


--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: Terje Mathisen on
Stephen Fuld wrote:
>
> Perhaps I am missing something, but I don't think that, by itself,
> works. If you have multiple timers, doesn't that require a much
> finer-granularity timer, say 10 ns versus 1 us? If you stick with the
> 1 us granularity, nothing prevents two calls within the same us from
> the same processor from getting the same value. But if you try to
> maintain multiple clocks in different chips in sync with each other to
> within 10 ns, you run into other problems which make that hard.

The trick is simply to add a bunch of bits below the least significant
timer bit, and then use those as a cpu/core ID.

I.e. each time cpu 0 and cpu 1 happen to record exactly the same real
timestamp, cpu 0 will be considered to have happened before cpu 1, since
those trailing bits will be ...000 for cpu 0 and ...001 for cpu 1.

With 16 such bits you can handle a 64K cluster and still guarantee that
all timestamps will be globally unique.
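
A toy demonstration of that guarantee (hypothetical stamp() helper,
16 ID bits as above):

#include <assert.h>
#include <stdint.h>

#define ID_BITS 16                    /* 64K cluster */

/* Compose a globally unique stamp from a clock reading and a CPU ID. */
static uint64_t stamp(uint64_t clk, uint16_t cpu)
{
    return (clk << ID_BITS) | cpu;
}

int main(void)
{
    uint64_t tick = 42;               /* both CPUs read the same tick */
    uint64_t s0 = stamp(tick, 0);     /* trailing bits ...0000 */
    uint64_t s1 = stamp(tick, 1);     /* trailing bits ...0001 */
    assert(s0 != s1);                 /* globally unique */
    assert(s0 < s1);                  /* cpu 0 ordered before cpu 1 */
    assert(stamp(tick + 1, 0) > s1);  /* later ticks sort after, on any cpu */
    return 0;
}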

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"