From: Dmitry A. Kazakov on
On Thu, 08 Mar 2007 21:18:11 GMT, Björn Persson wrote:

> Dmitry A. Kazakov wrote:
>
>> If it were just inaccurate then the obtained values would be like
>> ThreadTime + Error where Error has zero mean.
>
> No, that's "imprecise".

No. Neither the set of "accurate" measurements nor the set of "precise" ones
contains the other, which means that a measurement can be precise and accurate,
precise but inaccurate, imprecise but accurate, or imprecise and inaccurate.

As for GetThreadTimes, its absolute precision is 1ms. Its suggested
absolute accuracy should be one time quantum (whose duration depends on the
system settings). The latter does not hold, because the error is in fact not
bounded.
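
Just for reference, here is roughly how one reads that counter from Ada (a
GNAT-style sketch; the thin Win32 binding below is my own, with HANDLE crudely
modeled as System.Address, not something any library provides under these
names). The FILETIME result counts 100ns units, so the interface suggests a
much finer resolution than what the kernel actually accounts in:

with Ada.Text_IO;  use Ada.Text_IO;
with Interfaces;   use Interfaces;
with System;

procedure Thread_Time_Sketch is

   type FILETIME is record
      Low, High : Unsigned_32;   --  together: a 64-bit count of 100 ns units
   end record;
   pragma Convention (C, FILETIME);

   function GetCurrentThread return System.Address;
   pragma Import (Stdcall, GetCurrentThread, "GetCurrentThread");

   function GetThreadTimes
     (Thread        : System.Address;
      Creation_Time : access FILETIME;
      Exit_Time     : access FILETIME;
      Kernel_Time   : access FILETIME;
      User_Time     : access FILETIME) return Integer;   --  Win32 BOOL
   pragma Import (Stdcall, GetThreadTimes, "GetThreadTimes");

   Creation, Finish, Kernel, User : aliased FILETIME;

   function To_Seconds (T : FILETIME) return Duration is
      Ticks : constant Unsigned_64 :=
        Shift_Left (Unsigned_64 (T.High), 32) or Unsigned_64 (T.Low);
   begin
      return Duration (Long_Float (Ticks) / 1.0E7);   --  100 ns -> seconds
   end To_Seconds;

begin
   if GetThreadTimes
        (GetCurrentThread,
         Creation'Access, Finish'Access, Kernel'Access, User'Access) /= 0
   then
      --  The value is expressed in 100 ns units, but it only ever advances
      --  in whole scheduler ticks charged to the thread.
      Put_Line ("user CPU time =" & Duration'Image (To_Seconds (User)) & " s");
   end if;
end Thread_Time_Sketch;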

> Shots distributed evenly over a shooting target is
> bad precision. A tight group of shots at one side of the target is good
> precision but bad accuracy.

That's right. This is why GetThreadTimes is not just inaccurate, it is
precisely wrong.

BTW, precisely wrong /= imprecise. (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Randy Brukardt on
"Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
....
> As for GetThreadTimes, its absolute precision is 1ms.

No, it is 10ms. The interface offers more precision than is actually
provided.

> Its suggested
> absolute accuracy should be one time quantum (whose duration depends on the
> system settings). The latter does not hold, because the error is in fact
> not bounded.

I believe that the function was intended for profiling and performance
monitoring, and it surely is no different from any other technique I've
ever seen used for that purpose. All such techniques give you a statistical
approximation to the real behavior. You just have to run them long enough to
make the results statistically significant.

It's theoretically possible for a thread to run in sync so that it never
gets a tick, but I've never seen (or heard of) an instance of that happening
in a real program being profiled. On a real DOS or Windows system, there is
too much asynchronous activity going on for any "lock-step" to continue for long.

In any case, it is statistical analysis that has to be applied here; it's
clear that the error can be reduced by lengthening the runtime (presuming
that you are willing to assume, as I am, that behavior is essentially random
if looked at over a long enough time period).

My main objection to this data is the gigantic tick interval, which means that to get
anything meaningful, you have to run programs for a very long time (at least
a thousand times longer than the tick, and generally a thousand times the
"real value" of a counter before it is sufficiently significant).

OTOH, I don't want to use Ada.Execution_Time to control a program's
behavior. (I think that's a bit dubious, given that a hardware change would
invalidate the assumptions, and typically the important thing is the
response time, which depends on the wall-clock time, not the CPU time. But a
self-contained embedded system has more control than a program running on
Windows, so it might make sense somewhere.)

Randy.


From: Dmitry A. Kazakov on
On Fri, 9 Mar 2007 19:39:30 -0600, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
> news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
> ...
>> As for GetThreadTimes, its absolute precision is 1ms.
>
> No, it is 10ms. The interface offers more precision than is actually
> provided.

You can force it to 1ms using

timeBeginPeriod (1);

This is what any tasking Ada program should not forget to do when it starts
under Windows. I hope that the GNAT RTL does this...

(I am too lazy to check it, but probably XP already has 1ms as the default)
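
A thin binding is enough for that. This sketch is mine (the Ada declarations
are an assumption about nothing more than the documented C signature; the
function itself lives in winmm.dll), and it is the sort of call I mean should
happen once at program start:

with Interfaces.C;  use Interfaces.C;

procedure Request_1ms_Tick is

   pragma Linker_Options ("-lwinmm");

   --  MMRESULT timeBeginPeriod (UINT uPeriod);  0 = TIMERR_NOERROR
   function timeBeginPeriod (Period : unsigned) return unsigned;
   pragma Import (Stdcall, timeBeginPeriod, "timeBeginPeriod");

begin
   if timeBeginPeriod (1) /= 0 then   --  ask the kernel for a 1 ms tick
      raise Program_Error with "timeBeginPeriod failed";
   end if;
   --  A well-behaved program pairs this with timeEndPeriod (1) at shutdown.
end Request_1ms_Tick;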

>> Its suggested
>> absolute accuracy should be one time quantum (whose duration depends on the
>> system settings). The latter does not hold, because the error is in fact not
>> bounded.
>
> I believe that the function was intended for profiling and performance
> monitoring, and it surely is no different from any other technique I've
> ever seen used for that purpose. All such techniques give you a statistical
> approximation to the real behavior. You just have to run them long enough to
> make the results statistically significant.

(under the condition that the error mean is 0, which unfortunately is not
the case)

> It's theoretically possible for a thread to run in sync so that it never
> gets a tick, but I've never seen (or heard of) an instance of that happening
> in a real program being profiled. On a real DOS or Windows system, there is
> too much asynchronous activity going on for any "lock-step" to continue for long.

That theoretical case is exactly what hit me. We performed a QoS study of our
distributed middleware and wished to measure the time its services require for
publishing and subscribing, separately from the delivery times. To our
amazement, the times of some services were a solid 0, no matter how long and
for how many cycles we ran the test! I started to investigate and discovered
that mess.
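
In essence the services behaved like this contrived sketch (the 10ms quantum
is only my assumption about the scheduler tick): each cycle does far less than
one tick of work and then sleeps across the tick boundary, so the accounting
interrupt never catches the thread running and its reported time stays 0.

with Ada.Real_Time;  use Ada.Real_Time;

procedure Lock_Step_Demo is
   Tick : constant Time_Span := Milliseconds (10);   --  assumed quantum
   Next : Time := Clock;
   X    : Long_Float := 1.0;
begin
   for I in 1 .. 10_000 loop
      for J in 1 .. 1_000 loop        --  a burst well under one tick of work
         X := X * 1.000_001;
      end loop;
      Next := Next + Tick;
      delay until Next;               --  block across the tick boundary
   end loop;
end Lock_Step_Demo;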

> In any case, it is statistical analysis that has to be applied here; it's
> clear that the error can be reduced by lengthening the runtime (presuming
> that you are willing to assume, as I am, that behavior is essentially random
> if looked at over a long enough time period).

(plus some assumption about the error mean; otherwise the averaged result
can be anything.)

> OTOH, I don't want to use Ada.Execution_Time to control a program's
> behavior. (I think that's a bit dubious, given that a hardware change would
> invalidate the assumptions, and typically the important thing is the
> response time, which depends on the wall-clock time, not the CPU time. But a
> self-contained embedded system has more control than a program running on
> Windows, so it might make sense somewhere.)

I believe there are logical/philosophical reasons why a program shall not
change its behavior depending on its ... behavior. (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Stephen Leake on
"Randy Brukardt" <randy(a)rrsoftware.com> writes:

> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
> news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
> ...
>> As for GetThreadTimes, its absolute precision is 1ms.
>
> No, it is 10ms. The interface offers more precision than is actually
> provided.

Technically, "precision" is the number of bits in a value. "accuracy"
is how many of those bits are meaningful.

--
-- Stephe
From: Cesar Rabak on
Stephen Leake wrote:
> "Randy Brukardt" <randy(a)rrsoftware.com> writes:
>
>> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
>> news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
>> ...
>>> As for GetThreadTimes, its absolute precision is 1ms.
>> No, it is 10ms. The interface offers more precision than is actually
>> provided.
>
> Technically, "precision" is the number of bits in a value. "accuracy"
> is how many of those bits are meaningful.
>
Isn't the number of bits of the value the 'resolution', precision being
a way of describing the dispersion of the values, and accuracy the
distance to the actual quantity [1]?

--
Cesar Rabak


[1] The last two definitions have already been given in this thread in other words.