From: Randy Brukardt
"Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
news:r5rrsmngabou$.nc73hmyyugax.dlg(a)40tude.net...
> On Fri, 9 Mar 2007 19:39:30 -0600, Randy Brukardt wrote:
>
> > "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
> > news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
> > ...
> >> As for GetThreadTimes, its absolute precision is 1ms.
> >
> > No, it is 10ms. The interface offers more precision than is actually
> > provided.
>
> You can force it to 1ms using
>
> timeBeginPeriod (1);

That's documented as applying to "Multimedia timers", whatever those are. I
wouldn't want to assume it would work on thread times and the like, which
have nothing to do with multimedia. Besides, why wouldn't the maximum
accuracy always be used if it is possible? What possible value is there in
using a less accurate time, given that you still have to do the math on
every switch no matter what accuracy is involved?
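
For what it's worth, the granularity is easy to probe empirically. Something
like this (an untested sketch; FILETIME counts 100ns units) shows how large
the jumps in the reported thread time actually are:

#include <windows.h>
#include <stdio.h>

/* Untested sketch: busy-wait and watch how the user time reported by
   GetThreadTimes advances; the size of the jumps is the real granularity. */
int main (void)
{
   FILETIME creation, exit_time, kernel, user;
   ULONGLONG last, now;
   int samples;

   GetThreadTimes (GetCurrentThread (), &creation, &exit_time,
                   &kernel, &user);
   last = ((ULONGLONG) user.dwHighDateTime << 32) | user.dwLowDateTime;

   for (samples = 0; samples < 10; ) {
      GetThreadTimes (GetCurrentThread (), &creation, &exit_time,
                      &kernel, &user);
      now = ((ULONGLONG) user.dwHighDateTime << 32) | user.dwLowDateTime;
      if (now != last) {   /* the reported time advanced by one quantum */
         printf ("user time jumped by %llu ms\n", (now - last) / 10000);
         last = now;
         samples++;
      }
   }
   return 0;
}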

> This is what any tasking Ada program should not forget to do when it
> starts under Windows. I hope that GNAT RTL does this...

Why? Ada.Real_Time is built on top of the performance counters, and so are
all of your tasking programs.

> (I am too lazy to check it, but probably XP already has 1ms as the default)

I don't think so; I tried my profiling code there, too, and didn't get any
more accuracy.

....
> > It's theoretically possible for a thread to run in sync so that it never
> > gets a tick, but I've never seen (or heard of) an instance of that
> > happening in a real program being profiled. On a real DOS or Windows
> > system, there is too much asynchronous activity going on for any
> > "lock-step" to continue for long.
>
> Which is the theoretical case that hit me. We performed a QoS study of our
> distributed middleware and wished to measure the time its services require
> for publishing and subscribing, separately from delivery times. To our
> amazement the times of some services were a solid 0, no matter how long
> and how many cycles we ran the test! I started to investigate and
> discovered that mess.

Humm, I find that nearly impossible to believe. I'd expect some other cause
(*any* other cause) before I believed that. (Outside of device drivers,
anyway, which would be a lousy place to use this sort of timing.) I guess
I'd have to see a detailed example of that for myself before I believed it.

Randy.


From: tmoran
> > too much asynchronous activity going on for any "lock-step" to continue for long.
If the random stuff is independent, it will generate noise that a large
sample size can minimize, but there may be "caravan" effects,
sample-frequency aliasing, or some such thing making the timing samples
non-independent.
From: Dmitry A. Kazakov
On Sat, 10 Mar 2007 21:03:41 -0600, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
> news:r5rrsmngabou$.nc73hmyyugax.dlg(a)40tude.net...
>> On Fri, 9 Mar 2007 19:39:30 -0600, Randy Brukardt wrote:
>>
>>> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> wrote in message
>>> news:p87mtsns4of0.hhld0y03415s.dlg(a)40tude.net...
>>> ...
>>>> As for GetThreadTimes, its absolute precision is 1ms.
>>>
>>> No, it is 10ms. The interface offers more precision than is actually
>>> provided.
>>
>> You can force it to 1ms using
>>
>> timeBeginPeriod (1);
>
> That's documented as applying to "Multimedia timers", whatever those are. I
> wouldn't want to assume it would work on thread times and the like, which
> have nothing to do with multimedia. Besides, why wouldn't the maximum
> accuracy always be used if it is possible? What possible value is there in
> using a less accurate time, given that you still have to do the math on
> every switch no matter what accuracy is involved?

The side effect of timeBeginPeriod(1) is to change the granularity of the
timing calls, which in turn has an impact on overall thread scheduling. For
example, Sleep(1) will indeed wait about 1ms rather than 10ms. That would be
impossible if threads were not rescheduled faster. Since timeBeginPeriod
achieves this by changing the time resolution of the system scheduler, the
accuracy of the time slices should change as well.
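
This is easy to check. An untested sketch (link with winmm.lib, which
provides timeBeginPeriod/timeEndPeriod):

#include <windows.h>
#include <stdio.h>
#pragma comment (lib, "winmm.lib")   /* timeBeginPeriod/timeEndPeriod */

/* Untested sketch: measure how long Sleep(1) really takes, before and
   after requesting a 1ms timer resolution. */
static double sleep_1ms (void)
{
   LARGE_INTEGER freq, t0, t1;

   QueryPerformanceFrequency (&freq);
   QueryPerformanceCounter (&t0);
   Sleep (1);
   QueryPerformanceCounter (&t1);
   return (double) (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
}

int main (void)
{
   printf ("default    : Sleep(1) took %.2f ms\n", sleep_1ms ());
   timeBeginPeriod (1);   /* request 1ms scheduler/timer resolution */
   printf ("1ms period : Sleep(1) took %.2f ms\n", sleep_1ms ());
   timeEndPeriod (1);     /* restore the previous resolution */
   return 0;
}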

>> This is what any tasking Ada program should not forget to do when it
>> starts under Windows. I hope that GNAT RTL does this...
>
> Why? Ada.Real_Time is built on top of the performance counters, and so are
> all of your tasking programs.

No, the reason is to get a finer scheduler resolution. The 10ms quantum was
chosen back when PCs were substantially slower. Now one can and should
reschedule at a 1ms tick, or even faster.

BTW, Ada.Calendar should use the performance counters as well, because the
system time calls have catastrophic accuracy. In C++ programs I translate
performance counter ticks into system time using a statistical algorithm. It
would be better to do this at the driver level; I don't know why MS still
keeps it this way.
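
The basic idea is to pair one reading of the system clock with one reading
of the counter and apply the resulting offset afterwards. A simplified,
untested sketch (my real code also filters the offset statistically;
FILETIME counts 100ns units):

#include <windows.h>

static LARGE_INTEGER freq;     /* counter ticks per second */
static LONGLONG      offset;   /* system time at counter zero, 100ns units */

/* Convert a counter reading to 100ns units without overflowing. */
static LONGLONG to_100ns (LONGLONG ticks)
{
   return (ticks / freq.QuadPart) * 10000000
        + (ticks % freq.QuadPart) * 10000000 / freq.QuadPart;
}

/* Pair one system clock reading with one counter reading. */
static void calibrate (void)
{
   FILETIME ft;
   LARGE_INTEGER now;
   LONGLONG sys;

   QueryPerformanceFrequency (&freq);
   GetSystemTimeAsFileTime (&ft);
   QueryPerformanceCounter (&now);
   sys = ((LONGLONG) ft.dwHighDateTime << 32) | ft.dwLowDateTime;
   offset = sys - to_100ns (now.QuadPart);
}

/* High-resolution "system time" in 100ns units. */
static LONGLONG precise_time (void)
{
   LARGE_INTEGER now;
   QueryPerformanceCounter (&now);
   return offset + to_100ns (now.QuadPart);
}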

>>> It's theoretically possible for a thread to run in sync so that it never
>>> gets a tick, but I've never seen (or heard of) an instance of that happening
>>> in a real program being profiled. On a real DOS or Windows system, there is
>>> too much asynchronous activity going on for any "lock-step" to continue for long.
>>
>> Which is the theoretical case that hit me. We performed a QoS study of our
>> distributed middleware and wished to measure the time its services require
>> for publishing and subscribing, separately from delivery times. To our
>> amazement the times of some services were a solid 0, no matter how long
>> and how many cycles we ran the test! I started to investigate and
>> discovered that mess.
>
> Humm, I find that nearly impossible to believe. I'd expect some other cause
> (*any* other cause) before I believed that. (Outside of device drivers,
> anyway, which would be a lousy place to use this sort of timing.) I guess
> I'd have to see a detailed example of that for myself before I believed it.

There is a plausible explanation of the effect. When a middleware variable
gets changed, the middleware stores it in its memory, updates some internal
structures, and returns to the caller. The physical publishing I/O happens
in the context of another thread, and even of another process. This is why
the thread time of the publisher was always 0: it simply took less than
1ms, and the caller in the test application went to sleep immediately after
publishing the variable. Even the delivery was short, about 250us total
latency. GetThreadTimes is absolutely unsuitable for measuring anything
like that.
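
The effect is trivial to reproduce; untested, but something along these
lines will report zero consumed time, because the work is far below one
quantum:

#include <windows.h>
#include <stdio.h>

int main (void)
{
   FILETIME c, e, k, u0, u1;
   volatile long x = 0;
   long i;

   GetThreadTimes (GetCurrentThread (), &c, &e, &k, &u0);
   for (i = 0; i < 100000; i++)   /* well under 1ms of work */
      x = x + i;
   GetThreadTimes (GetCurrentThread (), &c, &e, &k, &u1);

   /* typically prints 0, although real time was consumed */
   printf ("charged user time: %lu x 100ns\n",
           (unsigned long) (u1.dwLowDateTime - u0.dwLowDateTime));
   return 0;
}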

When I faced the problem, I found that some guys in a similar study
(something about Java) had hit it as well. They wrote an OS extension. (Some
people have much time to spare (:-)) They interrupted Windows every n us,
inspected which thread had the processor, and let it continue. This way they
could get at the true thread times. Quite complicated for
Ada.Execution_Time, isn't it? (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Pascal Obry
Dmitry A. Kazakov wrote:

> BTW, Ada.Calendar should use the performance counters as well, because

And it does.

Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
From: Dmitry A. Kazakov
On Sun, 11 Mar 2007 14:57:56 +0100, Pascal Obry wrote:

> Dmitry A. Kazakov wrote:
>
>> BTW, Ada.Calendar should use the performance counters as well, because
>
> And it does.

Good to know. Do you know how it synchronizes the counter ticks with
GetSystemTime? The method I am using is a thread that periodically adjusts
the offset.
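
In outline it is something like this (untested sketch; calibrate() pairs
one GetSystemTimeAsFileTime reading with one QueryPerformanceCounter
reading, as in the earlier snippet):

#include <windows.h>

/* Re-pair the counter with the system clock once per second so the
   derived time cannot drift. A real implementation would blend the old
   and new offsets instead of replacing them, to avoid jumps. */
static DWORD WINAPI resync (LPVOID unused)
{
   (void) unused;
   for (;;) {
      Sleep (1000);
      calibrate ();   /* in real code: statistically filtered */
   }
   return 0;
}

/* At startup:                                       */
/*   calibrate ();                                   */
/*   CreateThread (NULL, 0, resync, NULL, 0, NULL);  */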

BTW, regarding performance counters:

http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323&

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de