From: Grant Edwards on
On 2010-02-09, Grant Edwards <invalid(a)invalid.invalid> wrote:
> On 2010-02-09, Jean-Michel Pichavant <jeanmichel(a)sequans.com> wrote:
>> Grant Edwards wrote:
>>> What's the correct way to measure small periods of elapsed
>>> time? I've always used time.clock() in the past:
>>>
>>> start = time.clock()
>>> [stuff being timed]
>>> stop = time.clock()
>>>
>>> delta = stop-start
>>>
>>>
>>> However on multi-processor machines that doesn't work.
>>> Sometimes I get negative values for delta. According to
>>> google, this is due to a bug in Windows that causes the value
>>> of time.clock() to be different depending on which core in a
>>> multi-core CPU you happen to be on. [insert appropriate
>>> MS-bashing here]
>>>
>>> Is there another way to measure small periods of elapsed time
>>> (say in the 1-10ms range)?
>>>
>>> Is there a way to lock the python process to a single core so
>>> that time.clock() works right?
>
>> Did you try with the datetime module ?
>>
>> import datetime
>> t0 = datetime.datetime.now()
>> t1 = datetime.datetime.now() - t0
>> t1.microseconds
>> Out[4]: 644114
>
> Doesn't work. datetime.datetime.now has granularity of
> 15-16ms.

time.time() exhibits the same behavior, so I assume that
datetime.datetime.now() ends up making the same libc/system
call as time.time(). From what I can grok of the datetime
module source code, it looks like it's calling gettimeofday().

I can't find any real documentation on the granularity of Win32
gettimeofday() other than a blog post that claims it is 10ms
(which doesn't agree with what my tests show).
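
For reference, here's a minimal sketch of the sort of test I mean
(nothing more rigorous than collecting successive time.time() readings
and looking at the smallest nonzero step):

import time

# Grab a burst of wall-clock readings as fast as possible.
readings = [time.time() for i in range(100000)]

# The smallest nonzero step between successive readings is a rough
# estimate of the clock's granularity.
steps = [b - a for a, b in zip(readings, readings[1:]) if b > a]
print "approximate granularity: %.6f seconds" % min(steps)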

--
Grant Edwards grante Yow! I feel better about
at world problems now!
visi.com
From: Gabriel Genellina on
En Tue, 09 Feb 2010 13:10:56 -0300, Grant Edwards
<invalid(a)invalid.invalid> escribió:

> What's the correct way to measure small periods of elapsed
> time? I've always used time.clock() in the past:
>
> However on multi-processor machines that doesn't work.
> Sometimes I get negative values for delta. According to
> google, this is due to a bug in Windows that causes the value
> of time.clock() to be different depending on which core in a
> multi-core CPU you happen to be on. [insert appropriate
> MS-bashing here]

I'm not sure you can blame MS for this issue; anyway, this patch should fix
the problem:
http://support.microsoft.com/?id=896256

> Is there another way to measure small periods of elapsed time
> (say in the 1-10ms range)?

Not that I know of. QueryPerformanceCounter (the function used by
time.clock) seems to be the best timer available.

> Is there a way to lock the python process to a single core so
> that time.clock() works right?

Interactively, from the Task Manager:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/taskman_assign_process.mspx

In code, using SetProcessAffinityMask and related functions:
http://msdn.microsoft.com/en-us/library/ms686223(VS.85).aspx
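
From Python you could do it with ctypes; an untested sketch (pinning
the process to CPU 0, i.e. an affinity mask of 1) would look roughly
like:

import ctypes

kernel32 = ctypes.windll.kernel32
# Pin the current process to CPU 0 so that successive time.clock()
# (QueryPerformanceCounter) readings all come from the same core.
kernel32.SetProcessAffinityMask(kernel32.GetCurrentProcess(), 1)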

--
Gabriel Genellina

From: Paul McGuire on
On Feb 9, 10:10 am, Grant Edwards <inva...(a)invalid.invalid> wrote:
> Is there another way to measure small periods of elapsed time
> (say in the 1-10ms range)?
>

I made repeated calls to time.clock() in a generator expression, which
is as fast a loop as I can think of in Python. Then I computed the
successive time deltas to see if any granularities jumped out. Here
are the results:

>>> import time
>>> from itertools import groupby
>>>
>>> # get about 1000 different values of time.clock()
>>> ts = set(time.clock() for i in range(1000))
>>>
>>> # sort in ascending order
>>> ts = sorted(ts)
>>>
>>> # compute diffs between adjacent time values
>>> diffs = [j-i for i,j in zip(ts[:-1],ts[1:])]
>>>
>>> # sort and group
>>> diffs.sort()
>>> diffgroups = groupby(diffs)
>>>
>>> # print the distribution of time differences in microseconds
>>> for i in diffgroups: print "%3d %12.6f" % (len(list(i[1])), i[0]*1e6)
...
25 2.234921
28 2.234921
242 2.514286
506 2.514286
45 2.793651
116 2.793651
1 3.073016
8 3.073016
6 3.352381
4 3.631746
3 3.911112
1 3.911112
5 4.190477
2 4.469842
1 6.146033
1 8.660319
1 9.777779
1 10.895239
1 11.174605
1 24.304765
1 41.904767

There seems to be a step size of about 0.28 microseconds. So I would
guess time.clock() has enough resolution. But also beware of the
overhead of the calls to clock() - using timeit, I find that each call
takes about 2 microseconds (consistent with the smallest time
difference in the above data set).
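
Something along these lines reproduces that overhead measurement (a
rough sketch, not exactly what I ran):

import timeit

# timeit over one million calls returns the total time in seconds,
# which is numerically the same as microseconds per call.
t = timeit.Timer('time.clock()', 'import time')
print "%.2f microseconds per call to time.clock()" % t.timeit(1000000)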

-- Paul
From: Paul McGuire on
On Feb 10, 2:24 am, Dennis Lee Bieber <wlfr...(a)ix.netcom.com> wrote:
> On Tue, 9 Feb 2010 21:45:38 +0000 (UTC), Grant Edwards
> <inva...(a)invalid.invalid> declaimed the following in
> gmane.comp.python.general:
>
> > Doesn't work.  datetime.datetime.now has granularity of
> > 15-16ms.
>
> > Intervals much less than that often come back with a delta of
> > 0.  A delay of 20ms produces a delta of either 15-16ms or
> > 31-32ms.
>
>         WinXP uses an ~15ms time quantum for task switching, which defines
> the step rate of the wall clock output...
>
> http://www.eggheadcafe.com/software/aspnet/35546579/the-quantum-was-n...
> http://www.eggheadcafe.com/software/aspnet/32823760/how-do-you-set-ti...
>
> http://www.lochan.org/2005/keith-cl/useful/win32time.html
> --
>         Wulfraed         Dennis Lee Bieber               KD6MOG
>         wlfr...(a)ix.netcom.com     HTTP://wlfraed.home.netcom.com/

Gabriel Genellina reports that time.clock() uses Windows'
QueryPerformanceCounter() API, which has much higher resolution than
the task switcher's 15ms. QueryPerformanceCounter's resolution is
hardware-dependent; using the Win API and a little test program, I
get this value on my machine:
Frequency is 3579545 ticks/sec
Resolution is 0.279365114840015 microsecond/tick
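
If you want to check your own machine, a ctypes snippet along these
lines should report the same numbers (consider it a rough sketch
rather than the exact program I used):

import ctypes

freq = ctypes.c_longlong(0)
# QueryPerformanceFrequency fills in the counter frequency in ticks/sec.
ctypes.windll.kernel32.QueryPerformanceFrequency(ctypes.byref(freq))
print "Frequency is %d ticks/sec" % freq.value
print "Resolution is %.15g microsecond/tick" % (1e6 / freq.value)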

-- Paul
From: Grant Edwards on
On 2010-02-09, Gabriel Genellina <gagsl-py2(a)yahoo.com.ar> wrote:
> En Tue, 09 Feb 2010 13:10:56 -0300, Grant Edwards
><invalid(a)invalid.invalid> escribió:
>
>> What's the correct way to measure small periods of elapsed
>> time? I've always used time.clock() in the past:
>>
>> However on multi-processor machines that doesn't work.
>> Sometimes I get negative values for delta. According to
>> google, this is due to a bug in Windows that causes the value
>> of time.clock() to be different depending on which core in a
>> multi-core CPU you happen to be on. [insert appropriate
>> MS-bashing here]
>
> I'm not sure you can blame MS for this issue; anyway, this
> patch should fix the problem:
> http://support.microsoft.com/?id=896256

I'm curious why it wouldn't be Microsoft's fault, because

A) Everything is Microsoft's fault. ;)

B) If a patch to MS Windows fixes the problem, how is it not a
problem in MS Windows?

>> Is there a way to lock the python process to a single core so
>> that time.clock() works right?
>
> Interactively, from the Task Manager:
> http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/taskman_assign_process.mspx

Thanks. That looks a bit easier than disabling the second core
(which is what I ended up doing).

> In code, using SetProcessAffinityMask and related functions:
> http://msdn.microsoft.com/en-us/library/ms686223(VS.85).aspx

With help from google and some old mailing list postings I might
even try that.

--
Grant Edwards grante Yow! It's a lot of fun
at being alive ... I wonder if
visi.com my bed is made?!?