From: VK on
I received an answer from Boris Zbarsky (one of Mozilla project head
leaders) at mozilla.dev.tech.js-engine

http://groups.google.com/group/mozilla.dev.tech.js-engine/msg/4e6df47759cc7018

Copy:

> Assuming

> var timerID = window.setTimeout(doIt, 20000);
> executed at the moment of time 2010-XX-XX 23:50:00

> and within the next 20 secs the OS time was changed by a DST request or
> manually. Will it be executed somewhere in 20000 ms after timerID was
> set, irrespective of the OS time; somewhere at 2010-XX-XY 00:10:00 of
> the old system time; or somewhere at 2010-XX-XY 00:10:00 of the new
> system time? In other words, is the queue based on an absolute scale,
> immutable time stamps, or mutable time stamps?

1) This is a DOM issue, not a JSEng one.
2) Right now, the new system time would determine firing time (though
note that "time" means "time since epoch", so is unaffected by
DST changes, changes of OS timezone, or the like; only actual
changes to the actual clock matter, not to the user-visible
display).
3) The information in item 2 is subject to change. See
https://bugzilla.mozilla.org/show_bug.cgi?id=558306
From: VK on
So to summarize the actual setTimeout/setInterval behavior in response
to the OP question:

setTimeout / setInterval are based on time stamps counted from the Unix
epoch, 1970-01-01T00:00:00Z (ISO 8601). Thus a system time zone change
or a DST change does not affect timers, but a system clock change does
break the timer functionality.

Timers were not, are not, and will not be based on relative scales, as in:

window.setTimeout("foo()", 10000);
// WRONG ASSUMPTION:
// foo() will be attempted 10 sec after the
// window.setTimeout("foo()", 10000);
// statement was executed
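To illustrate VK's description, a timer queue keyed on absolute epoch deadlines could look roughly like the following sketch. All names here (queue, schedule, tick) are invented for illustration; this is not any engine's actual internals:

```javascript
// Illustrative sketch: a timer queue whose entries carry absolute
// epoch-millisecond deadlines, as described above.
var queue = [];

function schedule(fn, delayMs) {
  // The deadline is an absolute point on the epoch time line.
  queue.push({ fn: fn, deadline: Date.now() + delayMs });
}

function tick() {
  // Fire every entry whose deadline has passed according to the
  // (possibly adjusted) system clock: a backward clock change
  // postpones firing, a forward change fires early.
  var now = Date.now();
  queue = queue.filter(function (entry) {
    if (entry.deadline <= now) {
      entry.fn();
      return false; // remove fired entry
    }
    return true;    // keep pending entry
  });
}
```

Under such a scheme, setting the system clock back postpones every pending deadline, which is exactly the breakage described above.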
From: Dr J R Stockton on
In comp.lang.javascript message <7229694.eNJFYEL58v(a)PointedEars.de>,
Sat, 26 Jun 2010 20:29:51, Thomas 'PointedEars' Lahn
<PointedEars(a)web.de> posted:

>Dr J R Stockton wrote:
>
>> Jeremy J Starcher <r3jjs(a)yahoo.com> posted:
>>> In many other situations, adjusting the system clock leads to
>>> unpredictable events, including possible refiring or skipping of cron
>>> jobs and the like.
>>
>> AIUI, CRON jobs are set to fire at specific times. A CRON job set to
>> fire at 01:30 local should fire whenever 01:30 local occurs. A wise
>> user does not mindlessly set an event to occur during the missing Spring
>> hour or the doubled Autumn hour, though in most places avoiding Sundays
>> will prevent a problem.
>
>An even wiser person lets their system, and their cron jobs, run on UTC,
>which avoids the DST issue, and leaves the textual representation of dates
>to the locale.

A peculiar attitude (as is customary).

The Germans, by EU law, adjust their official time in Spring and Autumn.
No doubt the vast majority of the population will shift their daily
lives accordingly. But perhaps you do not. A computer should be set to
use whichever sort of time is most appropriate to its usage.


>>> It is perfectly reasonable for software to do something unpredictable
>>> when something totally unreasonable happens.
>>
>> But changing the displayed time should NOT affect an interval specified
>> as a duration.
>
>Duration is defined as the interval between two points in time. The only
>way to keep the counter up-to-date is to check against the system clock. If
>the end point of the interval changes as the system clock is modified, the
>result as to whether and when the duration is over must become false.

You are displaying a lack of understanding of computers in general and
also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
Resolution 1.

Duration is measured in SI seconds, or multiples/submultiples thereof.
If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
be no surprise).


>>>But what you say and what the computer understands are not the same
>>>thing. If the OS only has one timer, how do you suggest it keeps track
>>>of time passage besides deciding to start at:
>>> +new Date()+ x milliseconds?
>>
>> By continuing to count its GMT millisecond timer in the normal way and
>> using it for durations.
>
>Since usually a process is not being granted CPU time every millisecond,
>this is not going to work. I find it surprising to read this from you as
>you appeared to be well-aware of timer tick intervals at around 50 ms,
>depending on the system.

You appear to be still running DOS or Win98, in which there are indeed
0x1800B0 ticks per 24 hours. In more recent systems, the default
granularity is finer; and the fineness can be adjusted by program
demand. Indeed, a program relying on the fineness that it
finds may be affected when another process changes the corresponding
timer, AIUI.

Next time that you read PCTIM003, read also its date.

Perhaps you have heard of interrupts? In a bog-standard PC, from the
earliest days, it has been possible to get interrupts at up to 32 kHz
from the RTC - consult the MC146818 data sheet or equivalent. CRON
ought not to rely on being awoken at frequent intervals so that it may
look at the clock; it should be awoken from passivity by the timer event
queue (or whatever it may be called) of the system, and should pre-empt
whatever else may currently have an active time slice.

A sensibly-written CRON would enable events to be scheduled by UTC and
by local time and by duration (SI time) from request.
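The three scheduling modes suggested here could be sketched in JavaScript terms. This is purely illustrative: the entry shapes and function names are invented, and performance.now() is assumed available as the monotonic clock:

```javascript
// Illustrative: three ways a scheduler entry might define "when to fire".

// 1. By UTC wall-clock time: recompute the target against the clock on
//    each check, so a clock adjustment moves the firing moment with it.
function dueAtUTC(entry) {
  return Date.now() >= Date.UTC(entry.y, entry.mo, entry.d, entry.h, entry.mi);
}

// 2. By local wall-clock time: same idea via the local calendar, so a
//    DST or time zone change shifts the firing moment.
function dueAtLocal(entry) {
  return Date.now() >=
    new Date(entry.y, entry.mo, entry.d, entry.h, entry.mi).getTime();
}

// 3. By duration since the request: measured on a monotonic clock and
//    therefore immune to any clock adjustment. Date.now() is NOT
//    monotonic; performance.now() (where available) is.
function dueAfter(entry) {
  return performance.now() - entry.startedAt >= entry.durationMs;
}
```

Only the third mode gives durations in the SI-second sense argued for above; the first two deliberately track the wall clock.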

--
(c) John Stockton, nr London UK. ?@merlyn.demon.co.uk Turnpike v6.05 MIME.
Web <URL:http://www.merlyn.demon.co.uk/> - FAQish topics, acronyms, & links.
Proper <= 4-line sig. separator as above, a line exactly "-- " (RFCs 5536/7)
Do not Mail News to me. Before a reply, quote with ">" or "> " (RFCs 5536/7)
From: Ry Nohryb on
On Jun 27, 1:39 pm, VK <schools_r...(a)yahoo.com> wrote:
> So to summarize the actual setTimeout/setInterval behavior in response
> to the OP question:
>
> setTimeout / setInterval are based on time stamps counted from the
> Unix epoch, 1970-01-01T00:00:00Z (ISO 8601). Thus a system time zone
> change or a DST change does not affect timers, but a system clock
> change does break the timer functionality.

Not in Operas. Kudos to them. A setTimeout(f, 100) means: call f in
100 ms. If not, I'd rather write setTimeout(f, +new Date + 100).
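For comparison, firing at an absolute wall-clock moment (the behaviour Jorge is parodying with +new Date + 100) would have to be emulated on top of relative delays by re-checking the clock. A sketch; the function name is invented:

```javascript
// Illustrative: fire fn once Date.now() reaches the given absolute
// epoch timestamp, polling so that clock adjustments are picked up.
function setWallClockTimeout(fn, epochMs, checkEveryMs) {
  function check() {
    if (Date.now() >= epochMs) {
      fn(); // deadline reached on the (possibly adjusted) wall clock
    } else {
      setTimeout(check, checkEveryMs || 250); // poll again shortly
    }
  }
  check();
}
```

Note this trades precision for clock-following: the callback can be late by up to one polling interval.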
--
Jorge.
From: Thomas 'PointedEars' Lahn on
Dr J R Stockton wrote:

> Thomas 'PointedEars' Lahn posted:
>> Dr J R Stockton wrote:
>>> Jeremy J Starcher <r3jjs(a)yahoo.com> posted:
>>>> In many other situations, adjusting the system clock leads to
>>>> unpredictable events, including possible refiring or skipping of cron
>>>> jobs and the like.
>>> AIUI, CRON jobs are set to fire at specific times. A CRON job set to
>>> fire at 01:30 local should fire whenever 01:30 local occurs. A wise
>>> user does not mindlessly set an event to occur during the missing Spring
>>> hour or the doubled Autumn hour, though in most places avoiding Sundays
>>> will prevent a problem.
>> An even wiser person lets their system, and their cron jobs, run on UTC,
>> which avoids the DST issue, and leaves the textual representation of
>> dates to the locale.
>
> A peculiar attitude (as is customary).
>
> The Germans, by EU law, adjust their official time in Spring and Autumn.
> No doubt the vast majority of the population will shift their daily
> lives accordingly. But perhaps you do not. A computer should be set to
> use whichever sort of time is most appropriate to its usage.

You miss the point. It is not necessary for the system clock of a computer
to use local time in order for the operating system to display local time.
Not even in Germany, which you claim to know so well (but in fact haven't
got the slightest clue about).

>>>> It is perfectly reasonable for software to do something unpredictable
>>>> when something totally unreasonable happens.
>>> But changing the displayed time should NOT affect an interval specified
>>> as a duration.
>> Duration is defined as the interval between two points in time. The only
>> way to keep the counter up-to-date is to check against the system clock.
>> If the end point of the interval changes as the system clock is modified,
>> the result as to whether and when the duration is over must become false.
>
> You are displaying a lack of understanding of computers in general

Is that so? A usual PC will not grant CPU time to a process every
millisecond (so that this process could count down reliably per your
suggestion), so other means are necessary to determine which amount of time
has passed.

> and also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
> Resolution 1.
>
> Duration is measured in SI seconds, or multiples/submultiples thereof.
> If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
> be no surprise).

You are missing the point completely.

>>>> But what you say and what the computer understands are not the same
>>>> thing. If the OS only has one timer, how do you suggest it keeps track
>>>> of time passage besides deciding to start at:
>>>> +new Date()+ x milliseconds?
>>>
>>> By continuing to count its GMT millisecond timer in the normal way and
>>> using it for durations.
>>
>> Since usually a process is not being granted CPU time every millisecond,
>> this is not going to work. I find it surprising to read this from you as
>> you appeared to be well-aware of timer tick intervals at around 50 ms,
>> depending on the system.
>
> You appear to be still running DOS or Win98, in which there are indeed
> 0x1800B0 ticks per 24 hours. In more recent systems, the default
> granularity is finer; and the fineness can be adjusted by program
> demand. Indeed, a program relying on the fineness that it
> finds may be affected when another process changes the corresponding
> timer, AIUI.

You should get yourself informed beyond technical standards, and avoid
making hasty generalizations if you want to be taken seriously. I happen to
be running a PC laptop with a Linux kernel I have configured and compiled
myself which has a finer granularity, a timer frequency of 1000 Hz to be
precise (which is recommended for desktop systems). That does not have
anything to do with the CPU time granted to a process by the operating
system (which is certainly not every millisecond, since other processes
running on that machine want that CPU time, too), especially not with the
resolution of setTimeout()/setInterval() which is determined by the
implementation (and Mozilla-based ones will not go below 10 milliseconds
AISB).
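That implementation-defined clamp can be probed empirically. A rough diagnostic sketch (the function name is invented, and results vary with engine and system load, so treat any figure it reports as approximate):

```javascript
// Rough probe of the effective setTimeout(..., 0) granularity:
// chains `samples` zero-delay timeouts and reports the average gap
// between firings in milliseconds. Purely diagnostic.
function probeTimerGranularity(samples, done) {
  var times = [Date.now()];
  function next() {
    times.push(Date.now());
    if (times.length > samples) {
      var total = times[times.length - 1] - times[0];
      done(total / samples); // average ms per zero-delay timeout
    } else {
      setTimeout(next, 0);
    }
  }
  setTimeout(next, 0);
}
```

On a clamped implementation the reported average settles near the clamp value rather than near zero.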

> [snip irrelevance]


PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f806at$ail$1$8300dec7(a)news.demon.co.uk>