DOM timers and the system clock.


Ry Nohryb

Hi,

Do you think that adjusting the operating system's date/time ought to
affect a setTimeout(f, ms) or a setInterval(f, ms) ?

I don't.

I mean, when I code a setTimeout I'm saying "do this as soon as x ms
have elapsed", not do this at the time +new Date()+ x milliseconds,
right ?
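
Something like this, just to spell out the two readings (placeholder names):

var ms = 5000;
function f() { console.log("fired"); }

// reading 1 (what I mean): fire once ms milliseconds of real time have elapsed
setTimeout(f, ms);

// reading 2 (what the browsers apparently do): fire when the system clock reaches
// this wall-clock instant; setting the clock back pushes that instant further away
var firesAtWallClock = +new Date() + ms;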

But, in every browser I've tested this in (NS Navigators, iCabs, the
latest Chromes, Safaris and Firefoxes), in all of them except in Opera
(kudos to Opera !), any pending setTimeouts and setIntervals go nuts
just by adjusting the system's clock to somewhere in the past.

Try it yourself: open this: http://jorgechamorro.com/cljs/100/ and
see what happens to the timers as soon as you set the system clock
back, for example by an hour or a day (to yesterday).
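
Or, independent of that page, a minimal sketch to watch it with:

// log the wall-clock spacing of a 1-second interval, then set the system
// clock back and see what happens to the ticks
var last = +new Date();
setInterval(function () {
  var now = +new Date();
  console.log("tick, " + (now - last) + " ms since the previous one (per the wall clock)");
  last = now;
}, 1000);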

Everywhere but in Opera. That's a bug, right ? Or not ? What do you
think ? Are there any (valid) excuses for that ? Or should we open a
bunch of tickets in their respective bugzillas ?

TIA,
 

Jeremy J Starcher

Hi,

Do you think that adjusting the operating system's date/time ought to
affect a setTimeout(f, ms) or a setInterval(f, ms) ?

In many other situations, adjusting the system clock leads to
unpredictable events, including possible refiring or skipping of cron
jobs and the like.

It is perfectly reasonable for software to do something unpredictable
when something totally unreasonable happens.

In other words: DON'T DO THAT.

If keeping systems in sync is important, there are ways to keep clocks in
sync without ever setting a system clock backwards -- you just "slow down"
the clock until it finally catches up. While that has side effects, they
are a lot gentler than those of most other approaches, but this gets off topic.
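
A rough sketch of the slewing idea, with made-up numbers:

// instead of stepping the clock by the full error at once, work the error
// off gradually at a small rate, so time never jumps and never runs backwards
var errorMs = -23556;        // clock found to be ~23.6 s ahead of the reference (made-up)
var slewMsPerSecond = 0.5;   // shave off at most 0.5 ms of error per elapsed second (made-up)

function errorCorrectedSoFar(elapsedSeconds) {
  var worked = Math.min(Math.abs(errorMs), elapsedSeconds * slewMsPerSecond);
  return errorMs < 0 ? -worked : worked;
}

console.log(errorCorrectedSoFar(60));   // -30 (ms) corrected after one minute of slewing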

By the same token, setTimeout and setInterval fail to fire while the
computer is suspended or hibernating, UAs act differently when woken
from sleep, and all the UAs I've tested fail when I remove the onboard RAM.

I don't.

I'm sorry to hear that.
I mean, when I code a setTimeout I'm saying "do this as soon as x ms
have elapsed", not do this at the time +new Date()+ x milliseconds,
right ?

But what you say and what the computer understands are not the same
thing. If the OS only has one timer, how do you suggest it keeps track
of time passage besides deciding to start at:
+new Date()+ x milliseconds?
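
i.e. something along the lines of this sketch (not any browser's actual code):

// naive approach: remember an absolute wall-clock deadline and fire when the
// clock reaches it
function naiveSetTimeout(fn, ms) {
  var deadline = +new Date() + ms;
  (function poll() {
    if (+new Date() >= deadline) {
      fn();
    } else {
      setTimeout(poll, 10);   // the real setTimeout is only used here to re-check
    }
  })();
}
// set the system clock back an hour while waiting and +new Date() falls an hour
// short of the deadline, so fn fires roughly an hour late - the behavior the OP sees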


Everywhere but in Opera.

It would be mildly interesting to see their implementation of that.

That's a bug, right ? Or not ?

No.

What do you think ?

I think if I can't say anything nice ...
Are there any (valid) excuses for that ? Or should we open a
bunch of tickets in their respective bugzillas ?

If I were part of the Mozilla team, I think I'd enjoy getting a bug
report on that one. It's the sort of thing I'd email around the office
for a good laugh of the day and a bit of stress relief.
 

Ry Nohryb

(...)
If I were part of the Mozilla team, I think I'd enjoy getting a bug
report on that one. It's the sort of thing I'd email around the office
for a good laugh of the day and a bit of stress relief.

Sure you'd do that, until you discover that the system resets the time
every now and then and suddenly your brain turns on and your idiotic
smile disappears completely:

system.log :

23/06/10 00:32:49 ntpd[13] time reset -23.5561 s
 

Ry Nohryb

(...) setTimeout and setInterval fail to fire while the
computer is suspended or hibernating, UAs act differently when woken
from sleep, and all the UAs I've tested fail when I remove the onboard RAM.
(...)

Wow! How come ?
 

Jeremy J Starcher

(...)
If I were part of the Mozilla team, I think I'd enjoy getting a bug
report on that one. It's the sort of thing I'd email around the office
for a good laugh of the day and a bit of stress relief.

Sure you'd do that, until you discover that the system resets the time
every now and then and suddenly your brain turns on and your idiotic
smile disappears completely:

system.log :

23/06/10 00:32:49 ntpd[13] time reset -23.5561 s

I'm not an expert on ntpd, but it should only do a 'hard' set the first
time it adjusts the clock; after that it should skew the system time into
sync.
 

Ry Nohryb

Sure you'd do that, until you discover that the system resets the time
every now and then and suddenly your brain turns on and your idiotic
smile disappears completely:
system.log :
23/06/10 00:32:49  ntpd[13]        time reset -23.5561 s

I'm not an expert on ntpd, but it should only do a 'hard' set the first
time it adjusts the clock; after that it should skew the system time into
sync.

And what about daylight saving time ? That's +/- 1 hour at once ...
 

Jeremy J Starcher

Sure you'd do that, until you discover that the system resets the
time every now and then and suddenly your brain turns on and your
idiotic smile disappears completely:
system.log :
23/06/10 00:32:49  ntpd[13]        time reset -23.5561 s

I'm not an expert on ntpd, but it should only do a 'hard' set the first
time it adjusts the clock; after that it should skew the system time into
sync.

And what about daylight saving time ? That's +/- 1 hour at once ...

Depends upon the OS. My Linux box stores the time internally as GMT and
applies rules to translate to the local time zone. There is no
adjustment for daylight saving time.
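
The same split exists in ECMAScript's Date, for what it's worth:

// a Date holds a zone-independent epoch value; only the rendering is local
var d = new Date(Date.UTC(2010, 5, 26, 0, 0, 0));   // months are 0-based: 5 = June
console.log(d.getTime());             // 1277510400000 everywhere, DST or not
console.log(d.toUTCString());         // the instant rendered in UTC
console.log(d.toString());            // the instant rendered with local zone/DST rules
console.log(d.getTimezoneOffset());   // minutes between UTC and local time at that instant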

You might have a point about Windows-based machines -- that sounds like a
crappy enough approach that they'd use it.
 

Thomas 'PointedEars' Lahn

John said:
It doesn't matter what we think. What does the specification say ?

It's "DOM Level 0". There is no specification in the sense of a Web
standard (yet).


PointedEars
 

Dr J R Stockton

It, certainly, ought not to do so. But it might do so.
In many other situations, adjusting the system clock leads to
unpredictable events, including possible refiring or skipping of cron
jobs and the like.

AIUI, CRON jobs are set to fire at specific times. A CRON job set to
fire at 01:30 local should fire whenever 01:30 local occurs. A wise
user does not mindlessly set an event to occur during the missing Spring
hour or the doubled Autumn hour, though in most places avoiding Sundays
will prevent a problem.
It is perfectly reasonable for software to do something unpredictable
when something totally unreasonable happens.

But changing the displayed time should NOT affect an interval specified
as a duration.
But what you say and what the computer understands are not the same
thing. If the OS only has one timer, how do you suggest it keeps track
of time passage besides deciding to start at:
+new Date()+ x milliseconds?

By continuing to count its GMT millisecond timer in the normal way and
using it for durations. The displayed time is obtained from a value
offset from that by a time-zone-dependent amount and by a further 18e5
or 36e5 ms in Summer.
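
Today's browsers expose both kinds of reading, which makes the distinction easy
to see (performance.now() is newer than this thread, so treat it as an aside):

// wall-clock reading: jumps when the system clock is adjusted
var wallStart = +new Date();
// monotonic reading: keeps counting regardless of clock adjustments
var monoStart = (typeof performance !== "undefined" && performance.now) ? performance.now() : null;

setTimeout(function () {
  console.log("wall clock: " + ((+new Date()) - wallStart) + " ms elapsed");
  if (monoStart !== null) {
    console.log("monotonic:  " + (performance.now() - monoStart) + " ms elapsed");
  }
}, 5000);
// adjust the system clock during the 5 s wait and the two figures disagree
// by roughly the size of the adjustment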


A PC has at least two independent clocks, one in the RTC and one using
different hardware (read PCTIM003.TXT, which Google seems to find). The
same seems likely to be true for any computer designed to be turned on
and off.
 

VK

It's "DOM Level 0".  There is no specification in the sense of a Web
standard (yet).

There is a working draft from 2006, left to languish ever since - it
contains nothing of value but a lot of question marks:
http://www.w3.org/TR/Window/#window-timers

At the very least on Windows/IE, Javascript is heavily based on
various C++ runtimes. In particular,
%System%\System32\jscript.dll imports Msvcrt.dll and from there gets
all its floating-point math and Date manipulation.
So it would be interesting to know how the C++ runtime's own timers are
implemented and how they react to an OS time change. If the OP's
observations are correct, then the "canonical" setTimeout explanation that
goes back to the Netscape docs is probably incomplete to the point of being
misleading; the proper explanation would be (the added part marked with
asterisks): "The setTimeout method evaluates an expression or calls a
function after a specified amount of time *since the timer was set, based
on the current system time*"
 

VK

"The setTimeout method evaluates an expression or calls a function after a
specified amount of time * since the timer has been set based on the
current system time *"

In other words,
window.setTimeout("window.alert(1)", 10000);
executed at, say, 2010-06-26 00:01:00.000 LST (Local System Time)
literally means:

1. Get LST / 2010-06-26 00:01:00.000

2. Get delay (10000 ms = 10 sec)

3. Set a C++ runtime IRQ for 2010-06-26 00:01:10.000 to notify the
Javascript engine * whenever that moment of LST arrives *.
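
And that is exactly where a backwards clock step hurts; rough arithmetic with
made-up numbers:

var setAt    = 1277510460000;   // 2010-06-26 00:01:00 UTC as epoch ms (illustrative)
var deadline = setAt + 10000;   // step 3: fire when the clock shows this instant

// 5 s of real time later the user sets the clock back one hour
var clockNow = setAt + 5000 - 3600000;

console.log((deadline - clockNow) / 1000 + " s of waiting left");   // ~3605 s instead of ~5 s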
 

VK

In other words,
window.setTimeout("window.alert(1)", 10000);
executed at, say, 2010-06-26 00:01:00.000 LST (Local System Time)
literally means:

1. Get LST / 2010-06-26 00:01:00.000

2. Get delay (10000 ms = 10 sec)

3. Set a C++ runtime IRQ for 2010-06-26 00:01:10.000 to notify the
Javascript engine * whenever that moment of LST arrives *.

As a Google search shows, I am right. C/C++ do not have built-in timer
functionality, and their add-on implementations in OSes are based on
time stamps (timePlaced/timeCalled), not on some absolute coordinate. If
so, then it is a global laziness oops of non-real-time OSes.

It may also be interesting that in Windows environments the minimum
delay is 10 ms; anything smaller will automatically be set to 10 ms, so
window.setTimeout("foo()", 0) is perfectly valid but equivalent to
window.setTimeout("foo()", 10).

Also, the maximum delay in Windows environments is 2147483647 ms =~ 596
hours =~ 24.8 days; any bigger value will be set to 2147483647 ms. See
USER_TIMER_MINIMUM and USER_TIMER_MAXIMUM at
http://msdn.microsoft.com/en-us/library/ms644906(v=VS.85).aspx
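
Easy enough to measure in any given browser (the 10 ms figure above is the
claim being tested):

// how long does a nominally 0 ms timeout actually take here?
var t0 = +new Date();
setTimeout(function () {
  console.log("requested 0 ms, waited about " + ((+new Date()) - t0) + " ms");
}, 0);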
 

Thomas 'PointedEars' Lahn

Dr said:
AIUI, CRON jobs are set to fire at specific times. A CRON job set to
fire at 01:30 local should fire whenever 01:30 local occurs. A wise
user does not mindlessly set an event to occur during the missing Spring
hour or the doubled Autumn hour, though in most places avoiding Sundays
will prevent a problem.

An even wiser person lets their system, and their cron jobs, run on UTC,
which avoids the DST issue, and leaves the textual representation of dates
to the locale.
But changing the displayed time should NOT affect an interval specified
as a duration.

Duration is defined as the interval between two points in time. The only
way to keep the counter up-to-date is to check against the system clock. If
the end point of the interval changes as the system clock is modified, the
result as to whether and when the duration is over must become false.
By continuing to count its GMT millisecond timer in the normal way and
using it for durations.

Since usually a process is not being granted CPU time every millisecond,
this is not going to work. I find it surprising to read this from you as
you appeared to be well-aware of timer tick intervals at around 50 ms,
depending on the system.


PointedEars
 

VK

I received an answer from Boris Zbarsky (one of the leading Mozilla
project developers) at mozilla.dev.tech.js-engine

http://groups.google.com/group/mozilla.dev.tech.js-engine/msg/4e6df47759cc7018

Copy:
var timerID = window.setTimeout(doIt, 20000);
executed at the moment 2010-XX-XX 23:50:0000,
and within the next 20 secs the OS time is changed by a DST switch or
manually. Will it be executed roughly 20000 ms after timerID was set,
irrespective of the OS time; somewhere around 2010-XX-XY 00:10:0000 of
the old system time; or somewhere around 2010-XX-XY 00:10:0000 of the new
system time? In other words, is the queue based on an absolute scale,
immutable time stamps, or mutable time stamps?

1) This is a DOM issue, not a JSEng one.
2) Right now, the new system time would determine firing time (though
note that "time" means "time since epoch", so is unaffected by
DST changes, changes of OS timezone, or the like; only actual
changes to the actual clock matter, not to the user-visible
display).
3) The information in item 2 is subject to change. See
https://bugzilla.mozilla.org/show_bug.cgi?id=558306
 

VK

So, to summarize the actual setTimeout/setInterval behavior in response
to the OP's question:

setTimeout / setInterval are based on time stamps in epoch time
(milliseconds since 1970-01-01T00:00:00Z, ISO 8601). This is why a system
time zone change or DST change does not affect timers, and why a system
clock change breaks the timer functionality.

Timers were not, are not and will not be based on a relative scale, as in:
window.setTimeout("foo()", 10000);
// WRONG ASSUMPTION:
// foo() will be executed 10 sec
// after the window.setTimeout("foo()", 10000)
// statement was executed
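
A quick probe of which of the two semantics a given browser implements (the
numbers are made up; set the clock back a few minutes during the wait):

var armedAt = +new Date();
setTimeout(function () {
  var wallDelta = +new Date() - armedAt;
  console.log("fired; the wall clock claims " + wallDelta + " ms elapsed");
  // duration semantics:   fires after ~30 s of real time; wallDelta is ~30000 minus the step
  // wall-clock semantics: fires only once the clock re-reaches the old deadline; wallDelta is ~30000
}, 30000);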
 

Dr J R Stockton

In comp.lang.javascript message <[email protected]>,
Sat, 26 Jun 2010 20:29:51, Thomas 'PointedEars' Lahn posted:
An even wiser person lets their system, and their cron jobs, run on UTC,
which avoids the DST issue, and leaves the textual representation of dates
to the locale.

A peculiar attitude (as is customary).

The Germans, by EU law, adjust their official time in Spring and Autumn.
No doubt the vast majority of the population will shift their daily
lives accordingly. But perhaps you do not. A computer should be set to
use whichever sort of time is most appropriate to its usage.

Duration is defined as the interval between two points in time. The only
way to keep the counter up-to-date is to check against the system clock. If
the end point of the interval changes as the system clock is modified, the
result as to whether and when the duration is over must become false.

You are displaying a lack of understanding of computers in general and
also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
Resolution 1.

Duration is measured in SI seconds, or multiples/submultiples thereof.
If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
be no surprise).

Since usually a process is not being granted CPU time every millisecond,
this is not going to work. I find it surprising to read this from you as
you appeared to be well-aware of timer tick intervals at around 50 ms,
depending on the system.

You appear to be still running DOS or Win98, in which there are indeed
0x1800B0 ticks per 24 hours. In more recent systems, the default
granularity is finer; and the fineness can be adjusted
by program demand. Indeed, a program relying on the fineness that it
finds may be affected when another process changes the corresponding
timer, AIUI.

Next time that you read PCTIM003, read also its date.

Perhaps you have heard of interrupts? In a bog-standard PC, from the
earliest days, it has been possible to get interrupts at up to 32 kHz
from the RTC - consult the RS 146816 data sheet or equivalent. CRON
ought not to rely on being awoken at frequent intervals so that it may
look at the clock; it should be awoken from passivity by the timer event
queue (or whatever it may be called) of the system, and should pre-empt
whatever else may currently have an active time slice.

A sensibly-written CRON would enable events to be scheduled by UTC and
by local time and by duration (SI time) from request.
 

Ry Nohryb

So, to summarize the actual setTimeout/setInterval behavior in response
to the OP's question:

setTimeout / setInterval are based on time stamps in epoch time
(milliseconds since 1970-01-01T00:00:00Z, ISO 8601). This is why a system
time zone change or DST change does not affect timers, and why a system
clock change breaks the timer functionality.

Not in Operas. Kudos to them. A setTimeout(f, 100) means call f in
100 ms. If not, I'd rather write setTimeout(f, (+new Date()) + 100).
 

Thomas 'PointedEars' Lahn

Dr said:
Thomas 'PointedEars' Lahn posted:

A peculiar attitude (as is customary).

The Germans, by EU law, adjust their official time in Spring and Autumn.
No doubt the vast majority of the population will shift their daily
lives accordingly. But perhaps you do not. A computer should be set to
use whichever sort of time is most appropriate to its usage.

You miss the point. It is not necessary for the system clock of a computer
to use local time in order for the operating system to display local time.
Not even in Germany, which you claim to know so well (but in fact haven't
got the slightest clue about).
You are displaying a lack of understanding of computers in general

Is that so? A usual PC will not grant CPU time to a process every
millisecond (so that this process could count down reliably per your
suggestion), so other means are necessary to determine how much time
has passed.
and also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
Resolution 1.

Duration is measured in SI seconds, or multiples/submultiples thereof.
If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
be no surprise).

You are missing the point completely.
You appear to be still running DOS or Win98, in which there are indeed
0x1800B0 ticks per 24 hours. In more recent systems, the default
granularity is finer; and the fineness can be adjusted
by program demand. Indeed, a program relying on the fineness that it
finds may be affected when another process changes the corresponding
timer, AIUI.

You should get yourself informed beyond technical standards, and avoid
making hasty generalizations if you want to be taken seriously. I happen to
be running a PC laptop with a Linux kernel I have configured and compiled
myself which has a finer granularity, a timer frequency of 1000 Hz to be
precise (which is recommended for desktop systems). That does not have
anything to do with the CPU time granted to a process by the operating
system (which is certainly not every millisecond, since other processes
running on that machine want that CPU time, too), especially not with the
resolution of setTimeout()/setInterval() which is determined by the
implementation (and Mozilla-based ones will not go below 10 milliseconds
AISB).
[snip irrelevance]


PointedEars
 
