From: novickivan on
Hello,

I was wondering if it is valid to use select as a sleep call. When I
use select and try to sleep, it seems the elapsed time is always 4
milliseconds at a minimum. I cannot sleep for only 1 millisecond.
And if I set the sleep to longer than 4 milliseconds, the elapsed time
is also greater than the time I set.

Does anyone know why there would be any fixed overhead in using select
that would make it always 4 milliseconds?

My test program is attached.

Cheers,
Ivan


#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <sys/time.h>
#include <time.h>   /* clock_gettime(); link with -lrt on older glibc */

struct timespec diff_timespec(struct timespec start, struct timespec end);
long long millisec_elapsed(struct timespec diff);

/* Sleep for microsec microseconds (microsec < 1000000) using select(). */
void test1(long microsec)
{
    struct timeval delay;
    delay.tv_sec = 0;
    delay.tv_usec = microsec;
    (void) select(0, NULL, NULL, NULL, &delay);
}

/* Sleep for microsec microseconds (microsec < 1000000) using nanosleep(). */
void test2(long microsec)
{
    struct timespec delay;
    delay.tv_sec = 0;
    delay.tv_nsec = microsec * 1000;
    nanosleep(&delay, NULL);
}

int
main(int argc, char **argv)
{
    struct timespec start;
    struct timespec end;
    struct timespec diff;
    int i;

    /* 1000 one-microsecond sleeps via select(). */
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < 1000; ++i) {
        test1(1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    diff = diff_timespec(start, end);
    printf("select() sleeps took %lld milliseconds\n",
           millisec_elapsed(diff));

    /* 1000 one-microsecond sleeps via nanosleep(). */
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < 1000; ++i) {
        test2(1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    diff = diff_timespec(start, end);
    printf("nanosleep() sleeps took %lld milliseconds\n",
           millisec_elapsed(diff));

    return 0;
}

/* end - start, performing a carry from the nanosecond field to the
   second field as in ordinary subtraction. */
struct timespec diff_timespec(struct timespec start, struct timespec end)
{
    struct timespec result;

    if (end.tv_nsec < start.tv_nsec) {
        result.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
        result.tv_sec = end.tv_sec - 1 - start.tv_sec;
    }
    else {
        result.tv_nsec = end.tv_nsec - start.tv_nsec;
        result.tv_sec = end.tv_sec - start.tv_sec;
    }

    return result;
}

long long millisec_elapsed(struct timespec diff)
{
    return ((long long)diff.tv_sec * 1000) + (diff.tv_nsec / 1000000);
}


From: Nicolas George on
"novickivan(a)gmail.com" wrote in message
<71344f54-8b08-4a9f-a3dd-5870e22acdcf(a)u19g2000prh.googlegroups.com>:
> I was wondering if it is valid to use select as a sleep call. When I
> use select and try to sleep, it seems the elapsed time is always 4
> milliseconds at a minimum. I cannot sleep for only 1 millisecond.
> And if I set the sleep to longer than 4 milliseconds, the elapsed time
> is also greater than the time I set.
>
> Does anyone know why there would be any fixed overhead in using select
> that would make it always 4 milliseconds?

A lot of schedulers use a fixed-interval timer interrupt to implement all
time-related scheduling. 4 ms means 250 Hz, which is a common value for the
timer interrupt on desktop setups.

You should be more specific about the exact OS you use, including the kernel
configuration.
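
If you want to see what granularity the kernel advertises to user
space, clock_getres() is one place to look. A minimal sketch - note
that on a kernel with high-resolution timers the reported resolution
can be much finer than the scheduler tick, so treat it only as a hint:

#include <stdio.h>
#include <time.h>

/* Print the resolution of CLOCK_MONOTONIC. On a high-resolution-timer
   kernel this is often 1 ns even when the scheduler tick is 4 ms. */
int main(void)
{
    struct timespec res;

    if (clock_getres(CLOCK_MONOTONIC, &res) != 0) {
        perror("clock_getres");
        return 1;
    }
    printf("CLOCK_MONOTONIC resolution: %ld.%09ld s\n",
           (long)res.tv_sec, res.tv_nsec);
    return 0;
}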
From: Jens Thoms Toerring on
novickivan(a)gmail.com <novickivan(a)gmail.com> wrote:
> I was wondering if it is valid to use select as a sleep call.

Yes, that's one thing select() gets used for.

> When I use select and try to sleep, it seems the elapsed time is always
> 4 milliseconds at a minimum. I cannot sleep for only 1 millisecond.
> And if I set the sleep to longer than 4 milliseconds, the elapsed time
> is also greater than the time I set.

> Does anyone know why there would be any fixed overhead in using select
> that would make it always 4 milliseconds?

You have to consider that you're using a multi-tasking system,
i.e. a system on which several processes run "in parallel" - not
really at the same time, but with the system just making it look
like that by quickly switching between the different processes.
Thus when your process "runs" it just runs for a short time, a
so-called timeslice, then it gets suspended and some other process
is run, then your process may get run again for the duration of a
timeslice, suspended again etc. until it's finished.

Now when your process puts itself to sleep, e.g. by calling
select() with just a timeout or by calling usleep() etc., then
it tells the system: "I have nothing to do at the moment, you
may start another process while I'm waiting." And unless the
other process that then will get run also goes to sleep, your
process has to wait (at least) until the other process has used
up its timeslice. Thus, when you ask for your process to be put
to sleep, you can't expect it to be rescheduled exactly when the
requested time is up. The timeout you pass to select() (or
usleep() or similar functions) is thus only a lower limit, i.e.
your process won't be woken up before it's over - but it can take
a lot longer than that before your process is run again.

Switching between processes takes time. If timeslices are very
short, a lot of the CPU time will be wasted just on that. Thus
the length of the timeslice is a compromise between not spending
too much time on task switching on the one hand and making it
look to the user as if all processes run at the same time on
the other. The 4 ms you have seen looks like a reasonable value
for a timeslice - some years ago you normally would have had at
least 10 ms, but with newer, faster machines timeslices of 4 ms
or 1 ms are becoming more and more common. On some systems the
length of the timeslice can be set when compiling the kernel
(e.g. on Linux you can select between 100 Hz, 250 Hz and 1 kHz).
Going beyond that is possible but would make the machine seem a
lot slower without any benefit for most users.

So, you have to accept that with all normal kinds of "sleeping"
you can only specify a lower bound for the time your process will
sleep, but there's no upper limit - the more processes are waiting
to be run the longer it may take (if you want to experiment try to
get your machine in a state where it's running out of memory and
starts to swap heavily and see how long those "sleeps" take then).
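
If what you need is a steady period rather than exact sleep lengths,
one option on POSIX systems that provide clock_nanosleep() is to
sleep to absolute deadlines, so the lateness of one wakeup doesn't
accumulate into the next. A minimal sketch, assuming CLOCK_MONOTONIC
and a 10 ms period:

#include <time.h>

/* Sketch: run a task every 10 ms without cumulative drift by sleeping
   to absolute deadlines. Each wakeup may still be late by up to a
   scheduler tick, but the lateness does not accumulate. */
int main(void)
{
    struct timespec next;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (i = 0; i < 100; ++i) {
        next.tv_nsec += 10 * 1000 * 1000;        /* +10 ms */
        if (next.tv_nsec >= 1000000000) {
            next.tv_nsec -= 1000000000;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* ... do the periodic work here ... */
    }
    return 0;
}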

Regards, Jens
--
\ Jens Thoms Toerring ___ jt(a)toerring.de
\__________________________ http://toerring.de
From: novickivan on
On Feb 19, 4:27 pm, Nicolas George <nicolas$geo...(a)salle-s.org> wrote:
> "novicki...(a)gmail.com"  wrote in message
>
> <71344f54-8b08-4a9f-a3dd-5870e22ac...(a)u19g2000prh.googlegroups.com>:
>
> > I was wondering if it is valid to use select as a sleep call.  When I
> > use select and try to sleep, it seems the elapsed time is always 4
> > milliseconds at a minimum.  I cannot sleep for only 1 millisecond.
> > And if I set the sleep to longer than 4 milliseconds, the elapsed time
> > is also greater than the time I set.
>
> > Does anyone know why there would be any fixed overhead in using select
> > that would make it always 4 milliseconds?
>
> A lot of schedulers use a fixed-interval timer interrupt to implement all
> time-related scheduling. 4 ms means 250 Hz, which is a common value for the
> timer interrupt on desktop setups.
>
> You should be more specific about the exact OS you use, including the kernel
> configuration.

Ahhh, OK. I am using SUSE Linux Enterprise 11. I guess my box is set
to a 250 Hz interval timer.

Thanks!

Cheers,
Ivan Novick
From: Ersek, Laszlo on
In article <71344f54-8b08-4a9f-a3dd-5870e22acdcf(a)u19g2000prh.googlegroups.com>, "novickivan(a)gmail.com" <novickivan(a)gmail.com> writes:

> I was wondering if it is valid to use select as a sleep call.

Yes, it is; the SUS explicitly specifies this behavior. Link to v2:

http://www.opengroup.org/onlinepubs/007908775/xsh/select.html

It has the benefit of not interfering with (some) other timers.


> When I
> use select and try to sleep, it seems the elapsed time is always 4
> milliseconds at a minimum.

This is also explicitly allowed by the standard -- search for the word
"granularity". (The specific reasons were explained by others.) The
descriptions of other interfaces use the word "resolution" instead.


> I cannot sleep for only 1 millisecond.

http://kerneltrap.org/node/6750

That's an old article, but some parts of it should still be true.


http://www.ibm.com/developerworks/linux/library/l-cfs/#N10083

"For SCHED_RR and SCHED_FIFO policies, the real-time scheduling module
is used (that module is implemented in kernel/sched_rt.c)."


Or try to busy-wait in user-space if applicable.
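
For completeness, a minimal sketch of requesting the SCHED_FIFO class
mentioned above (Linux; needs root or CAP_SYS_NICE, and a spinning
SCHED_FIFO process can monopolize a CPU, so handle with care):

#include <sched.h>
#include <stdio.h>

/* Sketch: move the calling process into the SCHED_FIFO real-time
   class so its sleeps are serviced ahead of normal timesharing
   processes. */
int main(void)
{
    struct sched_param sp;

    sp.sched_priority = 10;    /* 1..99 on Linux; arbitrary example */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    /* ... timing-sensitive work here ... */
    return 0;
}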


Another way that might work is this: accept that you will be woken up
most of the time a bit late, and keep a running balance between where
you should be in time and where you actually are in time. If you're
late, work relentlessly. If you are early (have a positive balance,
i.e. a surplus), then sleep off that surplus. That will most definitely
push you into deficit because of the coarse timer resolution, so you'll
work relentlessly for a bit afterwards. The greater the precision of
your select(), the smaller the amplitude of your deficit will be, and
the shorter the "work relentlessly" bursts will be.

Of course, if your platform is incapable of coping, in the longer term,
with the event rate you have in mind, your deficit will accumulate
without bound.

...

My "udp_copy2" utility implements a packet scheduler that is
"mathematically precise" on the average, ie. it shouldn't drift in the
long term at all and enables a finely tunable packet rate.

http://freshmeat.net/projects/udp_copy2

Quoting from "timer_design.txt" -- take it for what it's worth:

----v----

N[0]           N[1]         N[2]       N[3]        N[4]
|              |            |          |           |
|  |<--S[0]--->|  |<-S[1]-->| |<-S[2]->| |<-S[3]-->|  |
+--+-----------+--+---------+-+--------+-+---------+--+----
|  |           |  |         | |        | |         |  |
|  C[0]        |  C[1]      | C[2]     | C[3]      |  C[4]
|              |            |          |           |

Definitions:

N[I] := Nominal completion time of event #(I-1).
        Also nominal start time of event #I.
        For all I >= 0.

L[I] := N[I+1] - N[I]
        Nominal length of event #I.
        For all I >= 0.

C[I] := Real completion time of event #I.
        For all I >= 0.
        Let C[-1] := N[0].

S[I] := N[I+1] - C[I]
        Amount of time to sleep after real completion of event #I
        until nominal completion time of event #I.
        For all I >= -1.

Thus:

1. S[-1] = N[0] - C[-1]
         = N[0] - N[0]    substituted definition
         = 0.

2. For all I >= 0:
   S[I] = S[I] - S[I-1] + S[I-1]                        introduced S[I-1]
        = S[I-1] + (S[I] - S[I-1])                      regrouped
        = S[I-1] + (N[I+1] - C[I] - (N[I] - C[I-1]))    subst. def.
        = S[I-1] + ((N[I+1] - N[I]) - (C[I] - C[I-1]))  regrouped
        = S[I-1] + (L[I] - (C[I] - C[I-1]))             subst. def.

This means that the amount of time to sleep (S[I]) right after the real
completion of the current event #I (C[I], "now") can be determined from
the previous sleep length (S[I-1]), the nominal length of the current
event (L[I]), and the time passed since the real completion time of the
previous event (C[I] - C[I-1]).

We can check that, for example, this yields for I=0:

S[0] = S[-1] + (L[0] - (C[0] - C[-1]))
     = 0 + ((N[1] - N[0]) - (C[0] - N[0]))
     = N[1] - N[0] - C[0] + N[0]
     = N[1] - C[0]


In the algorithm below, for all I >= 0, exec_event(I) executes event #I and
reveals its nominal length L[I].

Algorithm:

C[-1] := current_time
S[-1] := 0
I := 0

LOOP forever
    L[I] := exec_event(I)
    C[I] := current_time
    S[I] := S[I-1] + (L[I] - (C[I] - C[I-1]))
    exec_sleep(S[I])
    I := I + 1
END LOOP

[...]

the resolution of the time line is 1/K microseconds

----^----
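
A minimal C rendering of the loop above - exec_event() is a
hypothetical hook standing in for "do the work and reveal L[I]", here
in nanoseconds, and the sleep is simply skipped while the balance is
in deficit:

#include <time.h>

/* Hypothetical event hook: does the work for event i and returns its
   nominal length in nanoseconds (L[i] in the derivation above). */
extern long long exec_event(long i);

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* S[i] = S[i-1] + (L[i] - (C[i] - C[i-1])): sleep off any surplus,
   work relentlessly while in deficit. */
void run_events(void)
{
    long long prev_c = now_ns();  /* C[-1] := current_time */
    long long s = 0;              /* S[-1] := 0 */
    long i;

    for (i = 0; ; ++i) {
        long long l = exec_event(i);
        long long c = now_ns();

        s += l - (c - prev_c);
        prev_c = c;
        if (s > 0) {              /* surplus: sleep it off */
            struct timespec ts;
            ts.tv_sec = s / 1000000000LL;
            ts.tv_nsec = s % 1000000000LL;
            nanosleep(&ts, NULL);
        }
        /* s <= 0 is a deficit: loop again immediately */
    }
}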

Cheers,
lacos