From: Ingo Molnar on

* Ting Yang <tingy(a)cs.umass.edu> wrote:

> Authors of this paper proposed a scheduler: Earliest Eligible Virtual
> Deadline First (EEVDF). EEVDF uses exactly the same method as CFS to
> track the execution of each running task. The only difference between
> EEVDF and CFS is that EEVDF tries to be _deadline_ fair while CFS is
> _start-time_ fair. [...]

Well, this is a difference but note that it's far from being the 'only
difference' between CFS and EEVDF:

- in CFS you have to "earn" your right to be on the CPU, while EEVDF
gives out timeslices (quanta)

- EEVDF concentrates on real-time (SCHED_RR-alike) workloads where they
know the length of work units - while CFS does not need any knowledge
about 'future work', it measures 'past behavior' and makes its
decisions based on that. So CFS is purely 'history-based'.

- thus in CFS there's no concept of "deadline" either (the 'D' from
EEVDF).

- EEVDF seems to be calculating timeslices in units of milliseconds,
while CFS follows a very strict 'precise' accounting scheme on the
nanoseconds scale.

- the EEVDF paper is also silent on SMP issues.

- it seems EEVDF never existed as a kernel scheduler; it was a
user-space prototype under FreeBSD with simulated workloads. (Have
they released that code at all?)

The main common ground seems to be that both CFS and EEVDF share the
view that the central metric is 'virtual time' proportional to the load
of the CPU (called the 'fair clock' in CFS) - but even for this the
details of the actual mechanism differ: EEVDF uses 1/N while CFS (since
-v8) uses a precise, smoothed and weighted load average that is close to
(and reuses portions of) Peter Williams's load metric used in smp-nice.

The EEVDF mechanism could perhaps be more appropriate for real-time
systems (the main target of their paper), while the CFS one i believe is
more appropriate for general purpose workloads.

So i'd say there's more in common between SD and CFS than between EEVDF
and CFS.

So ... it would certainly be interesting to implement the EEVDF paper
on top of CFS (or whatever way you'd like to try it) and turn it into
a real kernel scheduler - the two mechanisms are quite dissimilar and
they could behave wildly differently on various workloads. Depending on
test results we could use bits of EEVDF's approaches in CFS too, if it
manages to out-schedule CFS :-)

(your observation about CFS's fork handling is correct nevertheless!)

Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Bill Huey on
On Wed, May 02, 2007 at 11:18:45PM -0400, Ting Yang wrote:
> I just want to point out that ->wait_runtime, in fact, stores the lag
> of each task in CFS, except that it is also used by other things, and
> occasionally tweaked (heuristically?). Under normal circumstances the
> sum of the lags of all active tasks in such a system should be a
> constant 0. The lag information is equally important to EEVDF, when
> some tasks leave the system (become inactive) carrying a certain
> amount of lag. The key point here is that we have to spread that lag
> (whether negative or positive) over all remaining tasks, so that the
> fairness of the system is preserved. I think the CFS implementation
> does not handle this properly.
>
> I am running out of time today :-( I will write an email about CFS -v8
> tomorrow, describing 2 issues in CFS I found related to this.

Interesting. I haven't looked at the code carefully, but it wouldn't
surprise me if this were the case and it led to odd corner cases.

I'm eagerly awaiting your analysis and explanation.

bill

From: Ingo Molnar on

* Zoltan Boszormenyi <zboszor(a)dunaweb.hu> wrote:

> I started up 12 glxgears to see the effect of CFS v8 on my Athlon64 X2
> 4200.
>
> Without this patch all the GL load was handled by the second core; the
> system only stressed the first core if other processes were also
> started, e.g. a kernel compilation. With this patch the load is evenly
> balanced across the two cores all the time.

ok, i didn't realize that it would affect x86_64 too. I'll do a -v9
release with this fix included.

> [...] And while doing make -j4 on the kernel, the 12 gears are still
> spinning at 185+ FPS and there are only slightly visible hiccups.
> Switching between workspaces, i.e. refreshing the large windows of
> Thunderbird and Firefox, is done very quickly, and the whole system is
> exceptionally responsive.

great! So it seems -v8 does improve OpenGL handling too :-)

> Thanks for this great work!

you are welcome :)

Ingo
From: Damien Wyart on
Hello,

* Ingo Molnar <mingo(a)elte.hu> [2007-05-03 15:02]:
> great! So it seems -v8 does improve OpenGL handling too :-)

What are your thoughts/plans concerning merging CFS into mainline? Is
it a bit premature to get it into 2.6.22? I remember Linus was open to
changing the default scheduler rapidly (the discussion was about sd at
that time) to get more testing than in -mm.

--
Damien Wyart
From: Srivatsa Vaddagiri on
On Thu, May 03, 2007 at 10:50:15AM +0200, Ingo Molnar wrote:
> - EEVDF concentrates on real-time (SCHED_RR-alike) workloads where they
> know the length of work units

This is what I was thinking when I wrote earlier that EEVDF expects
each task to specify the "length of each new request"
(http://lkml.org/lkml/2007/5/2/339).

The other observation that I have of EEVDF is that it tries to be fair
on the virtual time scale (each client will get 'wi' real units in 1
virtual unit), whereas sometimes fairness on the real-time scale also
matters. For example, a multimedia app would call the scheduler fair
if it receives at least 1 ms of CPU time every 10 *real* milliseconds
(its frame time). A rogue user (or workload) that runs a fork bomb may
skew this fairness for that multimedia app on the real-time scale
under EEVDF?

--
Regards,
vatsa