From: Bret Towe on
On Thu, Sep 10, 2009 at 9:26 AM, Ingo Molnar <mingo(a)elte.hu> wrote:
>
> * Bret Towe <magnade(a)gmail.com> wrote:
>
>> On Thu, Sep 10, 2009 at 9:05 AM, Peter Zijlstra <a.p.zijlstra(a)chello.nl> wrote:
>> > On Thu, 2009-09-10 at 09:02 -0700, Bret Towe wrote:
>> >>
>> >> Thanks to this thread and others, I've seen several kernel
>> >> tunables that can affect how the scheduler performs/acts, but
>> >> what I don't see after a bit of looking is where all of these
>> >> are documented. Perhaps that's also part of the reason there
>> >> are unhappy people with the current code in the kernel: they
>> >> just don't know how to tune it for their workload.
>> >
>> > The thing is, ideally they should not need to poke at these.
>> > These knobs are under CONFIG_SCHED_DEBUG, and that is exactly
>> > what they are for.
>>
>> Even then, I would think they should be documented, so people can
>> find out which item is hurting their workload and report the bug
>> better, no?
>
> Would be happy to apply such documentation patches. You could also
> help start adding a 'scheduler performance' wiki portion to
> perf.wiki.kernel.org, if you have time for that.

Time isn't so much the issue; it's that I don't have any clue what
any of the options do.
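As a starting point for such a list, the tunable names at least can be
enumerated mechanically. A sketch (which `sched_*` files exist depends
on kernel version and on CONFIG_SCHED_DEBUG being set):

```shell
#!/bin/sh
# Dump the scheduler-related sysctl knobs and their current values.
# Which sched_* files exist varies by kernel version and config.
dump_sched_knobs() {
    for f in /proc/sys/kernel/sched_*; do
        # the glob may not match anything, and sched_domain is a dir
        [ -f "$f" ] && [ -r "$f" ] || continue
        printf '%s = %s\n' "${f##*/}" "$(cat "$f")"
    done
}
dump_sched_knobs
```

Each printed name could then be posted with a question mark for people
to fill in.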
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Ingo Molnar on

* Bret Towe <magnade(a)gmail.com> wrote:

> On Thu, Sep 10, 2009 at 9:26 AM, Ingo Molnar <mingo(a)elte.hu> wrote:
> >
> > * Bret Towe <magnade(a)gmail.com> wrote:
> >
> >> On Thu, Sep 10, 2009 at 9:05 AM, Peter Zijlstra <a.p.zijlstra(a)chello.nl> wrote:
> >> > On Thu, 2009-09-10 at 09:02 -0700, Bret Towe wrote:
> >> >>
> >> >> Thanks to this thread and others, I've seen several kernel
> >> >> tunables that can affect how the scheduler performs/acts, but
> >> >> what I don't see after a bit of looking is where all of these
> >> >> are documented. Perhaps that's also part of the reason there
> >> >> are unhappy people with the current code in the kernel: they
> >> >> just don't know how to tune it for their workload.
> >> >
> >> > The thing is, ideally they should not need to poke at these.
> >> > These knobs are under CONFIG_SCHED_DEBUG, and that is exactly
> >> > what they are for.
> >>
> >> Even then, I would think they should be documented, so people can
> >> find out which item is hurting their workload and report the bug
> >> better, no?
> >
> > Would be happy to apply such documentation patches. You could also
> > help start adding a 'scheduler performance' wiki portion to
> > perf.wiki.kernel.org, if you have time for that.
>
> Time isn't so much the issue; it's that I don't have any clue what
> any of the options do.

One approach would be to list them in an email in this thread with
question marks and let people here fill them in - then help by
organizing and prettifying the result on the wiki.

Asking for clarifications when an explanation is unclear is also
helpful - those who write this code are not the best people to judge
whether technical descriptions are understandable enough.

Ingo
From: Nikos Chantziaras on
On 09/10/2009 09:08 AM, Ingo Molnar wrote:
>
> * Nikos Chantziaras<realnc(a)arcor.de> wrote:
>>
>> With your version of latt.c, I get these results with 2.6-tip vs
>> 2.6.31-rc9-bfs:
>>
>>
>> (mainline)
>> Averages:
>> ------------------------------
>> Max 50 usec
>> Avg 12 usec
>> Stdev 3 usec
>>
>>
>> (BFS)
>> Averages:
>> ------------------------------
>> Max 474 usec
>> Avg 11 usec
>> Stdev 16 usec
>>
>> However, the interactivity problems still remain. Does that mean
>> it's not a latency issue?
>
> It means that Jens's test-app, which demonstrated the issue and
> helped us fix it for him, does not help us fix it for you just yet.
>
> The "fluidity problem" you described might not be a classic latency
> issue per se (which latt.c measures), but a timeslicing / CPU time
> distribution problem.
>
> A slight shift in CPU time allocation can change the flow of tasks
> to result in a 'choppier' system.
>
> Have you tried, in addition to the granularity tweaks you've done,
> to renice mplayer either up or down? (or compiz and Xorg for that
> matter)

Yes. It seems to do what one would expect, but only if two separate
programs are competing for CPU time continuously. For example, when
running two glxgears instances, one at nice 0 and the other at 19, the
first will report ~5000 FPS, the other ~1000. Renicing the second one
from 19 to 0 will result in both reporting ~3000. So nice values
obviously work in distributing CPU time. But the problem doesn't seem
to be available CPU time, since even when running glxgears at nice
-20, it will still freeze during various other interactive tasks
(moving windows etc.)
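The nice half of that experiment can be reproduced without a display:
glxgears needs X, but checking that two instances really run at the
nice values you asked for only needs /proc. A hypothetical sketch
(field 19 of /proc/<pid>/stat is the nice value):

```shell
#!/bin/sh
# Start a command at a given nice level and print the nice value the
# kernel actually gave it. /proc/self/stat is read by the awk child,
# so field 19 is awk's own nice value.
nice_of() {
    nice -n "$1" awk '{ print $19 }' /proc/self/stat
}
echo "first  instance: nice $(nice_of 0)"
echo "second instance: nice $(nice_of 19)"
```

Lowering nice below 0 (the glxgears nice -20 case) additionally needs
root, which is why the sketch only raises it.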


> [...]
> # echo NO_NEW_FAIR_SLEEPERS> /debug/sched_features
>
> Btw., NO_NEW_FAIR_SLEEPERS is something that will turn the scheduler
> into a more classic fair scheduler (like BFS is too).

Setting NO_NEW_FAIR_SLEEPERS (with everything else at default values)
pretty much solves all issues I raised in all my other posts! With this
setting, I can do "nice -n 19 make -j20" and still have a very smooth
desktop and watch a movie at the same time. Various other annoyances
(like the "logout/shutdown/restart" dialog of KDE not appearing at all
until the background fade-out effect has finished) are also gone. So
this seems to be the single most important setting that vastly improves
desktop behavior, at least here.

In fact, I liked this setting so much that I went to
kernel/sched_features.h of kernel 2.6.30.5 (the kernel I use normally
right now) and set SCHED_FEAT(NEW_FAIR_SLEEPERS, 0) (default is 1) with
absolutely no other tweaks (like sched_latency_ns,
sched_wakeup_granularity_ns, etc.). It pretty much behaves like BFS now
from an interactivity point of view. But I've used it only for about an
hour or so, so I don't know if any ill effects will appear later on.
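For the record, the same effect should be reachable at runtime on a
CONFIG_SCHED_DEBUG kernel, without rebuilding; a sketch that tries
both common debugfs mount points (the /debug path from Ingo's command,
and /sys/kernel/debug):

```shell
#!/bin/sh
# Disable NEW_FAIR_SLEEPERS at runtime via the sched_features debugfs
# file (CONFIG_SCHED_DEBUG kernels only; writing it needs root).
disable_new_fair_sleepers() {
    for d in /debug /sys/kernel/debug; do
        f="$d/sched_features"
        if [ -w "$f" ]; then
            echo NO_NEW_FAIR_SLEEPERS > "$f"
            echo "disabled via $f"
            return 0
        fi
    done
    echo "sched_features not writable (need root + CONFIG_SCHED_DEBUG + mounted debugfs)"
    return 0
}
disable_new_fair_sleepers
```

Unlike the sched_features.h edit, this does not survive a reboot.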


> NO_START_DEBIT might be another thing that improves (or worsens :-/)
> make -j type of kernel build workloads.

No effect with this one, at least not one I could observe.

I didn't have the opportunity yet to test and tweak all the other
various settings you listed, but I will try to do so as soon as possible.
From: Ingo Molnar on

* Nikos Chantziaras <realnc(a)arcor.de> wrote:

> On 09/10/2009 09:08 AM, Ingo Molnar wrote:
>>
>> * Nikos Chantziaras<realnc(a)arcor.de> wrote:
>>>
>>> With your version of latt.c, I get these results with 2.6-tip vs
>>> 2.6.31-rc9-bfs:
>>>
>>>
>>> (mainline)
>>> Averages:
>>> ------------------------------
>>> Max 50 usec
>>> Avg 12 usec
>>> Stdev 3 usec
>>>
>>>
>>> (BFS)
>>> Averages:
>>> ------------------------------
>>> Max 474 usec
>>> Avg 11 usec
>>> Stdev 16 usec
>>>
>>> However, the interactivity problems still remain. Does that mean
>>> it's not a latency issue?
>>
>> It means that Jens's test-app, which demonstrated the issue and
>> helped us fix it for him, does not help us fix it for you just yet.
>>
>> The "fluidity problem" you described might not be a classic latency
>> issue per se (which latt.c measures), but a timeslicing / CPU time
>> distribution problem.
>>
>> A slight shift in CPU time allocation can change the flow of tasks
>> to result in a 'choppier' system.
>>
>> Have you tried, in addition to the granularity tweaks you've done,
>> to renice mplayer either up or down? (or compiz and Xorg for that
>> matter)
>
> Yes. It seems to do what one would expect, but only if two separate
> programs are competing for CPU time continuously. For example, when
> running two glxgears instances, one at nice 0 and the other at 19, the
> first will report ~5000 FPS, the other ~1000. Renicing the second one
> from 19 to 0 will result in both reporting ~3000. So nice values
> obviously work in distributing CPU time. But the problem doesn't seem
> to be available CPU time, since even when running glxgears at nice
> -20, it will still freeze during various other interactive tasks
> (moving windows etc.)
>
>
>> [...]
>> # echo NO_NEW_FAIR_SLEEPERS> /debug/sched_features
>>
>> Btw., NO_NEW_FAIR_SLEEPERS is something that will turn the scheduler
>> into a more classic fair scheduler (like BFS is too).
>
> Setting NO_NEW_FAIR_SLEEPERS (with everything else at default
> values) pretty much solves all issues I raised in all my other
> posts! With this setting, I can do "nice -n 19 make -j20" and
> still have a very smooth desktop and watch a movie at the same
> time. Various other annoyances (like the
> "logout/shutdown/restart" dialog of KDE not appearing at all until
> the background fade-out effect has finished) are also gone. So
> this seems to be the single most important setting that vastly
> improves desktop behavior, at least here.
>
> In fact, I liked this setting so much that I went to
> kernel/sched_features.h of kernel 2.6.30.5 (the kernel I use
> normally right now) and set SCHED_FEAT(NEW_FAIR_SLEEPERS, 0)
> (default is 1) with absolutely no other tweaks (like
> sched_latency_ns, sched_wakeup_granularity_ns, etc.). It pretty
> much behaves like BFS now from an interactivity point of view.
> But I've used it only for about an hour or so, so I don't know if
> any ill effects will appear later on.

ok, this is quite an important observation!

Either NEW_FAIR_SLEEPERS is broken, or, if it works, it's not what we
want to do. Other measures in the scheduler protect us from fatal
badness here, but all the finer wakeup behavior is out the window
really.

Will check this. We'll probably start with a quick commit disabling
it first - then re-enabling it if it's fixed (will Cc: you so that
you can re-test with fixed-NEW_FAIR_SLEEPERS, if it's re-enabled).

Thanks a lot for the persistent testing!

Ingo
From: Martin Steigerwald on
Am Mittwoch 09 September 2009 schrieb Peter Zijlstra:
> On Wed, 2009-09-09 at 12:05 +0300, Nikos Chantziaras wrote:
> > Thank you for mentioning min_granularity. After:
> >
> > echo 10000000 > /proc/sys/kernel/sched_latency_ns
> > echo 2000000 > /proc/sys/kernel/sched_min_granularity_ns
>
> You might also want to do:
>
> echo 2000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
>
> That affects when a newly woken task will preempt an already running
> task.

Heh, that scheduler thing again... and unfortunately Con appears to feel
hurt, while I think that Ingo is honest in his offer of collaboration...

While it is fun playing with these numbers, and indeed subjectively
experiencing a more fluid desktop, how about just a

echo "This is a f* desktop!" > /proc/sys/kernel/sched_workload

Or to put it another way: the Linux kernel should not require me to
fine-tune three or more values to get the scheduler to act in a way
that matches my workload.
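Until such a sched_workload switch exists, the thread's three-knob
recipe can at least be bundled into one step; a sketch (values are the
ones Peter suggested above; root is needed for the writes to actually
take effect, otherwise it only prints what it would do):

```shell
#!/bin/sh
# Apply the "desktop" tunable values suggested in this thread.
# Falls back to a dry run when the sysctl file is not writable.
set_knob() {
    knob="/proc/sys/kernel/$1"
    if [ -w "$knob" ]; then
        echo "$2" > "$knob" && echo "set $1 = $2"
    else
        echo "would set $1 = $2"
    fi
}
set_knob sched_latency_ns            10000000
set_knob sched_min_granularity_ns     2000000
set_knob sched_wakeup_granularity_ns  2000000
```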

I am willing to test stuff on my work ThinkPad and my Amarok ThinkPad
in order to help improve this.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7