From: Avi Kivity on
On 09/07/2009 12:49 PM, Jens Axboe wrote:
>
> I ran a simple test as well, since I was curious to see how it performed
> wrt interactiveness. One of my pet peeves with the current scheduler is
> that I have to nice compile jobs, or my X experience is just awful while
> the compile is running.
>

I think the problem is that CFS is optimizing for the wrong thing. It's
trying to be fair to tasks, but tasks are just the building blocks of
jobs, and jobs are what the user actually sees and measures. Your make
-j128 dominates your interactive task by two orders of magnitude. If the
scheduler attempts to bridge this gap using heuristics, it will fail
badly when it misdetects, since it will starve the really important
100-thread job in favor of a task that was misdetected as interactive.

I think that bash (and the GUI shell) should put any new job (for bash,
a pipeline; for the GUI, an application launch from the menu) in a
scheduling group of its own. This way it will have equal weight in the
scheduler's eyes with interactive tasks; one will not dominate the
other. Of course if the cpu is free the compile job is welcome to use
all 128 threads.

(similarly, different login sessions should be placed in different jobs
to prevent a heavily multithreaded screensaver from overwhelming ed).
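A minimal sketch of what the shell-side half of this could look like,
using the existing cgroup tasks-file interface; the /cgroup mount point
and the per-job "job-<pid>" naming here are illustrative assumptions,
not a proposed ABI:

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Put one job (e.g. a bash pipeline) into a scheduling group of its
 * own.  Assumes the cpu cgroup controller is mounted at /cgroup. */
static int move_to_new_group(pid_t pid)
{
	char path[256];
	FILE *f;

	/* one group per job: all tasks of the pipeline share its weight */
	snprintf(path, sizeof(path), "/cgroup/job-%d", (int)pid);
	if (mkdir(path, 0755) && errno != EEXIST)
		return -1;

	/* attaching the leader; forked children inherit the group */
	snprintf(path, sizeof(path), "/cgroup/job-%d/tasks", (int)pid);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", (int)pid);
	fclose(f);
	return 0;
}

The shell would call this on the pipeline's leader before exec; since
children inherit the group, the whole make -j128 then competes for CPU
as a single entity of equal weight with the interactive session.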

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

From: Ingo Molnar on

* Jens Axboe <jens.axboe(a)oracle.com> wrote:

> Agree, I was actually looking into doing joint latency for X
> number of tasks for the test app. I'll try and do that and see if
> we can detect something from that.

Could you please try latest -tip:

http://people.redhat.com/mingo/tip.git/README

(c26f010 or later)

Does it get any better with make -j128 build jobs? Peter just fixed
a bug in the SMP load-balancer that can cause interactivity problems
on large CPU count systems.

Ingo
From: Jens Axboe on
On Mon, Sep 07 2009, Ingo Molnar wrote:
>
> * Jens Axboe <jens.axboe(a)oracle.com> wrote:
>
> > Agree, I was actually looking into doing joint latency for X
> > number of tasks for the test app. I'll try and do that and see if
> > we can detect something from that.
>
> Could you please try latest -tip:
>
> http://people.redhat.com/mingo/tip.git/README
>
> (c26f010 or later)
>
> Does it get any better with make -j128 build jobs? Peter just fixed

The compile 'problem' is on my workstation, which is a dual-core Intel
Core 2. I typically use -j4 on that. On the bigger boxes I don't notice
any interactivity problems, largely because I don't run anything latency
sensitive on those :-)

> a bug in the SMP load-balancer that can cause interactivity problems
> on large CPU count systems.

Worth trying on the dual core box?

--
Jens Axboe

From: Jens Axboe on
On Mon, Sep 07 2009, Jens Axboe wrote:
> > And yes, it would be wonderful to get a test-app from you that would
> > express the kind of pain you are seeing during compile jobs.
>
> I was hoping this one would, but it's not showing anything. I even added
> support for doing the ping and wakeup over a socket, to see if the pipe
> test was doing well because of the sync wakeup we do there. The net
> latency is a little worse, but still good. So no luck in making that app
> so far.

Here's a version that bounces timestamps between a producer and a number
of consumers (clients). Not really tested much, but perhaps someone can
compare this on a box that boots BFS and see what happens.

To run it, use -cX where X is the number of children you wait for a
response from. The max delay across these children is logged for each
wakeup. You can invoke it like so:

$ ./latt -c4 'make -j4'

and it'll dump the max/avg/stddev bounce time after make has completed,
or if you just want to play around, start the compile in one xterm and
do:

$ ./latt -c4 'sleep 5'

to just log for a short period of time. Vary the number of clients to
see how that changes the aggregated latency. One client should be fast;
adding more clients quickly drives the max up.

Additionally, it has -f and -t options that control the window of
sleep time for the parent between each message. The values are in
msecs; the default is a minimum of 100 msecs and a maximum of 500 msecs.
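Since the attachment isn't reproduced in this thread, here is a minimal
sketch of the bounce idea only, not the posted latt source: the parent
pings each child over a pipe, every child echoes the byte back, and the
parent logs the slowest round trip per wakeup. Option parsing, stddev
tracking and the fork/exec of the workload command are omitted; older
glibc needs -lrt for clock_gettime.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

#define CLIENTS	4
#define ROUNDS	100

static unsigned long long usecs_now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000ULL + ts.tv_nsec / 1000;
}

int main(void)
{
	int ping[CLIENTS][2], pong[CLIENTS][2];
	pid_t pids[CLIENTS];
	unsigned long long max_lat = 0;
	char buf[1];
	int i, r;

	for (i = 0; i < CLIENTS; i++) {
		pipe(ping[i]);
		pipe(pong[i]);
		pids[i] = fork();
		if (!pids[i]) {
			/* child: echo each wakeup straight back */
			while (read(ping[i][0], buf, 1) == 1)
				write(pong[i][1], buf, 1);
			_exit(0);
		}
	}

	for (r = 0; r < ROUNDS; r++) {
		unsigned long long start = usecs_now(), lat;

		/* wake all children, then wait until the last one replies */
		for (i = 0; i < CLIENTS; i++)
			write(ping[i][1], "x", 1);
		for (i = 0; i < CLIENTS; i++)
			read(pong[i][0], buf, 1);

		/* round completes when the slowest child has responded */
		lat = usecs_now() - start;
		if (lat > max_lat)
			max_lat = lat;

		/* parent sleep window, 100-500 msecs as in the default */
		usleep((100 + rand() % 400) * 1000);
	}

	for (i = 0; i < CLIENTS; i++)
		kill(pids[i], SIGTERM);
	printf("max bounce: %llu usec\n", max_lat);
	return 0;
}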

--
Jens Axboe

From: Ingo Molnar on

* Michael Buesch <mb(a)bu3sch.de> wrote:

> On Monday 07 September 2009 20:26:29 Ingo Molnar wrote:
> > Could you profile it please? Also, what's the context-switch rate?
>
> As far as I can tell, the broadcom mips architecture does not have
> profiling support. It does only have some proprietary profiling
> registers that nobody wrote kernel support for, yet.

Well, what does 'vmstat 1' show - how many context switches are
there per second on the iperf server? In theory, if it's a truly
saturated box there shouldn't be many - just a single iperf task
running at 100% CPU utilization or so.

(Also, if there's hrtimer support for that board then perfcounters
could be used to profile it.)
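If vmstat isn't built for the target (plausible on an embedded broadcom
MIPS board), the same number can be read straight off /proc/stat; a
minimal sketch, relying only on the standard "ctxt" line (the kernel's
cumulative context-switch counter):

#include <stdio.h>
#include <unistd.h>

static unsigned long long read_ctxt(void)
{
	char line[128];
	unsigned long long v = 0;
	FILE *f = fopen("/proc/stat", "r");

	/* find the "ctxt <count>" line */
	while (f && fgets(line, sizeof(line), f))
		if (sscanf(line, "ctxt %llu", &v) == 1)
			break;
	if (f)
		fclose(f);
	return v;
}

int main(void)
{
	/* sample the counter once a second, print the delta */
	for (;;) {
		unsigned long long a = read_ctxt();

		sleep(1);
		printf("ctxsw/sec: %llu\n", read_ctxt() - a);
	}
	return 0;
}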

Ingo