From: Peter Zijlstra on
On Mon, 2010-03-08 at 14:19 -0800, Suresh Siddha wrote:
> plain text document attachment (fix_wake_affine.patch)
> On a single-cpu system with SMT, in the scenario of one SMT thread being
> idle while the other SMT thread runs a task and does a non-sync wakeup of
> another task, we see (from the traces) that the woken-up task ends up running
> on the busy thread instead of the idle thread. Idle balancing that comes in a
> little later fixes the scenario.
>
> But fixing this wake balancing so that the woken-up task runs directly on the
> idle SMT thread improved performance (phoronix 7zip compression workload)
> by ~9% on an Atom platform.
>
> During a process wakeup, select_task_rq_fair() and wake_affine() decide
> whether to wake the task up on the cpu that it previously ran on or on the
> cpu where it is currently being woken up.
>
> select_task_rq_fair() also checks whether there are any idle siblings of
> the cpu that the task is woken up on. This ensures that we select an idle
> sibling rather than a busy cpu.
>
> In the above load scenario, it so happens that the prev_cpu (where the task
> ran before) and this_cpu (where it is being woken up) are the same. In this
> case wake_affine() returns 0, and we ultimately do not select the idle
> sibling chosen by select_idle_sibling() in select_task_rq_fair(). Further
> down the path of select_task_rq_fair(), we end up selecting the currently
> running cpu (the busy SMT thread instead of the idle SMT thread).
>
> Check for prev_cpu == this_cpu before calling wake_affine(); there is no
> need to do any fancy stuff (and ultimately make wrong decisions) in this case.
>
> Signed-off-by: Suresh Siddha <suresh.b.siddha(a)intel.com>
> ---
> Changes from v1:
> Move the "this_cpu == prev_cpu" check before calling wake_affine()
> ---
> kernel/sched_fair.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> Index: tip/kernel/sched_fair.c
> ===================================================================
> --- tip.orig/kernel/sched_fair.c
> +++ tip/kernel/sched_fair.c
> @@ -1454,6 +1454,7 @@ static int select_task_rq_fair(struct ta
> int want_affine = 0;
> int want_sd = 1;
> int sync = wake_flags & WF_SYNC;
> + int this_cpu = cpu;
>
> if (sd_flag & SD_BALANCE_WAKE) {
> if (sched_feat(AFFINE_WAKEUPS) &&
> @@ -1545,8 +1546,10 @@ static int select_task_rq_fair(struct ta
> update_shares(tmp);
> }
>
> - if (affine_sd && wake_affine(affine_sd, p, sync))
> - return cpu;
> + if (affine_sd) {
> + if (this_cpu == prev_cpu || wake_affine(affine_sd, p, sync))
> + return cpu;
> + }
>
> while (sd) {
> int load_idx = sd->forkexec_idx;
>

Right, so we have since merged 8b911acd, in which Mike did almost this but
not quite, so the remaining question is: cpu == prev_cpu vs this_cpu ==
prev_cpu.

Mike seems to see some workloads regress with the this_cpu check; does
your workload work with the cpu == prev_cpu one?
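
For reference, a sketch of the two checks being compared (not runnable
code; an abridged fragment of select_task_rq_fair() with everything else
elided). The key difference: 'cpu' may get reassigned to an idle-sibling
target while the domains are iterated, whereas 'this_cpu' is captured once
at entry and always means the waking cpu:

	int cpu = smp_processor_id();	/* may later become the idle-sibling target */
	int this_cpu = cpu;		/* snapshot: always the waking cpu */
	...
	/* check currently in -tip (cf. the '-' line in the second patch below): */
	if (cpu_idle || cpu == prev_cpu || wake_affine(affine_sd, p, sync))
		return cpu;

	/* check in Suresh's v2 above: */
	if (this_cpu == prev_cpu || wake_affine(affine_sd, p, sync))
		return cpu;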

From: Suresh Siddha on
On Wed, 2010-03-31 at 03:25 -0700, Peter Zijlstra wrote:
> Right, so we have since merged 8b911acd, in which Mike did almost this but
> not quite, so the remaining question is: cpu == prev_cpu vs this_cpu ==
> prev_cpu.
>
> Mike seems to see some workloads regress with the this_cpu check; does
> your workload work with the cpu == prev_cpu one?

Mike saw a regression with the sync check that was in the previous
version (v1). Anyway, the current code in -tip has the check that I
wanted, which addresses the netbook (2 SMT cpus) performance issue.

But the current logic in select_task_rq_fair() is not quite correct: in
particular, we can wake the task on a busy core rather than on an idle
core, as the latest changes base the wakeup decision entirely on whether
there is an idle HT sibling.

Also, there are a couple more issues, which I explained with the
previous version of the patch. I have updated my patch on top of the
latest -tip to address all these issues. Let me know your
thoughts. Thanks.

---
From: Suresh Siddha <suresh.b.siddha(a)intel.com>
Subject: sched: fix select_idle_sibling() logic in select_task_rq_fair()

Issues in the current select_idle_sibling() logic in select_task_rq_fair()
in the context of a task wake-up:

a) Once we select the idle sibling, we use that domain (spanning the cpu that
the task is currently being woken up on and the idle sibling that we found) in
our wake_affine() decisions. This domain is completely different from the
domain (which we are supposed to use) that spans the cpu that the task is
being woken up on and the cpu where the task previously ran.

b) We do the select_idle_sibling() check only for the cpu that the task is
currently being woken up on. If select_task_rq_fair() selects the
previously-run cpu for waking the task, doing a select_idle_sibling() check
for that cpu would also help, and we don't do this currently.

c) In scenarios where the cpu that the task is woken up on is busy but
its HT siblings are idle, we select the idle HT sibling as the place to
wake the task up, instead of a core that the task previously ran on and
which is currently completely idle. I.e., we are not making the decision
based on wake_affine() but are directly selecting an idle sibling, which
can cause an imbalance at the SMT/MC level that will later be corrected
by the periodic load balancer.

Fix this by first going through the load imbalance calculations using
wake_affine(), and only after deciding between the woken-up cpu and the
previously-run cpu, choosing a possible idle sibling of that cpu for
waking up the task on.
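
To make (c) concrete, a hypothetical dual-core SMT topology (illustrative
numbering; an assumed example, not a trace from any particular machine):

/*
 * cpu0/cpu1 = HT siblings of core 0; cpu2/cpu3 = HT siblings of core 1.
 * The waker runs on cpu0 (busy); the wakee previously ran on cpu2, and
 * core 1 is completely idle.
 *
 * Before this patch: select_idle_sibling() runs ahead of the wake_affine()
 * decision and picks cpu1, the idle HT sibling of the busy core -- an
 * SMT-level imbalance the periodic load balancer has to undo later.
 *
 * With this patch: wake_affine() first chooses between cpu (cpu0) and
 * prev_cpu (cpu2); if prev_cpu wins, select_idle_sibling(p, prev_cpu)
 * returns cpu2, and the fully idle core gets used.
 */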

Signed-off-by: Suresh Siddha <suresh.b.siddha(a)intel.com>
---

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 49ad993..f905a4b 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1385,28 +1385,48 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
* Try and locate an idle CPU in the sched_domain.
*/
static int
-select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target)
+select_idle_sibling(struct task_struct *p, int target)
{
int cpu = smp_processor_id();
int prev_cpu = task_cpu(p);
int i;
+ struct sched_domain *sd;

/*
- * If this domain spans both cpu and prev_cpu (see the SD_WAKE_AFFINE
- * test in select_task_rq_fair) and the prev_cpu is idle then that's
- * always a better target than the current cpu.
+ * If the task is going to be woken-up on this cpu and if it is
+ * already idle, then it is the right target.
*/
- if (target == cpu && !cpu_rq(prev_cpu)->cfs.nr_running)
+ if (target == cpu && !cpu_rq(cpu)->cfs.nr_running)
+ return cpu;
+
+ /*
+ * If the task is going to be woken-up on the cpu where it previously
+ * ran and if it is currently idle, then it is the right target.
+ */
+ if (target == prev_cpu && !cpu_rq(prev_cpu)->cfs.nr_running)
return prev_cpu;

/*
- * Otherwise, iterate the domain and find an elegible idle cpu.
+ * Otherwise, iterate the domains and find an eligible idle cpu.
*/
- for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
- if (!cpu_rq(i)->cfs.nr_running) {
- target = i;
+ for_each_domain(target, sd) {
+ if (!(sd->flags & SD_SHARE_PKG_RESOURCES))
break;
+
+ for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+ if (!cpu_rq(i)->cfs.nr_running) {
+ target = i;
+ break;
+ }
}
+
+ /*
+ * Let's stop looking for an idle sibling once we have reached
+ * the domain that spans the current cpu and prev_cpu.
+ */
+ if (cpumask_test_cpu(cpu, sched_domain_span(sd)) &&
+ cpumask_test_cpu(prev_cpu, sched_domain_span(sd)))
+ break;
}

return target;
@@ -1429,7 +1449,7 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
int cpu = smp_processor_id();
int prev_cpu = task_cpu(p);
int new_cpu = cpu;
- int want_affine = 0, cpu_idle = !current->pid;
+ int want_affine = 0;
int want_sd = 1;
int sync = wake_flags & WF_SYNC;

@@ -1467,36 +1487,15 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
want_sd = 0;
}

- /*
- * While iterating the domains looking for a spanning
- * WAKE_AFFINE domain, adjust the affine target to any idle cpu
- * in cache sharing domains along the way.
- */
if (want_affine) {
- int target = -1;
-
/*
* If both cpu and prev_cpu are part of this domain,
* cpu is a valid SD_WAKE_AFFINE target.
*/
- if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
- target = cpu;
-
- /*
- * If there's an idle sibling in this domain, make that
- * the wake_affine target instead of the current cpu.
- */
- if (!cpu_idle && tmp->flags & SD_SHARE_PKG_RESOURCES)
- target = select_idle_sibling(p, tmp, target);
-
- if (target >= 0) {
- if (tmp->flags & SD_WAKE_AFFINE) {
- affine_sd = tmp;
- want_affine = 0;
- if (target != cpu)
- cpu_idle = 1;
- }
- cpu = target;
+ if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))
+ && (tmp->flags & SD_WAKE_AFFINE)) {
+ affine_sd = tmp;
+ want_affine = 0;
}
}

@@ -1527,8 +1526,10 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
#endif

if (affine_sd) {
- if (cpu_idle || cpu == prev_cpu || wake_affine(affine_sd, p, sync))
- return cpu;
+ if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
+ return select_idle_sibling(p, cpu);
+ else
+ return select_idle_sibling(p, prev_cpu);
}

while (sd) {


From: Mike Galbraith on
On Wed, 2010-03-31 at 16:47 -0700, Suresh Siddha wrote:

> Issues in the current select_idle_sibling() logic in select_task_rq_fair()
> in the context of a task wake-up:
>
> a) Once we select the idle sibling, we use that domain (spanning the cpu that
> the task is currently being woken up on and the idle sibling that we found) in
> our wake_affine() decisions. This domain is completely different from the
> domain (which we are supposed to use) that spans the cpu that the task is
> being woken up on and the cpu where the task previously ran.

Why does that matter? If we find an idle shared cache cpu before we hit
the spanning domain, we don't use affine_sd other than maybe (unlikely)
for updating group scheduler shares.

> b) We do the select_idle_sibling() check only for the cpu that the task is
> currently being woken up on. If select_task_rq_fair() selects the
> previously-run cpu for waking the task, doing a select_idle_sibling() check
> for that cpu would also help, and we don't do this currently.

True, but that costs too. Those idle checks aren't cheap.

> c) In scenarios where the cpu that the task is woken up on is busy but
> its HT siblings are idle, we select the idle HT sibling as the place to
> wake the task up, instead of a core that the task previously ran on and
> which is currently completely idle. I.e., we are not making the decision
> based on wake_affine() but are directly selecting an idle sibling, which
> can cause an imbalance at the SMT/MC level that will later be corrected
> by the periodic load balancer.

Yes, the pressing decision for this one wakeup is whether we can wake to a
shared cache and thus avoid cache misses.

IMHO, the point of the affinity decision isn't instant perfect balance,
it's cache affinity if at all possible without wrecking balance. Load
balancing moves tasks for optimal CPU utilization, tasks waking each
other pull to a shared domain.. a tug-of-war that balances buddies over
time. wake_affine()'s job is only to say "no, leave it where it was for
now". I don't see any reason to ask wake_affine()'s opinion about an
idle CPU. We paid for idle shared cache knowledge.

We certainly wouldn't want to leave the wakee on its previous CPU only
because that CPU is idle; it would have to be idle and sharing cache.

That said, Nehalem may ramp better with select_idle_sibling() turned off
at the HT level, and ramp was its motivation. Maybe you could continue
checking until you're out of shared-cache country, but that's more expensive.

The logic may not be perfect, but it really needs to become cheaper, not
more expensive.

-Mike

From: Suresh Siddha on
On Wed, 2010-03-31 at 22:32 -0700, Mike Galbraith wrote:
> On Wed, 2010-03-31 at 16:47 -0700, Suresh Siddha wrote:
>
> > Issues in the current select_idle_sibling() logic in select_task_rq_fair()
> > in the context of a task wake-up:
> >
> > a) Once we select the idle sibling, we use that domain (spanning the cpu that
> > the task is currently being woken up on and the idle sibling that we found) in
> > our wake_affine() decisions. This domain is completely different from the
> > domain (which we are supposed to use) that spans the cpu that the task is
> > being woken up on and the cpu where the task previously ran.
>
> Why does that matter? If we find an idle shared cache cpu before we hit
> the spanning domain, we don't use affine_sd other than maybe (unlikely)
> for updating group scheduler shares.

Ok. This is not a big issue with the new idle cpu change, as at least we
don't end up calling wake_affine() with the wrong sd. I have never tried
to understand any code surrounded by CONFIG_FAIR_GROUP_SCHED, so I can't
comment on whether using affine_sd for updating group scheduler shares is
correct or not. But please look below for the issues with selecting the
idle sibling right away.

>
> > b) We do the select_idle_sibling() check only for the cpu that the task is
> > currently being woken up on. If select_task_rq_fair() selects the
> > previously-run cpu for waking the task, doing a select_idle_sibling() check
> > for that cpu would also help, and we don't do this currently.
>
> True, but that costs too. Those idle checks aren't cheap.

Just like the current code, my patch does the idle checks only once.
The current code does the idle checks for the woken-up cpu, while my code
first selects between the woken-up and previously-run cpus and then does
the idle sibling checks. So I don't expect to see much of a cost increase.
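
Condensed, the affine path in the patch looks like this (a sketch of the
last hunk above, with the surrounding code elided):

	if (affine_sd) {
		if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
			/* pull: look for an idle sibling around the waking cpu */
			return select_idle_sibling(p, cpu);
		else
			/* stay: look for an idle sibling around prev_cpu */
			return select_idle_sibling(p, prev_cpu);
	}

Either way select_idle_sibling() runs exactly once per wakeup, after the
cpu-vs-prev_cpu decision has been made.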

>
> > c) In scenarios where the cpu that the task is woken up on is busy but
> > its HT siblings are idle, we select the idle HT sibling as the place to
> > wake the task up, instead of a core that the task previously ran on and
> > which is currently completely idle. I.e., we are not making the decision
> > based on wake_affine() but are directly selecting an idle sibling, which
> > can cause an imbalance at the SMT/MC level that will later be corrected
> > by the periodic load balancer.
>
> Yes, the pressing decision for this one wakeup is whether we can wake to a
> shared cache and thus avoid cache misses.

Last-level cache sharing is much more important than the small L1 and
mid-level caches. Also, the performance impact of keeping both threads of
a core busy while another core sits idle, and having the periodic balancer
come in later to correct this, is more costly.

> IMHO, the point of the affinity decision isn't instant perfect balance,
> it's cache affinity if at all possible without wrecking balance.

To avoid wrecking balance, we should do wake_affine() first and, based on
that decision, do select_idle_sibling() to select an idle cpu within that
cache-affinity domain. The current code in -tip does the opposite.

> Load balancing moves tasks for optimal CPU utilization, tasks waking each
> other pull to a shared domain.. a tug-of-war that balances buddies over
> time.
>
> wake_affine()'s job is only to say "no, leave it where it was for
> now". I don't see any reason to ask wake_affine()'s opinion about an
> idle CPU. We paid for idle shared cache knowledge.
>
> We certainly wouldn't want to leave the wakee on its previous CPU only
> because that CPU is idle; it would have to be idle and sharing cache.

Consider this scenario. Today we balance on fork() and exec(). This
causes tasks to start far away from each other. On systems like NHM-EP,
tasks will start on two different sockets/nodes (as each socket is a numa
node) and allocate their memory locally, etc.: Task A starting on node-0
and Task B starting on node-1. Once task B sleeps, if Task A or something
else wakes up task B on node-0, then (with the recent change) just because
there is an idle HT sibling on node-0 we end up waking the task on
node-0. This is wrong. We should first at least go through wake_affine(),
and if wake_affine() says it is ok to move the task to node-0, then we can
look at the cache siblings of node-0 and select an appropriate cpu.

thanks,
suresh

From: Mike Galbraith on
On Thu, 2010-04-01 at 14:04 -0700, Suresh Siddha wrote:

> Consider this scenario. Today we balance on fork() and exec(). This
> causes tasks to start far away from each other. On systems like NHM-EP,
> tasks will start on two different sockets/nodes (as each socket is a numa
> node) and allocate their memory locally, etc.: Task A starting on node-0
> and Task B starting on node-1. Once task B sleeps, if Task A or something
> else wakes up task B on node-0, then (with the recent change) just because
> there is an idle HT sibling on node-0 we end up waking the task on
> node-0. This is wrong. We should first at least go through wake_affine(),
> and if wake_affine() says it is ok to move the task to node-0, then we can
> look at the cache siblings of node-0 and select an appropriate cpu.

Yes, if task A and task B are more or less unrelated, you'd want them to
stay in separate domains; you'd not want some random event to pull them
together. The other side of the coin is tasks which fork off partners that
they will talk to at high frequency. They land just as far away, and
desperately need to move into a shared-cache domain. There's currently no
discriminator, so while always asking wake_affine() may reduce the risk of
moving a task with a large footprint, it also increases the risk of
leaving buddies jabbering cross-cache. You can tweak it in either
direction, and neither can be called "wrong"; it's all compromise.

Do you have a compute load bouncing painfully which this patch cures?

I have no strong objections, and the result is certainly easier on the
eye. If I were making the decision, I'd want to see some numbers.

-Mike
