From: KAMEZAWA Hiroyuki on
On Mon, 31 May 2010 10:52:27 -0300
"Luis Claudio R. Goncalves" <lclaudio(a)uudg.org> wrote:

> | If an explanation such as "accelerating all threads' priority in a process seems overkill"
> | is given in the changelog or a comment, it's OK with me.
>
> If my understanding of badness() is right, I wouldn't be ashamed of saying
> that it seems to be _a bit_ overkill. But I may be wrong in my
> interpretation.
>
> While re-reading the code I noticed that in select_bad_process() we can
> eventually bump into an already dying task, in which case we just wait for
> the task to die and avoid killing other tasks. Maybe we could boost the
> priority of the dying task here too.
>
yes, nice catch.

Thanks,
-Kame

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Minchan Kim on
On Mon, May 31, 2010 at 10:52 PM, Luis Claudio R. Goncalves
<lclaudio(a)uudg.org> wrote:
> On Mon, May 31, 2010 at 03:51:02PM +0900, KAMEZAWA Hiroyuki wrote:
> | On Mon, 31 May 2010 15:09:41 +0900
> | Minchan Kim <minchan.kim(a)gmail.com> wrote:
> | > On Mon, May 31, 2010 at 2:54 PM, KAMEZAWA Hiroyuki
> | > <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> ...
> | > >> > IIUC, the purpose of raising priority is to accelerate the dying thread's exit()
> | > >> > to free memory as fast as possible. But to free the memory, all threads which share
> | > >> > the mm_struct should exit, too. I'm sorry if I miss something.
> | > >>
> | > >> How would we kill only some thread, and what would be the benefit of that?
> | > >> I think that when a thread receives a KILL signal, the process that includes
> | > >> the thread will be killed.
> | > >>
> | > > yes, so, if you want a _process_ to die quickly, you have to accelerate all the
> | > > threads of the process. Accelerating a single thread in a process is not much help.
> | >
> | > Yes.
> | >
> | > I see the code.
> | > oom_kill_process is called by
> | >
> | > 1. mem_cgroup_out_of_memory
> | > 2. __out_of_memory
> | > 3. out_of_memory
> | >
> | >
> | > (1, 2) call select_bad_process(), which selects the victim task from all
> | > processes via do_each_process.
> | > But 3 doesn't: in the CONSTRAINT_MEMORY_POLICY case, it kills current.
> | > In that case only, couldn't we pass the task of the process, not one of its threads?
> | >
> |
> | Hmm, my point is that the priority acceleration is applied to a thread, not to a process.
> | So most of the threads in the memory-eater will not gain high priority even with this
> | patch, and will still run slowly.
>
> This is a good point...
>
> | I have no objections to this patch. I just want to confirm the purpose. If this patch
> | is for accelerating the exit of a process killed by SIGKILL, it seems not enough.
>
> I understand (from the comments in the code) that the badness calculation gives more
> points to the sibling threads that have their own mm. I wonder if what you
> are describing is not a corner case.
>
> Again, your idea sounds like an interesting refinement to the patch. I am
> just not sure whether this change should be implemented now or in a second
> round of changes.

First of all, I think your patch should go in first.
That's because I am not sure this existing logic is effective:


	/*
	 * We give our sacrificial lamb high priority and access to
	 * all the memory it needs. That way it should be able to
	 * exit() and clear out its resources quickly...
	 */
	p->rt.time_slice = HZ;

Peter changed it in fa717060f1ab.
Now, if we set rt.time_slice to HZ, does that mean the task has high priority?
I am not a scheduler expert, but as I looked through the scheduler code,
rt.time_slice is only used by the RT scheduler. So if the task runs under
CFS, this doesn't give it high priority.
Peter, right?

If that is right, I think Luis' patch will fix it.

Secondly, as Kame pointed out, we have to raise the priority of all the
victim process's threads to reclaim its pages. But I think that has a
deadlock problem.
If we raise all the threads' priority and some thread depends on
another thread which is blocked, it can deadlock the system. So I think
it's not an easy part.

If this part is really a big problem, we should consider it more carefully.

>
> | If an explanation such as "accelerating all threads' priority in a process seems overkill"
> | is given in the changelog or a comment, it's OK with me.
>
> If my understanding of badness() is right, I wouldn't be ashamed of saying
> that it seems to be _a bit_ overkill. But I may be wrong in my
> interpretation.
>
> While re-reading the code I noticed that in select_bad_process() we can
> eventually bump into an already dying task, in which case we just wait for
> the task to die and avoid killing other tasks. Maybe we could boost the
> priority of the dying task here too.

Yes. I think that is a good place to boost the task's priority.

>
> Luis
> --
> [ Luis Claudio R. Goncalves                    Bass - Gospel - RT ]
> [ Fingerprint: 4FDD B8C4 3C59 34BD 8BE9  2696 7203 D980 A448 C8F8 ]
>
>



--
Kind regards,
Minchan Kim
From: Luis Claudio R. Goncalves on
On Tue, Jun 01, 2010 at 08:50:06AM +0900, KAMEZAWA Hiroyuki wrote:
| On Mon, 31 May 2010 10:52:27 -0300
| "Luis Claudio R. Goncalves" <lclaudio(a)uudg.org> wrote:
|
| > | If an explanation such as "accelerating all threads' priority in a process seems overkill"
| > | is given in the changelog or a comment, it's OK with me.
| >
| > If my understanding of badness() is right, I wouldn't be ashamed of saying
| > that it seems to be _a bit_ overkill. But I may be wrong in my
| > interpretation.
| >
| > While re-reading the code I noticed that in select_bad_process() we can
| > eventually bump into an already dying task, in which case we just wait for
| > the task to die and avoid killing other tasks. Maybe we could boost the
| > priority of the dying task here too.
| >
| yes, nice catch.

Here is a more complete version of the patch, boosting priority at the
three exit points of the OOM killer. I also avoid touching the priority if
the task is already an RT task. The patch:


oom-kill: give the dying task a higher priority (v5)

In a system under heavy load it was observed that even after the
oom-killer selects a task to die, the task may take a long time to die.

Right before sending a SIGKILL to the task selected by the oom-killer,
this task has its priority increased so that it can exit() soon,
freeing memory. That is accomplished by:

	/*
	 * We give our sacrificial lamb high priority and access to
	 * all the memory it needs. That way it should be able to
	 * exit() and clear out its resources quickly...
	 */
	p->rt.time_slice = HZ;
	set_tsk_thread_flag(p, TIF_MEMDIE);

It sounds plausible to give the dying task an even higher priority, to
be sure it will be scheduled sooner and free the desired memory. It was
suggested on LKML to use SCHED_FIFO:1, the lowest RT priority, so that
this task won't interfere with any running RT tasks.

If the dying task is already an RT task, leave it untouched.

Another good suggestion, implemented here, was to avoid boosting the
dying task's priority in the case of a mem_cgroup OOM.

Signed-off-by: Luis Claudio R. Gonçalves <lclaudio(a)uudg.org>

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 709aedf..67e18ca 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -52,6 +52,22 @@ static int has_intersects_mems_allowed(struct task_struct *tsk)
 	return 0;
 }
 
+/*
+ * If this is a system OOM (not a memcg OOM) and the task selected to be
+ * killed is not already running at high (RT) priorities, speed up the
+ * recovery by boosting the dying task to the lowest FIFO priority.
+ * That helps with the recovery and avoids interfering with RT tasks.
+ */
+static void boost_dying_task_prio(struct task_struct *p,
+				  struct mem_cgroup *mem)
+{
+	if ((mem == NULL) && !rt_task(p)) {
+		struct sched_param param;
+		param.sched_priority = 1;
+		sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
+	}
+}
+
 /**
  * badness - calculate a numeric value for how bad this task has been
  * @p: task struct of which task we should calculate
@@ -277,8 +293,10 @@ static struct task_struct *select_bad_process(unsigned long *ppoints,
 		 * blocked waiting for another task which itself is waiting
 		 * for memory. Is there a better alternative?
 		 */
-		if (test_tsk_thread_flag(p, TIF_MEMDIE))
+		if (test_tsk_thread_flag(p, TIF_MEMDIE)) {
+			boost_dying_task_prio(p, mem);
 			return ERR_PTR(-1UL);
+		}
 
 		/*
 		 * This is in the process of releasing memory so wait for it
@@ -291,9 +309,10 @@ static struct task_struct *select_bad_process(unsigned long *ppoints,
 		 * Otherwise we could get an easy OOM deadlock.
 		 */
 		if (p->flags & PF_EXITING) {
-			if (p != current)
+			if (p != current) {
+				boost_dying_task_prio(p, mem);
 				return ERR_PTR(-1UL);
-
+			}
 			chosen = p;
 			*ppoints = ULONG_MAX;
 		}
@@ -380,7 +399,8 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order,
  * flag though it's unlikely that we select a process with CAP_SYS_RAW_IO
  * set.
  */
-static void __oom_kill_task(struct task_struct *p, int verbose)
+static void __oom_kill_task(struct task_struct *p, struct mem_cgroup *mem,
+			    int verbose)
 {
 	if (is_global_init(p)) {
 		WARN_ON(1);
@@ -413,11 +433,11 @@ static void __oom_kill_task(struct task_struct *p, int verbose)
 	 */
 	p->rt.time_slice = HZ;
 	set_tsk_thread_flag(p, TIF_MEMDIE);
-
 	force_sig(SIGKILL, p);
+	boost_dying_task_prio(p, mem);
 }
 
-static int oom_kill_task(struct task_struct *p)
+static int oom_kill_task(struct task_struct *p, struct mem_cgroup *mem)
 {
 	/* WARNING: mm may not be dereferenced since we did not obtain its
 	 * value from get_task_mm(p). This is OK since all we need to do is
@@ -430,7 +450,7 @@ static int oom_kill_task(struct task_struct *p)
 	if (!p->mm || p->signal->oom_adj == OOM_DISABLE)
 		return 1;
 
-	__oom_kill_task(p, 1);
+	__oom_kill_task(p, mem, 1);
 
 	return 0;
 }
@@ -449,7 +469,7 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	 * its children or threads, just set TIF_MEMDIE so it can die quickly
 	 */
 	if (p->flags & PF_EXITING) {
-		__oom_kill_task(p, 0);
+		__oom_kill_task(p, mem, 0);
 		return 0;
 	}
 
@@ -462,10 +482,10 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 			continue;
 		if (mem && !task_in_mem_cgroup(c, mem))
 			continue;
-		if (!oom_kill_task(c))
+		if (!oom_kill_task(c, mem))
 			return 0;
 	}
-	return oom_kill_task(p);
+	return oom_kill_task(p, mem);
 }
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR

--
[ Luis Claudio R. Goncalves Bass - Gospel - RT ]
[ Fingerprint: 4FDD B8C4 3C59 34BD 8BE9 2696 7203 D980 A448 C8F8 ]

From: David Rientjes on
On Tue, 1 Jun 2010, Minchan Kim wrote:

> Secondly, as Kame pointed out, we have to raise the priority of all the
> victim process's threads to reclaim its pages. But I think that has a
> deadlock problem.

Agreed, and this has the potential to actually increase the amount of time
it takes an oom-killed task to fully exit: the exit path takes mm->mmap_sem,
and if that is held by another thread waiting for the oom-killed task to
exit (i.e. reclaim has failed and the oom killer becomes a no-op because
it sees an already killed task) then there's a livelock. That has always
been a problem, but it is compounded by increasing the priority of a task
not holding mm->mmap_sem when the thread holding the writelock isn't
actually looking for memory but simply doesn't get a chance to release it
because it fails to run.
From: David Rientjes on
On Tue, 1 Jun 2010, Luis Claudio R. Goncalves wrote:

> oom-kill: give the dying task a higher priority (v5)
>
> In a system under heavy load it was observed that even after the
> oom-killer selects a task to die, the task may take a long time to die.
>
> Right before sending a SIGKILL to the task selected by the oom-killer,
> this task has its priority increased so that it can exit() soon,
> freeing memory. That is accomplished by:
>
> 	/*
> 	 * We give our sacrificial lamb high priority and access to
> 	 * all the memory it needs. That way it should be able to
> 	 * exit() and clear out its resources quickly...
> 	 */
> 	p->rt.time_slice = HZ;
> 	set_tsk_thread_flag(p, TIF_MEMDIE);
>
> It sounds plausible to give the dying task an even higher priority, to
> be sure it will be scheduled sooner and free the desired memory. It was
> suggested on LKML to use SCHED_FIFO:1, the lowest RT priority, so that
> this task won't interfere with any running RT tasks.
>
> If the dying task is already an RT task, leave it untouched.
>
> Another good suggestion, implemented here, was to avoid boosting the
> dying task's priority in the case of a mem_cgroup OOM.
>
> Signed-off-by: Luis Claudio R. Gonçalves <lclaudio(a)uudg.org>
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 709aedf..67e18ca 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -52,6 +52,22 @@ static int has_intersects_mems_allowed(struct task_struct *tsk)
>  	return 0;
>  }
>
> +/*
> + * If this is a system OOM (not a memcg OOM) and the task selected to be
> + * killed is not already running at high (RT) priorities, speed up the
> + * recovery by boosting the dying task to the lowest FIFO priority.
> + * That helps with the recovery and avoids interfering with RT tasks.
> + */
> +static void boost_dying_task_prio(struct task_struct *p,
> +				  struct mem_cgroup *mem)
> +{
> +	if ((mem == NULL) && !rt_task(p)) {
> +		struct sched_param param;
> +		param.sched_priority = 1;
> +		sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
> +	}
> +}
> +
>  /**
>   * badness - calculate a numeric value for how bad this task has been
>   * @p: task struct of which task we should calculate
> @@ -277,8 +293,10 @@ static struct task_struct *select_bad_process(unsigned long *ppoints,
>  		 * blocked waiting for another task which itself is waiting
>  		 * for memory. Is there a better alternative?
>  		 */
> -		if (test_tsk_thread_flag(p, TIF_MEMDIE))
> +		if (test_tsk_thread_flag(p, TIF_MEMDIE)) {
> +			boost_dying_task_prio(p, mem);
>  			return ERR_PTR(-1UL);
> +		}
>
>  		/*
>  		 * This is in the process of releasing memory so wait for it

That's unnecessary: if p already has TIF_MEMDIE set, then
boost_dying_task_prio(p) has already been called.

> @@ -291,9 +309,10 @@ static struct task_struct *select_bad_process(unsigned long *ppoints,
>  		 * Otherwise we could get an easy OOM deadlock.
>  		 */
>  		if (p->flags & PF_EXITING) {
> -			if (p != current)
> +			if (p != current) {
> +				boost_dying_task_prio(p, mem);
>  				return ERR_PTR(-1UL);
> -
> +			}
>  			chosen = p;
>  			*ppoints = ULONG_MAX;
>  		}

This has the potential to actually make it harder to free memory if p is
waiting to acquire a writelock on mm->mmap_sem in the exit path while the
thread holding mm->mmap_sem is trying to run.