From: Paul E. McKenney
Add an rcu_read_lock() / rcu_read_unlock() pair to protect a fork-time
cgroup access. The warning that prompted this change seems likely to be
a false positive.
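For reference, the pattern being introduced is an ordinary RCU read-side
critical section: the rcu_read_lock()/rcu_read_unlock() pair brackets the
window in which RCU-protected data (here, cgroup state reached from
set_task_cpu()) may be dereferenced. A minimal sketch of that pattern,
using a hypothetical foo_ptr pointer and read_foo_val() helper rather than
the actual sched.c code:

	#include <linux/rcupdate.h>

	/* Hypothetical RCU-protected pointer; names are illustrative only. */
	struct foo {
		int val;
	};
	static struct foo __rcu *foo_ptr;

	static int read_foo_val(void)
	{
		struct foo *f;
		int val = -1;

		rcu_read_lock();		/* enter read-side critical section */
		f = rcu_dereference(foo_ptr);	/* legal only under rcu_read_lock() */
		if (f)
			val = f->val;
		rcu_read_unlock();		/* exit read-side critical section */

		return val;
	}

In the common !CONFIG_PREEMPT configuration rcu_read_lock() and
rcu_read_unlock() compile to essentially nothing, so bracketing
set_task_cpu() this way should not measurably affect the fork path.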

Located-by: Alessio Igor Bogani <abogani@texware.it>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---

sched.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/sched.c b/kernel/sched.c
index 9ab3cd7..d4bb5e0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2621,7 +2621,9 @@ void sched_fork(struct task_struct *p, int clone_flags)
 	if (p->sched_class->task_fork)
 		p->sched_class->task_fork(p);
 
+	rcu_read_lock();
 	set_task_cpu(p, cpu);
+	rcu_read_unlock();
 
 #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
 	if (likely(sched_info_on()))
--