From: Greg KH on
2.6.34-stable review patch. If anyone has any objections, please let me know.


From: Greg Thelen <gthelen(a)>

commit 94b3dd0f7bb393d93e84a173b1df9b8b64c83ac4 upstream.

Child groups should have a greater depth than their parents. Prior to
this change, a child was created at the same depth as its parent, so
with use_hierarchy enabled the parent incorrectly reported zero
hierarchical memory usage for its children.

test script:
mount -t cgroup none /cgroups -o memory
cd /cgroups
mkdir cg1

echo 1 > cg1/memory.use_hierarchy
mkdir cg1/cg11

echo $$ > cg1/cg11/tasks
dd if=/dev/zero of=/tmp/foo bs=1M count=1

echo CHILD
grep cache cg1/cg11/memory.stat

grep cache cg1/memory.stat

echo $$ > tasks
rmdir cg1/cg11 cg1
cd /
umount /cgroups

With fae9c79, a recent change to the alloc_css_id() depth computation,
the parent incorrectly reports zero usage:
root(a)ubuntu:~# ./test
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0151844 s, 69.1 MB/s

cache 1048576
total_cache 1048576

cache 0
total_cache 0

With this patch, the parent correctly includes child usage:
root(a)ubuntu:~# ./test
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0136827 s, 76.6 MB/s

cache 1052672
total_cache 1052672

cache 0
total_cache 1052672

Signed-off-by: Greg Thelen <gthelen(a)>
Acked-by: Paul Menage <menage(a)>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu(a)>
Acked-by: Li Zefan <lizf(a)>
Signed-off-by: Andrew Morton <akpm(a)>
Signed-off-by: Linus Torvalds <torvalds(a)>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)>

kernel/cgroup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4599,7 +4599,7 @@ static int alloc_css_id(struct cgroup_su
parent_css = parent->subsys[subsys_id];
child_css = child->subsys[subsys_id];
parent_id = parent_css->id;
- depth = parent_id->depth;
+ depth = parent_id->depth + 1;

child_id = get_new_cssid(ss, depth);
if (IS_ERR(child_id))
