From: John Berthels
Dave Chinner wrote:

> So effectively the storage subsystem (NFS, filesystem, DM, MD,
> device drivers) have about 4K of stack to work in now. That seems to
> be a lot less than last time I looked at this, and we've been really
> careful not to increase XFS's stack usage for quite some time now.

OK. I should note that we have what appears to be a similar problem on a
2.6.28 distro kernel, so I'm not sure this is a very recent change. (We
see the lockups on that kernel, but we haven't tried larger stacks +
stack instrumentation there.)

Do you know if there are any obvious knobs to twiddle to make these
codepaths less likely to be hit? The cluster is resilient against the
occasional server death, but frequent deaths are more annoying.

We're currently running with sysctls:

net.ipv4.ip_nonlocal_bind=1
kernel.panic=300
vm.dirty_background_ratio=3
vm.min_free_kbytes=16384

I'm not sure what circumstances force the memory reclaim (and why it
doesn't come from discarding a cached page).

Is the problem in the DMA/DMA32 zones, and should we try playing with
lowmem_reserve_ratio? Is there anything else we could do to keep dirty
pages out of the low zones?
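
(If lowmem_reserve_ratio is worth experimenting with, presumably it is
just another sysctl in the same style as the ones above - e.g. the line
below, where the values are only the stock defaults we've seen
documented, not a recommendation for our workload.)

vm.lowmem_reserve_ratio = 256 256 32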

Before trying THREAD_ORDER 2, we tried doubling the RAM in a couple of
boxes from 2GB to 4GB without any significant reduction in the problem.

Lastly - if we end up stuck with THREAD_ORDER 2, does anyone know what
symptoms to look out for that would tell us we're failing to allocate
thread stacks due to fragmentation?
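
(For anyone following along, "THREAD_ORDER 2" just means bumping the
kernel stack size define, roughly as in the sketch below. This is how we
understand the x86_64 change - the define lives in
arch/x86/include/asm/page_64_types.h on our kernel - rather than a quote
of our exact diff.)

/* Sketch (assumption, not our literal patch): THREAD_ORDER is the page
 * order of each kernel stack, so going from 1 to 2 takes the stack from
 * 8KB to 16KB per thread with 4KB pages, at the cost of every thread
 * needing an order-2 (4 contiguous pages) allocation. */
#define THREAD_ORDER    2                            /* was 1 */
#define THREAD_SIZE     (PAGE_SIZE << THREAD_ORDER)  /* now 16KB */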

> I'll have to have a bit of a think on this one - if you could
> provide further stack traces as they get deeper (esp. if they go
> past 8k) that would be really handy.

Two of the worst offenders are below. We have plenty more to send if you
would like them. Please let us know if you'd like us to try anything
else or would like any other info.

Thanks very much for your thoughts, suggestions and work so far, it's
very much appreciated here.

regards,

jb

From: John Berthels
Chris Mason wrote:
> shrink_zone on my box isn't 500 bytes, but lets try the easy stuff
> first. This is against .34, if you have any trouble applying to .32,
> just add the word noinline after the word static on the function
> definitions.

Hi Chris,

Thanks for this. We've been soaking it for a while and got the stack
trace below, which is still >8k deep and still has shrink_zone at 528
bytes.
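
(For reference, our reading of the .32 fallback you mention - adding
noinline to the function definitions - is roughly the diff sketched
below. The shrink_zone signature is from memory of our 2.6.33
mm/vmscan.c, so treat this as an illustration rather than the actual
patch.)

-static void shrink_zone(int priority, struct zone *zone,
-                        struct scan_control *sc)
+/* noinline stops gcc folding this into its caller, so reclaim keeps
+ * its own, separately-accounted stack frame */
+static noinline void shrink_zone(int priority, struct zone *zone,
+                                 struct scan_control *sc)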

I find it odd that the shrink_zone stack usage is different on our
systems. This is a stock 2.6.33.2 kernel, x86_64 arch (plus your patch
and Dave Chinner's patch), built using Ubuntu's make-kpkg with gcc
(Ubuntu 4.3.3-5ubuntu4) 4.3.3. (The .vmscan.o.cmd line with the full
build options is below; the gzipped .config is attached.)

Can you see any difference between your system and ours which might
explain the discrepancy? I note -g and -pg in there. (Does -pg have any
stack overhead? It seems to be enabled in Ubuntu release kernels.)
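
(Our working assumption about -pg, which we'd be glad to have corrected:
it only makes gcc emit a profiling call at each function entry, along
the lines of the sketch below, so we'd expect the per-frame stack cost
to be small.)

/* Sketch of what we believe -pg does (assumption, not actual generated
 * code): each function entry gains a call to the profiling hook that
 * the ftrace machinery later patches. */
extern void mcount(void);

void some_function(void)
{
        mcount();       /* inserted by gcc at function entry with -pg */
        /* ... normal function body ... */
}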

regards,

jb



mm/.vmscan.o.cmd:

cmd_mm/vmscan.o := gcc -Wp,-MD,mm/.vmscan.o.d -nostdinc -isystem
/usr/lib/gcc/x86_64-linux-gnu/4.3.3/include
-I/usr/local/src/kern/linux-2.6.33.2/arch/x86/include -Iinclude
-include include/generated/autoconf.h -D__KERNEL__ -Wall -Wundef
-Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common
-Werror-implicit-function-declaration -Wno-format-security
-fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone
-mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args
-fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe
-Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx
-mno-sse2 -mno-3dnow -fno-omit-frame-pointer -fno-optimize-sibling-calls
-g -pg -Wdeclaration-after-statement -Wno-pointer-sign
-fno-strict-overflow -D"KBUILD_STR(s)=\#s"
-D"KBUILD_BASENAME=KBUILD_STR(vmscan)"
-D"KBUILD_MODNAME=KBUILD_STR(vmscan)" -c -o mm/.tmp_vmscan.o mm/vmscan.c



Apr 12 22:06:35 nas17 kernel: [36346.599076] apache2 used greatest stack
depth: 7904 bytes left
Depth Size Location (56 entries)
----- ---- --------
0) 7904 48 __call_rcu+0x67/0x190
1) 7856 16 call_rcu_sched+0x15/0x20
2) 7840 16 call_rcu+0xe/0x10
3) 7824 272 radix_tree_delete+0x159/0x2e0
4) 7552 32 __remove_from_page_cache+0x21/0x110
5) 7520 64 __remove_mapping+0xe8/0x130
6) 7456 384 shrink_page_list+0x400/0x860
7) 7072 528 shrink_zone+0x636/0xdc0
8) 6544 112 do_try_to_free_pages+0xc2/0x3c0
9) 6432 112 try_to_free_pages+0x64/0x70
10) 6320 256 __alloc_pages_nodemask+0x3d2/0x710
11) 6064 48 alloc_pages_current+0x8c/0xe0
12) 6016 32 __page_cache_alloc+0x67/0x70
13) 5984 80 find_or_create_page+0x50/0xb0
14) 5904 160 _xfs_buf_lookup_pages+0x145/0x350 [xfs]
15) 5744 64 xfs_buf_get+0x74/0x1d0 [xfs]
16) 5680 48 xfs_buf_read+0x2f/0x110 [xfs]
17) 5632 80 xfs_trans_read_buf+0x2bf/0x430 [xfs]
18) 5552 80 xfs_btree_read_buf_block+0x5d/0xb0 [xfs]
19) 5472 176 xfs_btree_rshift+0xd7/0x530 [xfs]
20) 5296 96 xfs_btree_make_block_unfull+0x5b/0x190 [xfs]
21) 5200 224 xfs_btree_insrec+0x39c/0x5b0 [xfs]
22) 4976 128 xfs_btree_insert+0x86/0x180 [xfs]
23) 4848 96 xfs_alloc_fixup_trees+0x1fa/0x350 [xfs]
24) 4752 144 xfs_alloc_ag_vextent_near+0x916/0xb30 [xfs]
25) 4608 32 xfs_alloc_ag_vextent+0xe5/0x140 [xfs]
26) 4576 96 xfs_alloc_vextent+0x49f/0x630 [xfs]
27) 4480 160 xfs_bmbt_alloc_block+0xbe/0x1d0 [xfs]
28) 4320 208 xfs_btree_split+0xb3/0x6a0 [xfs]
29) 4112 96 xfs_btree_make_block_unfull+0x151/0x190 [xfs]
30) 4016 224 xfs_btree_insrec+0x39c/0x5b0 [xfs]
31) 3792 128 xfs_btree_insert+0x86/0x180 [xfs]
32) 3664 352 xfs_bmap_add_extent_delay_real+0x41e/0x1670 [xfs]
33) 3312 208 xfs_bmap_add_extent+0x41c/0x450 [xfs]
34) 3104 448 xfs_bmapi+0x982/0x1200 [xfs]
35) 2656 256 xfs_iomap_write_allocate+0x248/0x3c0 [xfs]
36) 2400 208 xfs_iomap+0x3d8/0x410 [xfs]
37) 2192 32 xfs_map_blocks+0x2c/0x30 [xfs]
38) 2160 256 xfs_page_state_convert+0x443/0x730 [xfs]
39) 1904 64 xfs_vm_writepage+0xab/0x160 [xfs]
40) 1840 32 __writepage+0x1a/0x60
41) 1808 288 write_cache_pages+0x1f7/0x400
42) 1520 16 generic_writepages+0x27/0x30
43) 1504 48 xfs_vm_writepages+0x5a/0x70 [xfs]
44) 1456 16 do_writepages+0x24/0x40
45) 1440 64 writeback_single_inode+0xf1/0x3e0
46) 1376 128 writeback_inodes_wb+0x31e/0x510
47) 1248 16 writeback_inodes_wbc+0x1e/0x20
48) 1232 224 balance_dirty_pages_ratelimited_nr+0x277/0x410
49) 1008 192 generic_file_buffered_write+0x19b/0x240
50) 816 288 xfs_write+0x849/0x930 [xfs]
51) 528 16 xfs_file_aio_write+0x5b/0x70 [xfs]
52) 512 272 do_sync_write+0xd1/0x120
53) 240 48 vfs_write+0xcb/0x1a0
54) 192 64 sys_write+0x55/0x90
55) 128 128 system_call_fastpath+0x16/0x1b