From: David Rientjes
On Wed, 21 Oct 2009, Karol Lewandowski wrote:

> commit d6849591e042bceb66f1b4513a1df6740d2ad762
> Author: Karol Lewandowski <karol.k.lewandowski@gmail.com>
> Date: Wed Oct 21 21:01:20 2009 +0200
>
> SLUB: Don't drop __GFP_NOFAIL completely from allocate_slab()
>
> Commit ba52270d18fb17ce2cf176b35419dab1e43fe4a3 unconditionally
> cleared __GFP_NOFAIL flag on all allocations.
>

No, it clears __GFP_NOFAIL only from the first allocation attempt, the one
at oo_order(s->oo).  If that fails (and it fails easily, since it carries
__GFP_NORETRY), a second allocation is attempted at oo_order(s->min), and
__GFP_NOFAIL is preserved for that one if it is set in the slab cache's
allocflags.
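
For reference, the relevant mainline flow looks roughly like this
(paraphrased from the context lines of your diff, not a verbatim copy of
mm/slub.c):

	static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
	{
		struct page *page;
		struct kmem_cache_order_objects oo = s->oo;
		gfp_t alloc_gfp;

		flags |= s->allocflags;

		/* First attempt: oo_order(s->oo), fails fast, __GFP_NOFAIL masked out. */
		alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
		page = alloc_slab_page(alloc_gfp, node, oo);
		if (unlikely(!page)) {
			oo = s->min;
			/*
			 * Second attempt: oo_order(s->min), using the unmasked
			 * 'flags', so __GFP_NOFAIL is honored here whenever the
			 * cache's allocflags carry it.
			 */
			page = alloc_slab_page(flags, node, oo);
			if (!page)
				return NULL;
		}
		...
		return page;
	}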

> Preserve this flag on second attempt to allocate page (with possibly
> decreased order).
>
> This should help with bugs #14265, #14141 and similar.
>
> Signed-off-by: Karol Lewandowski <karol.k.lewandowski@gmail.com>
>
> diff --git a/mm/slub.c b/mm/slub.c
> index b627675..ac5db65 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1084,7 +1084,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  {
>  	struct page *page;
>  	struct kmem_cache_order_objects oo = s->oo;
> -	gfp_t alloc_gfp;
> +	gfp_t alloc_gfp, nofail;
>
>  	flags |= s->allocflags;
>
> @@ -1092,6 +1092,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	 * Let the initial higher-order allocation fail under memory pressure
>  	 * so we fall-back to the minimum order allocation.
>  	 */
> +	nofail = flags & __GFP_NOFAIL;
>  	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
>
>  	page = alloc_slab_page(alloc_gfp, node, oo);
> @@ -1100,8 +1101,10 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  		/*
>  		 * Allocation may have failed due to fragmentation.
>  		 * Try a lower order alloc if possible
> +		 *
> +		 * Preserve __GFP_NOFAIL flag if previous allocation failed.
>  		 */
> -		page = alloc_slab_page(flags, node, oo);
> +		page = alloc_slab_page(flags | nofail, node, oo);
>  		if (!page)
>  			return NULL;
>
>

This does nothing.  You may have missed that the lower order allocation
passes 'flags' (the union of the gfp flags handed to allocate_slab() by the
allocation context and the cache's allocflags), not alloc_gfp, which is
where __GFP_NOFAIL gets masked off.
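
To make the no-op concrete, here is a tiny userspace demo (stand-in values
for the gfp bits, not kernel code) showing that OR-ing a bit you extracted
from 'flags' back into 'flags' changes nothing:

	#include <assert.h>
	#include <stdio.h>

	/* Userspace stand-ins for the gfp bits involved (values illustrative). */
	#define __GFP_NOWARN	0x200u
	#define __GFP_NOFAIL	0x800u
	#define __GFP_NORETRY	0x1000u

	int main(void)
	{
		unsigned int flags = 0xd0u | __GFP_NOFAIL;	/* e.g. GFP_KERNEL | __GFP_NOFAIL */

		/* What the patch computes ... */
		unsigned int nofail = flags & __GFP_NOFAIL;
		/* ... is only OR-ed back into 'flags', which never lost the bit, */
		unsigned int second_attempt = flags | nofail;
		/* so the gfp mask of the lower order allocation is unchanged. */
		assert(second_attempt == flags);

		/* __GFP_NOFAIL is only masked out of alloc_gfp, the first attempt's mask. */
		unsigned int alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
		printf("flags=%#x alloc_gfp=%#x second=%#x\n", flags, alloc_gfp, second_attempt);
		return 0;
	}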

Nack.

Note: slub isn't going to be the culprit in order-5 allocation failures,
since requests that large take the kmalloc passthrough straight to the page
allocator.
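
(Concretely, the passthrough means large kmalloc() requests never reach
allocate_slab() at all.  Roughly, and from memory of the slub headers of
this era, so treat it as a sketch rather than a quote:)

	/* Sketch of the passthrough, assuming SLUB_MAX_SIZE == 2 * PAGE_SIZE. */
	static __always_inline void *kmalloc(size_t size, gfp_t flags)
	{
		if (size > SLUB_MAX_SIZE)
			/* Goes straight to the page allocator as a compound page. */
			return (void *)__get_free_pages(flags | __GFP_COMP,
							get_order(size));
		/* ... normal slab path via allocate_slab() ... */
		return __kmalloc(size, flags);
	}

An order-5 request (128K with 4K pages) is far above that limit, so the gfp
masking in allocate_slab() never even sees it.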