From: Manfred Spraul on
On 05/19/2010 03:30 PM, Johannes Stezenbach wrote:
> Hi,
>
> I have some comments/questions, I hope they're not too silly:
>
> On Wed, May 19, 2010 at 12:01:42PM +0100, David Woodhouse wrote:
>
>> +#ifndef ARCH_KMALLOC_MINALIGN
>> +/*
>> + * Enforce a minimum alignment for the kmalloc caches.
>> + * Usually, the kmalloc caches are cache_line_size() aligned, except when
>> + * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
>> + * Some archs want to perform DMA into kmalloc caches and need a guaranteed
>> + * alignment larger than the alignment of a 64-bit integer.
>> + * ARCH_KMALLOC_MINALIGN allows that.
>> + * Note that increasing this value may disable some debug features.
>> + */
>> +#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
>> +#endif
>>
> I think the comment is confusing. IIRC kmalloc() API guarantees that
> the allocated buffer is suitable for DMA, so if cache coherence is not
> handled by hardware the arch might need to set this to the cache line size,
> and that's what ARCH_KMALLOC_MINALIGN is about. Nothing else.
>
Is this text better?

/*
 * Enforce a minimum alignment for the kmalloc caches.
 * kmalloc allocations are guaranteed to be BYTES_PER_WORD
 * (i.e. sizeof(void *)) aligned.
 * If an arch needs a larger guarantee (e.g. cache_line_size() due to
 * DMA), then it must use ARCH_KMALLOC_MINALIGN to enforce that.
 * Note: Do not set ARCH_KMALLOC_MINALIGN for performance reasons.
 * Unless debug options are enabled, the kernel uses cache_line_size()
 * automatically.
 */

>> +#ifndef ARCH_SLAB_MINALIGN
>> +/*
>> + * Enforce a minimum alignment for all caches.
>> + * Intended for archs that get misalignment faults even for BYTES_PER_WORD
>> + * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
>> + * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
>> + * some debug features.
>> + */
>> +#define ARCH_SLAB_MINALIGN 0
>> +#endif
>>
> Why is this needed at all? If code calls kmem_cache_create()
> with wrong align parameter, or has wrong expectations wrt kmalloc()
> alignment guarantees, this code needs to be fixed?
> I mean, portable code cannot assume that unaligned accesses work?
>
ARM uses 8 bytes; I don't know why.
Perhaps it's a 32-bit arch where unaligned 64-bit loads are not supported?

--
Manfred
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Pekka Enberg on
David Woodhouse wrote:
> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

I applied patches 1-4. Thanks David!

> ---
> include/linux/slab_def.h | 24 ++++++++++++++++++++++++
> mm/slab.c | 24 ------------------------
> 2 files changed, 24 insertions(+), 24 deletions(-)
>
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index ca6b2b3..1812dac 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -16,6 +16,30 @@
> #include <linux/compiler.h>
> #include <linux/kmemtrace.h>
>
> +#ifndef ARCH_KMALLOC_MINALIGN
> +/*
> + * Enforce a minimum alignment for the kmalloc caches.
> + * Usually, the kmalloc caches are cache_line_size() aligned, except when
> + * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
> + * Some archs want to perform DMA into kmalloc caches and need a guaranteed
> + * alignment larger than the alignment of a 64-bit integer.
> + * ARCH_KMALLOC_MINALIGN allows that.
> + * Note that increasing this value may disable some debug features.
> + */
> +#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
> +#endif
> +
> +#ifndef ARCH_SLAB_MINALIGN
> +/*
> + * Enforce a minimum alignment for all caches.
> + * Intended for archs that get misalignment faults even for BYTES_PER_WORD
> + * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
> + * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
> + * some debug features.
> + */
> +#define ARCH_SLAB_MINALIGN 0
> +#endif
> +
> /*
> * struct kmem_cache
> *
> diff --git a/mm/slab.c b/mm/slab.c
> index bac0f4f..7401ddc 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -144,30 +144,6 @@
> #define BYTES_PER_WORD sizeof(void *)
> #define REDZONE_ALIGN max(BYTES_PER_WORD, __alignof__(unsigned long long))
>
> -#ifndef ARCH_KMALLOC_MINALIGN
> -/*
> - * Enforce a minimum alignment for the kmalloc caches.
> - * Usually, the kmalloc caches are cache_line_size() aligned, except when
> - * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
> - * Some archs want to perform DMA into kmalloc caches and need a guaranteed
> - * alignment larger than the alignment of a 64-bit integer.
> - * ARCH_KMALLOC_MINALIGN allows that.
> - * Note that increasing this value may disable some debug features.
> - */
> -#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
> -#endif
> -
> -#ifndef ARCH_SLAB_MINALIGN
> -/*
> - * Enforce a minimum alignment for all caches.
> - * Intended for archs that get misalignment faults even for BYTES_PER_WORD
> - * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
> - * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
> - * some debug features.
> - */
> -#define ARCH_SLAB_MINALIGN 0
> -#endif
> -
> #ifndef ARCH_KMALLOC_FLAGS
> #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
> #endif

From: Pekka Enberg on
Christoph Lameter wrote:
> Maybe we can consolidate that into slab.h so that the alignment is the
> same for all allocators?

But we don't want that for SLOB as discussed in the other thread. It
really wants to be sizeof(unsigned long), not sizeof(unsigned long long).

Pekka
From: Christoph Lameter on

Maybe we can consolidate that into slab.h so that the alignment is the
same for all allocators?

From: Christoph Lameter on
On Wed, 19 May 2010, Pekka Enberg wrote:

> Christoph Lameter wrote:
> > Maybe we can consolidate that into slab.h so that the alignment is the
> > same for all allocators?
>
> But we don't want that for SLOB as discussed in the other thread. It really
> wants to be sizeof(unsigned long), not sizeof(unsigned long long).

__alignof__(unsigned long long)

SLOB needs to respect that as well, otherwise objects are not aligned as
required by the compiler.
