From: Peter Zijlstra
On Tue, 2010-06-29 at 15:35 +0100, Jan Beulich wrote:
>
> The (only) additional overhead this introduces for native execution is
> the writing of the owning CPU in the lock acquire paths.

Uhm, and growing the size of spinlock_t to 6 bytes (or 8 bytes when
aligned) when NR_CPUS > 256.
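
A minimal sketch of where those numbers come from (hypothetical struct
names, not the kernel's actual arch_spinlock_t definition): once
NR_CPUS > 256 the ticket head and tail each need 16 bits, and recording
the owning CPU needs another 16.

#include <stdio.h>

typedef unsigned short u16;

/* Ticket lock as-is: head/tail are u16 once NR_CPUS > 256. */
struct ticket_lock {
	u16 head;	/* ticket currently being served */
	u16 tail;	/* next ticket to hand out */
};			/* sizeof == 4 */

/* With the proposed owner tracking: one more u16 for the CPU number. */
struct owned_ticket_lock {
	u16 head;
	u16 tail;
	u16 owner;	/* CPU currently holding the lock */
};			/* sizeof == 6; padded to 8 if aligned to 4 or 8 */

int main(void)
{
	printf("plain: %zu bytes, with owner: %zu bytes\n",
	       sizeof(struct ticket_lock),
	       sizeof(struct owned_ticket_lock));
	return 0;
}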
From: Peter Zijlstra
On Wed, 2010-06-30 at 09:49 +0100, Jan Beulich wrote:
> >>> On 30.06.10 at 10:11, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Tue, 2010-06-29 at 15:35 +0100, Jan Beulich wrote:
> >>
> >> The (only) additional overhead this introduces for native execution is
> >> the writing of the owning CPU in the lock acquire paths.
> >
> > Uhm, and growing the size of spinlock_t to 6 bytes (or 8 bytes when
> > aligned) when NR_CPUS > 256.
>
> Indeed, I should have mentioned that. Will do so in an eventual
> next version.

It would be good to also get a measure of the data structure bloat
caused by this; I'm not sure the .data section size is representative
there, but it's something easy to provide.

Something like: pahole -s build/vmlinux | awk '{t+=$2} END {print t}'
from before and after might also be interesting; pahole -s prints each
structure with its size in the second column, so the awk script sums
the sizes of all reported structures.
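
A hedged usage sketch (vmlinux-before and vmlinux-after are placeholder
names for builds without and with the patch):

# Sum pahole's size column (field 2) over all structures,
# once per build, and compare the two totals.
pahole -s vmlinux-before | awk '{t+=$2} END {print t}'
pahole -s vmlinux-after  | awk '{t+=$2} END {print t}'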