From: Mathieu Desnoyers on
* Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> On Wed, Jul 14, 2010 at 06:31:07PM -0400, Mathieu Desnoyers wrote:
> > * Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> > > On Wed, Jul 14, 2010 at 12:54:19PM -0700, Linus Torvalds wrote:
> > > > On Wed, Jul 14, 2010 at 12:36 PM, Frederic Weisbecker
> > > > <fweisbec(a)gmail.com> wrote:
> > > > >
> > > > > There is also the fact we need to handle the lost NMI, by deferring its
> > > > > treatment or so. That adds even more complexity.
> > > >
> > > > I don't think you read my proposal very deeply. It already handles
> > > > them by taking a fault on the iret of the first one (that's why we
> > > > point to the stack frame - so that we can corrupt it and force a
> > > > fault).
> > >
> > >
> > > Ah right, I missed this part.
> >
> > Hrm, Frederic, I hate to ask that but... what are you doing with those percpu 8k
> > data structures exactly ? :)
> >
> > Mathieu
>
>
>
> So, when an event triggers in perf, we sometimes want to capture the stacktrace
> that led to the event.
>
> We want this stacktrace (here we call that a callchain) to be recorded
> locklessly. So we want this callchain buffer per cpu, with the following
> type:

Ah OK, so you mean that perf now has 2 different ring buffer implementations?
How about using a single one that is generic enough to handle perf and ftrace
needs instead?

(/me runs away quickly before the lightning strikes) ;)

Mathieu


>
> #define PERF_MAX_STACK_DEPTH 255
>
> struct perf_callchain_entry {
>         __u64 nr;
>         __u64 ip[PERF_MAX_STACK_DEPTH];
> };
>
>
> That makes 2048 bytes. But one buffer per cpu is not enough for the callchain to be recorded
> locklessly, we also need one buffer per context: task, softirq, hardirq, nmi, as
> an event can trigger in any of these.
> Since we disable preemption, none of these contexts can nest locally. In
> fact hardirqs can nest, but we just don't care about this corner case.
>
> So, it makes 2048 * 4 = 8192 bytes. And that per cpu.
>

--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Linus Torvalds on
On Wed, Jul 14, 2010 at 4:09 PM, Andi Kleen <andi(a)firstfloor.org> wrote:
>
> It can happen in theory, but for such a rare case take a lock
> and walking everything should be fine.

Actually, that's _exactly_ the wrong kind of thinking.

Bad latency is bad latency, even when it happens rarely. So latency
problems kill - even when they are rare. So you want to avoid them.
And walking every possible page table is a _huge_ latency problem when
it happens.

In contrast, what's the advantage of doing things synchronously while
holding a lock? It's that you can avoid a few page faults, and get
better CPU use. But that's _stupid_ if it's something that is very
rare to begin with.

So the very rarity argues for the lazy approach. If it wasn't rare,
there would be a much stronger argument for trying to do things
up-front.

Linus
From: Tejun Heo on
Hello,

On 07/14/2010 10:08 PM, H. Peter Anvin wrote:
>> I suspect the low level per cpu allocation functions should
>> just call it.
>>
>
> Yes, specifically the point at which we allocate new per cpu memory
> blocks.

We can definitely do that, but walking the whole page table list is too
heavy to do automatically at that level, especially when all users
other than NMI would be fine with the default lazy approach. If Linus'
approach doesn't pan out, I think the right thing to do would be
adding a wrapper to vmalloc_sync_all() and let perf code call it after
percpu allocation.

Thanks.

--
tejun
From: Frederic Weisbecker on
On Wed, Jul 14, 2010 at 07:11:17PM -0400, Mathieu Desnoyers wrote:
> * Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> > On Wed, Jul 14, 2010 at 06:31:07PM -0400, Mathieu Desnoyers wrote:
> > > * Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> > > > On Wed, Jul 14, 2010 at 12:54:19PM -0700, Linus Torvalds wrote:
> > > > > On Wed, Jul 14, 2010 at 12:36 PM, Frederic Weisbecker
> > > > > <fweisbec(a)gmail.com> wrote:
> > > > > >
> > > > > > There is also the fact we need to handle the lost NMI, by deferring its
> > > > > > treatment or so. That adds even more complexity.
> > > > >
> > > > > I don't think you read my proposal very deeply. It already handles
> > > > > them by taking a fault on the iret of the first one (that's why we
> > > > > point to the stack frame - so that we can corrupt it and force a
> > > > > fault).
> > > >
> > > >
> > > > Ah right, I missed this part.
> > >
> > > Hrm, Frederic, I hate to ask that but... what are you doing with those percpu 8k
> > > data structures exactly ? :)
> > >
> > > Mathieu
> >
> >
> >
> > So, when an event triggers in perf, we sometimes want to capture the stacktrace
> > that led to the event.
> >
> > We want this stacktrace (here we call that a callchain) to be recorded
> > locklessly. So we want this callchain buffer per cpu, with the following
> > type:
>
> Ah OK, so you mean that perf now has 2 different ring buffer implementations?
> How about using a single one that is generic enough to handle perf and ftrace
> needs instead?
>
> (/me runs away quickly before the lightning strikes) ;)
>
> Mathieu


:-)

That's no ring buffer. It's a temporary linear buffer to fill a stacktrace,
and get its effective size before committing it to the real ring buffer.

Sure that involves two copies.

But I don't have a better solution in mind than using a pre-buffer for that,
since we can't know the size of the stacktrace in advance. We could
always reserve the max stacktrace size, but that would be wasteful.

From: Steven Rostedt on
[ /me removes the duplicate email of himself! ]

On Wed, 2010-07-14 at 19:11 -0400, Mathieu Desnoyers wrote:
> >
> > So, when an event triggers in perf, we sometimes want to capture the stacktrace
> > that led to the event.
> >
> > We want this stacktrace (here we call that a callchain) to be recorded
> > locklessly. So we want this callchain buffer per cpu, with the following
> > type:
>
> Ah OK, so you mean that perf now has 2 different ring buffer implementations?
> How about using a single one that is generic enough to handle perf and ftrace
> needs instead?
>
> (/me runs away quickly before the lightning strikes) ;)
>

To be fair, that's just a temp buffer.

-- Steve

(/me sends this to try to remove the dup email he's getting )
