From: Mathieu Desnoyers on
* Linus Torvalds (torvalds(a)linux-foundation.org) wrote:
> On Wed, Jul 14, 2010 at 1:39 PM, Mathieu Desnoyers
> <mathieu.desnoyers(a)efficios.com> wrote:
> >
> >>  - load percpu NMI stack frame pointer
> >>  - if non-zero we know we're nested, and should ignore this NMI:
> >>    - we're returning to kernel mode, so return immediately by using
> >> "popf/ret", which also keeps NMI's disabled in the hardware until the
> >> "real" NMI iret happens.
> >
> > Maybe incrementing a per-cpu missed NMIs count could be appropriate here so we
> > know how many NMIs should be replayed at iret ?
>
> No. As mentioned, there is no such counter in real hardware either.
>
> Look at what happens for the not-nested case:
>
> - NMI1 triggers. The CPU takes a fault, and runs the NMI handler with
> NMI's disabled
>
> - NMI2 triggers. Nothing happens, the NMI's are disabled.
>
> - NMI3 triggers. Again, nothing happens, the NMI's are still disabled
>
> - the NMI handler returns.
>
> - What happens now?
>
> How many NMI interrupts do you get? ONE. Exactly like my "emulate it
> in software" approach. The hardware doesn't have any counters for
> pending NMI's either. Why should the software emulation have them?

So I figure, given Maciej's response, that we can get at most 2 nested NMIs, no
more. So I was probably going too far with the counter, but we do need to handle
2. However, failing to deliver the second NMI in this case would not match the
hardware behavior (see below).
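Linus's point can be sketched as a tiny state machine in C: the hardware keeps a
single pending bit, not a counter, so any number of NMIs taken while the handler
runs collapse into exactly one replay. This is only an illustrative model, not
kernel code; all names are made up:

```c
#include <assert.h>
#include <stdbool.h>

/* One pending bit, like the hardware latch: however many NMIs arrive
 * while the handler runs, at most one is replayed afterwards. */
static bool nmi_blocked;   /* set while an NMI handler is running */
static bool nmi_pending;   /* the single "latched" NMI, not a counter */
static int  nmi_handled;   /* how many times the handler actually ran */

static void raise_nmi(void)
{
    if (nmi_blocked) {
        nmi_pending = true;      /* collapses: no counting */
    } else {
        nmi_blocked = true;      /* hardware blocks further NMIs */
        nmi_handled++;           /* handler runs */
    }
}

static void nmi_iret(void)
{
    nmi_blocked = false;         /* iret re-enables NMIs ... */
    if (nmi_pending) {           /* ... and replays exactly one */
        nmi_pending = false;
        raise_nmi();
    }
}
```

Raising three NMIs back to back runs the handler once, and the iret replays
exactly one more, matching Linus's "how many NMI interrupts do you get? ONE".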

>
> >>    - before the popf/iret, use the NMI stack pointer to make the NMI
> >> return stack be invalid and cause a fault
> >
> > I assume you mean "popf/ret" here.
>
> Yes, that was a typo. The whole point of using popf was obviously to
> _avoid_ the iret ;)
>
> > So assuming we use a frame copy, we should
> > change the nmi stack pointer in the nesting 0 nmi stack copy, so the nesting 0
> > NMI iret will trigger the fault
> >
> >>  - set the NMI stack pointer to the current stack pointer
> >
> > That would mean bringing back the NMI stack pointer to the (nesting - 1) nmi
> > stack copy.
>
> I think you're confused. Or I am by your question.
>
> The NMI code would literally just do:
>
> - check if the NMI was nested, by looking at whether the percpu
> nmi-stack-pointer is non-NULL
>
> - if it was nested, do nothing, and return with a popf/ret. The only
> stack this sequence might need is to save/restore the register that
> we use for the percpu value (although maybe we can just do a "cmpl
> $0,%_percpu_seg:nmi_stack_ptr" and not even need that), and it's
> atomic because at this point we know that NMI's are disabled (we've
> not _yet_ taken any nested faults)
>
> - if it's a regular (non-nesting) NMI, we'd basically do
>
> 6* pushq 48(%rsp)
>
> to copy the five words that the NMI pushed (ss/rsp/rflags/cs/rip)
> and the one we saved ourselves (if we needed any, maybe we can make do
> with just 5 words).

Ah, right, you only need to do the copy, and use the copy, for the nesting level
0 NMI handler. The nested NMI can work on the "real" NMI stack because we never
expect it to fault.
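The repeated-push copy can be modeled in C, assuming a frame of exactly six
words. Note that on real x86 the effective address of "pushq disp(%rsp)" is
computed before %rsp is decremented, so for a six-word frame the displacement
works out to 5*8 = 40 bytes (the 48 above would fit a layout with one extra
word in between). All names here are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the "repeat pushq" frame copy: the stack grows toward
 * lower indices, and each push of the word (N-1) slots above the
 * current top peels off one word of an N-word frame, top first. */
enum { STACK_WORDS = 32, FRAME_WORDS = 6 };

static uint64_t stack[STACK_WORDS];
static int sp = STACK_WORDS;            /* index of the current top */

static void push(uint64_t v) { stack[--sp] = v; }

/* Models "pushq disp(%rsp)": the effective address is computed from
 * %rsp *before* it is decremented, exactly as on real x86. */
static void push_from_stack(int disp_words)
{
    uint64_t v = stack[sp + disp_words];
    push(v);
}

/* Duplicate the top FRAME_WORDS words, preserving their order. */
static void copy_frame(void)
{
    for (int i = 0; i < FRAME_WORDS; i++)
        push_from_stack(FRAME_WORDS - 1);
}
```

After copy_frame(), the six words below the old top are an in-order duplicate
of the original frame, which is what lets the level-0 handler run on the copy
while the "real" frame stays available for the nesting check.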

>
> - then we just save that new stack pointer to the percpu thing with a simple
>
> movq %rsp,%__percpu_seg:nmi_stack_ptr
>
> and we're all done. The final "iret" will do the right thing (either
> fault or return), and there are no races that I can see exactly
> because we use a single nmi-atomic instruction (the "iret" itself) to
> either re-enable NMI's _or_ test whether we should re-do an NMI.
>
> There is a single-instruction window that is interesting in the return
> path, which is the window between the two final instructions:
>
> movl $0,%__percpu_seg:nmi_stack_ptr
> iret
>
> where I wonder what happens if we have re-enabled NMI (due to a fault
> in the NMI handler), but we haven't actually taken the NMI itself yet,
> so now we _will_ re-use the stack. Hmm. I suspect we need another of
> those horrible "if the NMI happens at this particular %rip" cases that
> we already have for the sysenter code on x86-32 for the NMI/DEBUG trap
> case of fixing up the stack pointer.

Yes, this is exactly the instruction window I was worried about. I see another
possible failure mode:

- NMI
- page fault
- iret
- NMI
- set nmi_stack_ptr to 0, popf/ret.
- page fault (yep, another one!)
- iret
- movl $0,%__percpu_seg:nmi_stack_ptr
- iret

So in this case, movl/iret are executed with NMIs enabled. If an NMI comes in
after the movl instruction, it will not detect that it is nested, and it will
re-use the percpu "nmi_stack_ptr" stack, squashing the "faulty" stack pointer
with a brand new one which won't trigger a fault. I'm afraid that in this case,
the last NMI handler will iret back into the "nesting 0" handler at its iret
instruction, which will in turn return to itself, with all hell breaking loose
in the meantime (an endless iret loop).

So this also calls for special-casing an NMI that nests on top of the iret in

- movl $0,%__percpu_seg:nmi_stack_ptr
- iret <-----

At the beginning of the NMI handler, we could detect whether we are nested over
an NMI (by checking nmi_stack_ptr != NULL) or whether we are at this specific
%rip, and assume we are nested in both cases.
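That entry-time test is cheap; a hypothetical sketch in C (the real thing would
be a couple of instructions in the NMI entry asm, and all names here are made
up for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Per-cpu state in the real thing; plain globals in this sketch. */
static uint64_t *nmi_stack_ptr;      /* non-NULL while an NMI is live */
static uintptr_t nmi_exit_iret_rip;  /* %rip of the special final iret */

/* An NMI is treated as nested if the per-cpu saved stack pointer is
 * non-NULL, or if it interrupted the one-instruction window at the
 * iret that follows the clearing of nmi_stack_ptr. */
static int nmi_is_nested(uintptr_t interrupted_rip)
{
    return nmi_stack_ptr != NULL || interrupted_rip == nmi_exit_iret_rip;
}
```

The %rip comparison is against a single address, which is what makes this
simpler than the sysenter fixup, where a whole range has to be checked.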

>
> And maybe I missed something else. But it does look reasonably simple.
> Subtle, but not a lot of code. And the code is all very much about the
> NMI itself, not about other random sequences. No?

If we can find a clean way to handle this NMI vs iret problem outside of the
entry_*.S code, within NMI-specific code, I'm indeed all for it. entry_*.S is
already complicated enough as it is. I think checking the %rip at NMI entry
could work out.

Thanks!

Mathieu

--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Maciej W. Rozycki on
On Wed, 14 Jul 2010, Linus Torvalds wrote:

> You just count differently. I don't count the first one (the "real"
> NMI). That obviously happens. So I only count how many interrupts we
> need to fake. That's my "one". That's the one that happens as a result
> of the fault that we take on the iret in the emulated model.

Ah, I see -- so we are on the same page after all.

> (Yeah, yeah, you can call it a "one-bit counter", but I don't think
> that's a counter. It's just a bit of information).

Hardware has something like a strapped-high D flip-flop (NMI goes to the
clock input) with an extra reset input, I presume -- this dates back to the
8086, when the transistor count mattered to better than one part in 1e6. ;)

Maciej
From: Linus Torvalds on
On Wed, Jul 14, 2010 at 3:21 PM, Mathieu Desnoyers
<mathieu.desnoyers(a)efficios.com> wrote:
>
> If we can find a clean way to handle this NMI vs iret problem outside of the
> entry_*.S code, within NMI-specific code, I'm indeed all for it. entry_*.s is
> already complicated enough as it is. I think checking the %rip at NMI entry
> could work out.

I think the %rip check should be pretty simple - exactly because there
is only a single point where the race is open between that 'mov' and
the 'iret'. So it's simpler than the (similar) thing we do for
debug/nmi stack fixup for sysenter that has to check a range.

The only worry is if that crazy paravirt code wants to paravirtualize
the iretq. Afaik, paravirt does that exactly because they screw up
iret handling themselves. Maybe we could stop doing that stupid iretq
paravirtualization, and just tell the paravirt people to do the same
thing I propose, and just allow nesting.

Linus
From: Mathieu Desnoyers on
* Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> On Wed, Jul 14, 2010 at 12:54:19PM -0700, Linus Torvalds wrote:
> > On Wed, Jul 14, 2010 at 12:36 PM, Frederic Weisbecker
> > <fweisbec(a)gmail.com> wrote:
> > >
> > > There is also the fact we need to handle the lost NMI, by deferring its
> > > treatment or so. That adds even more complexity.
> >
> > I don't think you read my proposal very deeply. It already handles
> > them by taking a fault on the iret of the first one (that's why we
> > point to the stack frame - so that we can corrupt it and force a
> > fault).
>
>
> Ah right, I missed this part.

Hrm, Frederic, I hate to ask, but... what are you doing with those percpu 8k
data structures, exactly? :)

Mathieu


--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
From: Frederic Weisbecker on
On Wed, Jul 14, 2010 at 06:31:07PM -0400, Mathieu Desnoyers wrote:
> * Frederic Weisbecker (fweisbec(a)gmail.com) wrote:
> > On Wed, Jul 14, 2010 at 12:54:19PM -0700, Linus Torvalds wrote:
> > > On Wed, Jul 14, 2010 at 12:36 PM, Frederic Weisbecker
> > > <fweisbec(a)gmail.com> wrote:
> > > >
> > > > There is also the fact we need to handle the lost NMI, by deferring its
> > > > treatment or so. That adds even more complexity.
> > >
> > > I don't think you read my proposal very deeply. It already handles
> > > them by taking a fault on the iret of the first one (that's why we
> > > point to the stack frame - so that we can corrupt it and force a
> > > fault).
> >
> >
> > Ah right, I missed this part.
>
> Hrm, Frederic, I hate to ask, but... what are you doing with those percpu 8k
> data structures, exactly? :)
>
> Mathieu



So, when an event triggers in perf, we sometimes want to capture the stacktrace
that led to the event.

We want this stacktrace (here we call that a callchain) to be recorded
locklessly. So we want this callchain buffer per cpu, with the following
type:

#define PERF_MAX_STACK_DEPTH 255

struct perf_callchain_entry {
        __u64 nr;
        __u64 ip[PERF_MAX_STACK_DEPTH];
};


That makes 2048 bytes. But one buffer per cpu is not enough for the callchain to
be recorded locklessly: we also need one buffer per context (task, softirq,
hardirq, NMI), as an event can trigger in any of these. Since we disable
preemption, none of these contexts can nest locally. (In fact hardirqs can nest,
but we just don't care about that corner case.)

So it makes 2048 * 4 = 8192 bytes, and that per cpu.
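The arithmetic checks out directly. A compilable sketch, using uint64_t in
place of __u64; the per-context grouping is illustrative, not the actual perf
layout:

```c
#include <assert.h>
#include <stdint.h>

#define PERF_MAX_STACK_DEPTH 255

struct perf_callchain_entry {
        uint64_t nr;                          /* entries actually used */
        uint64_t ip[PERF_MAX_STACK_DEPTH];    /* return addresses */
};
/* (1 + 255) * 8 bytes = 2048 bytes per entry. */

/* One entry per context, per cpu: with preemption disabled, task,
 * softirq, hardirq and NMI cannot nest locally, so four buffers
 * suffice for lockless recording. */
enum { NR_CALLCHAIN_CONTEXTS = 4 };

struct perf_callchain_buffers {
        struct perf_callchain_entry ctx[NR_CALLCHAIN_CONTEXTS];
};
/* 4 * 2048 bytes = 8192 bytes per cpu. */
```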
