From: Cyrill Gorcunov
If the last active performance counter has not overflowed at the
moment an NMI is triggered by another counter, the irq statistics
may miss an update. As a more serious consequence, the APIC quirk
may not be triggered, so the APIC LVT entry stays masked.

Tested-by: Lin Ming <ming.m.lin(a)intel.com>
Signed-off-by: Cyrill Gorcunov <gorcunov(a)openvz.org>
CC: Lin Ming <ming.m.lin(a)intel.com>
CC: Stephane Eranian <eranian(a)google.com>
CC: Peter Zijlstra <a.p.zijlstra(a)chello.nl>
CC: Ingo Molnar <mingo(a)elte.hu>
CC: Frederic Weisbecker <fweisbec(a)gmail.com>
---
arch/x86/kernel/cpu/perf_event_p4.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
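Note (illustration only, not part of the commit message): the self-contained
user-space sketch below, with made-up per-counter overflow states, shows why
overwriting "handled" on every loop iteration can end up reporting the NMI as
unhandled when only earlier counters overflowed, while accumulating overflows
and returning "handled > 0" cannot. It only mimics the loop structure of
p4_pmu_handle_irq(); none of it is kernel code.

#include <stdio.h>

/* Made-up overflow state per counter: counters 0 and 1 overflowed,
 * the last active counter (2) did not. */
static const int overflow_per_counter[] = { 1, 1, 0 };
#define NUM_COUNTERS (sizeof(overflow_per_counter) / sizeof(overflow_per_counter[0]))

/* Old logic: the result of the last counter clobbers everything seen before. */
static int handle_irq_old(void)
{
	int handled = 0;
	unsigned int idx;

	for (idx = 0; idx < NUM_COUNTERS; idx++)
		handled = overflow_per_counter[idx];

	return handled;
}

/* New logic: overflows are accumulated, so earlier ones are not lost. */
static int handle_irq_new(void)
{
	int handled = 0;
	unsigned int idx;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		int overflow = overflow_per_counter[idx];

		if (!overflow)
			continue;

		handled += overflow;
	}

	return handled > 0;
}

int main(void)
{
	/* Prints 0: the NMI looks unhandled, irq stats and the LVT quirk are skipped. */
	printf("old: %d\n", handle_irq_old());
	/* Prints 1: the NMI is correctly reported as handled. */
	printf("new: %d\n", handle_irq_new());
	return 0;
}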

Index: linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
=====================================================================
--- linux-2.6.git.orig/arch/x86/kernel/cpu/perf_event_p4.c
+++ linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
@@ -656,6 +656,7 @@ static int p4_pmu_handle_irq(struct pt_r
cpuc = &__get_cpu_var(cpu_hw_events);

for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+ int overflow;

if (!test_bit(idx, cpuc->active_mask))
continue;
@@ -666,12 +667,14 @@ static int p4_pmu_handle_irq(struct pt_r
WARN_ON_ONCE(hwc->idx != idx);

/* it might be unflagged overflow */
- handled = p4_pmu_clear_cccr_ovf(hwc);
+ overflow = p4_pmu_clear_cccr_ovf(hwc);

val = x86_perf_event_update(event);
- if (!handled && (val & (1ULL << (x86_pmu.cntval_bits - 1))))
+ if (!overflow && (val & (1ULL << (x86_pmu.cntval_bits - 1))))
continue;

+ handled += overflow;
+
/* event overflow for sure */
data.period = event->hw.last_period;

@@ -687,7 +690,7 @@ static int p4_pmu_handle_irq(struct pt_r
inc_irq_stat(apic_perf_irqs);
}

- return handled;
+ return handled > 0;
}

/*