From: Jeremy Fitzhardinge
On 06/30/2010 01:52 PM, Jan Beulich wrote:
> I fail to see that: Depending on the hypervisor's capabilities, the
> two main functions could be much smaller (potentially there wouldn't
> even be a need for the unlock hook in some cases),

What mechanism are you envisaging in that case?

>> That appears to be a mechanism to allow it to take interrupts while
>> spinning on the lock, which is something that stock ticket locks don't
>> allow. If that's a useful thing to do, it should happen in the generic
>> ticketlock code rather than in the per-hypervisor backend (otherwise we
>> end up with all kinds of subtle differences in lock behaviour depending
>> on the exact environment, which is just going to be messy). Even if
>> interrupts-while-spinning isn't useful on native hardware, it is going
>> to be equally applicable to all virtual environments.
>>
> While we do interrupt re-enabling in our pv kernels, I intentionally
> didn't do this here - it complicates the code quite a bit further, and
> that didn't seem right for an initial submission.
>

Ah, I was confused by this:
> +	/*
> +	 * If we interrupted another spinlock while it was blocking, make
> +	 * sure it doesn't block (again) without re-checking the lock.
> +	 */
> +	if (spinning.prev)
> +		sync_set_bit(percpu_read(poll_evtchn),
> +			     xen_shared_info->evtchn_pending);
> +
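(For concreteness, interrupt re-enabling done in the generic ticketlock
slowpath rather than in each backend could look roughly like this -- a
sketch only, with ticket_is_ours() and wait_for_kick_or_spin() invented
as stand-ins for the real accessors/hooks, not code from either series:)

static void ticket_lock_slowpath(arch_spinlock_t *lock, unsigned int ticket,
				 unsigned long flags)
{
	/* only re-enable if the caller had interrupts on to begin with */
	bool reenable = !arch_irqs_disabled_flags(flags);

	while (!ticket_is_ours(lock, ticket)) {	/* stand-in: head == ticket? */
		if (reenable)
			local_irq_enable();	/* take interrupts while we wait */

		wait_for_kick_or_spin(lock, ticket);	/* backend hook or plain spin */

		if (reenable)
			local_irq_disable();	/* re-check and acquire with irqs off */
	}
}

(The caller is assumed to have taken its ticket with interrupts already
disabled, so the lock is always finally acquired with them off.)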

> The list is really just needed to not pointlessly tickle CPUs that
> won't own the just released lock next anyway (or would own
> it, but meanwhile went for another one where they also decided
> to go into polling mode).

Did you measure that it was a particularly common case which was worth
optimising for?

J
From: Jeremy Fitzhardinge
On 06/30/2010 03:21 PM, Jan Beulich wrote:
>>>> On 30.06.10 at 14:53, Jeremy Fitzhardinge <jeremy(a)goop.org> wrote:
>>>>
>> On 06/30/2010 01:52 PM, Jan Beulich wrote:
>>
>>> I fail to see that: Depending on the hypervisor's capabilities, the
>>> two main functions could be much smaller (potentially there wouldn't
>>> even be a need for the unlock hook in some cases),
>>>
>> What mechanism are you envisaging in that case?
>>
> A simple yield is better than not doing anything at all.
>

Is that true? The main problem with ticket locks is that they require
the host scheduler to schedule the correct "next" vcpu after unlock. If
the vcpus are just bouncing in and out of the scheduler with yields, then
there's still no guarantee that the host scheduler will pick the right
vcpu at anything like the right time. I guess if a vcpu knows it's next
it can just keep spinning while everyone else yields, and that would work
approximately OK.
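
Something along these lines, say (a toy user-space model just to
illustrate the spin-if-next/yield-otherwise idea, not code from either
series):

#include <stdatomic.h>
#include <sched.h>

struct ticketlock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently allowed to hold the lock */
};

static void ticket_lock(struct ticketlock *l)
{
	unsigned int me = atomic_fetch_add(&l->next, 1);
	unsigned int owner;

	while ((owner = atomic_load(&l->owner)) != me) {
		if (owner + 1 == me)
			continue;	/* we're next: keep spinning */
		sched_yield();		/* not next: get out of the way */
	}
}

static void ticket_unlock(struct ticketlock *l)
{
	atomic_fetch_add(&l->owner, 1);
}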

>>> The list is really just needed to not pointlessly tickle CPUs that
>>> won't own the just released lock next anyway (or would own
>>> it, but meanwhile went for another one where they also decided
>>> to go into polling mode).
>>>
>> Did you measure that it was a particularly common case which was worth
>> optimising for?
>>
> I didn't measure this particular case. But since the main problem
> with ticket locks is when (host) CPUs are overcommitted, it
> certainly is a bad idea to create even more load on the host than
> there already is (all the more so since these come in bursts).
>

A directed wakeup is important, but I'm not sure how important its
efficiency is (since you're already deep in the slowpath if it makes a
difference at all).
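
Roughly, I take the list-based scheme to amount to something like this
on the unlock side (names invented for the sketch, kick_cpu() standing
in for sending the poll event/IPI; this isn't the patch itself):

struct spinning {
	arch_spinlock_t *lock;		/* lock this CPU is polling on */
	unsigned int ticket;		/* ticket it is waiting for */
	struct spinning *prev;		/* nested spinning via interrupts */
};

static DEFINE_PER_CPU(struct spinning *, spinning_head);

/* On unlock, only poke the CPU whose ticket is now at the head. */
static void ticket_unlock_kick(arch_spinlock_t *lock, unsigned int next)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct spinning *s = per_cpu(spinning_head, cpu);

		for (; s; s = s->prev) {
			if (s->lock == lock && s->ticket == next) {
				kick_cpu(cpu);
				return;
			}
		}
	}
}

(Obviously the real thing needs ordering against the remote CPUs' list
updates; the point is just that only the next owner gets kicked.)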

J