From: Avi Kivity on
On 08/02/2010 05:42 PM, Ryan Harper wrote:
>
>> A problem with directed yield is figuring out who to yield to. One idea
>> is to look for a random vcpu that is not running and donate some runtime
>> to it. In the best case, it's the lock holder and we cause it to start
>> running. Middle case it's not the lock holder, but we lose enough
>> runtime to stop running, so at least we don't waste cpu. Worst case we
>> continue running not having woken the lock holder. Spin again, yield
>> again hoping to find the right vcpu.
> It's been quite some time, but we played with directed yielding for Xen [1]
> and were looking to model the POWER directed yield (H_CONFER), where
> the lock-holding vcpu is indicated in the spinlock: when acquiring the
> lock, record the vcpu id; when another vcpu attempts to acquire the lock
> and fails, it can yield its time to the lock holder.

No reason why we can't have something similar.

We can take the lock and set the owner atomically:

LOCK_PREFIX "cmpxchg %1, %0"
: "=m"(lock) : "r"(raw_smp_processor_id() | SPIN_LOCK_BIAS),
"a"((u16)0) : "memory"

--
error compiling committee.c: too many arguments to function

From: Jeremy Fitzhardinge on
On 08/02/2010 01:32 AM, Avi Kivity wrote:
> On 07/26/2010 08:19 PM, Jeremy Fitzhardinge wrote:
>> On 07/25/2010 11:14 PM, Srivatsa Vaddagiri wrote:
>>> Add KVM hypercall for yielding vcpu timeslice.
>>
>> Can you do a directed yield?
>>
>
> A problem with directed yield is figuring out who to yield to. One
> idea is to look for a random vcpu that is not running and donate some
> runtime to it. In the best case, it's the lock holder and we cause it
> to start running. Middle case it's not the lock holder, but we lose
> enough runtime to stop running, so at least we don't waste cpu. Worst
> case we continue running not having woken the lock holder. Spin
> again, yield again hoping to find the right vcpu.

That can help with lockholder preemption, but on unlock you need to wake
up exactly the right vcpu - the next in the ticket queue - in order to
avoid burning masses of cpu. If each cpu records what lock it is
spinning on and what its ticket is in a percpu variable, then the
unlocker can search for the next person to kick.
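
Something like this, perhaps (untested, all names made up; KVM_HC_KICK_CPU
stands in for whatever kick mechanism the host ends up providing):

struct spin_wait {
        arch_spinlock_t *lock;  /* lock we are spinning on, NULL if none */
        __ticket_t ticket;      /* our place in the ticket queue */
};
static DEFINE_PER_CPU(struct spin_wait, spin_wait);

/* before spinning: publish what we are waiting for */
static void note_spinning(arch_spinlock_t *lock, __ticket_t ticket)
{
        this_cpu_write(spin_wait.ticket, ticket);
        smp_wmb();      /* ticket must be visible before the lock pointer */
        this_cpu_write(spin_wait.lock, lock);
}

/* after acquiring: stop advertising */
static void note_done_spinning(void)
{
        this_cpu_write(spin_wait.lock, NULL);
}

/* unlock side: find and kick whoever holds the next ticket */
static void kick_next_waiter(arch_spinlock_t *lock, __ticket_t next)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct spin_wait *w = &per_cpu(spin_wait, cpu);

                if (ACCESS_ONCE(w->lock) == lock && w->ticket == next) {
                        kvm_hypercall1(KVM_HC_KICK_CPU, cpu);
                        break;
                }
        }
}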

J

From: Srivatsa Vaddagiri on
On Mon, Aug 02, 2010 at 11:40:23AM +0300, Avi Kivity wrote:
> >>Can you do a directed yield?
> >We don't have that support yet in the Linux scheduler.
>
> If you think it's useful, it would be good to design it into the
> interface, and fall back to ordinary yield if the host doesn't
> support it.
>
> A big advantage of directed yield vs yield is that you conserve
> resources within a VM; a simple yield will cause the guest to drop
> its share of cpu to other guests.

Hmm .. I see the possibility of modifying yield to reclaim its "lost" timeslice
when it's scheduled next as well. Basically, remember what timeslice we have
given up and add that as a "bonus" when it runs next. That would keep the
dynamics of yield donation/reclaim local to the (physical) cpu and IMHO is
less complex than dealing with directed yield between tasks located on
different physical cpus. Wouldn't that also address the fairness issue with
yield that you are pointing at?
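
Something along these lines in CFS (hand-wavy sketch: yield_bonus is a
made-up sched_entity field, and I'm glossing over where exactly the
reclaim hook would live):

static void yield_task_fair(struct rq *rq)
{
        struct sched_entity *se = &rq->curr->se;
        u64 penalty = sched_slice(cfs_rq_of(se), se);

        se->vruntime += penalty;        /* step aside now ... */
        se->yield_bonus += penalty;     /* ... remembering what we gave up */
}

/* when the task is picked to run next, credit the bonus back */
static void reclaim_yield_bonus(struct sched_entity *se)
{
        se->vruntime -= se->yield_bonus;
        se->yield_bonus = 0;
}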

- vatsa
From: Srivatsa Vaddagiri on
On Tue, Aug 03, 2010 at 10:46:59AM +0530, Srivatsa Vaddagiri wrote:
> On Mon, Aug 02, 2010 at 11:40:23AM +0300, Avi Kivity wrote:
> > >>Can you do a directed yield?
> > >We don't have that support yet in the Linux scheduler.
> >
> > If you think it's useful, it would be good to design it into the
> > interface, and fall back to ordinary yield if the host doesn't
> > support it.
> >
> > A big advantage of directed yield vs yield is that you conserve
> > resources within a VM; a simple yield will cause the guest to drop
> > its share of cpu to other guests.
>
> Hmm .. I see the possibility of modifying yield to reclaim its "lost" timeslice
> when it's scheduled next as well. Basically, remember what timeslice we have
> given up and add that as a "bonus" when it runs next. That would keep the
> dynamics of yield donation/reclaim local to the (physical) cpu and IMHO is
> less complex than dealing with directed yield between tasks located on
> different physical cpus. Wouldn't that also address the fairness issue with
> yield that you are pointing at?

Basically, with directed yield we need to deal with these issues:

- Timeslice inflation of the target (lock-holder) vcpu affecting the fair time
  of other guests' vcpus.
- Intra-VM fairness - different vcpus could get different fair time, depending
  on how much of a lock-holder/spinner each vcpu is.

By simply teaching yield to reclaim its lost share, I feel we can avoid these
complexities and get most of the benefit of yield-on-contention.

CCing other scheduler experts for their opinion on directed yield.

- vatsa