From: Nick Piggin on
On Thu, Jun 03, 2010 at 10:52:51AM +0200, Andi Kleen wrote:
> On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
> > On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
> > >
> > > There are two separate problems: the more general problem is that
> > > the hypervisor can put a vcpu to sleep while holding a lock, causing
> > > other vcpus to spin until the end of their time slice. This can
> > > only be addressed with hypervisor help.
> >
> > Fyi - I have an early patch ready to address this issue. Basically I am using
> > host-kernel memory (mmap'ed into guest as io-memory via ivshmem driver) to hint
> > host whenever guest is in spin-lock'ed section, which is read by host scheduler
> > to defer preemption.
>
> Looks like a nice, simple way to handle this for the kernel.
>
> However I suspect user space will hit the same issue sooner
> or later. I assume your way is not easily extensible to futexes?

Well userspace has always had the problem, hypervisor or not. So
sleeping locks obviously help a lot with that.

But we do hit the problem at times. The MySQL sysbench scalability
problem was a fine example:

http://ozlabs.org/~anton/linux/sysbench/

Performance would tank when threads oversubscribe CPUs because lock
holders would start getting preempted.

This was due to nasty locking in MySQL as well, mind you.

There are some ways to improve it. glibc I believe has an option to
increase thread priority when taking a mutex, which is similar to
what we have here.
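
If I remember right that maps onto the POSIX priority-ceiling protocol;
a minimal sketch, assuming PTHREAD_PRIO_PROTECT is indeed the option in
question (the ceiling only matters for SCHED_FIFO/SCHED_RR threads):

#include <pthread.h>

/* Sketch only: a priority-ceiling mutex raises the owner to the
 * ceiling priority for as long as it holds the lock, so the holder
 * is less likely to be preempted by other same-policy threads. */
static pthread_mutex_t lock;

static int init_ceiling_mutex(int ceiling)
{
        pthread_mutexattr_t attr;
        int err;

        err = pthread_mutexattr_init(&attr);
        if (err)
                return err;
        err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        if (!err)
                err = pthread_mutexattr_setprioceiling(&attr, ceiling);
        if (!err)
                err = pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return err;
}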

But it's a fairly broad problem for userspace. The resource may not be
just a lock; it could be IO as well.


From: Nick Piggin on
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
> On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
> >
> > There are two separate problems: the more general problem is that
> > the hypervisor can put a vcpu to sleep while holding a lock, causing
> > other vcpus to spin until the end of their time slice. This can
> > only be addressed with hypervisor help.
>
> Fyi - I have an early patch ready to address this issue. Basically I am using
> host-kernel memory (mmap'ed into guest as io-memory via ivshmem driver) to hint
> host whenever guest is in spin-lock'ed section, which is read by host scheduler
> to defer preemption.
>
> Guest side:
>
> static inline void spin_lock(spinlock_t *lock)
> {
>         raw_spin_lock(&lock->rlock);
> +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
> }
>
> static inline void spin_unlock(spinlock_t *lock)
> {
> +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
>         raw_spin_unlock(&lock->rlock);
> }
>
> [similar changes to other spinlock variants]

Great, this is a nice way to improve it.

You might want to consider playing with first taking a ticket, and
then, if we fail to acquire the lock immediately, incrementing
defer_preempt before we start spinning.

The downside of this would be if we waste all our slice on spinning
and then get preempted in the critical section. But with ticket locks
you can easily see how many entries are in the queue ahead of you.
So you could experiment with starting to defer preemption when we
notice we are getting toward the head of the queue.
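
Roughly what I mean, as a sketch only - the __ticket_take()/__ticket_head()
helpers and the DEFER_AHEAD threshold are made up here, standing in for the
real arch ticket-lock internals:

static inline void spin_lock(spinlock_t *lock)
{
        __ticket_t me = __ticket_take(&lock->rlock);    /* grab a ticket */
        bool deferring = false;

        while (__ticket_head(&lock->rlock) != me) {
                /* Only ask for preemption deferral once we are within
                 * DEFER_AHEAD entries of the head of the queue. */
                if (!deferring &&
                    (__ticket_t)(me - __ticket_head(&lock->rlock)) <= DEFER_AHEAD) {
                        __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
                        deferring = true;
                }
                cpu_relax();
        }

        /* Uncontended fast path: set the hint only after acquiring. */
        if (!deferring)
                __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
}

spin_unlock() would stay as in your patch, decrementing the count.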

Have you also looked at how s390 checks whether the owning vcpu is
running - spinning if it is, and yielding to the hypervisor if it is
not? That is something like turning it into an adaptive lock, which
could be applicable here as well.

From: David Woodhouse on
On Tue, 2010-06-01 at 21:36 +0200, Andi Kleen wrote:
> > Collecting the contention/usage statistics on a per-spinlock
> > basis seems complex. I believe a practical approximation
> > to this is adaptive mutexes where, upon hitting a spin-time
> > threshold, we punt and let the scheduler reconcile fairness.
>
> That would probably work, except: how do you get the
> adaptive spinlock into a paravirt op without slowing
> down a standard kernel?

It only ever comes into play in the case where the spinlock is contended
anyway -- surely it shouldn't be _that_ much of a performance issue?

See the way that ppc64 handles it -- on a machine with overcommitted
virtual cpus, it will call __spin_yield (arch/powerpc/lib/locks.c) on
contention, which may cause the virtual CPU to donate its hypervisor
timeslice to the vCPU which is actually holding the lock in question.
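
The shape of it is roughly this (paraphrased from memory rather than
quoted verbatim, so treat the details as approximate):

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
        while (1) {
                if (likely(__arch_spin_trylock(lock) == 0))
                        break;
                do {
                        HMT_low();              /* drop SMT thread priority */
                        if (SHARED_PROCESSOR)   /* overcommitted LPAR? */
                                __spin_yield(lock);  /* confer slice to holder */
                } while (unlikely(lock->slock != 0));
                HMT_medium();
        }
}

__spin_yield() looks up which virtual processor holds the lock and asks
the hypervisor to run that one instead, so the spinner's cycles aren't
simply burned.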

--
David Woodhouse Open Source Technology Centre
David.Woodhouse@intel.com Intel Corporation

From: Srivatsa Vaddagiri on
On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
> > Guest side:
> >
> > static inline void spin_lock(spinlock_t *lock)
> > {
> >         raw_spin_lock(&lock->rlock);
> > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
> > }
> >
> > static inline void spin_unlock(spinlock_t *lock)
> > {
> > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
> >         raw_spin_unlock(&lock->rlock);
> > }
> >
> > [similar changes to other spinlock variants]
>
> Great, this is a nice way to improve it.
>
> You might want to consider playing with first taking a ticket, and
> then, if we fail to acquire the lock immediately, incrementing
> defer_preempt before we start spinning.
>
> The downside of this would be if we waste all our slice on spinning
> and then get preempted in the critical section. But with ticket locks
> you can easily see how many entries are in the queue ahead of you.
> So you could experiment with starting to defer preemption when we
> notice we are getting toward the head of the queue.

Mm - my goal is to avoid long spin times in the first place (because the
owning vcpu was descheduled at an unfortunate time, i.e. while it was holding a
lock). In that sense, I am targeting preemption-deferral of the lock *holder*
rather than of the lock acquirer. So ideally, whenever somebody tries to grab a
lock it should be free most of the time; it can be held only if the owner is
currently running - which means we won't have to spin too long for the lock.
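
On the host side the idea is just that the scheduler peeks at the shared
page before preempting the vcpu thread. Very roughly (the names below are
made up for illustration, not taken from the actual patch):

/* The ivshmem-backed page the guest writes defer_preempt into is also
 * mapped on the host; the scheduler consults it before preempting the
 * task running that vcpu. */
static bool vcpu_in_lock_section(struct guest_hint *hint)
{
        return ACCESS_ONCE(hint->defer_preempt) != 0;
}

static bool maybe_defer_preemption(struct vcpu_task *vt)
{
        /* Grant at most one short extension per slice, so a buggy or
         * malicious guest cannot hold on to the cpu indefinitely. */
        if (vcpu_in_lock_section(vt->hint) && !vt->defer_granted) {
                vt->defer_granted = true;
                extend_slice(vt, DEFER_PREEMPT_NS);     /* hypothetical */
                return true;
        }
        return false;
}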

> Have you also looked at how s390 checks whether the owning vcpu is
> running - spinning if it is, and yielding to the hypervisor if it is
> not? That is something like turning it into an adaptive lock, which
> could be applicable here as well.

I don't think even s390 does adaptive spinlocks. Also, afaik s390 z/VM does gang
scheduling of vcpus, which greatly reduces the severity of this problem -
essentially the lock acquirer and holder are running simultaneously on different
cpus all the time. Gang scheduling is on my list of things to look at much later
(although I have been warned that it's a scalability nightmare!).

- vatsa
From: Nick Piggin on
On Thu, Jun 03, 2010 at 05:34:50PM +0530, Srivatsa Vaddagiri wrote:
> On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
> > > Guest side:
> > >
> > > static inline void spin_lock(spinlock_t *lock)
> > > {
> > >         raw_spin_lock(&lock->rlock);
> > > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
> > > }
> > >
> > > static inline void spin_unlock(spinlock_t *lock)
> > > {
> > > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
> > >         raw_spin_unlock(&lock->rlock);
> > > }
> > >
> > > [similar changes to other spinlock variants]
> >
> > Great, this is a nice way to improve it.
> >
> > You might want to consider playing with first taking a ticket, and
> > then, if we fail to acquire the lock immediately, incrementing
> > defer_preempt before we start spinning.
> >
> > The downside of this would be if we waste all our slice on spinning
> > and then get preempted in the critical section. But with ticket locks
> > you can easily see how many entries are in the queue ahead of you.
> > So you could experiment with starting to defer preemption when we
> > notice we are getting toward the head of the queue.
>
> Mm - my goal is to avoid long spin times in the first place (because the
> owning vcpu was descheduled at an unfortunate time, i.e. while it was holding a
> lock). In that sense, I am targeting preemption-deferral of the lock *holder*
> rather than of the lock acquirer. So ideally, whenever somebody tries to grab a
> lock it should be free most of the time; it can be held only if the owner is
> currently running - which means we won't have to spin too long for the lock.

Holding a ticket in the queue is effectively the same as holding the
lock, from the pov of processes waiting behind.

The difference, of course, is that CPU cycles spent spinning do not
directly reduce the latency of ticket holders - only the owner's progress
does. Spinlock critical sections should tend to be several orders of
magnitude shorter than context-switch times. So if you preempt the guy
waiting at the head of the queue, it's almost as bad as preempting the
lock holder.


> > Have you also looked at how s390 checks whether the owning vcpu is
> > running - spinning if it is, and yielding to the hypervisor if it is
> > not? That is something like turning it into an adaptive lock, which
> > could be applicable here as well.
>
> I don't think even s390 does adaptive spinlocks. Also, afaik s390 z/VM does gang
> scheduling of vcpus, which greatly reduces the severity of this problem -
> essentially the lock acquirer and holder are running simultaneously on different
> cpus all the time. Gang scheduling is on my list of things to look at much later
> (although I have been warned that it's a scalability nightmare!).

It effectively is an adaptive lock. The spinlock itself doesn't sleep of
course, but it yields to the hypervisor if the owner has been preempted.
This is pretty closely analogous to Linux's adaptive mutexes.

s390 also has the diag9c instruction, which I suppose somehow boosts the
priority of a preempted, contended lock holder. Even with other possible
optimizations in their hypervisor, like gang scheduling, diag9c apparently
provides quite a large improvement in some cases.

And they aren't even using ticket spinlocks!!

So I think these things are fairly important to look at.
