From: Zhang, Xiantao on
Avi Kivity wrote:
> On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:
>>
>>>>> Spin loops need to be addressed first, they are known to kill
>>>>> performance in overcommit situations.
>>>>>
>>>>>
>>>> Even in the overcommit case, if the vcpu threads of one qemu are not
>>>> scheduled or pulled to the same logical processor, the performance
>>>> drop is tolerable, as in Xen's case today. But KVM has to suffer
>>>> additional performance loss, since the host's scheduler actively
>>>> pulls these vcpu threads together.
>>>>
>>>>
>>>>
>>> Can you quantify this loss? Give examples of what happens?
>>>
>> For example, one machine is configured with 2 pCPUs and runs two
>> Windows guests, each guest configured with 2 vcpus and running one
>> webbench server.
>> With the host's default scheduler, webbench's performance is very bad,
>> but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we see a
>> 5-10X performance improvement at the same CPU utilization.
>> In addition, kvm's performance scalability is also impacted on large
>> systems: in some experiments, kvm's performance begins to drop once
>> vCPUs are overcommitted and the pCPUs are saturated, but with the
>> wake_affine feature switched off in the scheduler, kvm's performance
>> keeps rising in the same case.
>>
>
> Ok. This is probably due to spinlock contention.

Yes, exactly.
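For reference, the pinning in that experiment amounts to calling sched_setaffinity() on the qemu vcpu thread ids from userspace; a minimal sketch of the idea (the tids and cpu numbers below are only placeholders):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Pin one task (e.g. a qemu vcpu thread) to one physical cpu. */
static void pin_task(pid_t tid, int cpu)
{
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);
        if (sched_setaffinity(tid, sizeof(mask), &mask) < 0) {
                perror("sched_setaffinity");
                exit(1);
        }
}

int main(void)
{
        /* Placeholder tids: the guest's vcpu thread ids, taken e.g. from
         * /proc/<qemu-pid>/task/ or qemu's "info cpus". */
        pin_task(1234, 0);      /* guest vCPU0 -> pCPU0 */
        pin_task(1235, 1);      /* guest vCPU1 -> pCPU1 */
        return 0;
}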

> When vcpus are pinned to pcpus, there is a 50% chance that a guest's
> vcpus will be co-scheduled and spinlocks will perform well.
>
> When vcpus are not pinned, but affine wakeups are disabled, there is a
> 33% chance that vcpus will be co-scheduled.
>
> When vcpus are not pinned and affine wakeups are enabled there is a 0%
> chance that vcpus will be co-scheduled.
>
> Keeping both vcpus on the same core actually makes sense since they
> can communicate through the local cache faster than across cores.
> What we need is to make sure that they don't spin.
>
> Windows 2008 can report spinlock spinning through a hypercall. Can
> you hook into that interface and see if it happens regularly?
> Alternatively, use a PLE capable host and trace the kvm_vcpu_on_spin()
> function.
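(Spelling those numbers out for the 2-pCPU, 2-guest x 2-vCPU case above: with
per-pCPU pinning, while guest A's vCPU0 runs on pCPU0, pCPU1 is running one of
the two vCPUs pinned to it, so the chance it is guest A's vCPU1 is 1/2 = 50%.
Unpinned with affine wakeups disabled, the co-runner is any one of the other
three vCPUs, giving 1/3 ~ 33%. With affine wakeups enabled, the sibling vCPU
is pulled onto the same pCPU as its waker and the two never run at the same
time, hence 0%.)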
We only tried Windows 2003 in the experiments, so we have no data for Windows 2008,
but maybe we can have a try later. Anyway, the key point is that we have to enhance the
scheduler so it knows which threads are vcpu threads, to avoid the performance loss in this case.
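If we do get to trying the PLE route, even a trivial kprobe on kvm_vcpu_on_spin()
would show how often the spin path fires; a rough, untested sketch of such a module:

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

/* Report each hit of kvm_vcpu_on_spin() so we can see how often PLE
 * detects a spinning vcpu.  Untested sketch. */
static struct kprobe kp = {
        .symbol_name = "kvm_vcpu_on_spin",
};

static int on_spin_pre(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("kvm_vcpu_on_spin: %s (pid %d)\n", current->comm, current->pid);
        return 0;
}

static int __init on_spin_probe_init(void)
{
        kp.pre_handler = on_spin_pre;
        return register_kprobe(&kp);
}

static void __exit on_spin_probe_exit(void)
{
        unregister_kprobe(&kp);
}

module_init(on_spin_probe_init);
module_exit(on_spin_probe_exit);
MODULE_LICENSE("GPL");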
Xiantao
From: Peter Zijlstra on
On Thu, 2010-04-15 at 09:43 -0700, Srivatsa Vaddagiri wrote:
> On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
> > On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
> > >
> > > Certainly that has even greater potential for Linux guests. Note that
> > > we spin on mutexes now, so we need to prevent preemption while the lock
> > > owner is running.
> >
> > either that, or disable spinning on (para) virt kernels. Para virt
> > kernels could possibly extend the thing by also checking to see if the
> > owner's vcpu is running.
>
> I suspect we will need a combination of both approaches, given that we will
> not always be able to avoid preempting guests in their critical sections
> (too-long critical sections, or real-time tasks wanting to preempt). Another
> idea is to gang-schedule VCPUs of the same guest as much as possible?

Except gang scheduling is a scalability nightmare waiting to happen. I
much prefer this hint thing.
From: Avi Kivity on
On 04/15/2010 04:33 PM, Peter Zijlstra wrote:
> On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
>
>> Certainly that has even greater potential for Linux guests. Note that
>> we spin on mutexes now, so we need to prevent preemption while the lock
>> owner is running.
>>
> either that, or disable spinning on (para) virt kernels.

What would you do instead?

Note that we can't disable spinning on Windows or pre-2.6.36 kernels.

> Para virt
> kernels could possibly extend the thing by also checking to see if the
> owner's vcpu is running.
>

Certainly that's worth doing.
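Something with roughly this shape, presumably; pv_lock_owner_cpu(),
pv_vcpu_is_running() and pv_yield_to() below are invented names standing in
for whatever the paravirt interface would actually expose, not an existing API:

/* Sketch only, not a real interface. */
static void virt_spin_slowpath(arch_spinlock_t *lock)
{
        while (!arch_spin_trylock(lock)) {
                int owner = pv_lock_owner_cpu(lock);

                if (owner >= 0 && !pv_vcpu_is_running(owner)) {
                        /* The lock owner's vcpu is preempted: spinning only
                         * burns the pcpu it needs, so donate our slice. */
                        pv_yield_to(owner);
                } else {
                        cpu_relax();
                }
        }
}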

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

From: Zhang, Xiantao on
Srivatsa Vaddagiri wrote:
> On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
>> On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
>>>
>>> Certainly that has even greater potential for Linux guests. Note
>>> that we spin on mutexes now, so we need to prevent preemption while
>>> the lock owner is running.
>>
>> either that, or disable spinning on (para) virt kernels. Para virt
>> kernels could possibly extend the thing by also checking to see if
>> the owner's vcpu is running.
>
> I suspect we will need a combination of both approaches, given that
> we will not always be able to avoid preempting guests in their critical
> sections (too-long critical sections, or real-time tasks wanting
> to preempt). Another idea is to gang-schedule VCPUs of the same guest
> as much as possible?
Gang-scheduling may be the ideal solution to the issue, but it requires a lot of changes to the host's scheduler and may be hard to get upstream. So can we figure out an easier (maybe not the best) way to do this?
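As a strawman of the kind of easy way I mean (nothing like this exists; the
flag, its value and the hook point are all invented purely for illustration):
kvm could tag its vcpu threads, and the affine-wakeup path could decline to
stack them on one pcpu:

/* Strawman only -- invented flag and hook, not an existing interface. */
#define PF_VCPU_THREAD  0x20000000      /* arbitrary bit: task runs a guest vcpu */

/* kvm would set the hint when it creates a vcpu thread: */
static inline void mark_vcpu_thread(struct task_struct *p)
{
        p->flags |= PF_VCPU_THREAD;
}

/* ...and the scheduler's affine-wakeup path could refuse to pull such
 * tasks onto the waker's cpu, so sibling vcpus are less likely to end
 * up stacked on one pcpu: */
static inline int affine_wakeup_allowed(struct task_struct *p)
{
        return !(p->flags & PF_VCPU_THREAD);
}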
Xiantao