From: Tejun Heo
Hello,

On 07/29/2010 02:23 PM, Michael S. Tsirkin wrote:
> I saw WARN_ON(!list_empty(&dev->work_list)) trigger,
> so our custom flush is not as airtight as it needs to be.

Could be, but it's also possible that something queued new work
after the last flush. Is the problem reproducible?

> This patch switches to a simple atomic counter + srcu instead of
> the custom locked queue + flush implementation.
>
> This will slow down the setup ioctls, which should not matter -
> it's a slow path anyway. We use the expedited flush to at least
> make sure it has a sane time bound.
>
> Works fine for me. I got reports that with many guests, the
> work lock is highly contended, and this patch should in theory
> fix that as well - but I haven't tested it yet.

Hmmm... vhost_poll_flush() becomes synchronize_srcu_expedited(). Can
you please explain how it works? synchronize_srcu_expedited() is an
extremely heavy operation involving scheduling the cpu_stop task on
all cpus. I'm not quite sure whether doing it from every flush is a
good idea. Is flush supposed to be a very rare operation?
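
As far as I can tell from the description, the scheme would be
something like the sketch below. The srcu_struct field and the
helper names here are mine, not necessarily what the patch uses:

#include <linux/srcu.h>

struct vhost_dev {
	struct srcu_struct worker_srcu;	/* hypothetical name */
	atomic_t nr_pending;		/* queued but not yet executed;
					 * how flush waits for work that
					 * hasn't started is elided here */
	/* ... existing fields ... */
};

/* The worker runs each work item inside an SRCU read-side section. */
static void vhost_worker_run_one(struct vhost_dev *dev,
				 struct vhost_work *work)
{
	int idx = srcu_read_lock(&dev->worker_srcu);

	work->fn(work);
	atomic_dec(&dev->nr_pending);
	srcu_read_unlock(&dev->worker_srcu, idx);
}

/* Flush then waits for all read-side sections, i.e. running work. */
static void vhost_work_flush(struct vhost_dev *dev)
{
	synchronize_srcu_expedited(&dev->worker_srcu);
}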

Having custom implementation is fine too but let's try to implement
something generic if at all possible.

Thanks.

--
tejun
From: Michael S. Tsirkin
On Fri, Jul 30, 2010 at 04:49:54PM +0200, Tejun Heo wrote:
> Hello,
>
> On 07/29/2010 02:23 PM, Michael S. Tsirkin wrote:
> > I saw WARN_ON(!list_empty(&dev->work_list)) trigger,
> > so our custom flush is not as airtight as it needs to be.
>
> Could be, but it's also possible that something queued new work
> after the last flush.
> Is the problem reproducible?

Well, we do requeue from the job itself (see the sketch below), so
we need to be careful with what we do with the indexes here. The
bug seemed to happen every time qemu was killed under stress, but
now I can't reproduce it anymore :(
Will try again later.
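
To illustrate the requeueing (a simplified sketch, not the actual
code - done_processing() and work_to_dev() are made-up helpers):

/* A work fn can put itself back on the queue, so a flush that only
 * drains the entries it saw at flush time can miss the re-added one. */
static void my_work_fn(struct vhost_work *work)
{
	struct vhost_dev *dev = work_to_dev(work);	/* illustrative */

	if (!done_processing(work))
		vhost_work_queue(dev, work);	/* requeue ourselves */
}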

> > This patch switches to a simple atomic counter + srcu instead of
> > the custom locked queue + flush implementation.
> >
> > This will slow down the setup ioctls, which should not matter -
> > it's a slow path anyway. We use the expedited flush to at least
> > make sure it has a sane time bound.
> >
> > Works fine for me. I got reports that with many guests, the
> > work lock is highly contended, and this patch should in theory
> > fix that as well - but I haven't tested it yet.
>
> Hmmm... vhost_poll_flush() becomes synchronize_srcu_expedited(). Can
> you please explain how it works? synchronize_srcu_expedited() is an
> extremely heavy operation involving scheduling the cpu_stop task on
> all cpus. I'm not quite sure whether doing it from every flush is a
> good idea. Is flush supposed to be a very rare operation?

It is rare - typically it happens on guest reboot. I guess I will
switch to regular synchronize_srcu.
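
I.e. roughly this (a sketch; worker_srcu stands in for whatever
srcu_struct the patch actually uses):

static void vhost_work_flush(struct vhost_dev *dev)
{
	synchronize_srcu(&dev->worker_srcu);	/* non-expedited */
}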

> Having custom implementation is fine too but let's try to implement
> something generic if at all possible.
>
> Thanks.

Sure. It does seem that avoiding the list lock would be pretty
hard in generic code, though.
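
For reference, the lock in question is the one serializing the
work list, roughly like this (simplified from the current code):

static void vhost_work_queue(struct vhost_dev *dev,
			     struct vhost_work *work)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->work_lock, flags);
	if (list_empty(&work->node)) {
		list_add_tail(&work->node, &dev->work_list);
		wake_up_process(dev->worker);
	}
	spin_unlock_irqrestore(&dev->work_lock, flags);
}

Every queue and flush from every virtqueue goes through work_lock,
which is where the contention with many guests comes from.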

> --
> tejun