From: Michael S. Tsirkin
On Wed, Sep 16, 2009 at 04:57:42PM +0200, Arnd Bergmann wrote:
> On Tuesday 15 September 2009, Michael S. Tsirkin wrote:
> > Userspace in x86 maps a PCI region, uses it for communication with ppc?
>
> This might have portability issues. On x86 it should work, but if the
> host is powerpc or similar, you cannot reliably access PCI I/O memory
> through copy_tofrom_user but have to use memcpy_toio/fromio or readl/writel
> calls, which don't work on user pointers.
>
> Specifically on powerpc, copy_from_user cannot access unaligned buffers
> if they are on an I/O mapping.
>
> Arnd <><

We are talking about doing this in userspace, not in kernel.

--
MST
From: Arnd Bergmann
On Wednesday 16 September 2009, Michael S. Tsirkin wrote:
> On Wed, Sep 16, 2009 at 04:57:42PM +0200, Arnd Bergmann wrote:
> > On Tuesday 15 September 2009, Michael S. Tsirkin wrote:
> > > Userspace in x86 maps a PCI region, uses it for communication with ppc?
> >
> > This might have portability issues. On x86 it should work, but if the
> > host is powerpc or similar, you cannot reliably access PCI I/O memory
> > through copy_tofrom_user but have to use memcpy_toio/fromio or readl/writel
> > calls, which don't work on user pointers.
> >
> > Specifically on powerpc, copy_from_user cannot access unaligned buffers
> > if they are on an I/O mapping.
> >
> We are talking about doing this in userspace, not in kernel.

Ok, that's fine then. I thought the idea was to use the vhost_net driver
to access the user memory, which would be a really cute hack otherwise,
as you'd only need to provide the eventfds from a hardware specific
driver and could use the regular virtio_net on the other side.

Arnd <><
From: Avi Kivity
On 09/16/2009 05:10 PM, Gregory Haskins wrote:
>
>> If kvm can do it, others can.
>>
> The problem is that you seem to either hand-wave over details like this,
> or you give details that are pretty much exactly what vbus does already.
> My point is that I've already sat down and thought about these issues
> and solved them in a freely available GPL'ed software package.
>

In the kernel. IMO that's the wrong place for it. Further, if we adopt
vbus, we either drop compatibility with existing guests or have to support
both vbus and virtio-pci.

> So the question is: is your position that vbus is all wrong and you wish
> to create a new bus-like thing to solve the problem?

I don't intend to create anything new, I am satisfied with virtio. If
it works for Ira, excellent. If not, too bad. I believe it will work
without too much trouble.

> If so, how is it
> different from what I've already done? More importantly, what specific
> objections do you have to what I've done, as perhaps they can be fixed
> instead of starting over?
>

The two biggest objections are:
- the host side is in the kernel
- the guest side is a new bus instead of reusing pci (on x86/kvm),
making Windows support more difficult

I guess these two are exactly what you think are vbus' greatest
advantages, so we'll probably have to extend our agree-to-disagree on
this one.

I also had issues with using just one interrupt vector to service all
events, but that's easily fixed.

>> There is no guest and host in this scenario. There's a device side
>> (ppc) and a driver side (x86). The driver side can access configuration
>> information on the device side. How to multiplex multiple devices is an
>> interesting exercise for whoever writes the virtio binding for that setup.
>>
> Bingo. So now it's a question of whether you want to write this layer from
> scratch, or re-use my framework.
>

You will have to implement a connector or whatever for vbus as well.
vbus has more layers, so the connector is probably smaller for vbus.

>>>>
>>>>
>>> I am talking about how we would tunnel the config space for N devices
>>> across his transport.
>>>
>>>
>> Sounds trivial.
>>
> No one said it was rocket science. But it does need to be designed and
> implemented end-to-end, much of which I've already done in what I hope is
> an extensible way.
>

It was already implemented three times for virtio, so apparently that's
extensible too.

>> Write an address containing the device number and
>> register number to one location, read or write data from another.
>>
> You mean like the "u64 devh", and "u32 func" fields I have here for the
> vbus-kvm connector?
>
> http://git.kernel.org/?p=linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git;a=blob;f=include/linux/vbus_pci.h;h=fe337590e644017392e4c9d9236150adb2333729;hb=ded8ce2005a85c174ba93ee26f8d67049ef11025#l64
>
>

Probably.
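
Roughly like this, just to show the shape of it (the window offsets and
the devh/func packing are made up, not the actual vbus-pci layout, and
writeq() assumes a 64-bit host):

#include <linux/io.h>
#include <linux/types.h>

/* hypothetical window offsets within a shared BAR */
#define CFG_SELECT      0x00    /* write: selects (devh, func)       */
#define CFG_DATA        0x08    /* read/write: data for that target  */

static u32 cfg_read(void __iomem *bar, u64 devh, u32 func)
{
        /* select the target; assumes func fits in the low 8 bits */
        writeq((devh << 8) | func, bar + CFG_SELECT);
        return readl(bar + CFG_DATA);
}

static void cfg_write(void __iomem *bar, u64 devh, u32 func, u32 val)
{
        writeq((devh << 8) | func, bar + CFG_SELECT);
        writel(val, bar + CFG_DATA);
}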



>>> That sounds convenient given his hardware, but it has its own set of
>>> problems. For one, the configuration/inventory of these boards is now
>>> driven by the wrong side and has to be addressed.
>>>
>> Why is it the wrong side?
>>
> "Wrong" is probably too harsh a word when looking at ethernet. Its
> certainly "odd", and possibly inconvenient. It would be like having
> vhost in a KVM guest, and virtio-net running on the host. You could do
> it, but its weird and awkward. Where it really falls apart and enters
> the "wrong" category is for non-symmetric devices, like disk-io.
>
>


It's not odd or wrong or weird or awkward. An ethernet NIC is not
symmetric, one side does DMA and issues interrupts, the other uses its
own memory. That's exactly the case with Ira's setup.

If the ppc boards were to emulate a disk controller, you'd run
virtio-blk on x86 and vhost-blk on the ppc boards.

>>> Second, the role
>>> reversal will likely not work for many models other than ethernet (e.g.
>>> virtio-console or virtio-blk drivers running on the x86 board would be
>>> naturally consuming services from the slave boards...virtio-net is an
>>> exception because 802.x is generally symmetrical).
>>>
>>>
>> There is no role reversal.
>>
> So if I have virtio-blk driver running on the x86 and vhost-blk device
> running on the ppc board, I can use the ppc board as a block-device.
> What if I really wanted to go the other way?
>

You mean, if the x86 board was able to access the disks and dma into the
ppc boards' memory? You'd run vhost-blk on x86 and virtio-blk on ppc.

As long as you don't use the words "guest" and "host" but keep to
"driver" and "device", it all works out.

>> The side doing dma is the device, the side
>> accessing its own memory is the driver. Just like those other 1e12
>> driver/device pairs out there.
>>
> IIUC, his ppc boards really can be seen as "guests" (they are linux
> instances that are utilizing services from the x86, not the other way
> around).

They aren't guests. Guests don't dma into their host's memory.

> vhost forces the model to have the ppc boards act as IO-hosts,
> whereas vbus would likely work in either direction due to its more
> refined abstraction layer.
>

vhost=device=dma, virtio=driver=own-memory.

>> Of course vhost is incomplete, in the same sense that Linux is
>> incomplete. Both require userspace.
>>
> A vhost based solution to Ira's design is missing more than userspace.
> Many of those gaps are addressed by a vbus based solution.
>

Maybe. Ira can fill the gaps or use vbus.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

From: Michael S. Tsirkin
On Wed, Sep 16, 2009 at 05:22:37PM +0200, Arnd Bergmann wrote:
> On Wednesday 16 September 2009, Michael S. Tsirkin wrote:
> > On Wed, Sep 16, 2009 at 04:57:42PM +0200, Arnd Bergmann wrote:
> > > On Tuesday 15 September 2009, Michael S. Tsirkin wrote:
> > > > Userspace in x86 maps a PCI region, uses it for communication with ppc?
> > >
> > > This might have portability issues. On x86 it should work, but if the
> > > host is powerpc or similar, you cannot reliably access PCI I/O memory
> > > through copy_tofrom_user but have to use memcpy_toio/fromio or readl/writel
> > > calls, which don't work on user pointers.
> > >
> > > Specifically on powerpc, copy_from_user cannot access unaligned buffers
> > > if they are on an I/O mapping.
> > >
> > We are talking about doing this in userspace, not in kernel.
>
> Ok, that's fine then. I thought the idea was to use the vhost_net driver

It's a separate issue. We were talking generally about configuration
and setup. Gregory implemented it in the kernel; Avi wants it
moved to userspace, with only the fast path in the kernel.

> to access the user memory, which would be a really cute hack otherwise,
> as you'd only need to provide the eventfds from a hardware specific
> driver and could use the regular virtio_net on the other side.
>
> Arnd <><

To do that, maybe copy to/from user on ppc can be fixed, or wrapped
in an arch-specific macro, so that everyone else
does not have to go through abstraction layers.
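
For illustration only, a wrapper of that sort might look like this (the
name and the explicit is_io flag are made up; the point is just that
callers keep one call site and the arch picks a safe access method):

#include <linux/io.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

/*
 * Hypothetical wrapper: regular user memory goes through
 * copy_from_user(), PCI memory goes through memcpy_fromio() on the
 * kernel's own ioremap'ed mapping.  Returns 0 on success.
 */
static int copy_from_region(void *dst, const void __user *uptr,
                            const void __iomem *iomem, size_t len,
                            bool is_io)
{
        if (!is_io)
                return copy_from_user(dst, uptr, len) ? -EFAULT : 0;

        memcpy_fromio(dst, iomem, len);
        return 0;
}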

--
MST
From: Gregory Haskins
Avi Kivity wrote:
> On 09/16/2009 05:10 PM, Gregory Haskins wrote:
>>
>>> If kvm can do it, others can.
>>>
>> The problem is that you seem to either hand-wave over details like this,
>> or you give details that are pretty much exactly what vbus does already.
>> My point is that I've already sat down and thought about these issues
>> and solved them in a freely available GPL'ed software package.
>>
>
> In the kernel. IMO that's the wrong place for it.

In conversations with Ira, he indicated he needs kernel-to-kernel
ethernet for performance, and at least ethernet and console
connectivity. You could conceivably build a solution for this system in 3
basic ways:

1) "completely" in userspace: use things like tuntap on the ppc boards,
and tunnel packets across a custom point-to-point connection formed over
the pci link to a userspace app on the x86 board. This app then
reinjects the packets into the x86 kernel via a raw socket or tuntap,
etc. Pretty much vanilla tuntap/vpn kind of stuff. Advantage: very
little kernel code. Problem: performance (citation: hopefully obvious).

2) "partially" in userspace: have an in-kernel virtio-net driver talk to
a userspace based virtio-net backend. This is the (current, non-vhost
oriented) KVM/qemu model. Advantage: re-uses existing kernel code.
Problem: performance (citation: see alacrityvm numbers).

3) "in-kernel": You can do something like virtio-net to vhost to
potentially meet some of the requirements, but not all.

In order to fully meet (3), you would need to do some of that stuff you
mentioned in the last reply with muxing device-nr/reg-nr. In addition,
we need to have a facility for mapping eventfds and establishing a
signaling mechanism (like PIO+qid), etc. KVM does this with
IRQFD/IOEVENTFD, but we don't have KVM in this case so it needs to be
invented.
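
To give a flavor of the signaling half, a minimal sketch (the table and
the names are hypothetical, not code from vbus or vhost):

#include <linux/eventfd.h>
#include <linux/err.h>

#define MAX_QUEUES 64                   /* arbitrary, for the example */

static struct eventfd_ctx *queue_signal[MAX_QUEUES];

/* management path: userspace hands us an eventfd for queue 'qid' */
static int queue_assign_eventfd(unsigned int qid, int fd)
{
        struct eventfd_ctx *ctx = eventfd_ctx_fdget(fd);

        if (IS_ERR(ctx))
                return PTR_ERR(ctx);
        queue_signal[qid] = ctx;
        return 0;
}

/*
 * fast path: the PCI link's ISR decodes a queue id and kicks the
 * eventfd, much like KVM's ioeventfd does for a PIO write
 */
static void queue_kick(unsigned int qid)
{
        if (queue_signal[qid])
                eventfd_signal(queue_signal[qid], 1);
}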

To meet performance, this stuff has to be in the kernel, and there has to be
a way to manage it. Since vbus was designed to do exactly that, this is
what I would advocate. You could also reinvent these concepts and put
your own mux and mapping code in place, in addition to all the other
stuff that vbus does. But I am not clear why anyone would want to.

So no, the kernel is not the wrong place for it. It's the _only_ place
for it. Otherwise, just use (1) and be done with it.

> Further, if we adopt
> vbus, we either drop compatibility with existing guests or have to support
> both vbus and virtio-pci.

We already need to support both (at least to support Ira). virtio-pci
doesn't work here. Something else (vbus, or vbus-like) is needed.

>
>> So the question is: is your position that vbus is all wrong and you wish
>> to create a new bus-like thing to solve the problem?
>
> I don't intend to create anything new, I am satisfied with virtio. If
> it works for Ira, excellent. If not, too bad.

I think that about sums it up, then.


> I believe it will work without too much trouble.

Afaict it won't, for the reasons I mentioned.

>
>> If so, how is it
>> different from what I've already done? More importantly, what specific
>> objections do you have to what I've done, as perhaps they can be fixed
>> instead of starting over?
>>
>
> The two biggest objections are:
> - the host side is in the kernel

As it needs to be.

> - the guest side is a new bus instead of reusing pci (on x86/kvm),
> making Windows support more difficult

That's a function of the vbus-connector, which is different from
vbus-core. If you don't like it (and I know you don't), we can write
one that interfaces to qemu's pci system. I just don't like the
limitations that imposes, nor do I think we need the complexity of
dealing with a split PCI model, so I chose not to implement vbus-kvm
this way.

With all due respect, based on all of your comments in aggregate I
really do not think you are truly grasping what I am actually building here.

>
> I guess these two are exactly what you think are vbus' greatest
> advantages, so we'll probably have to extend our agree-to-disagree on
> this one.
>
> I also had issues with using just one interrupt vector to service all
> events, but that's easily fixed.

Again, function of the connector.

>
>>> There is no guest and host in this scenario. There's a device side
>>> (ppc) and a driver side (x86). The driver side can access configuration
>>> information on the device side. How to multiplex multiple devices is an
>>> interesting exercise for whoever writes the virtio binding for that
>>> setup.
>>>
>> Bingo. So now it's a question of whether you want to write this layer from
>> scratch, or re-use my framework.
>>
>
> You will have to implement a connector or whatever for vbus as well.
> vbus has more layers, so the connector is probably smaller for vbus.

Bingo! That is precisely the point.

All the stuff for how to map eventfds, handle signal mitigation, demux
device/function pointers, isolation, etc, is built in. All the
connector has to do is transport the 4-6 verbs and provide a memory
mapping/copy function, and the rest is reusable. The device models
would then work in all environments unmodified, and likewise the
connectors could use all device-models unmodified.
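
To make that concrete, the connector surface I have in mind is roughly
the following sketch (illustrative names and signatures, not the literal
vbus API):

#include <linux/types.h>

struct vbus_connector_ops {
        /* device lifecycle / addressing verbs */
        int (*devadd)(u64 devh, const char *type);      /* announce a device   */
        int (*devdrop)(u64 devh);                       /* remove a device     */
        int (*call)(u64 devh, u32 func,                 /* config-space verb   */
                    void *data, size_t len);
        int (*shm)(u64 devh, u32 id,                    /* register a shm ring */
                   void *ptr, size_t len);
        int (*signal)(u64 devh, u32 id);                /* kick the other side */

        /* memory access hook used by the device models (the memctx) */
        int (*copy_to)(u64 gpa, const void *src, size_t len);
        int (*copy_from)(void *dst, u64 gpa, size_t len);
};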

>
>>>>>
>>>>>
>>>> I am talking about how we would tunnel the config space for N devices
>>>> across his transport.
>>>>
>>>>
>>> Sounds trivial.
>>>
>> No one said it was rocket science. But it does need to be designed and
> implemented end-to-end, much of which I've already done in what I hope is
>> an extensible way.
>>
>
> It was already implemented three times for virtio, so apparently that's
> extensible too.

And to my point, I'm trying to commoditize as much of that process as
possible on both the frontend and backend (at least for cases where
performance matters) so that you don't need to reinvent the wheel for
each one.

>
>>> Write an address containing the device number and
>>> register number to one location, read or write data from another.
>>>
>> You mean like the "u64 devh", and "u32 func" fields I have here for the
>> vbus-kvm connector?
>>
>> http://git.kernel.org/?p=linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git;a=blob;f=include/linux/vbus_pci.h;h=fe337590e644017392e4c9d9236150adb2333729;hb=ded8ce2005a85c174ba93ee26f8d67049ef11025#l64
>>
>>
>>
>
> Probably.
>
>
>
>>>> That sounds convenient given his hardware, but it has its own set of
>>>> problems. For one, the configuration/inventory of these boards is now
>>>> driven by the wrong side and has to be addressed.
>>>>
>>> Why is it the wrong side?
>>>
>> "Wrong" is probably too harsh a word when looking at ethernet. Its
>> certainly "odd", and possibly inconvenient. It would be like having
>> vhost in a KVM guest, and virtio-net running on the host. You could do
>> it, but its weird and awkward. Where it really falls apart and enters
>> the "wrong" category is for non-symmetric devices, like disk-io.
>>
>>
>
>
> It's not odd or wrong or weird or awkward.

It's weird IMO because IIUC the ppc boards are not really "NICs". Yes,
their arrangement as bus-master PCI devices makes them look and smell
like "devices", but that is an implementation detail of the transport
(like hypercalls/PIO in KVM) and not relevant to their broader role in the
system.

They are more or less like "guests" from the KVM world. The x86 is
providing connectivity resources to these guests, not the other way
around. It is not a goal to make the x86 look like it has a multihomed
array of ppc-based NIC adapters. The only reason we would treat these
ppc boards like NICs is because (iiuc) that is the only way vhost can be
hacked to work with the system, not because it's the optimal design.

FWIW: There are a ton of chassis-based systems that look similar to
Ira's out there (PCI inter-connected nodes), and I would like to support
them, too. So it's not like this is a one-off.


> An ethernet NIC is not
> symmetric, one side does DMA and issues interrupts, the other uses its
> own memory.

I never said a NIC was. I meant the ethernet _protocol_ is symmetric,
in the sense that you can ingress/egress packets in either
direction: as long as "TX" on one side is "RX" on the other and vice
versa, it all kind of works. You can even loop it back and it still works.

Contrast this to something like a disk-block protocol where a "read"
message is expected to actually do a read, etc. In this case, you
cannot arbitrarily assign the location of the "driver" and "device" like
you can with ethernet. The device should presumably be where the
storage is, and the driver should be where the consumer is.

> That's exactly the case with Ira's setup.

See "implementation detail" comment above.


>
> If the ppc boards were to emulate a disk controller, you'd run
> virtio-blk on x86 and vhost-blk on the ppc boards.

Agreed.

>
>>>> Second, the role
>>>> reversal will likely not work for many models other than ethernet (e.g.
>>>> virtio-console or virtio-blk drivers running on the x86 board would be
>>>> naturally consuming services from the slave boards...virtio-net is an
>>>> exception because 802.x is generally symmetrical).
>>>>
>>>>
>>> There is no role reversal.
>>>
>> So if I have virtio-blk driver running on the x86 and vhost-blk device
>> running on the ppc board, I can use the ppc board as a block-device.
>> What if I really wanted to go the other way?
>>
>
> You mean, if the x86 board was able to access the disks and dma into the
> ppc boards' memory? You'd run vhost-blk on x86 and virtio-blk on ppc.

But as we discussed, vhost doesn't work well if you try to run it on the
x86 side due to its assumptions about pageable "guest" memory, right? So
is that even an option? And even then, you would still need to solve
the aggregation problem so that multiple devices can coexist.

>
> As long as you don't use the words "guest" and "host" but keep to
> "driver" and "device", it all works out.
>
>>> The side doing dma is the device, the side
>>> accessing its own memory is the driver. Just like those other 1e12
>>> driver/device pairs out there.
>>>
>> IIUC, his ppc boards really can be seen as "guests" (they are linux
>> instances that are utilizing services from the x86, not the other way
>> around).
>
> They aren't guests. Guests don't dma into their host's memory.

That's not relevant. They are not guests in the sense of isolated,
virtualized guests like in KVM. They are guests in the sense that they are
subordinate Linux instances which utilize IO resources on the x86 (host).

The way this would work is that the x86 would be driving the dma
controller on the ppc board, not the other way around. The fact that
the controller lives on the ppc board is an implementation detail.

The way I envision this working is that the ppc board exports two
functions in its device:

1) a vbus-bridge like device
2) a dma-controller that accepts "gpas" as one parameter

so function (1) does the 4-6 verbs I mentioned for device addressing,
etc. Function (2) is utilized by the x86 memctx whenever a
->copy_from() or ->copy_to() operation is invoked. The ppc boards
would be doing their normal virtio kind of thing, like
->add_buf(__pa(skb->data)).
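
Roughly, and with made-up names (board_dma_read/write stand in for
whatever actually programs the board's dma engine; the ops struct is
just to illustrate the shape):

#include <linux/types.h>

struct memctx;                          /* opaque context, owned by the core */

/* assumed helpers that program the dma engine exposed by function (2) */
int board_dma_write(struct memctx *ctx, u64 gpa, const void *src, size_t len);
int board_dma_read(struct memctx *ctx, void *dst, u64 gpa, size_t len);

struct memctx_ops {
        int (*copy_to)(struct memctx *ctx, u64 gpa, const void *src, size_t len);
        int (*copy_from)(struct memctx *ctx, void *dst, u64 gpa, size_t len);
};

/*
 * x86-side memctx: copies become dma transactions against the ppc
 * board's controller instead of hva dereferences, so the device model
 * above it never has to care which one it is.
 */
static const struct memctx_ops ppc_dma_memctx_ops = {
        .copy_to        = board_dma_write,
        .copy_from      = board_dma_read,
};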

>
>> vhost forces the model to have the ppc boards act as IO-hosts,
>> whereas vbus would likely work in either direction due to its more
>> refined abstraction layer.
>>
>
> vhost=device=dma, virtio=driver=own-memory.

I agree that virtio=driver=own-memory. The problem is vhost != dma.
vhost = hva*, and it just so happens that Ira's ppc boards support host
mapping/dma so it kind of works.

What I have been trying to say is that the extra memctx abstraction
gets the "vhost" side away from hva*, such that it can support an hva
if that makes sense, or something else (like a custom dma engine) if
it doesn't.

>
>>> Of course vhost is incomplete, in the same sense that Linux is
>>> incomplete. Both require userspace.
>>>
>> A vhost based solution to Iras design is missing more than userspace.
>> Many of those gaps are addressed by a vbus based solution.
>>
>
> Maybe. Ira can fill the gaps or use vbus.
>
>

Agreed.

Kind Regards,
-Greg