From: Gregory Haskins on
Michael S. Tsirkin wrote:
> On Mon, Sep 14, 2009 at 12:08:55PM -0400, Gregory Haskins wrote:
>> Michael S. Tsirkin wrote:
>>> On Fri, Sep 11, 2009 at 12:00:21PM -0400, Gregory Haskins wrote:
>>>> FWIW: VBUS handles this situation via the "memctx" abstraction. IOW,
>>>> the memory is not assumed to be a userspace address. Rather, it is a
>>>> memctx-specific address, which can be userspace, or any other type
>>>> (including hardware, dma-engine, etc). As long as the memctx knows how
>>>> to translate it, it will work.
>>> How would permissions be handled?
>> Same as anything else, really. Read on for details.
>>
>>> it's easy to allow an app to pass in virtual addresses in its own address space.
>> Agreed, and this is what I do.
>>
>> The guest always passes its own physical addresses (using things like
>> __pa() in linux). This address passed is memctx specific, but generally
>> would fall into the category of "virtual-addresses" from the hosts
>> perspective.
>>
>> For a KVM/AlacrityVM guest example, the addresses are GPAs, accessed
>> internally to the context via a gfn_to_hva conversion (you can see this
>> occurring in the citation links I sent).
>>
>> For Ira's example, the addresses would represent a physical address on
>> the PCI boards, and would follow any kind of relevant rules for
>> converting a "GPA" to a host accessible address (even if indirectly, via
>> a dma controller).
>
> So vbus can let an application

"application" means KVM guest, or ppc board, right?

> access either its own virtual memory or a physical memory on a PCI device.

To reiterate from the last reply: the model is the "guest" owns the
memory. The host is granted access to that memory by means of a memctx
object, which must be admitted to the host kernel and accessed according
to standard access-policy mechanisms. Generally the "application" or
guest would never be accessing anything other than its own memory.

> My question is, is any application
> that's allowed to do the former also granted rights to do the latter?

If I understand your question, no. Can you elaborate?

Kind Regards,
-Greg

From: Avi Kivity on
On 09/14/2009 10:14 PM, Gregory Haskins wrote:
> To reiterate, as long as the model is such that the ppc boards are
> considered the "owner" (direct access, no translation needed) I believe
> it will work. If the pointers are expected to be owned by the host,
> then my model doesn't work well either.
>

In this case the x86 is the owner and the ppc boards use translated
access. Just switch drivers and device and it falls into place.

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Avi Kivity on
On 09/14/2009 07:47 PM, Michael S. Tsirkin wrote:
> On Mon, Sep 14, 2009 at 12:08:55PM -0400, Gregory Haskins wrote:
>
>> For Ira's example, the addresses would represent a physical address on
>> the PCI boards, and would follow any kind of relevant rules for
>> converting a "GPA" to a host accessible address (even if indirectly, via
>> a dma controller).
>>
> I don't think limiting addresses to PCI physical addresses will work
> well. From what I remember, Ira's x86 cannot initiate burst
> transactions on PCI, and it's the ppc that initiates all DMA.
>

vhost-net would run on the PPC then.

>>> But we can't let the guest specify physical addresses.
>>>
>> Agreed. Neither your proposal nor mine operate this way afaict.
>>
> But this seems to be what Ira needs.
>

In Ira's scenario, the "guest" (x86 host) specifies x86 physical
addresses, and the ppc dmas to them. It's the virtio model without any
change. A normal guest also specifies physical addresses.

From: Gregory Haskins on
Avi Kivity wrote:
> On 09/14/2009 10:14 PM, Gregory Haskins wrote:
>> To reiterate, as long as the model is such that the ppc boards are
>> considered the "owner" (direct access, no translation needed) I believe
>> it will work. If the pointers are expected to be owned by the host,
>> then my model doesn't work well either.
>>
>
> In this case the x86 is the owner and the ppc boards use translated
> access. Just switch drivers and device and it falls into place.
>

You could switch vbus roles as well, I suppose. Another potential
option is that he can stop mapping host memory on the guest so that it
follows the more traditional model. As a bus-master device, the ppc
boards should have access to any host memory at least in the GFP_DMA
range, which would include all relevant pointers here.

I digress: I was primarily addressing the concern that Ira would need
to manage the "host" side of the link using hvas mapped from userspace
(even if host side is the ppc boards). vbus abstracts that access so as
to allow something other than userspace/hva mappings. OTOH, having each
ppc board run a userspace app to do the mapping on its behalf and feed
it to vhost is probably not a huge deal either. Where vhost might
really fall apart is if it makes any assumptions about pageable memory.

As an aside: a bigger issue is that, iiuc, Ira wants more than a single
ethernet channel in his design (multiple ethernets, consoles, etc). A
vhost solution in this environment is incomplete.

Note that Ira's architecture highlights that vbus's explicit management
interface is more valuable here than it is in KVM, since KVM already has
its own management interface via QEMU.

Kind Regards,
-Greg

From: Avi Kivity on
On 09/15/2009 04:03 PM, Gregory Haskins wrote:
>
>> In this case the x86 is the owner and the ppc boards use translated
>> access. Just switch drivers and device and it falls into place.
>>
>>
> You could switch vbus roles as well, I suppose.

Right, there's no real difference in this regard.

> Another potential
> option is that he can stop mapping host memory on the guest so that it
> follows the more traditional model. As a bus-master device, the ppc
> boards should have access to any host memory at least in the GFP_DMA
> range, which would include all relevant pointers here.
>
> I digress: I was primarily addressing the concern that Ira would need
> to manage the "host" side of the link using hvas mapped from userspace
> (even if host side is the ppc boards). vbus abstracts that access so as
> to allow something other than userspace/hva mappings. OTOH, having each
> ppc board run a userspace app to do the mapping on its behalf and feed
> it to vhost is probably not a huge deal either. Where vhost might
> really fall apart is if it makes any assumptions about pageable memory.
>
>

Why? vhost will call get_user_pages() or copy_*_user() which ought to
do the right thing.

> As an aside: a bigger issue is that, iiuc, Ira wants more than a single
> ethernet channel in his design (multiple ethernets, consoles, etc). A
> vhost solution in this environment is incomplete.
>

Why? Instantiate as many vhost-nets as needed.

> Note that Ira's architecture highlights that vbus's explicit management
> interface is more valuable here than it is in KVM, since KVM already has
> its own management interface via QEMU.
>

vhost-net and vbus both need management, vhost-net via ioctls and vbus
via configfs. The only difference is the implementation. vhost-net
leaves much more to userspace, that's the main difference.
