From: Jes Sorensen on
On 03/18/10 10:54, Ingo Molnar wrote:
> * Jes Sorensen <Jes.Sorensen(a)redhat.com> wrote:
[...]
>>
>> At my previous employer we ended up dropping all Xen efforts exactly because
>> it was like maintaining two separate operating system kernels. The key to
>> KVM is that once you have Linux, you practically have KVM as well.
>
> Yes. Please realize that what is behind it is a strikingly simple argument:
>
> "Once you have a single project to develop and maintain all is much better."

That's a very glorified statement, but it's not reality, sorry. You can do
that with something like perf because it's so small and development of
perf is limited to a very small group of developers.

>> [...] However, there is far more to it than just a couple of ioctls; for
>> example, the stack of reverse device-drivers is a pretty significant code
>> base, and rewriting and maintaining it is not a trivial task. It is
>> certainly my belief that the benefit we get from sharing that with QEMU by
>> far outweighs the cost of forking it and keeping our own fork in the kernel
>> tree. In fact it would result in exactly the same problems I mentioned above
>> wrt Xen.
>
> I do not suggest forking Qemu at all; I suggest using the most natural
> development model for the KVM+Qemu shared project: a single repository.

If you are not suggesting to fork QEMU, what are you suggesting then?
You don't seriously expect that the KVM community will be able to
mandate that the QEMU community switch to the Linux kernel repository?
That would be like telling the openssl developers that they should merge
with glibc and start working out of the glibc tree.

What you are suggesting is *only* going to happen if we fork QEMU; there
is zero chance of moving the main QEMU repository into the Linux kernel
tree. And trust me, you don't want Linus having to deal with patches for
tcg or embedded board emulation.
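
To put the "couple of ioctls" mentioned above in perspective: the kernel
side of KVM really is that small an interface, which is exactly why the
user-space half is so big. A minimal sketch, error handling omitted, just
to illustrate the split:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        /* The whole kernel-side API is ioctls on /dev/kvm and on the
         * fds derived from it. */
        int kvm  = open("/dev/kvm", O_RDWR);
        int ver  = ioctl(kvm, KVM_GET_API_VERSION, 0);
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* a bare, empty VM */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* one virtual CPU */

        printf("KVM API version %d, vm fd %d, vcpu fd %d\n", ver, vm, vcpu);

        /* Everything from here on - guest RAM layout, the device
         * models, BIOS loading, the KVM_RUN loop, the display - is
         * user space's (i.e. QEMU's) job. That is the stack that is
         * anything but trivial to rewrite and maintain. */
        return 0;
    }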

>> With this you have just thrown away all the benefits of having the QEMU
>> repository shared with other developers who will actively fix bugs in
>> components we do care about for KVM.
>
> Not if it's a unified project.

You still haven't explained how you expect to create a unified KVM+QEMU
project without forking the existing QEMU.

>>> - encourage kernel-space and user-space KVM developers to work on both
>>> user-space and kernel-space bits as a single unit. It's one project and
>>> a single experience to the user.
>>
>> This is already happening and a total non-issue.
>
> My experience as an external observer of the end result contradicts this.

What I have seen you complain about here is the lack of a good end-user
GUI for KVM. However, that is a different thing. So far no vendor has put
significant effort into it, but that is nothing new in Linux. We have a
great kernel, but our user applications are still lacking. We have 217
CD players for GNOME, but no usable calendaring application.

A good GUI for virtualization is a big task, and whoever designs it will
base the design on their own preferences for what's important. A lot of
spare-time developers would clearly care most about a GUI installation
and fancy icons to click on, whereas server users would be much more
interested in automation and remote access to the systems. For a good
example of an incomplete solution, try installing Fedora over a serial
line: you cannot do half the things without launching VNC :( A
comprehensive solution that satisfied the bulk of the users would be a
huge chunk of code in the kernel tree. Imagine the screaming that would
result. How often have we had moaning from x86 users who wanted to rip
out all the non-x86 code just to reduce the size of the tarball?

> Seemingly trivial usability changes to the KVM+Qemu combo are not being done
> often because they involve cross-discipline changes.

Which trivial usability changes?

>>> - [ and probably libvirt should go there too ]
>>
>> Now that would be interesting, next we'll have to include things like libxml
>> in the kernel git tree as well, to make sure libvirt doesn't get out of sync
>> with the version supplied by your distribution vendor.
>
> The way we have gone about this in tools/perf/ is similar to the route picked
> by Git: we only use very low-level libraries available everywhere, and we
> provide optional wrappers for the rest.

Did you ever look at what libvirt actually does and what it offers? Or
how about the various libraries used by QEMU to offer things like VNC
support or X support?
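
Just to give a flavour of what I mean: libvirt is a full management API
spanning several hypervisors, not a thin wrapper. A minimal sketch against
its public C API (the guest name here is made up, error handling trimmed):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* Connect to the local QEMU/KVM hypervisor driver. */
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        /* Look up a guest by name and boot it. */
        virDomainPtr dom = virDomainLookupByName(conn, "some-guest");
        if (dom) {
            if (virDomainCreate(dom) == 0)
                printf("guest started\n");
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }

And behind that one virConnectOpen() call sits remote access, XML domain
descriptions, storage and network management - none of which belongs in a
kernel tree.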

Again, this works fine for something like perf, where the primary
display is text mode.

>> So far your argument would justify pulling all of gdb into the kernel git
>> tree as well, to support the kgdb efforts, or gcc so we can get rid of the
>> gcc version quirks in the kernel header files, e2fsprogs and equivalents for
>> _all_ file systems included in the kernel so we can make sure our fs tools
>> never get out of sync with what's supported in the kernel...
>
> gdb and gcc are clearly extrinsic to the kernel, so why would we move them
> there?

gdb should go with kgdb, which goes with the kernel, to keep it in sync.
If you want to be consistent in your argument, you have to go all the
way.

> I was talking about tools that are closely related to the kernel - where much
> of the development and actual use is in combination with the Linux kernel.

Well, the file-system tools would obviously have to go into the kernel
then, so that appropriate binaries can be distributed to match the kernel.

> 90%+ of the Qemu use cases are combined with Linux. (Yes, I know that you can
> run Qemu without KVM, and no, I don't think it matters in the grand scheme of
> things and most investment into Qemu comes from the KVM angle these days. In
> particular it certainly does not justify handicapping future KVM evolution so
> drastically.)

90%+? You've got to be kidding. You clearly have no idea just how much it's
used for running embedded emulators on non-Linux platforms. You should have
seen the noise it made when I added C99 initializers to certain structs,
because it broke builds using very old GCC versions on BeOS. Linux-only?
Not a chance. Try subscribing to qemu-devel and you'll see a list that is
overtaken by only a few lists, like lkml, in terms of daily traffic.
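
For the record, the change in question was the sort of thing below - going
from positional to C99 designated initializers; ancient compilers choke on
the latter syntax. Illustrative only, not the actual patch:

    /* Illustrative only - not the actual QEMU patch. */
    struct io_handler {
        const char *name;
        int (*init)(void);
    };

    /* Old style: positional and order-dependent, but accepted by
     * ancient GCC versions. */
    static struct io_handler old_way = { "ide", 0 };

    /* C99 designated initializers: self-documenting and robust against
     * field reordering, but rejected by very old compilers. */
    static struct io_handler new_way = {
        .name = "ide",
        .init = 0,
    };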

>> Oh and you completely forgot SeaBIOS. KVM+QEMU rely on SeaBIOS too, so from
>> what you're saying we should pull that into the kernel git repository as
>> well. Never mind the fact that we share SeaBIOS with the coreboot project,
>> which is very actively adding features to it that benefit us as well...
>
> SeaBIOS is in essence firmware, so it could simply be loaded as such.
>
> Just look at the qemu source code - the BIOSes are .bin images in
> qemu/pc-bios/, in essence imported externally.

Ehm, no: QEMU now pulls in SeaBIOS and builds it, and a lot of QEMU
changes require matching modifications in SeaBIOS.
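
And even in the "opaque blob" model, loading the image is entirely user
space's work. Roughly, "loading it as such" comes down to something like
this at the KVM API level - an illustrative sketch, not QEMU's actual
loader code:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    /* Map a firmware blob (e.g. SeaBIOS's bios.bin) just below 4 GB,
     * where the x86 reset vector expects to find it. vm_fd is a VM fd
     * obtained via KVM_CREATE_VM. */
    static int load_firmware(int vm_fd, const char *path)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0)
            return -1;

        /* Back the blob with memory in our own address space. */
        void *mem = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED || read(fd, mem, st.st_size) != st.st_size)
            return -1;
        close(fd);

        /* Tell KVM where the blob lives in guest physical memory. */
        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .guest_phys_addr = 0x100000000ULL - st.st_size,
            .memory_size     = (unsigned long long)st.st_size,
            .userspace_addr  = (unsigned long)mem,
        };
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }

The mapping is the easy part; keeping the blob's contents in step with
QEMU's device models is what actually costs effort.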

> The qemu-kvm branch is not similar to my proposal at all: it made KVM _more_
> fragmented, not more unified. I.e. it was a move in the exact opposite
> direction, and I'd expect such a move to fail.
>
> In fact the failure of qemu-kvm supports my point rather explicitly: it
> demonstrates that extra packages and split development are actively harmful.

Ehm, it showed what happens when you fork QEMU to modify it primarily for
your own project, i.e. KVM. You are suggesting we fork QEMU for the
benefit of KVM, and exactly the same thing will happen.

I know you state that you are not suggesting a fork, but as I showed
above, pulling QEMU into the kernel tree can only happen as a fork.
There is no point pretending otherwise.

> I speak about this as a person who has done successful unifications of split
> codebases, and in my judgement this move would be significantly beneficial to
> KVM.
>
> You cannot really validly reject this proposal with "It won't work" as it
> clearly has worked in other, comparable cases. You could only reject it with
> "I have tried it and it didn't work".
>
> Think about it: a clean and hackable user-space component in tools/kvm/. It's
> very tempting :-)

I say this based on my hacking experience: my experience with the kernel,
the QEMU code base, SeaBIOS, and merging projects. Yes, it can be done,
but the cost is much higher than the gain.

Cheers,
Jes
From: Ingo Molnar on

* Avi Kivity <avi(a)redhat.com> wrote:

> > The moment any change (be it as trivial as fixing a GUI detail or as
> > complex as a new feature) involves two or more packages, development speed
> > slows down to a crawl - while the complexity of the change might be very
> > low!
>
> Why is that?

It's very simple: because the contribution latencies and overhead compound,
almost inevitably.

If you have ever tried to implement a combined GCC+glibc+kernel feature,
you'll know ...

Even with the best-run projects in existence it takes forever and is very
painful - and here I talk about first-hand experience over many years.

> If the maintainers of all packages are cooperative and responsive, then the
> patches will get accepted quickly. If they aren't, development will be
> slow. [...]

I'm afraid practice is different from the rosy ideal you paint there. Even
with assumed 'perfect projects' there are always random differences between
projects, causing doubled (or tripled) and compounding overhead:

- random differences in release schedules

- random differences in contribution guidelines

- random differences in coding style

> [...] It isn't any different from contributing to two unrelated kernel
> subsystems (which are in fact in different repositories until the next merge
> window).

You mention a perfect example: contributing to multiple kernel subsystems.
Even _that_ is very noticeably harder than contributing to a single
subsystem, due to the inevitable bureaucratic overhead, different
development trees and different merge criteria.

So you are underlining my point (perhaps without intending to): treating
closely related bits of technology as a single project is much better.

Obviously arch/x86/kvm/, virt/ and tools/kvm/ should live in a single
development repository (perhaps micro-differentiated by a few topical
branches), for exactly those reasons you mention.

Just like tools/perf/ and kernel/perf_event.c and arch/*/kernel/perf*.c are
treated as a single project.

[ Note: we actually started from a 'split' design (almost everyone picks
  that, because of the false 'kernel-space bits must be separate from
  user-space bits' myth), where the user-space component was a separate
  code base, and unified it later on as the project progressed.

  Trust me, the practical benefits of the unified approach are enormous to
  developers and to users alike, and there was no looking back once we made
  the switch. ]

Also, I don't really try to 'convince' you here: you made your position very
clear early on, and despite many unopposed technical arguments I made, the
positions seem to have hardened and I expect they won't change, no matter
what arguments I bring. It's a pity, but hey, I'm just an observer here
really; it's the rest of _your_ life this all impacts.

I just wanted to point out the root cause of KVM's usability problems as I
see it, just like I was pointing out the mortal Xen design deficiencies back
when I was backing KVM strongly, four years ago. Back then everyone was
saying that I was crazy, that we were stuck with Xen forever, and that while
KVM was nice it had no chance.

Just because you got the kernel bits of KVM right a few years ago does not
mean you cannot mess up other design aspects, and sometimes badly so ;-)
Historically I messed up more than half of all my first-gut-feeling technical
design decisions, so I had to correct course many, many times.

I hope you are still keeping an open mind about it all and don't think that,
because the project was split for 4 years (through no fault of your own,
simply out of necessity), it should be split forever ...

arch/x86 was split for a much longer period than that.

Circumstances have changed. Most Qemu users/contributions are now coming from
the KVM angle, so please simply start thinking about the next level of
evolution.

Thanks,

Ingo
From: Ingo Molnar on

* Jes Sorensen <Jes.Sorensen(a)redhat.com> wrote:

> On 03/18/10 10:54, Ingo Molnar wrote:
> >* Jes Sorensen <Jes.Sorensen(a)redhat.com> wrote:
> [...]
> >>
> >>At my previous employer we ended up dropping all Xen efforts exactly because
> >>it was like maintaining two separate operating system kernels. The key to
> >>KVM is that once you have Linux, you practically have KVM as well.
> >
> >Yes. Please realize that what is behind it is a strikingly simple argument:
> >
> > "Once you have a single project to develop and maintain all is much better."
>
> That's a very glorified statement, but it's not reality, sorry. You can do
> that with something like perf because it's so small and development of perf
> is limited to a very small group of developers.

I was not talking about just perf: I am also talking about the arch/x86/
unification, which is 200+ KLOC of highly non-trivial kernel code with
hundreds of contributors and 8000+ commits in the past two years.

Also, it applies to perf as well. People said exactly this a year ago: 'perf
has it easy to be clean as it is small; once it gets as large as Oprofile
tooling it will be in the same messy situation'.

Today perf has more features than Oprofile, has a larger and more complex code
base, has more contributors, and no, it's not in the same messy situation at
all.

So whatever you think of large, unified projects, you are quite clearly
mistaken. I have carried out and maintained two different types of
unifications, and the experience was very similar: both developers and users
(and maintainers) are much better off.

Ingo
From: Ingo Molnar on

* Avi Kivity <avi(a)redhat.com> wrote:

> On 03/18/2010 11:22 AM, Ingo Molnar wrote:
> >* Avi Kivity <avi(a)redhat.com> wrote:
> >
> >>> - move a clean (and minimal) version of the Qemu code base to tools/kvm/,
> >>> in the upstream kernel repo, and work on that from that point on.
> >>I'll ignore the repository location which should be immaterial to a serious
> >>developer and concentrate on the 'clean and minimal' aspect, since it has
> >>some merit. [...]
> >
> > To the contrary, experience shows that repository location, and in
> > particular a shared repository for closely related bits, is very much
> > material!
> >
> > It matters because when there are two separate projects, even a "serious
> > developer" finds it doubly and triply difficult to contribute even
> > trivial changes.
> >
> > It becomes literally a nightmare if you have to touch 3 packages: kernel,
> > a library and an app codebase. It takes _forever_ to get anything useful
> > done.
>
> You can't be serious. I find that the difficulty in contributing a patch
> has mostly to do with writing the patch, and less with figuring out which
> email address to send it to.

My own experience, and that of everyone I've talked to about such topics
(developers and distro people), tells the exact opposite: it's much harder
to contribute features to multiple packages than to a single project.

Kernel+library+app features take forever to propagate; there's constant
fear of version friction, productization deadlines are uncertain, and ABI
mess-ups are frequent as well due to disjoint testing. Also, each component
has essential veto power: if the proposed API or approach is opposed or
changed at a later stage, that affects (sometimes already committed)
changes. If you've ever done it you'll know how tedious it is.

This very thread and recent threads about KVM usability demonstrate the same
complications.

Thanks,

Ingo
From: Avi Kivity on
On 03/18/2010 12:50 PM, Ingo Molnar wrote:
> * Avi Kivity <avi(a)redhat.com> wrote:
>
>
>>> The moment any change (be it as trivial as fixing a GUI detail or as
>>> complex as a new feature) involves two or more packages, development speed
>>> slows down to a crawl - while the complexity of the change might be very
>>> low!
>>>
>> Why is that?
>>
> It's very simple: because the contribution latencies and overhead compound,
> almost inevitably.
>

It's not inevitable. If the projects are badly run you'll have high
latencies, but projects don't have to be badly run.

> If you have ever tried to implement a combined GCC+glibc+kernel feature,
> you'll know ...
>
> Even with the best-run projects in existence it takes forever and is very
> painful - and here I talk about first-hand experience over many years.
>

Try sending a patch to qemu-devel@; you may be pleasantly surprised.


>> If the maintainers of all packages are cooperative and responsive, then the
>> patches will get accepted quickly. If they aren't, development will be
>> slow. [...]
>>
> I'm afraid practice is different from the rosy ideal you paint there. Even
> with assumed 'perfect projects' there are always random differences between
> projects, causing doubled (or tripled) and compounding overhead:
>
> - random differences in release schedules
>
> - random differences in contribution guidelines
>
> - random differences in coding style
>

None of these matter for steady contributors.

>> [...] It isn't any different from contributing to two unrelated kernel
>> subsystems (which are in fact in different repositories until the next merge
>> window).
>>
> You mention a perfect example: contributing to multiple kernel subsystems.
> Even _that_ is very noticeably harder than contributing to a single
> subsystem, due to the inevitable bureaucratic overhead, different
> development trees and different merge criteria.
>
> So you are underlining my point (perhaps without intending to): treating
> closely related bits of technology as a single project is much better.
>
> Obviously arch/x86/kvm/, virt/ and tools/kvm/ should live in a single
> development repository (perhaps micro-differentiated by a few topical
> branches), for exactly those reasons you mention.
>

How are a patch for the qemu GUI eject button and one for the kvm shadow
mmu related? Should a single maintainer deal with both?


--
error compiling committee.c: too many arguments to function
