From: Ersek, Laszlo on
On Fri, 2 Apr 2010, Ian Collins wrote:

> I find that hard to swallow. A Windows VM running under VirtualBox on a
> Linux or OpenSolaris host has minimal impact on the host. Sure, it pinches
> some RAM, but that's inexpensive these days, and it will use some CPU (but
> not a lot when idle). VirtualBox shared folders work well for sharing data,
> but you can also run Samba/CIFS on the host. Bridged networking is trivially
> easy to configure.

And I believe one can get SeamlessRDP <http://www.cendio.com/seamlessrdp/>
working, and then the individual X windows mapped for Windows applications
should be indistinguishable from other X clients in your WM, Alt-Tab-wise.

lacos
From: Ersek, Laszlo on
On Thu, 1 Apr 2010, David Given wrote:

> I work in the embedded phone world. This means I have to deal with
> crappy mobile phone toolchains on a regular basis.

Okay, this really is starting to veer off topic, but: are most embedded
toolchains crappy, in general? I see this complaint every week: on reddit,
on blogs, and now here as well. One gets the impression that C (not C++)
programming jobs are mostly available in embedded markets. Combine this
with the "embedded -> crappy toolchain" implication... Does the apparent
corollary actually hold?


> Running a real Linux distro (or at least the user-space part thereof)
> directly on top of Windows gives instant access to a vast array of
> software that behaves identically to a real Linux system.

As someone suggested in your reddit submission, did you try andLinux /
coLinux?

http://andlinux.org/
http://www.colinux.org/


> (PS. Did you mean to snip newsgroups?)

Yes. I don't post/followup to groups I'm not subscribed to. (At least
until someone enlightens me why that's a bad practice.)

Thanks for the background info.
lacos
From: Ian Collins on
On 04/ 2/10 09:53 AM, Ersek, Laszlo wrote:
> On Fri, 2 Apr 2010, Ian Collins wrote:
>
>> I find that hard to swallow. A Windows VM running under VirtualBox on
>> a Linux or OpenSolaris host has minimal impact on the host. Sure, it
>> pinches some RAM, but that's inexpensive these days, and it will use
>> some CPU (but not a lot when idle). VirtualBox shared folders work well
>> for sharing data, but you can also run Samba/CIFS on the host. Bridged
>> networking is trivially easy to configure.
>
> And I believe one can get SeamlessRDP
> <http://www.cendio.com/seamlessrdp/> working, and then the individual X
> windows mapped for Windows applications should be indistinguishable from
> other X clients in your WM, Alt-Tab-wise.

VirtualBox seamless mode also works well.

--
Ian Collins
From: David Given on
On 01/04/10 21:04, Ian Collins wrote:
[...]
> I find that hard to swallow. A Windows VM running under VirtualBox on a
> Linux or OpenSolaris host has minimal impact on the host. Sure, it
> pinches some RAM, but that's inexpensive these days, and it will use
> some CPU (but not a lot when idle). VirtualBox shared folders work well
> for sharing data, but you can also run Samba/CIFS on the host. Bridged
> networking is trivially easy to configure.

Well, I have to use Windows as the host and running VirtualBox on a 4GB
Core Duo cripples it --- it is *significantly* slower and more painful,
in both operating systems, than in one alone. Enough so that it's
actually unpleasant to use. (We've tried running Linux as the host and
tunnelling the phone drivers out through Windows. Generally odd stuff
happens and it's not reliable.)

VirtualBox shared folders are only okay at best. Try accessing a
Subversion repo through one and you'll find out why! (The Unix view
VirtualBox provides of the Windows filesystem doesn't quite have the
right semantics to keep Subversion happy. SMB has the same problem. I
suspect it's better the other way round.)

Bridged networking I've tried, but it never seems to come out quite
right --- mostly due to the fact that I'm more familiar with Unix
networking, and Windows' networking UI is entirely different. I always
seem to end up in a weird maze of TUN/TAP drivers, firewall settings,
multiple IP addresses on the same ethernet port etc. Yeah, I'm aware
this is my failing, but it doesn't make it any less frustrating.

At the end of the day, even if it all works right, I still end up having
to run two operating systems at the same time. The setup always feels
fragile and sluggish. Linux will mysteriously stop booting for no
apparent reason. Windows will start taking longer and longer to boot,
again for no apparent reason. Seamless mode never works quite right,
leading to continual low-level frustration that focus never appears to
be quite where I expect it to be. Trying to script operations that work
on both machines at once is painful. It's just generally unsatisfactory.

The least nasty experience I had was when I could dedicate a monitor to
each OS and pretend I had two computers; but I still ended up having to
waste time synchronising data back and forth between the machines to do
trivial stuff like checkins and editing... and they both felt like
really slow computers. This sort of stuff just *should not be
necessary*. Hence LBW.

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "In the beginning was the word.
│ And the word was: Content-type: text/plain" --- Unknown sage
From: David Given on
On 01/04/10 22:09, Ersek, Laszlo wrote:
[...]
> Okay, this really is starting to veer off topic, but: are most embedded
> toolchains crappy, in general? I see this complaint every week: on
> reddit, on blogs, and now here as well. One gets the impression that C
> (not C++) programming jobs are mostly available in embedded markets.
> Combine this with the "embedded -> crappy toolchain" implication... Does
> the apparent corollary actually hold?

Something about the whole embedded systems world appears to encourage
Fail, and I really don't know why.

I suspect it's due to system scalability failure. Embedded systems tend
to be big and complicated. A single-image mobile phone operating system
consists of everything from the RTOS kernel to the high-level UI
widgetry, all compiled into one. You have to have rigid discipline to
produce anything coherent out of that, and most organisations don't have
that discipline, especially when there's a deadline at hand.

The open source world tends to be better than the proprietary world, I
think due to a willingness to let unmaintainable systems die; Android is
a thing of beauty compared to previous embedded operating systems. But
even the open source crown jewels like gcc and gdb are legendarily
hideous, with terrible build systems and unmaintainable code. Why is this?
Don't people have a vested interest in making them as smooth and
professional as possible? Most gcc releases won't actually *build* out
of the box!

I once worked on a proprietary mobile phone OS that had evolved like this:

- event driven system running on a non-preemptive executive. Let's call
this X.
- as above, running in a single task of a simple RTOS (let's call this
S), with device drivers in other tasks.
- as above, but with S rewritten as a portability layer on top of a
different RTOS (let's call this one R).
- as above, but with R rewritten to run on top of a third RTOS (which
happened to be L4).
- as above, but with a third-party media stack (let's call this M) added
on that ran in parallel to S on top of R... and various cross-bridging
mechanisms so that the UI code in X could call directly into M,
bypassing S completely.
- as above, but with a whole new shiny UI written in C++ to sit on top of X.

As you can imagine, this was a pile of fail. But wait! There's more!

- the build system was built around nested layers of makefiles, shell
scripts and .bat files. Any build, even if nothing needed doing, would
take 45 minutes. Except when it hit the random race condition and
failed, which it would do 1/3 of the time. Doing a build required having
programs built with three different and incompatible versions of Cygwin
installed at the same time, plus Visual C++ 5.0, plus ARM's RVCT
compiler, plus gcc.

- X ran its events in multiple tasks of different priorities, including
the idle task. If an event queue filled up, the phone crashed. Which
meant that if a high-priority task was runnable for more than about 30s,
the idle task's event queue would fill up and the phone would crash.

- Neither L4 nor R had semaphores. So S had a semaphore implementation
written in terms of R's synchronisation primitives (which basically
consisted of critical sections and task suspend/resume). Each semaphore
had a fixed-size array of tasks that could wait on the semaphore. The
size of this array was smaller than the number of tasks on the system.
(A rough illustrative sketch of this pattern appears below.)

- X's 'terminate current thread' function had a typo in it that meant it
compiled into no code. S's version freed the stack it was currently
running on, and then called R's version. R's version zeroed out the task
descriptor block before removing it from the task list.

Chances are that some of the people in this newsgroup have a phone with
all the above in their pockets right now.
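
To make the semaphore item above concrete, here is a rough, hypothetical C
sketch of that kind of design: a counting semaphore layered on nothing but a
critical section and task suspend/resume, with a fixed-size waiter table.
None of the names below (task_id_t, enter_critical(), task_suspend(),
MAX_WAITERS and so on) come from the OS in question; they are invented
stand-ins for illustration only.

/* Hypothetical sketch only: a semaphore built on critical sections
 * and task suspend/resume, with a waiter table deliberately smaller
 * than the number of tasks in the system. The primitives declared
 * below are invented stand-ins, not any real RTOS API. */

typedef int task_id_t;

extern void      enter_critical(void);
extern void      exit_critical(void);
extern void      task_suspend(task_id_t t);
extern void      task_resume(task_id_t t);
extern task_id_t current_task(void);
extern void      panic(const char *msg);

#define MAX_WAITERS 8   /* fewer slots than tasks: the flaw described above */

struct sem {
    int       count;
    int       nwaiters;
    task_id_t waiters[MAX_WAITERS];
};

void sem_wait(struct sem *s)
{
    enter_critical();
    if (s->count > 0) {
        s->count--;
        exit_critical();
        return;
    }
    if (s->nwaiters == MAX_WAITERS) {
        /* No free slot to record this waiter; with more than
         * MAX_WAITERS tasks contending, this is reachable. */
        panic("semaphore waiter table full");
    }
    s->waiters[s->nwaiters++] = current_task();
    exit_critical();
    /* Window here: another task's sem_post() can call task_resume()
     * before we actually suspend, losing the wakeup unless the kernel
     * counts resume operations; a further hazard of this layering. */
    task_suspend(current_task());
}

void sem_post(struct sem *s)
{
    task_id_t t;

    enter_critical();
    if (s->nwaiters > 0) {
        t = s->waiters[--s->nwaiters];
        exit_critical();
        task_resume(t);
        return;
    }
    s->count++;
    exit_critical();
}

Sizing the waiter table to the task count (or chaining waiters through their
task control blocks) removes the overflow, but the suspend/resume race is
baked into the approach unless the underlying kernel counts resumes.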

[...]
> As someone suggested in your reddit submission, did you try andLinux /
> coLinux?

Yes; there's no real difference between them and a fully virtualised
operating system. Colinux, for example, allocates non-swappable memory
from the Windows kernel and swaps it itself to disk --- thus giving you
all the overheads of running two VMs, while combining it with an
inability to release pages back to the host operating system. That's not
to say it's *bad*; I think it's generally lighter-weight than something
like VirtualBox; but it's still not what I'm looking for.

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "In the beginning was the word.
│ And the word was: Content-type: text/plain" --- Unknown sage