From: TomB on
["Followup-To:" header set to comp.os.linux.advocacy.]
On 2010-02-13, the following emerged from the brain of Aragorn:
> On Saturday 13 February 2010 07:18 in comp.os.linux.setup, somebody
> identifying as Karthik Balaguru wrote...
>
>> On Feb 13, 1:13 am, "David W. Hodgins" <dwhodg...(a)nomail.afraid.org>
>> wrote:
>>
>>> As with any m$ software, make sure it's protected by
>>> a properly configured router.
>>>
>>
>> Okay, but it is strange that there is no mechanism/tricks
>> in VirtualBox/Vmware to make the packets flow
>> through the host OS to the guest OS ?
>
> I have no experience with VMWare or VirtualBox, but in my humble
> opinion, it should be possible to set up the virtual machine so that it
> uses the host OS as a router - I know that Xen supports different types
> of networking, so I would imagine this to apply to VMWare or VirtualBox
> as well.

Sure, you can use different types of networking on vbox and vmware. I
have three Windows guests (one 2003 Server, one XP Pro and one 7
Ultimate) on my Debian host, and they are all on an internal
virtualbox network behind a FreeBSD guest, which is bridged to the
host's network on its 'external' interface.
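
A minimal sketch of that kind of set-up with VBoxManage (the VM names, the
"dmz" network name and eth0 are made up here; adjust to your own machines):

```shell
# Give the FreeBSD gateway two NICs: one bridged to the host's
# physical interface, one on a VirtualBox internal network.
VBoxManage modifyvm "freebsd-gw" --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm "freebsd-gw" --nic2 intnet --intnet2 "dmz"

# Put each Windows guest on the same internal network, so all of
# its traffic has to pass through the FreeBSD gateway.
VBoxManage modifyvm "win-xp" --nic1 intnet --intnet1 "dmz"
```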

I really need to look into Xen soon. Never used it, because at the
moment vbox offers everything I need for virtualisation (and at work
I'm 'forced' to use vmware because it is the 'industry standard').

--
Nothing is more reliable than a man whose loyalties can be bought with
hard cash.
~ Boris Balkan
From: David Brown on
Karthik Balaguru wrote:
> On Feb 13, 9:26 pm, The Natural Philosopher <t...(a)invalid.invalid>
> wrote:
>> Karthik Balaguru wrote:
>>> Okay, So how can we tweak either VirtualBox or Vmware
>>> and other configurations so that the packets get filtered/scanned
>>> before going to the Guest OS(Windows) .
>> run a mail server on linux, scan there and pick up from then on.
>>
>> use linux as a proxy web server. Maybe.
>>
>>

<snip>
> Any other thoughts ?
>

Yes - it's overkill.

There are two things you need to do to keep a windows system safe. One
is to use technical measures to block things that happen without your
knowledge or consent, and the other is to use your brain to block things
that happen /with/ your knowledge and consent.

For the first part, you use a proper firewall (i.e., not a windows
software firewall) - using a Linux host as a NAT router for a guest is
perfectly good. And avoid using software that risks doing things
without asking you - i.e., avoid Internet Explorer, Outlook, MSN client,
and any other software that accesses the web using IE's engine.
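
For the firewall part, a bare-bones sketch of a Linux host acting as NAT
router for a guest network (the interface names are assumptions here:
eth0 = uplink, vbox0 = the guest-side interface):

```shell
# Enable forwarding between the interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade guest traffic leaving via the uplink.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Default-deny inbound: only let through replies to connections
# the guests initiated themselves.
iptables -P FORWARD DROP
iptables -A FORWARD -i vbox0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o vbox0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```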

For the second part, think about what you are doing, where you are
wandering in cyberspace, and what software you choose to download and run.

No amount of technical measures will ever protect you from yourself. If
you get fooled by phishing links telling you to change your bank account
password, it's /your/ fault - no proxy or virus scanner will protect
you. Even doing your browsing from Linux won't help you.
From: Karthik Balaguru on
On Feb 14, 9:03 pm, "Ezekiel" <z...(a)nosuchmail.com> wrote:
> "TomB" <tommy.bongae...(a)gmail.com> wrote in message
>
> news:20100214155725.828(a)usenet.drumscum.be...
>
> > On 2010-02-14, the following emerged from the brain of Ezekiel:
>
> >> "TomB" <tommy.bongae...(a)gmail.com> wrote in message
> >>news:20100214141701.957(a)usenet.drumscum.be...
> >>> On 2010-02-14, the following emerged from the brain of Ezekiel:
> >>>> "TomB" <tommy.bongae...(a)gmail.com> wrote in message
> >>>>> I really need to look into Xen soon. Never used it, because at the
> >>>>> moment vbox offers everything I need for virtualisation (and at
> >>>>> work I'm 'forced' to use vmware because it is the 'industry
> >>>>> standard').
>
> >>>> I don't get the comment about being 'forced' to use vmware. I've
> >>>> used both and I'm willing to pay money and buy vmware. What do you
> >>>> find so bad about vmware that you feel that you're 'forced' to use
> >>>> it?
>
> >>> I just like vbox better. Better command line interface.
>
> >> I suspect this comment is based on what's included 'out of the box'
> >> because
> >> if you download and install the VMware infrastructure toolkit there isn't
> >> anything that you can't script or do from the CLI. And I'm talking about
> >> some very, very low-level functionality.
>
> > Didn't know that.
>
> If you feel like taking a peek -  look here:
>
>    http://pubs.vmware.com/vi-sdk/visdk250/ReferenceGuide/
>
> To be fair - I rarely use this (once to be exact, via the Perl interface)
> but our systems group at work does some really cool stuff with this. For
> when I need the CLI with VMWare (not very often) it's usually just scripting
> like 'vmrun list | start | stop | etc'
>
> >>> Open source (the OSE edition that is).
> >> Meh - I'm more interested in what works for me. But if that's more
> >> important
> >> to you then I won't knock it.
>
> > Mind you, it's not the most important reason why I like vbox better.
> > It's just a nice plus. It also means that GNU/Linux distributors can
> > include pre-built packages in their repositories, which is nice too.
> > As Chris already pointed out installing vmware on Debian can be a
> > pain.
>
> I use it with Ubuntu which works very, very well.
>
> >>> A little faster in my experience too.
> >> This is always an interesting issue to me. VMWare tends to have better
> >> video
> >> drivers so graphic intensive tasks run faster for me on VMware. In terms
> >> of
> >> pure performance... I forget who wins. There's the case where there's one
> >> VM
> >> doing a lot of work and the case where you have multiple VM's all doing a
> >> lot of work. In one scenario VMWare usually wins and Vbox wins the other.
> >> I
> >> forget who wins which.
>
> > I never did any real comparison. It's just a general impression. Can't
> > back it up at all.
>
> Your impression might have been right. I've read some benchmarks on this in
> the past and they both do well. To a large degree it depends on what you're
> doing with the VMs but overall the performance-diffs are minor enough where
> they don't usually matter.
>
> >>> One thing vmware handles better is clean starting and stopping of the
> >>> guests along with the hosts (although it can be scripted with vbox
> >>> too).
> >> When I used VBox I didn't use it long enough to get into any of the
> >> scripting abilities. I've been a VMWare user for about 10 years now so
> >> it's
> >> a product that I'm very comfortable with which probably has a strong
> >> influence on my decision.
>
> > Sure. Same here. With vbox setting up about anything is second nature
> > to me, while on vmware I have to poke around a little sometimes.
>
> >>> My favorite way of 'virtualisation' is still FreeBSD jails by the way..
> >> Not familiar with BSD jails but after a quick search they appear to be
> >> similar to Solaris containers - which is a good thing.
>
> > Yes, very similar to that. Not 'real' virtualisation, but very cool
> > and virtually transparent in performance.
>
> Which IMO is a better solution if you need to run 'multiple instances' of
> the host OS. But if you want to run a different OS then you need to fully
> virtualize it.

I came across the link below on the internet -
it seems to give some thoughts regarding
performance analysis & the methods used -
http://virtualizationreview.com/Articles/2009/03/02/Lab-Experiment-Hypervisors.aspx

A very long comparative list of platform virtual machines -
http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines

Karthik Balaguru
From: Karthik Balaguru on
On Feb 14, 6:41 pm, David Brown
<david.br...(a)hesbynett.removethisbit.no> wrote:
> Karthik Balaguru wrote:
> > On Feb 13, 9:26 pm, The Natural Philosopher <t...(a)invalid.invalid>
> > wrote:
> >> Karthik Balaguru wrote:
> >>> Okay, So how can we tweak either VirtualBox or Vmware
> >>> and other configurations so that the packets get filtered/scanned
> >>> before going to the Guest OS(Windows) .
> >> run a mail server on linux, scan there and pick up from then on.
>
> >> use linux as a proxy web server. Maybe.
>
> <snip>
>
> > Any other thoughts ?
>
> For the first part, you use a proper firewall (i.e., not a windows
> software firewall) - using a Linux host as a NAT router for a guest is
> perfectly good.  

Okay !

> And avoid using software that risks doing things
> without asking you - i.e., avoid Internet Explorer, Outlook, MSN client,
> and any other software that accesses the web using IE's engine.
>

Karthik Balaguru
From: Aragorn on
On Sunday 14 February 2010 12:58 in comp.os.linux.setup, somebody
identifying as TomB wrote...

> ["Followup-To:" header set to comp.os.linux.advocacy.]

Follow-up header respected, although I haven't been subscribed to that
group for a few years now.

> On 2010-02-13, the following emerged from the brain of Aragorn:
>
>> On Saturday 13 February 2010 07:18 in comp.os.linux.setup, somebody
>> identifying as Karthik Balaguru wrote...
>>
>>> On Feb 13, 1:13 am, "David W. Hodgins" <dwhodg...(a)nomail.afraid.org>
>>> wrote:
>>>
>>>> As with any m$ software, make sure it's protected by
>>>> a properly configured router.
>>>
>>> Okay, but it is strange that there is no mechanism/tricks
>>> in VirtualBox/Vmware to make the packets flow
>>> through the host OS to the guest OS ?
>>
>> I have no experience with VMWare or VirtualBox, but in my humble
>> opinion, it should be possible to set up the virtual machine so that
>> it uses the host OS as a router - I know that Xen supports different
>> types of networking, so I would imagine this to apply to VMWare or
>> VirtualBox as well.
>
> Sure, you can use different types of networking on vbox and vmware. I
> have three Windows guests (one 2003 Server, one XP Pro and one 7
> Ultimate) on my Debian host, and they are all on an internal
> virtualbox network behind a FreeBSD guest, which is bridged to the
> host's network on its 'external' interface.
>
> I really need to look into Xen soon. Never used it, because at the
> moment vbox offers everything I need for virtualisation (and at work
> I'm 'forced' to use vmware because it is the 'industry standard').

Xen is primarily suited for server deployment, albeit that it can be
used with workstation set-ups as well - see farther down. The thing
about Xen is that it's a bare metal hypervisor, so it doesn't run
inside a host operating system. Everything running on top of Xen -
including the management system - is a virtual machine.

It's similar to how mainframes work, but with the difference that the
operating system in direct control of the hypervisor on a mainframe is
a specialized single-user system, while on Xen it must be either
GNU/Linux, NetBSD, OpenBSD or (Open)Solaris, all of which are
UNIX-style systems and thus multi-user. It is however advised,
especially for server set-ups, not to have any users log into the
management virtual machine, or perhaps, just one user, and have that
user then use /su/ and/or /sudo/ to obtain root privileges.

(Note with regard to the above: I always disable all direct root logins,
both remote and local, on all of my machines, virtual or physical, and
thus an unprivileged user account must then be used to log in directly,
and /su/ from there on. This means that any cracker breaking into the
system must instead of guessing only the root password now guess my
user account's login, its password, and then the root password.)
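
In practice that note boils down to something like the following (a sketch
only; file locations are the usual OpenSSH/Linux defaults):

```shell
# Remote logins - in /etc/ssh/sshd_config:
#   PermitRootLogin no
# Local console logins - empty /etc/securetty so root is refused
# on every tty:
#   : > /etc/securetty
# Then log in as an unprivileged user and escalate from there:
su -          # prompts for the root password
sudo -i       # or, if sudo is configured for your account
```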

If you intend to run GNU/Linux-only guest systems (and as servers), then
you might also want to look into OpenVZ and Vserver as an alternative
solution. This is another kind of virtualization, at the operating
system level, i.e. you then run multiple userspace "containers" (also
called "zones") on top of a common kernel, with one userspace context
being "the host", from which you can access all others.
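
With OpenVZ, managing such a container from the host context looks roughly
like this (the container ID, template name, address and hostname are all
made-up examples):

```shell
# Create a container from an OS template, give it an address,
# and start it.
vzctl create 101 --ostemplate debian-5.0-i386-minimal
vzctl set 101 --ipadd 192.168.0.101 --hostname web01 --save
vzctl start 101

# Run a command inside the container from the host context.
vzctl exec 101 ps aux
```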

OpenVZ and Vserver are similar to each other, but there are some
important differences. For instance, Vserver uses a copy-on-write
system for the guests which is economical in disk space, but OpenVZ
has more
possibilities and uses a more recent kernel - 2.6.18 for "stable" and
2.6.26 and 2.6.27 for "testing". Another operating system which offers
this kind of virtualization would for instance be (Open)Solaris.

Personally, I would like to see OpenVZ adopted into the upstream
Linux kernel. The kernel already has quite mature Xen support built in
(for both dom0 and domU) and it also offers KVM and lguest as
additional virtualization technologies, but those are too much akin to
the third party virtual machine monitor set-ups of VMWare (Workstation)
and VirtualBox. Operating system level virtualization would be a nice
complement to GNU/Linux, especially since OpenSolaris already offers it
as well, and if I'm not mistaken, then FreeBSD also already had it at
one stage (although I think they've removed it again now - I'm not
sure.)

In the event of a Xen set-up, each of the virtual machines runs a
complete operating system, i.e. kernel plus userspace. So there's a
little more RAM overhead than with OpenVZ or Vserver. Otherwise, Xen
performs very well in comparison. With paravirtualization, performance
of the guests is only some 1% or 2% slower than if they were running on
the bare metal.

Another advantage is that Xen can run different types of guest operating
systems. You can even run Windows as an unprivileged guest on Xen, but
only on the condition that your hardware has virtualization extensions,
because Windows can obviously not be paravirtualized, since the code is
not free. Microsoft did at one stage - during the development of Xen -
supply a paravirtualized version of Windows XP, but this version was
never licensed for retail; it was solely intended for testing by the
Xen developers.

Performance-wise, hardware virtualization is slower than
paravirtualization, though. With hardware virtualization, part of the
hardware the HW virtual machine sees is emulated by Xen, using the Qemu
device manager. This emulated hardware is also not exactly "the latest
and greatest", but at least it works reliably.

Paravirtualization on the other hand is an approach in which the
unprivileged guest operating system is "aware" that it is running
virtualized. A paravirtual guest has a kernel which uses
so-called "front-end" drivers, which are basically an abstraction layer
that connects to the "real" back-end drivers running in the dom0
virtual machine. So there is no emulation involved, and all of the
systems running on that one physical machine actually become one big
multifunctional virtual machine operating system. Again, it's like a
mainframe system.

Xen also allows the sysadmin to tailor performance by configuring how
many virtual CPUs each guest can use, and for performance-critical
virtual machines, it is possible to assign one or multiple physical CPU
cores to them, so that the other virtual machines cannot use those.
There are also multiple scheduling options for shared physical CPUs.
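
With the classic xm toolstack that looks roughly like this (the domain
name "mydomu" and the core numbers are assumptions):

```shell
# Give the guest two virtual CPUs.
xm vcpu-set mydomu 2

# Pin vCPU 0 of the guest to physical core 2 and vCPU 1 to core 3;
# keep the other domains off those cores with cpus= in their own
# config files.
xm vcpu-pin mydomu 0 2
xm vcpu-pin mydomu 1 3
```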

On the networking side of things, Xen defaults to bridging, but it is
possible to use routing/NAT as well - scripts are supplied to easily
set up whatever configuration you prefer. Xen also supports isolating
certain hardware from the management virtual machine (dom0) so that
this hardware can be directly accessed by one of the unprivileged
virtual machines (domU), which is then considered a "secondary driver
domain"; this is again often applied in (Open)Solaris, even with Sun's
own bare metal hypervisor - I forgot what it's called. In other words,
if you're running a virtual machine which needs a lot of network
bandwidth and your physical machine has two NICs, then you can choose
to hide one NIC from dom0 and have this particular virtual machine
access the second NIC directly with a regular driver and without having
to use the bridging or routing via dom0's NIC.
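
A sketch of hiding that second NIC via pciback (the PCI address is a
made-up example - find the real one with lspci):

```shell
# On the dom0 kernel command line (pciback built into the kernel):
#   pciback.hide=(0000:03:00.0)
# Then hand the device to the guest in its domU config file:
#   pci = [ '03:00.0' ]
# Inside the domU, the NIC then appears as ordinary hardware and is
# driven by a regular driver, bypassing dom0's bridge entirely.
```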

The machine I am currently working on - i.e. setting up; I'm not talking
of the machine I am typing this from - is going to be running Xen with
multiple paravirtualized Gentoo GNU/Linux virtual machines - the dom0
plus two domUs. One of the domUs will have direct access to a limited
set of hardware - i.e. a dedicated PCIe video adapter card, the
on-board sound chip and all USB hubs - which will then of course be
hidden from dom0, and the second domU will be running an OpenVZ kernel
with multiple "zones", installed as "headless servers".

Virtualization on top of virtualization, and all of it is Free & Open
Source Software. ;-) (Okay, that last line was specifically intended
for COLA. :p)

--
*Aragorn*
(registered GNU/Linux user #223157)