From: Andy Glew on
On 7/21/2010 3:46 PM, Alex McDonald wrote:
> On 20 July, 22:31, "David L. Craig"<dlc....(a)gmail.com> wrote:
>> On Jul 20, 2:49 pm, Robert Myers<rbmyers...(a)gmail.com> wrote:
>>
>
>>
>>> I doubt if mass-market x86 hypervisors ever crossed the
>>> imagination at IBM, even as the barbarians were at the
>>> gates.
>>
>> You'd be wrong. A lot of IBMers and customer VMers were
>> watching what Intel was going to do with the 80386 next
>> generations to support machine virtualization. While
>> Intel claimed it was coming, by mainframe standards, they
>> showed they just weren't serious. Not only can x86 not
>> fully virtualize itself, it has known design flaws that
>> can be exploited to compromise the integrity of its
>> guests and the hypervisor. That it is used widely as a
>> consolidation platform boggles the minds of those in the
>> know. We're waiting for the eventual big stories.
>>
>
> Can you be more explicit on this? I understand the lack of complete
> virtualization is an issue with the x86, but I'm fascinated by your
> claim of exploitable design flaws; what are they?

The 80386 and other processors, up until recently, were incompletely
self-virtualizing.

However, as far as I know, with the addition of VMX at Intel and
Pacifica at AMD, the x86 processors are now completely self-virtualizing.
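
(If anyone wants to check what their own machine reports, here is a
minimal sketch - assuming GCC or Clang on x86, which ship <cpuid.h>.
CPUID leaf 1 reports VMX in ECX bit 5, and extended leaf 0x80000001
reports SVM in ECX bit 2.)

  /* Minimal sketch: detect hardware virtualization support on x86.
     Assumes GCC or Clang, which provide <cpuid.h>. */
  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax, ebx, ecx, edx;

      /* CPUID leaf 1: ECX bit 5 = VMX (Intel VT-x). */
      if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
          printf("Intel VMX supported\n");

      /* CPUID extended leaf 0x80000001: ECX bit 2 = SVM (AMD "Pacifica"). */
      if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
          printf("AMD SVM supported\n");

      return 0;
  }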

From: nmm1 on
In article <b1oe46pjp0lqi30fr03i75tnii94j40bb8(a)4ax.com>,
George Neuner <gneuner2(a)comcast.net> wrote:
>On Tue, 20 Jul 2010 15:41:13 +0100 (BST), nmm1(a)cam.ac.uk wrote:
>
>>In article <04cb46947eo6mur14842fqj45pvrqp61l1(a)4ax.com>,
>>George Neuner <gneuner2(a)comcast.net> wrote:
>>>
>>>ISTM bandwidth was the whole point behind pipelined vector processors
>>>in the older supercomputers. ...
>>> ... the staging data movement provided a lot of opportunity to
>>>overlap with real computation.
>>>
>>>YMMV, but I think pipelined vector units need to make a comeback.
>>
>>NO chance! It's completely infeasible - they were dropped because
>>the vendors couldn't make them for affordable amounts of money any
>>longer.
>
>Actually I'm a bit skeptical of the cost argument ... obviously it's
>not feasible to make large banks of vector registers fast enough for
>multiple GHz FPUs to fight over, but what about a vector FPU with a
>few dedicated registers?

'Tain't the computation that's the problem - it's the memory access,
as "jacko" said.

Many traditional vector units had enough bandwidth to keep an AXPY
running at full tilt - nowadays, one would need 1 TB/sec for a low
end vector computer, and 1 PB/sec for a high-end one. Feasible,
but not cheap.
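
To put a rough number on that, AXPY in C is just:

  /* y[i] = a*x[i] + y[i]: 2 flops per element against 2 loads and
     1 store of 8-byte doubles, i.e. 24 bytes, or 12 bytes per flop. */
  void daxpy(long n, double a, const double *x, double *y)
  {
      for (long i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }

At 12 bytes per flop, 1 TB/sec of memory bandwidth sustains only some
80 Gflop/sec on that kernel - the arithmetic is almost free by
comparison.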

Also, the usefulness of such things was very dependent on whether
they would allow 'fancy' vector operations, such as strided and
indexed vectors, gather/scatter and so on. The number of programs
that need only simple vector operations is quite small.
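
By 'fancy' I mean the sort of access a single vector instruction was
expected to perform - in scalar C, something like:

  /* Strided access: touch every stride-th element. */
  void scale_strided(long n, long stride, double a, double *x)
  {
      for (long i = 0; i < n; i++)
          x[i * stride] *= a;
  }

  /* Indexed gather: the memory system sees n scattered loads. */
  void gather_add(long n, const long *idx, const double *x, double *y)
  {
      for (long i = 0; i < n; i++)
          y[i] += x[idx[i]];
  }

Unit-stride streams are comparatively easy to provide bandwidth for;
it is the strided and indexed forms that make the memory system
expensive.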

I believe that, by the end, 90% of the cost of such machines was in
the memory management and only 10% in the computation. At very
rough hand-waving levels.


Regards,
Nick Maclaren.
From: nedbrek on
Hello all,

"Robert Myers" <rbmyersusa(a)gmail.com> wrote in message
news:jRH1o.23291$KT3.18906(a)newsfe13.iad...
> jacko wrote:
>> On 21 July, 19:13, Robert Myers <rbmyers...(a)gmail.com> wrote:
>
>>>
>>> The actual problem -> accurate representation of a nonlinear free field
>>> + non-trivial geometry == bureaucrats apparently prefer to pretend that
>>> the problem doesn't exist, or at least not to scrutinize too closely
>>> what's behind the plausible-looking pictures that come out.
>>>
>>
>> Umm, I think a need for up to cubic fields is reasonable in modelling.
>> Certain effects do not show in the quadratic or linear approximations.
>> This can be done by tripling the variable count, and lots more
>> computation, but surely there must be ways.
>>
>> Quartic modelling may not serve that much of an extra purpose, as a
>> cusp catastrophe is within the cubic. Mapping the field to x, and
>> performing an inverse map to find applied force can linearize certain
>> problems.
>
> Truncation of the hierarchy of equations for turbulence by assuming that
> the fourth cumulant is zero leads to unphysical results, like negative
> energies in the spectral energy distribution. I'm a tad muddy on the
> actual history now, but I knew that result decades ago.
>
> There is, as far as I know, no ab initio or even natural truncation of the
> infinite hierarchy of conserved quantities that isn't problematical. There
> are various hacks that work--sort of. Every single plot that you see that
> purports to represent the calculation of a fluid flow at a reasonable
> Reynolds number depends on some kind of hack.
>
> For the Navier-Stokes equations, nature provides a natural cut-off scale
> in length, the turbulent dissipation scale, and ab initio calculations at
> interesting turbulent Reynolds numbers do exist up to Re~10,000.

I'm not following very well...

I think you are saying the problem is resistant to mathematical models
(which is fine with me, I am skeptical of mathematical models of physical
processes). jacko is suggesting some sort of simulation (finite elements?)
where 4D is necessary, although you might be able to reduce it to 3D. You
seem certain that 4D is necessary. 4D will add a lot of data, and
operations, but should be doable.

My main concern is the "infinite hierarchy" and Re~10,000. If there are
infinities involved, some sort of mathematical analysis will be necessary -
we cannot simulate the infinite :)  If Re~10,000 means we need to do 10,000
operations at each node - I think that is doable (although expensive). If
it means 10,000 dimensions for data - that is probably too much.

Ned


From: jacko on
> I'm not following very well...
>
> I think you are saying the problem is resistant to mathematical models
> (which is fine with me, I am skeptical of mathematical models of physical
> processes).  jacko is suggesting some sort of simulation (finite elements?)
> where 4D is necessary, although you might be able to reduce it to 3D.  You
> seem certain that 4D is necessary.  4D will add a lot of data, and
> operations, but should be doable.
>
> My main concern is the "infinite hierarchy" and Re~10,000.  If there are
> infinities involved, some sort of mathematical analysis will be necessary -
> we cannot simulate the infinite :)  If Re~10,000 means we need to do 10,000
> operations at each node - I think that is doable (although expensive).  If
> it means 10,000 dimensions for data - that is probably too much.

Simply put (though simplifying this much may make practical use prone
to mis-simulation), Re is a figure of merit for turbulence: below a
critical value all's fine, above it the flow is as swirly as a
cavitation soup. It's probably linked to the Bernoulli effect. The
hierarchy is due to the fractal swirls of the turbulence. It needs to
be 4D if all the fluid flow is to be modelled and no unexpected
effects are to occur.
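
For reference, Re is just the usual dimensionless ratio of inertial
to viscous forces:

  \mathrm{Re} = \frac{\rho U L}{\mu} = \frac{U L}{\nu}

where U is a characteristic velocity, L a characteristic length and
\nu the kinematic viscosity. So Re ~ 10,000 is neither an operation
count per node nor a data dimension; the cost comes in through
resolution, since the standard Kolmogorov scaling argument calls for
roughly Re^{9/4} grid points and roughly Re^3 total work in a direct
simulation.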

It is good for a benchmark, as many sub-problems, such as the EM wave
equation and the 'heat in a solid' equation, which are more ordered in
that they have no turbulence, will still have similar memory access
patterns.
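
A 'heat in a solid' time step, for instance, is just a stencil sweep
over the grid - a rough sketch in C, with r standing for the usual
alpha*dt/dx^2 coefficient:

  /* One explicit step of the 3D heat equation on an n^3 grid
     (7-point stencil): a handful of flops per point against seven
     neighbouring doubles read and one written, so the memory access
     pattern, not the arithmetic, dominates. */
  #define IDX(i, j, k, n) ((i)*(n)*(n) + (j)*(n) + (k))

  void heat_step(long n, double r, const double *u, double *u_new)
  {
      for (long i = 1; i < n - 1; i++)
          for (long j = 1; j < n - 1; j++)
              for (long k = 1; k < n - 1; k++)
                  u_new[IDX(i,j,k,n)] = u[IDX(i,j,k,n)]
                      + r * (u[IDX(i+1,j,k,n)] + u[IDX(i-1,j,k,n)]
                           + u[IDX(i,j+1,k,n)] + u[IDX(i,j-1,k,n)]
                           + u[IDX(i,j,k+1,n)] + u[IDX(i,j,k-1,n)]
                           - 6.0 * u[IDX(i,j,k,n)]);
  }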
From: George Neuner on
On Wed, 21 Jul 2010 18:18:19 -0400, George Neuner
<gneuner2(a)comcast.net> wrote:

>
> Some stuff
>

I think the exchange among Robert, Mitch and Andy that just appeared
answered most of my question.

George