From: Rick Jones on
Morten Reistad <first(a)last.name> wrote:
> In article <ht6f5f$ef$4(a)usenet01.boi.hp.com>,
> Rick Jones <rick.jones2(a)hp.com> wrote:
> >> >From the measurements we have done, the important contention parts
> >> >happen in the OS kernel. That is the single most critical piece to
> >> >parallelise.
> >
> >> Yup. That's the next step. I am interested in the one after that,
> >> like any good academic :-)
> >
> >That's ironic - in my corner of the 'net at least, a great deal of
> >effort was put into getting a kernel to scale, and thus out of the
> >way, and now we are having to address the applications and their
> >developers :)

> The performance of Linux and the common internet utilities is not
> bad. It is also amusing to see the difference between Linux and
> (Free|Open)BSD. Linux handles media shuffling very well, but there
> is a little latency (sub-millisecond, but still) that gets in the way of
> the single-packet servers. On Linux we see the best results for
> asterisk, mysql, apache, but the BSDs give better performance for
> ser&friends, bind etc.

In my post I was actually referring to HP-UX - most of the
benchmarks of keen interest to HP-UX folks involve mass storage
scaling and memory scaling, but not so much network scaling.

rick jones
--
Wisdom Teeth are impacted, people are affected by the effects of events.
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
From: Morten Reistad on
In article <ht6vlg$7eb$2(a)usenet01.boi.hp.com>,
Rick Jones <rick.jones2(a)hp.com> wrote:
>Morten Reistad <first(a)last.name> wrote:
>> In article <ht6f8c$ef$5(a)usenet01.boi.hp.com>,
>> Rick Jones <rick.jones2(a)hp.com> wrote:
>> >That sounds like something the folks on the netdev list hosted on
>> >vger.kernel.org might like to hear more about.
>
>> Netdev? They seem swamped with devices and fixes, and have had
>> several notices about "fixes only please".
>
>"Fixes only please" is a regular event at particular phases in the
>release process when they are getting a release candidate ready. Then
>there should be a message announcing the opening of the merge window.

This list, and the dozen or so similar lists covering core Linux
subsystems, are similarly filled with a wall of detail and logistics
around getting support for all the hardware out there.

I haven't seen a meaningful higher-level discussion about what
makes Linux perform in years. Yet there obviously are people who
care, who introduce things like interrupt coalescing and fixes to
the interrupt load balancer. But they interact with these lists
only superficially; the thinking about high-level performance does
not happen there.

-- mrr

PS. We are trying to get a few kernels on different systems properly
instrumented, so we can publish firm figures on how different
machine designs affect the different Internet servers.
From: Piotr Wyderski on
Terje Mathisen wrote:

> For some problems, Java makes things even worse due to even stricter
> insistence on "there can only be one possible answer here, and that is
> the one you get by evaluating all fp operations in the exact order
> specified".
>
> Not conducive to optimized code.

Although I am a programmer working full-time on performance
and scalability on SMPs, I like the above. Performance itself is
not the most important thing. Correctness is. Then one can start
optimizing.
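
To make the quoted point concrete, here is a minimal C sketch (C
rather than Java, but the strict left-to-right evaluation rule is
the same; the constants are merely illustrative):

#include <stdio.h>

int main(void)
{
    /* 2^53: from here on, consecutive doubles are 2 apart */
    double big = 9007199254740992.0;

    /* Strict left-to-right, as the language requires:
       (big + 1.0) rounds back to big, so the result is 0. */
    double strict = big + 1.0 - big;

    /* What a reassociating optimizer would be free to compute:
       (1.0 - big) is exact, so the result is 1. */
    double loose = big + (1.0 - big);

    printf("strict = %g, reassociated = %g\n", strict, loose);
    return 0;
}

Both expressions sum the same three numbers, yet one prints 0 and
the other 1; the strict rule exists precisely so that only the
first is a legal compilation of "big + 1.0 - big".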

Best regards,
Piotr Wyderski

From: nmm1 on
In article <htdmcv$is2$1(a)node1.news.atman.pl>,
Piotr Wyderski <piotr.wyderski(a)mothers.against.spam.gmail.com> wrote:
>Terje Mathisen wrote:
>
>> For some problems, Java makes things even worse due to even stricter
>> insistence on "there can only be one possible answer here, and that is
>> the one you get by evaluating all fp operations in the exact order
>> specified".
>>
>> Not conducive to optimized code.
>
>Although I am a programmer working full-time on performance
>and scalability on SMPs, I like the above. Performance itself is
>not the most important thing. Correctness is. Then one can start
>optimizing.

You are confusing correctness with consistency. They aren't the same
when either parallelism or approximations to real numbers (including
floating-point) are involved. The expectation of determinism was
introduced by the new 'computer scientists' ("nothing to do with
computing and not a science"), who believed that nobody before them
had a clue, and hence knew a negative amount about floating-point.

Kahan excepted, of course, but the problem is that virtually no
computer scientist understands why he believes what he does, and
even fewer could even start to use his computational model correctly.
His remarks about Java make that VERY clear!
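
To see the distinction, sketch in C what a parallel reduction does:
split a sum into per-thread partial sums and combine them. Every
chunking is equally defensible numerically, yet the printed values
typically differ in the trailing digits, and none of them is more
"correct" than the others:

#include <stdio.h>

#define N 1000000

static double data[N];

/* Sum in 'chunks' partial sums, the way a parallel reduction over
   that many threads would, then combine the partials.  chunks == 1
   is the plain sequential loop. */
static double chunked_sum(const double *x, int n, int chunks)
{
    int step = n / chunks;              /* chunks divides n here */
    double total = 0.0;
    for (int c = 0; c < chunks; c++) {
        double part = 0.0;
        for (int i = c * step; i < (c + 1) * step; i++)
            part += x[i];
        total += part;
    }
    return total;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0 / (i + 1.0);      /* widely varying magnitudes */
    for (int chunks = 1; chunks <= 16; chunks *= 2)
        printf("%2d chunk(s): %.17g\n",
               chunks, chunked_sum(data, N, chunks));
    return 0;
}

A run with a different thread count is consistent with a different
rounding order, not wrong; demanding bitwise identity from parallel
floating-point code is what confuses the two notions.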


Regards,
Nick Maclaren.
From: Piotr Wyderski on
nmm1(a)cam.ac.uk wrote:

> You are confusing correctness with consistency. They aren't the same
> when either parallelism or approximations to real numbers (including
> floating-point) are involved.

No Nick, I am not confusing these terms. Correctness and consistency
are orthogonal. If you have a piece of code claimed to be a PDE
solver, it is correct if it solves PDEs, basically. It is obvious
that the result will seldom be precise, but the error must be
bounded (in this context by numerical analysis based on stability
theory) -- a certain amount of indeterminism is allowed. But the
compiler doesn't know much about error analysis and, more
importantly, doesn't know the details involved in that particular
process of computation, e.g. whether the series is convergent or
how close to the boundary of the stability region the calculations
are. The compiler can:

a) limit the set of applicable optimizations to always stay on the
safe side, e.g. instruction reordering or temporary range/precision
extension, which are rarely a problem. But the excessive safety
comes at a price: the gain is rather mediocre.

b) perform "aggressive" optimizations, where the result can vary
unboundedly as the optimization level changes (see the sketch
below).
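
The canonical victim of (b) is Kahan's compensated summation: the
statement c = (t - sum) - y below is identically zero in real
arithmetic, so a compiler allowed to reassociate (e.g. GCC under
-ffast-math, which implies -fassociative-math) may fold it away and
silently turn the routine back into naive summation. A sketch in C:

#include <stdio.h>

/* Kahan's compensated summation.  The term c captures the low-order
   bits lost by each addition and feeds them back into the next one. */
static double kahan_sum(const double *x, int n)
{
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;   /* zero algebraically; numerically the
                                rounding error of (sum + y) */
        sum = t;
    }
    return sum;
}

int main(void)
{
    enum { N = 1000000 };
    static double data[N];
    double naive = 0.0;
    for (int i = 0; i < N; i++)
        data[i] = 0.1;                  /* 0.1 is inexact in binary */
    for (int i = 0; i < N; i++)
        naive += data[i];
    printf("naive: %.17g\nkahan: %.17g\n", naive, kahan_sum(data, N));
    return 0;
}

This is exactly the case where an optimizer that knows algebra but
not error analysis destroys a design the programmer made
deliberately.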

I prefer the program to execute exactly as implemented, because
it is what it is for a reason. It reflects the design made by a human
who is supposed to know all the arcane details. Floating-point
calculations are not for everybody; if the programmer doesn't know
what is going on under the hood, no compiler will ever help him. And
no compiler should try to outsmart an expert.

> The expectation of determinism was introduced by the new 'computer
> scientists'

Numerical analysis is more applied math than anything else, and it
is centuries older than programmable computers.

> Kahan excepted, of course, but the problem is that virtually no
> computer scientist understands why he believes what he does, and
> even fewer could even start to use his computational model correctly.
> His remarks about Java make that VERY clear!

Yes, his report is very detailed.

Best regards
Piotr Wyderski