From: Dragontamer on

randyhyde(a)earthlink.net wrote:
> o//annabee wrote:

> > And I am still new to the thing. The FillRect routine posted above was
> > written in a few minutes.
>
> Tim is *sooooo* far beyond such trivial little things. Game
> programmers, in general, are far beyond this (and yes, I know a few).
> Try writing a shader in a few minutes. A high-performance shader. Trust
> me, you're not going to do that using Rene's list of "25 instructions
> I've bothered to learn." If you're not using the GPU directly, you'll
> definitely be using those SIMD instructions that you and Rene seem to
> find so confusing. So writing your shader is going to take some time,
> more than a few minutes, because first you'll need to learn some more
> x86 instructions.

Oy. I've looked at only a very little of the math behind shading and
shadows in 3D, let alone the optimization of it.

If there were ever a time that called for mathematical proofs, this
would be one of them. Or at least for enough knowledge of matrices and
3-dimensional trig to do that magic math stuff...

I have quite a few years of study ahead of me before I get to that math
level, however :-p
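
Just to give a flavour of what that math looks like at the very bottom:
the simplest diffuse shading term is nothing more than a dot product
between the surface normal and the light direction, clamped at zero. A
toy C sketch (my own illustration, not anybody's real shader, and the
hard optimization work only starts after this):

    typedef struct { float x, y, z; } vec3;

    /* dot product of two 3-component vectors */
    static float dot3(vec3 a, vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* basic Lambert diffuse term: max(N . L, 0);
       n and l are assumed to already be unit-length */
    static float lambert(vec3 n, vec3 l)
    {
        float d = dot3(n, l);
        return d > 0.0f ? d : 0.0f;
    }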

[snip]

> > Why must it be like this? Why are we (because I must include
> > myself, because of my previous experience) so easily fooled by the HLL
> > myths?
>
> Just because you never seemed to learn Delphi (and the reason is
> obvious, listening to you talk about learning APIs), doesn't mean that
> assembly is easier. Perhaps you could make a point that people who
> don't want to read anything or actually *study* can learn enough
> assembly to hack out really slow rectangle fills. But the same could be
> said about *any* language. Is there any reason, for example, why a
> proficient C or Pascal programmer couldn't do your same code in a HLL?
> It was pretty straight-forward stuff. The real trick is writing a
> high-performance version of that code. Writing sloppy code (as you have
> done) is *easy*. Writing great code takes a little more effort, I'm
> afraid.

Do I detect a hidden plug for your book, Randall?

:-p

(just kidding; no offense given)

[Snip: the rest doesn't really give me a place to put my opinion]

--Dragontamer

From: hutch-- on
Having gone for a shovel through the PPT presentation, it is generally
phrased at such a high level that I seriously doubt it could interface
with assembler code. The overriding assumption is one of massive
processing power being available, which does not really fit a realistic
model of the PC market.

An off-the-shelf 512-processor SGI box comes to mind, which is truly
exciting if you need 25-megapixel frames at hundreds of frames per
second, but software that makes assumptions of this type scales down
very badly on less powerful hardware.
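
Just to put a ballpark figure on that (my own rough arithmetic, not
anything from the presentation): 25 megapixels at 4 bytes per pixel and,
say, 200 frames per second is 25,000,000 x 4 x 200 = 20 GB/s of
framebuffer traffic alone, before a single instruction of shading work
has been done.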

Shifting the burden to GPU hardware is certainly a reasonable approach
but as the demands get higher, it runs out of processor puff in much
the same way as the normal CPU is doing at the moment.

Multiple-core and multiple-processor hardware comes at a cost beyond the
financial outlay and fast redundancy: even with very complex and
expensive hardware to interface and synchronise a large number of
processors, you lose an increasing share of processor power as the
processor count goes up.
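
The usual back-of-the-envelope figure for that loss is Amdahl's law: if
only a fraction p of the work can actually be spread across N
processors, the best possible speedup is 1 / ((1 - p) + p / N). A quick
C sketch of the arithmetic (my own illustration, the numbers are only
indicative):

    #include <stdio.h>

    /* Amdahl's law: ideal speedup on n processors when only a
       fraction p of the work is parallelisable */
    static double amdahl(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        /* even with 90% parallel code, 512 processors buy only ~9.8x */
        printf("%.1f\n", amdahl(0.90, 512));
        return 0;
    }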

I am very much of the view that you get what you pay for, and with
software that assumes ever-increasing massive processor power into the
future, you will be paying big bucks for ever sloppier programming that
enjoys the luxury of Java-style memory management.

Abstraction may look nice, but below the cute simplified notation lies
the same bucket of dirty byte-counting procedural programming that is
needed to make abstraction of that style work.
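
For example, whatever the notation on top looks like, a FillRect-style
abstraction still ends up in something like the following plain loop
over pixels somewhere underneath (just a rough sketch of the sort of
thing, not anybody's actual library code):

    #include <stddef.h>
    #include <stdint.h>

    /* fill a w x h block of 32-bit pixels in a surface with the given
       pitch (bytes per scanline): the byte counting that the pretty
       notation ultimately has to pay for */
    static void fill_rect32(uint8_t *surface, size_t pitch,
                            int x, int y, int w, int h, uint32_t colour)
    {
        for (int row = 0; row < h; row++) {
            uint32_t *p =
                (uint32_t *)(surface + (size_t)(y + row) * pitch) + x;
            for (int col = 0; col < w; col++)
                p[col] = colour;
        }
    }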

Usually in game development with a 10-million-dollar budget, the aim is
to get the game running on as many platforms as possible to recover the
development costs, which effectively rules out direct assembler
programming; but that choice comes at a cost if the result requires
hardware beyond what most platforms have.

The lesson to learn is the one taught by Carmack, who could get games to
run on hardware that many of his competitors could not. There is still
no alternative to efficient code, whether it is written in assembler, C,
or something similar.

Regards,

hutch at movsd dot com

From: Evenbit on

randyhyde(a)earthlink.net wrote:
> > http://cr.yp.to/
> >
> Hi Phil,
> > I couldn't find the reference. Which of the links on this page contains
> the sample code?

I found this... http://cr.yp.to/qhasm.html

Nathan.

From: Phil Carmody on
"Evenbit" <nbaker2328(a)charter.net> writes:

> randyhyde(a)earthlink.net wrote:
> > > http://cr.yp.to/
> > >
> > Hi Phil,
> > > I couldn't find the reference. Which of the links on this page contains
> > the sample code?
>
> I found this... http://cr.yp.to/qhasm.html

Yeah, he's not exactly pushing stuff down our throats about it -
it's pretty much all embedded within the description of one
particular application for the assembler.

Example output from qhasm:

http://cr.yp.to/mac/poly1305_athlon.s
http://cr.yp.to/mac/aes_athlon.s

http://cr.yp.to/mac/poly1305_aix.s
http://cr.yp.to/mac/aes_aix.s

http://cr.yp.to/mac/poly1305_ppro.s
http://cr.yp.to/mac/aes_ppro.s

... and everything else is two clicks away from
http://cr.yp.to/mac/speed.html

Phil
--
What is it: is man only a blunder of God, or God only a blunder of man?
-- Friedrich Nietzsche (1844-1900), The Twilight of the Gods
From: Dave on
There are places and times where optimization is required. There are
also times and places where it is not required nor is it desirable.

There are times where the time-to-market factor is primary. I hate to
say it, but Firefox, OpenSSL, Linux, and even Microsoft's products have
bugs, security holes, and inefficiencies. It is not cost-effective, even
for the 'free' products, to keep working on them until they are so
efficient that they run on old 386s with 64MB of memory. Early versions
of Linux worked well on minimal systems, but current distributions such
as SUSE require up-to-date hardware, though not as much as Windows.

If Linux is to stay a factor on the PC, its developers can't spend too
much time chasing efficiency; they need to get working versions out to
the public before it goes the way of Solaris, CP/M, and DOS.

There is a saying about having the serenity to accept what cannot be
changed, the courage to change what can, and the wisdom to know the
difference. The same applies to software.

On 10 Mar 2006 20:57:13 -0800, "hutch--" <hutch(a)movsd.com> wrote:

>[snip]