From: Nick Hounsome on
On 18 June, 16:24, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:

> > When has anyone EVER said to you "Make this as fast as possible and
> > damn the cost and delivery date"?
> > It's never happened to me in over 25 years.
>
> Not as fast as possible. In the ballpark (within at least an order of
> magnitude) of as fast as possible. Also even this goal only applies to
> things such as compilers and operating systems.

I repeat my previous question - How do you know in advance what the
best possible performance is?
If you don't know then how do you know when you're within an order of
magnitude?
Contrariwise if you DO know and you choose a suboptimal algorithm then
presumably you have a good reason for doing so.
In neither case do you need to depart from "conventional wisdom".

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
On 6/18/2010 10:32 AM, Pete Becker wrote:
> On 2010-06-17 15:15:16 -0400, Peter Olcott said:
>
>> On 6/17/2010 5:40 PM, Bo Persson wrote:
>>> Peter Olcott wrote:
>>>> Conventional wisdom says to get the code working and then profile
>>>> it and then make those parts that are too slow faster.
>>>>
>>>> What if the original goal is to make the code in the ball park of as
>>>> fast as possible?
>>>>
>>>> For example how fast is fast enough for a compiler? In the case of
>>>> the time it takes for a compiler to compile, any reduction in time
>>>> is a reduction of wasted time.
>>>>
>>>
>>> Yes, but you still have to find out where it spends most of its time.
>>> Is it disk I/O? Parsing the source? Symbol table management?
>>> Instantiating templates? Something else?
>>>
>>
>> I just make sure that it does not spend much more time than necessary
>> anywhere.
>
> The danger is that you can end up spending far more of your own time
> than necessary because you're optimizing things that have no significant
> effect on performance. This isn't the olden days when it made sense to
> spend a great deal of programmer time at $5 per hour to reduce CPU time
> that cost $200 per minute (IBM 7094 in 1970, if I remember correctly).
> CPU time is cheap and programmer time is not.
>

Yes, this is true. I am only making this recommendation for senior
developers who can tell in advance what kinds of things tend to have a
large impact on performance. Also, the focus is on getting a fast basic
approach, not on optimizing every minute detail.

This advice is also restricted to software for which there is no
threshold of "fast enough", such as compilers and operating systems,
where any increase in speed directly results in a decrease of wasted time.



From: Peter Olcott on
On 6/18/2010 1:19 PM, Nick Hounsome wrote:
> On 18 June, 16:24, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>
>>> When has anyone EVER said to you "Make this as fast as possible and
>>> damn the cost and delivery date"?
>>> It's never happened to me in over 25 years.
>>
>> Not as fast as possible. In the ballpark (within at least an order of
>> magnitude) of as fast as possible. Also even this goal only applies to
>> things such as compilers and operating systems.
>
> I repeat my previous question - How do you know in advance what the
> best possible performance is?

You can know this in advance of coding, not in advance of design.

Also we are not seeking the best possible performance, only the ballpark
of the best possible performance. The ballpark of the best possible
performance can be defined by the following heuristic:

There are only a finite number of ways to achieve any specific
functional result. Simply choose the one with the fewest instructions in
the execution path.

> If you don't know then how do you know when you're within an order of
> magnitude?
> Contrariwise if you DO know and you choose a suboptimal algorithm then
> presumably you have a good reason for doing so.
> In neither case do you need to depart from "conventional wisdom".
>



From: Nick Hounsome on
On 18 June, 20:21, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> On 6/18/2010 1:19 PM, Nick Hounsome wrote:
>
> > On 18 June, 16:24, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>
> >>> When has anyone EVER said to you "Make this as fast as possible and
> >>> damn the cost and delivery date"?
> >>> It's never happened to me in over 25 years.
>
> >> Not as fast as possible. In the ballpark (within at least an order of
> >> magnitude) of as fast as possible. Also even this goal only applies to
> >> things such as compilers and operating systems.
>
> > I repeat my previous question - How do you know in advance what the
> > best possible performance is?
>
> You can know this in advance of coding, not in advance of design.

No you can't.
Even in relatively constrained subject areas, academics are still
discovering ever more efficient ways of doing things. For an example,
consider the problem of cracking public-key cryptography by factoring
the products of large primes. This is an entire area of academic
research occupying the minds of hundreds of the world's finest brains.
Your typical app has just you to design it, within a timescale
acceptable to your boss.

Nobody says that you shouldn't code what you believe to be the most
efficient algorithm. If a computation is at best O(f(n)) then do it in
O(f(n)), but the actual cost is effectively m*f(n) + c. The whole point
of the O notation is that for large n you don't even need to optimize m
and c to be "in the ballpark".

> Also we are not seeking the best possible performance, only the ballpark
> of the best possible performance. The ballpark of the best possible
> performance can be defined by the following heuristic:
>
> There are only a finite number of ways to achieve any specific
> functional result. Simply choose the one with the fewest instructions in
> the execution path.

1) Disregarding time and memory limitations, there are actually an
infinite number of ways to achieve any specific functional result.
2) Even if there were only a finite number, how do you know which has
the fewest instructions without enumerating them all?
3) How do you know how many instructions are in the execution path
without disassembling the actual code? Here's an idea: you could
profile the code, and the time results are a surrogate for the number
of instructions ;-)



From: Walter Bright on
Bo Persson wrote:
> Peter Olcott wrote:
>> For example how fast is fast enough for a compiler? In the case of
>> the time it takes for a compiler to compile, any reduction in time
>> is a reduction of wasted time.
>>
>
> Yes, but you still have to find out where it spends most of its time.
> Is it disk I/O? Parsing the source? Symbol table management?
> Instantiating templates? Something else?
>
> It doesn't help a bit if your parsing is lightning fast, if that just
> means the compiler has to wait longer for the next I/O operation to
> complete.


I may have something to say here, having been building (very fast) compilers for
30 years now. Making every part of the compiler fast as hell pays off. It pays
off because you have many thousands of people using it, sitting around at
$$$$/hr twiddling their thumbs reading usenet while waiting for their compiler
to finish, and they do this tens or hundreds of times a day. Every tenth of a
second you can shave off pays off.

Parsing speed *is* the gating factor in compiling C code. If it is dominated by
I/O, you need to revisit the I/O code. (iostreams is miserably slow, even fgetc
is poor.)

Compiling D code tends to be more gated by I/O speed since the language is
designed for fast parsing. I've managed to squeeze more out of this by running
the I/O and the parsing in parallel, a technique that won't work compiling C and
C++. But if you've got a multiprocessing make program, even with C/C++ you can
get the I/O in parallel with the parsing, and then there's no escaping the
penalty for a slow parser.

Yes, I've spent many hours profiling compilers. I/O, parsing, symbol tables, and
instantiating templates all show up as big time consumers in the profile.

How fast is fast enough? From experience: users notice it when you double the
compile speed, even though you get there an agonizing tenth of a second at a
time. If you can get a 10x improvement, it's a game changer.

---
Walter Bright
free C, C++, and D programming language compilers (Javascript too!)
http://www.digitalmars.com

