From: Peter Olcott on
On 6/20/2010 3:52 AM, Nick Hounsome wrote:
> On 18 June, 20:21, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>> On 6/18/2010 1:19 PM, Nick Hounsome wrote:
>>
>>> On 18 June, 16:24, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>>
>>>>> When has anyone EVER said to you "Make this as fast as possible and
>>>>> damn the cost and delivery date"?
>>>>> It's never happened to me in over 25 years.
>>
>>>> Not as fast as possible. In the ballpark (within at least an order of
>>>> magnitude) of as fast as possible. Also even this goal only applies to
>>>> things such as compilers and operating systems.
>>
>>> I repeat my previous question - How do you know in advance what the
>>> best possible performance is?
>>
>> You can know this in advance of coding, not in advance of design.
>
> No you can't.
> Even in relatively constrained subject areas academics are still
> discovering ever more efficient ways of doing things - For an example
> consider the problem of cracking public-key cryptography by factoring
> the products of large primes. This is an entire area of academic
> research occupying the minds of hundreds of the world's finest brains.
> Your typical app has just you to design it within a timescale
> acceptable to your boss.
>
> Nobody says that you shouldn't code what you believe to be the most
> efficient algorithm. If a computation is at best O(f(n)) then do it in
> O(f(n)), but the actual cost is effectively m*f(n) + c. The whole
> point of the O notation is that for large n you don't even need to
> optimize m and c to be "in the ballpark".

I have found that even within the same big-O there is easily another
ten-fold improvement to be had by minimizing the number of instructions
in the execution path. I am not talking about hand-tweaked assembly
language here; all I am referring to is roughly minimizing the number of
instructions that get executed. I also only do this to the extent that
the resulting faster code is just as readable as the prior, slower code.
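
To make that concrete, here is a purely illustrative (made-up)
word-count fragment. Both functions are O(n log n) for n tokens, yet
the second executes far fewer instructions per token simply by avoiding
needless copies and redundant map lookups:

    #include <map>
    #include <string>
    #include <vector>

    // Illustration only: count how often each token occurs.
    // Both functions are O(n log n) in the number of tokens, but the
    // second executes far fewer instructions per token.

    std::map<std::string, int>
    countSlow(std::vector<std::string> tokens)         // copies the whole vector
    {
        std::map<std::string, int> counts;
        for (std::vector<std::string>::size_type i = 0; i != tokens.size(); ++i)
        {
            std::string key = tokens[i];               // copies every token
            if (counts.find(key) == counts.end())      // lookup #1
                counts[key] = 0;                       // lookup #2 (new keys only)
            counts[key] = counts[key] + 1;             // lookups #3 and #4
        }
        return counts;
    }

    std::map<std::string, int>
    countFast(const std::vector<std::string>& tokens)  // no copy of the input
    {
        std::map<std::string, int> counts;
        for (std::vector<std::string>::const_iterator it = tokens.begin();
             it != tokens.end(); ++it)
            ++counts[*it];                             // single lookup per token
        return counts;
    }

Nothing clever is going on, and the faster version is, if anything, the
more readable of the two.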

>
>> Also we are not seeking the best possible performance, only the ballpark
>> of the best possible performance. The ballpark of the best possible
>> performance can be defined by the following heuristic:
>>
>> There are only a finite number of ways to achieve any specific
>> functional result. Simply choose the one with the fewest instructions in
>> the execution path.
>
> 1) Disregarding time and memory limitations, there are actually an
> infinite number of ways to achieve any specific functional result.

Although this may be literally true, since one can always pad a program
with yet another useless instruction that contributes nothing toward the
desired end result, in practice it is not.

In practice there are typically only a few basic approaches that can
reasonably derive any desired small unit of functionality. One need only
choose the best of these and then implement it by roughly minimizing the
instructions in the execution path. The key here is decomposing the
problem into very small units of functionality.
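
As a made-up example of such a small unit: formatting an unsigned
integer as decimal text. The basic approaches I can think of are an
ostringstream, sprintf(), or a hand-rolled digit loop, and each does the
same job with a noticeably different number of executed instructions:

    #include <cstdio>
    #include <sstream>
    #include <string>

    // Illustration only: three approaches to one small unit of functionality.

    std::string viaStream(unsigned n)      // most convenient, most instructions
    {
        std::ostringstream os;
        os << n;
        return os.str();
    }

    std::string viaSprintf(unsigned n)     // fewer instructions
    {
        char buf[32];
        std::sprintf(buf, "%u", n);
        return std::string(buf);
    }

    std::string viaDigitLoop(unsigned n)   // fewest instructions
    {
        char buf[32];
        char* p = buf + sizeof(buf);
        do {
            *--p = char('0' + n % 10);     // emit digits from the right
            n /= 10;
        } while (n != 0);
        return std::string(p, buf + sizeof(buf));
    }

Enumerating those few approaches and picking one takes minutes, not
hours.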

> 2) Even if there were only a finite number how do you know which has
> the fewest instructions without enumerating them?

You probably do have to enumerate the small set of basic approaches. I
have never found this set to be larger than six.

> 3) How do you know how many instructions are in the execution path
> without disassembling actual code?

It is most often not worth that degree of effort; simply minimize the
C++ source-code instructions. I myself do generate assembly language for
my most time-critical code and examine certain crucial parts of it.
Profiling won't help there, because what I am looking for is
compiler-generated code that is not really needed.
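
For anyone who wants to try the same thing, one way to get at the
generated code (assuming g++; other compilers have equivalent switches,
and critical.cpp is just a placeholder name) is:

    g++ -O2 -S -fverbose-asm critical.cpp -o critical.s
    g++ -O2 -c critical.cpp && objdump -d -C critical.o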

> - Here's an idea - You could profile the
> code and the time results are a surrogate for the number of
> instructions ;-)
>
>


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
On 6/20/2010 3:50 AM, Walter Bright wrote:
> Bo Persson wrote:
>> Peter Olcott wrote:
>>> For example how fast is fast enough for a compiler? In the case of
>>> the time it takes for a compiler to compile, any reduction in time
>>> is a reduction of wasted time.
>>>
>>
>> Yes, but you still have to find out where it spends most of its time.
>> Is it disk I/O? Parsing the source? Symbol table management?
>> Instantiating templates? Something else?
>>
>> It doesn't help a bit if your parsing is lightning fast, if that just
>> means the compiler has to wait longer for the next I/O operation to
>> complete.
>
>
> I may have something to say here, having been building (very fast)
> compilers for 30 years now. Making every part of the compiler fast as
> hell pays off. It pays off because you have many thousands of people
> using it, sitting around at $$$$/hr twiddling their thumbs reading
> usenet while waiting for their compiler to finish, and they do this
> 10's or 100's of times a day. Every tenth of a second you can shave
> off pays off.
>
> Parsing speed *is* the gating factor in compiling C code. If it is
> dominated by I/O, you need to revisit the I/O code. (iostreams is
> miserably slow, even fgetc is poor.)
>
> Compiling D code tends to be more gated by I/O speed since the
> language is designed for fast parsing. I've managed to squeeze more
> out of this by running the I/O and the parsing in parallel, a
> technique that won't work compiling C and C++. But if you've got a
> multiprocessing make program, even with C/C++ you can get the I/O in
> parallel with the parsing, and then there's no escaping the penalty
> for a slow parser.
>
> Yes, I've spent many hours profiling compilers. I/O, parsing, symbol
> tables, and instantiating templates all show up as big time consumers
> in the profile.
>
> How fast is fast enough? From experience: users notice it when you
> double the compile speed, even though you get there an agonizing
> tenth of a second at a time. If you can get a 10x improvement, it's a
> game changer.
>
> ---
> Walter Bright
> free C, C++, and D programming language compilers (Javascript too!)
> http://www.digitalmars.com

{ reminder: please do not quote signatures. thanks. -mod }

So would you say that it is helpful to make it as fast as possible at
design time, instead of just getting it working, and then later making
it fast? (It also sounds like you might follow this with repeated
iterations of speed improvement re-factorings).
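
(Just to check that I follow the run-the-I/O-in-parallel-with-the-parsing
point: I picture something roughly like the sketch below, using C++0x's
std::async to read the next translation unit while the current one is
being parsed. This is only my guess at the structure, not your actual
code, and all the names are made up.)

    #include <cstddef>
    #include <fstream>
    #include <future>
    #include <sstream>
    #include <string>
    #include <vector>

    // Read one whole source file into memory.
    std::string readFile(std::string path)
    {
        std::ifstream in(path.c_str(), std::ios::binary);
        std::ostringstream buf;
        buf << in.rdbuf();
        return buf.str();
    }

    void parse(const std::string& /*source*/)
    {
        // ... lexing and parsing of one translation unit ...
    }

    void compileAll(const std::vector<std::string>& files)
    {
        if (files.empty())
            return;
        // Kick off the first read, then always keep the next read in
        // flight while the current file is being parsed.
        std::future<std::string> next =
            std::async(std::launch::async, readFile, files[0]);
        for (std::size_t i = 0; i < files.size(); ++i)
        {
            std::string source = next.get();           // wait for file i
            if (i + 1 < files.size())
                next = std::async(std::launch::async, readFile, files[i + 1]);
            parse(source);                             // overlaps with the next read
        }
    }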


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Walter Bright on
Peter Olcott wrote:
> So would you say that it is helpful to make it as fast as possible at
> design time, instead of just getting it working, and then later making
> it fast? (It also sounds like you might follow this with repeated
> iterations of speed improvement re-factorings).

If you have the experience to know how to design something for speed, sure, do
it. Designing for speed isn't inherently bad; the problem comes when
inexperience with the problem domain leads to a poor choice of tradeoffs.

I suppose it's like designing an airplane. If you want the airplane to go fast,
you've got to make those decisions favoring speed at every step of the design.
But you'll still need an experienced aero engineer to get those tradeoffs right.
Trying to retrofit speed in later isn't going to end well.

---
Walter Bright
free C, C++, and D programming language compilers (Javascript too!)
http://www.digitalmars.com

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: nmm1 on
In article <hvmiqb$vud$1(a)news.eternal-september.org>,
Walter Bright <walter(a)digitalmars-nospamm.com> wrote:
>Peter Olcott wrote:
>> So would you say that it is helpful to make it as fast as possible at
>> design time, instead of just getting it working, and then later making
>> it fast? (It also sounds like you might follow this with repeated
>> iterations of speed improvement re-factorings).
>
>If you have the experience to know how to design something for speed, sure, do
>it. Designing for speed isn't inherently bad; the problem comes when
>inexperience with the problem domain leads to a poor choice of tradeoffs.
>
>I suppose it's like designing an airplane. If you want the airplane to go fast,
>you've got to make those decisions favoring speed at every step of the design.
>But you'll still need an experienced aero engineer to get those tradeoffs right.
>Trying to retrofit speed in later isn't going to end well.

Yes and no. I teach (and generally use) a structured approach to
design, and it is imperative to design for speed only at the higher
levels - the lower ones can be fixed up later. For example, from
the top down:

Precise formulation of problem, constraints and objectives.
Well, obviously, if you don't include it here, you will have to
start from scratch.

Generic design of data structures, control flow and potential
algorithms. This is the level at which it is critical, and often
forgotten, because adding it later means a redesign.

Specific design of the same. If you don't include it here, you
will have to rewrite, but that's not a major problem in general, as
it's just replacing one code/data unit by another, very similar, one
and fixing up loose ends.

Actual coding and optimisation. Including it initially is a
Bad Idea, because it often interferes with making the code clean,
easy to debug, maintainable and robust.

Old fogies like me do quite a lot of the last semi-automatically,
because back in the 1960s and 1970s we had no option. Quite a lot
of people had to rewrite simply to get their code small and fast
enough to run their initial tests! But that was then, and it is
almost never an issue nowadays.


Regards,
Nick Maclaren.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Bart van Ingen Schenau on
On Jun 17, 8:54 pm, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> Conventional wisdom says to get the code working and then profile it and
> then make those parts that are too slow faster.
>
My understanding is that this wisdom mostly applies to micro-
optimizations.

When writing software, one of my targets, besides correctness,
readability and maintainability, is to reach good efficiency, where
efficiency is a trade-off between execution speed, memory usage,
development cost (mostly development time) and implementation
constraints (preferred language, platform limitations, etc.).

At design time, you select the most efficient algorithm.
At coding time, you write the most efficient implementation of the
algorithm.
And only when it is still not fast enough do you break out the profiler
to see what further optimizations are needed. Hopefully these are only
micro-optimizations; otherwise you made a wrong trade-off in the earlier
stages. And it is only for micro-optimizations that goals like
maintainability may become less important.

regards,
Bart v Ingen Schenau


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]