From: Peter Olcott on
On 6/21/2010 8:07 AM, nmm1(a)cam.ac.uk wrote:
> In article<hvmiqb$vud$1(a)news.eternal-september.org>,
> Walter Bright<walter(a)digitalmars-nospamm.com> wrote:
>> Peter Olcott wrote:
>>> So would you say that it is helpful to make it as fast as possible at
>>> design time, instead of just getting it working, and then later making
>>> it fast? (It also sounds like you might follow this with repeated
>>> iterations of speed improvement re-factorings).
>>
>> If you have the experience to know how to design something for speed,
>> sure, do it. Designing for speed isn't inherently bad; the problem
>> comes from inexperience with the problem domain leading to a poor
>> choice of tradeoffs.
>>
>> I suppose it's like designing an airplane. If you want the airplane to go fast,
>> you've got to make those decisions favoring speed at every step of the design.
>> But you'll still need an experienced aero engineer to get those tradeoffs right.
>> Trying to retrofit speed in later isn't going to end well.
>
> Yes and no. I teach (and generally use) a structured approach to
> design, and it is imperative to design for speed only at the higher
> levels - the lower ones can be fixed up later. For example, from
> the top down:
>
> Precise formulation of problem, constraints and objectives.
> Well, obviously, if you don't include it here, you will have to
> start from scratch.
>
> Generic design of data structures, control flow and potential
> algorithms. This is the level at which it is critical, and often
> forgotten, because adding it later means a redesign.
>
> Specific design of the same. If you don't include it here, you
> will have to rewrite, but that's not a major problem in general, as
> it's just replacing one code/data unit by another, very similar, one
> and fixing up loose ends.
>
> Actual coding and optimisation. Including it initially is a
> Bad Idea, because it often interferes with making the code clean,
> easy to debug, maintainable and robust.
>
> Old fogies like me do quite a lot of the last semi-automatically,
> because back in the 1960s and 1970s we had no option. Quite a lot
> of people had to rewrite simply to get their code small and fast
> enough to run their initial tests! But that was then, and it is
> almost never an issue nowadays.
>
>
> Regards,
> Nick Maclaren.
>

This is a good detailed breakdown of exactly what I mean by aiming for
the ballpark of the fastest possible code: design for speed at every
step of the design. When you implement the design as code, don't bother
to optimize; focus on writing clean code. Implementation is the one
step where further optimization does not require a redesign.
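
As a rough sketch of that split (an invented example, nobody's actual
design), the data structure choice is the design-level speed decision,
while the implementation stays deliberately plain:

  #include <map>
  #include <string>
  #include <vector>

  // Design-level decision: an index keyed by word makes lookup
  // O(log n) instead of a linear scan over all the text. This choice
  // shapes every caller, so it has to be made up front.
  class WordIndex {
  public:
      void add(const std::string& word, int position) {
          index_[word].push_back(position);
      }

      // Deliberately plain, clean implementation; micro-optimization
      // waits until a profiler says it matters.
      const std::vector<int>* find(const std::string& word) const {
          std::map<std::string, std::vector<int> >::const_iterator
              it = index_.find(word);
          return it == index_.end() ? 0 : &it->second;
      }

  private:
      std::map<std::string, std::vector<int> > index_;
  };

The std::map could later be swapped for a hash table without touching
any caller; that is exactly the kind of local rewrite that needs no
redesign.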

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
On 6/21/2010 1:14 AM, Walter Bright wrote:
> Peter Olcott wrote:
>> So would you say that it is helpful to make it as fast as possible at
>> design time, instead of just getting it working, and then later making
>> it fast? (It also sounds like you might follow this with repeated
>> iterations of speed improvement re-factorings).
>
> If you have the experience to know how to design something for speed,
> sure, do it. Designing for speed isn't inherently bad; the problem
> comes from inexperience with the problem domain leading to a poor
> choice of tradeoffs.
>
> I suppose it's like designing an airplane. If you want the airplane
> to go fast, you've got to make those decisions favoring speed at
> every step of the design. But you'll still need an experienced aero
> engineer to get those tradeoffs right. Trying to retrofit speed in
> later isn't going to end well.
>

That was exactly the point I was trying to make with this thread: when
very fast code is known up front to be a primary design goal, designing
speed in from the beginning is much cheaper than just getting it
working and then making it faster later.

{ please do not quote signatures - redacted. -mod }

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Keith H Duggar on
On Jun 21, 10:00 am, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> On 6/21/2010 1:14 AM, Walter Bright wrote:
> > Peter Olcott wrote:
> >> So would you say that it is helpful to make it as fast as possible at
> >> design time, instead of just getting it working, and then later making
> >> it fast? (It also sounds like you might follow this with repeated
> >> iterations of speed improvement re-factorings).
>
> > If you have the experience to know how to design something for
> > speed, sure, do it. Designing for speed isn't inherently bad; the
> > problem comes from inexperience with the problem domain leading to
> > a poor choice of tradeoffs.
>
> > I suppose it's like designing an airplane. If you want the airplane
> > to go fast, you've got to make those decisions favoring speed at
> > every step of the design. But you'll still need an experienced aero
> > engineer to get those tradeoffs right. Trying to retrofit speed in
> > later isn't going to end well.
>
> That was exactly the point I was trying to make with this thread:
> when very fast code is known up front to be a primary design goal,
> designing speed in from the beginning is much cheaper than just
> getting it working and then making it faster later.

Umm ... except that your thread title and various statements
tried to create a (false) distinction between "conventional
wisdom" and "this way" that does not exist. What Walter above
describes is conventional wisdom at least among those who are
educated (or have otherwise learned) proper /engineering/
(software and otherwise).

What you described as "conventional wisdom" is more aptly
termed "naive wisdom" ie an oxymoron. That we are working in
an industry where it has somehow become acceptable for those
without proper education and training to produce our product
(software) is another matter entirely; and it shows.

KHD


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
On 6/21/2010 8:07 AM, Bart van Ingen Schenau wrote:
> On Jun 17, 8:54 pm, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>> Conventional wisdom says to get the code working and then profile it and
>> then make those parts that are too slow faster.
>>
> My understanding is that this wisdom mostly applies to micro-
> optimizations.
>
> When writing software, one of my targets, besides correctness,
> readability and maintainability, is to reach a good efficiency, where
> efficiency is a trade-off between execution speed, memory usage,
> development costs (mostly -time) and implementation constraints
> (preferred language, platform limitations, etc.).
>
> At design time, you select the most efficient algorithm.
> At coding time, you write the most efficient implementation of the
> algorithm.
> Only when it is still not fast enough do you break out the profiler
> and see what further optimizations are needed. Hopefully these are
> only micro-optimizations; otherwise you made a wrong trade-off in the
> earlier stages. And it is only for micro-optimizations that goals
> like maintainability may become less important.
>
> regards,
> Bart v Ingen Schenau
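
Bart's staged approach in miniature (a rough sketch of my own; the
function and the profiling story are invented):

  #include <string>
  #include <vector>

  // Design time: one pass over the input, O(n) in the total size.
  // Coding time: written plainly. The reserve() call is the kind of
  // micro-optimization added only after a profiler showed this
  // function to be hot.
  std::string join(const std::vector<std::string>& parts)
  {
      std::string::size_type total = 0;
      for (std::vector<std::string>::size_type i = 0;
           i != parts.size(); ++i)
          total += parts[i].size();

      std::string result;
      result.reserve(total); // avoids repeated reallocation

      for (std::vector<std::string>::size_type i = 0;
           i != parts.size(); ++i)
          result += parts[i];
      return result;
  }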

I had one employer recently for whom this degree of care was far too
expensive. All of the code was used internally by this one small
company, and it ran in batch mode.

I am thinking that a focus on speed is most appropriate when speed
directly impacts response time and the user sits idle while waiting
for the response. Its importance grows with the number of users, the
length of the response time, and the cost of the users' time.

Millions of software engineers waiting several minutes for a compiler
to finish, 10-100 times a day, would seem to be a good example of a
genuine need for very fast execution.
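
As a back-of-the-envelope illustration (all numbers assumed, purely
for scale):

    1,000,000 engineers
  x        50 builds per day
  x         2 minutes of idle waiting per build
  = 100,000,000 engineer-minutes per day,
    or roughly 1.6 million engineer-hours lost daily.

Against waste on that scale, design-level speed work in the compiler
pays for itself almost immediately.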

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
{ Your quoting is excessive. Please try to include only the minimum
necessary to establish the context. -mod }


On 6/21/2010 6:34 PM, Keith H Duggar wrote:
> On Jun 21, 10:00 am, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>> On 6/21/2010 1:14 AM, Walter Bright wrote:
>> [Walter's reply snipped; it is quoted in full upthread]
>>
>> That was exactly the point I was trying to make with this thread:
>> when very fast code is known up front to be a primary design goal,
>> designing speed in from the beginning is much cheaper than just
>> getting it working and then making it faster later.
>
> Umm ... except that your thread title and various statements
> tried to create a (false) distinction between "conventional
> wisdom" and "this way" that does not exist. What Walter above
> describes is conventional wisdom at least among those who are
> educated (or have otherwise learned) proper /engineering/
> (software and otherwise).
>
> What you described as "conventional wisdom" is more aptly
> termed "naive wisdom" ie an oxymoron. That we are working in
> an industry where it has somehow become acceptable for those
> without proper education and training to produce our product
> (software) is another matter entirely; and it shows.
>
> KHD

What I cited as conventional wisdom has been presented to me as such
countless times by respondents on other, similar forums: essentially,
just get it working without considering speed at all, and then make it
faster as needed. That does sound pretty naive.

However, as Pete Becker so aptly pointed out, it is very often the
case that this degree of focus on performance is far too costly. I had
one employer that required batch programs for internal use, where any
time spent on performance was time wasted.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]