From: nmm1 on
In article <b7mdnUm0cN66woLRnZ2dnUVZ_vmdnZ2d(a)giganews.com>,
Peter Olcott <peteolcott(a)gmail.com> wrote:
>On 6/21/2010 8:07 AM, Bart van Ingen Schenau wrote:
>>
>>> Conventional wisdom says to get the code working and then profile it and
>>> then make those parts that are too slow faster.
>>>
>> My understanding is that this wisdom mostly applies to micro-
>> optimizations.
>>
>> When writing software, one of my targets, besides correctness,
>> readability and maintainability, is to reach a good efficiency, where
>> efficiency is a trade-off between execution speed, memory usage,
>> development costs (mostly -time) and implementation constraints
>> (preferred language, platform limitations, etc.).
>>
>> At design time, you select the most efficient algorithm.
>> At coding time, you write the most efficient implementation of the
>> algorithm.
>> And only when it is still not fast enough, you break out the profiler
>> and see what further optimizations are needed. Hopefully only micro-
>> optimizations, or you made a wrong trade-off in the earlier stages.
>> And it is only for micro-optimizations that goals like maintainability
>> may become less important.
>
>I had one employer recently where this degree of code quality was far
>too expensive. All of the code was to be used internally by this one
>small company and it was to be executed in batch mode.

Er, no. As I teach, increasing quality (mainly including consistency
checking) often doubles the time to get the demonstration working,
and halves the time until you can actually use the results - and that
is for researchers running their own code. One extreme example was
IBM CICS, when it was redesigned using Z - they didn't even start
coding until a long way through the schedule, and finished ahead
of schedule with many fewer bugs than budgeted for! Impressive.

I agree with you about mere performance - that is far less often an
issue than is made out. Nowadays.

>I am thinking that a focus on speed may be most appropriate when speed
>directly impacts response time, and the user is idle while waiting for
>the response. This is increasingly more important depending on the
>number of users, the duration of the response time, and the cost of the
>user's time.
>
>Millions of software engineers waiting several minutes for a compiler to
>finish (10-100 times a day) would seem to provide a good example of a
>great need for very fast execution.

Definitely NOT. If they spend more than a few percent of their time
compiling, someone has got it badly wrong. Again, that was not so
40 years back, but please let's move on!

I will agree that a lot of the time the people who have got it badly
wrong are the language and library designers and implementors. With
both C++ and (modern) Fortran, recompiling a very small amount of
code can take a disproportionate time. But that's NOT an argument
for optimising the code, but for simplifying the (language) design
to allow for genuinely incremental compilation.
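
To make that concrete (a toy sketch, with invented file and class
names, not anything from a real code base): when a class keeps its
private members in a widely-included header, touching them recompiles
every client. The usual C++ workaround is to hide them behind a
forward declaration - the so-called "pimpl" idiom - so that an edit
rebuilds only a single translation unit:

  // widget.h -- clients include only this header; a change to the
  // hidden implementation does NOT force them to recompile.
  class WidgetImpl;                 // forward declaration only

  class Widget {
  public:
      Widget();
      ~Widget();
      int value() const;
  private:
      WidgetImpl* impl;             // "pimpl": details live in widget.cpp
  };

  // widget.cpp -- the only translation unit that sees WidgetImpl,
  // so edits to the implementation rebuild just this one file.
  class WidgetImpl {
  public:
      int value;
      WidgetImpl() : value(42) {}
  };

  Widget::Widget() : impl(new WidgetImpl) {}
  Widget::~Widget() { delete impl; }
  int Widget::value() const { return impl->value; }

But that is a workaround in the source; it does nothing to fix the
underlying compilation model.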


Regards,
Nick Maclaren.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Bart van Ingen Schenau on
On Jun 22, 1:33 am, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> On 6/21/2010 8:07 AM, Bart van Ingen Schenau wrote:
>
> > When writing software, one of my targets, besides correctness,
> > readability and maintainability, is to reach a good efficiency, where
> > efficiency is a trade-off between execution speed, memory usage,
> > development costs (mostly -time) and implementation constraints
> > (preferred language, platform limitations, etc.).
>
> > At design time, you select the most efficient algorithm.
> > At coding time, you write the most efficient implementation of the
> > algorithm.
> > And only when it is still not fast enough, you break out the profiler
> > and see what further optimizations are needed. Hopefully only micro-
> > optimizations, or you made a wrong trade-off in the earlier stages.
> > And it is only for micro-optimizations that goals like maintainability
> > may become less important.
>
> > regards,
> > Bart v Ingen Schenau

{ ^ such a closing greeting and signature should have been removed
from the quoting in the first place. -mod }

>
> I had one employer recently where this degree of code quality was far
> too expensive.

Somehow I doubt that. Note that I have also factored the development
cost into the efficiency measure. If they don't have the budget to do an
evaluation of the existing algorithms, then they will get what my
experience tells me is best and can be implemented within the given
constraints.
I will not implement a sub-optimal algorithm if I know I can implement
a better one within the same time-frame.

> All of the code was to be used internally by this one
> small company and it was to be executed in batch mode.
>
> I am thinking that a focus on speed may be most appropriate when speed
> directly impacts response time, and the user is idle while waiting for
> the response. This is increasingly more important depending on the
> number of users, the duration of the response time, and the cost of the
> user's time.

But why would you knowingly make something that is designed as a
batch job inefficient?
Perhaps someone will someday try to run the job interactively.
Probably the amount of data being fed into the process will increase
over time, which might mean that an inefficient process causes the
nightly run to take more than a night.
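
A toy illustration (an invented example, not anything from a real
project) of why that matters: removing duplicates from the nightly
batch. The quadratic version below is harmless at ten thousand
records, but roughly a hundred times slower when the feed grows
tenfold; the sort-based one grows only a little faster than the data
itself.

  #include <algorithm>
  #include <vector>

  // O(n^2): fine while the batch is small, but 10x the data means
  // roughly 100x the running time - enough to blow the nightly window.
  std::vector<int> unique_naive(const std::vector<int>& in)
  {
      std::vector<int> out;
      for (std::vector<int>::size_type i = 0; i < in.size(); ++i)
          if (std::find(out.begin(), out.end(), in[i]) == out.end())
              out.push_back(in[i]);
      return out;
  }

  // O(n log n): the same job, and the cost grows only slightly
  // faster than the data does.
  std::vector<int> unique_sorted(std::vector<int> in)
  {
      std::sort(in.begin(), in.end());
      in.erase(std::unique(in.begin(), in.end()), in.end());
      return in;
  }

Choosing the second version costs nothing extra at design time, which
is exactly the kind of efficiency trade-off I mean.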

Bart v Ingen Schenau


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Keith H Duggar on
On Jun 22, 8:31 am, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> On 6/21/2010 6:34 PM, Keith H Duggar wrote:
> > What you described as "conventional wisdom" is more aptly
> > termed "naive wisdom" ie an oxymoron. That we are working in
> > an industry where it has somehow become acceptable for those
> > without proper education and training to produce our product
> > (software) is another matter entirely; and it shows.
>
> What I cited as conventional wisdom has been cited to me countless times
> as conventional wisdom by various respondents on other similar forums.
> Essentially just get it working without considering speed at all, and
> then later make it faster as needed. This does sound pretty naive.

In my opinion, Joshua Maurice sums up what we are discussing
perfectly in his post

http://groups.google.com/group/comp.lang.c++.moderated/msg/dacba7e87ded4dd7

Engineering hinges on reasoning about the goals and resources.
Once you replace reasoning with mantras it is not engineering.
As Joshua points out, many such mantras are reactions pushing
back against equally naive mantras and practices.

> However as Pete Becker so aptly pointed out it is very often the case
> that the above degree of focus on performance is far too costly. I had
> one employer that required batch programs for internal use where any
> time spent on performance was time wasted.

Sure, and a good engineer will determine where time is well spent.
Also, as Walter Bright hinted at, if you have enough experience,
knowing which optimizations will pay off can allow one to rather
quickly design and implement with those in mind without losing
much, if any, productivity even in the short term. And yeah, that
can often pay big dividends over the project lifetime.

KHD

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Dragan Milenkovic on
Peter Olcott wrote:
> On 6/21/2010 6:34 PM, Keith H Duggar wrote:
>>
>> Umm ... except that your thread title and various statements
>> tried to create a (false) distinction between "conventional
>> wisdom" and "this way" that does not exist. What Walter above
>> describes is conventional wisdom at least among those who are
>> educated (or have otherwise learned) proper /engineering/
>> (software and otherwise).
>>
>> What you described as "conventional wisdom" is more aptly
>> termed "naive wisdom" ie an oxymoron. That we are working in
>> an industry where it has somehow become acceptable for those
>> without proper education and training to produce our product
>> (software) is another matter entirely; and it shows.
>>
>> KHD
>>
>>
>
> What I cited as conventional wisdom has been cited to me countless times
> as conventional wisdom by various respondents on other similar forums.
> Essentially just get it working without considering speed at all, and
> then later make it faster as needed. This does sound pretty naive.

This is true, but only if applied properly. For example, you can
(and should) leave improving the data representation, the implemented
algorithms, etc. for later revisions. You can also rewrite larger
pieces of your software, provided that the new code can fit in well.
But you really need a _proper_ design to start with... redesigning
the compiler is much more costly than implementing a faster parser.
By "proper" design, I mean that the software meets its specified
goals at the design level, whether those goals include performance,
scalability, response time, reliability, throughput,
user experience :-D ...

So, those who cite "conventional wisdom" are saying: "Why do you care
about writing the fastest parser all at once?!?" But they do hope that
you won't have to change 80% of the whole compiler because you
suddenly realize that the parser is not modular and has spread
its roots all over the place. (I'm bad at examples... sorry...)

At least, this is how I interpret Keith's words.

"Conventional wisdom" you mention is there to make you spend more time
on analyzing, design, refactoring... instead of worrying too much
prematurely for lower level implementation and representation.
You _do_ have to worry about those things, too... and know in advance
what can and cannot be done; but you don't need to write it immediately.
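
To make the parser example a little more concrete (a rough sketch
only, all names invented): if the rest of the compiler talks to the
parser through a small interface, a slow-but-correct first version
can later be replaced by a faster one without touching the other 80%
of the code.

  #include <string>

  struct Ast { /* ... */ };

  class Parser {
  public:
      virtual ~Parser() {}
      virtual Ast parse(const std::string& source) = 0;
  };

  // Correct but slow; good enough to get everything else working.
  class SimpleParser : public Parser {
  public:
      Ast parse(const std::string&) { return Ast(); }
  };

  // Written later, once profiling says the parser matters; nothing
  // else in the compiler has to change.
  class TableDrivenParser : public Parser {
  public:
      Ast parse(const std::string&) { return Ast(); }
  };

  // The driver depends only on the interface, so swapping parsers is
  // a one-line change where the concrete parser is constructed.
  Ast compile(Parser& parser, const std::string& source)
  {
      return parser.parse(source);
  }

That is the whole point of a proper design: the expensive rewrite
never has to happen.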

--
Dragan

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
>> All of the code was to be used internally by this one
>> small company and it was to be executed in batch mode.
>>
>> I am thinking that a focus on speed may be most appropriate when speed
>> directly impacts response time, and the user is idle while waiting for
>> the response. This is increasingly more important depending on the
>> number of users, the duration of the response time, and the cost of the
>> user's time.
>
> But why would you make something that is designed as a batch job
> knowingly inefficient?

You simply don't spend any time on making it efficient.

> Perhaps someone will someday try to run the job interactively.
> Probably the amount of data being fed into the process will increase
> over time, which might mean that an inefficient process causes the
> nightly run to take more than a night.
>
> Bart v Ingen Schenau
>
>

All of this was moot to the employer in question. If time had been
taken to make the code fast, the whole project would have become
infeasibly expensive. Take a look at what Pete Becker said about this;
he made a good point.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]