From: Peter Olcott on
> In my opinion, Joshua Maurice sums up what we are discussing
> perfectly in his post
>
> http://groups.google.com/group/comp.lang.c++.moderated/msg/dacba7e87ded4dd7
>

I would agree with this except for the statement that my UTF8.h code was
hackery. None of the many comments indicated that there was anything
substantively wrong with the provided implementation.

> Engineering hinges on reasoning about the goals and resources.
> Once you replace reasoning with mantras it is not engineering.
> As Joshua points out, many such mantras are reactions pushing
> back against equally naive mantras and practices.
>
>> However as Pete Becker so aptly pointed out it is very often the case
>> that the above degree of focus on performance is far too costly. I had
>> one employer that required batch programs for internal use where any
>> time spent on performance was time wasted.
>
> Sure, and a good engineer will determine where time is well put.
> Also, as Walter Bright hinted at, if you have enough experience
> knowing which optimizations will pay off can allow one to rather
> quickly design and implement with those in mind without losing
> much if any productivity even in the short-term. And yeah that
> can often pay big dividends over the project lifetime.
>
> KHD
>

That is almost a restatement of my own position. My position makes the
first priority determining whether, and to what degree, high performance
is necessary, and then providing that degree of performance, especially
at design time. Micro-optimizations can be added later, as needed, once
a good design has been implemented. If you really need high speed, it
must be designed in from the start.


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Peter Olcott on
>> I had one employer recently where this degree of code quality was
>> far too expensive. All of the code was to be used internally by
>> this one small company and it was to be executed in batch mode.
>
> Er, no. As I teach, increasing quality (mainly including
> consistency checking) often doubles the time to get the demonstration
> working, and halves the time until you can actually use the results -
> and that is for researchers running their own code. One extreme
> example was

The owner of this company was not aware of this. Also, I am not talking
about reducing the quality of the code, merely about not spending any
time improving the performance of the code.

> IBM CICS, when it was redesigned using Z - they didn't even start
> coding until a long way through the schedule, and finished ahead of
> schedule with many fewer bugs than budgeted for! Impressive.

Yes, great design directly results in disproportionate reductions in
debugging time.

>
> I agree with you about mere performance - that is far less often an
> issue than is made out. Nowadays.
>
>> I am thinking that a focus on speed may be most appropriate when
>> speed directly impacts response time, and the user is idle while
>> waiting for the response. This is increasingly more important
>> depending on the number of users, the duration of the response
>> time, and the cost of the user's time.
>>
>> Millions of software engineers waiting several minutes for a
>> compiler to finish (10-100 times a day) would seem to provide a
>> good example of a great need for very fast execution.
>
> Definitely NOT. If they spend more than a few percent of their time
> compiling, someone has got it badly wrong. Again, that was not so 40
> years back, but please let's move on!

The original Microsoft Pascal compiler was sixty-fold slower than Turbo
Pascal 3.0: a compile that took three seconds in TP took three minutes
in MS.

>
> I will agree that a lot of the time the people who have got it badly
> wrong are the language and library designers and implementors. With
> both C++ and (modern) Fortran, recompiling a very small amount of
> code can take a disproportionate time. But that's NOT an argument
> for optimising the code, but of simplifying the (language) design to
> allow for genuinely incremental compilation.

Or adapting the compiler design to account for the existing language so
that it can do a better job on incremental compilation.
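
To make the cascade concrete (a sketch with invented names, not code
from any actual compiler): in C++ most of the rebuild cost comes from
header dependencies, and the usual code-side workaround is to hide the
representation behind a forward declaration (the pimpl idiom):

    // widget.h - hypothetical class; clients that include only this
    // header do not recompile when the private representation changes.
    #ifndef WIDGET_H
    #define WIDGET_H
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                    // defined in widget.cpp, where
                                      // Impl is a complete type
        int value() const;
    private:
        struct Impl;                  // forward declaration only
        std::unique_ptr<Impl> impl_;
    };
    #endif

    // widget.cpp - edits here rebuild only this translation unit.
    #include "widget.h"

    struct Widget::Impl {
        int value = 42;               // the representation can change
    };                                // freely without touching widget.h

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;
    int Widget::value() const { return impl_->value; }

That does not make the compiler itself incremental, of course; it only
shrinks the set of translation units that a given edit invalidates.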


From: Peter Olcott on
>> What I cited as conventional wisdom has been cited to me countless times
>> as conventional wisdom by various respondents on other similar forums.
>> Essentially just get it working without considering speed at all, and
>> then later make it faster as needed. This does sound pretty naive.
>
> This is true, but only if applied properly. For example, you can
> (and should) leave improvements to the data representation, the
> implemented algorithms, etc. for later revisions. You can also rewrite
> larger pieces of your software, provided that the new code can fit
> well. But you really need a _proper_ design to start with...
> redesigning the compiler is much more costly than implementing a
> faster parser.
> By "proper" design, I mean that the software solves specified
> goals on the design level, whether those goals include performance,
> scalability, response time, reliability, throughput,
> user experience :-D ...
>
> So, those who cite "conventional wisdom" are saying: "Why do you care
> to write the fastest parser all at once?!?". But they do hope that
> you won't have to change 80% of the whole compiler because you
> suddenly realized that the parser is not modular and has spread
> its roots all over the place. (I'm bad at examples... sorry...)
>
> At least, this is how I interpret Keith's words.
>
> "Conventional wisdom" you mention is there to make you spend more time
> on analyzing, design, refactoring... instead of worrying too much
> prematurely for lower level implementation and representation.
> You _do_ have to worry about those things, too... and know in advance
> what can and cannot be done; but you don't need to write it immediately.
>

What I am saying is that if high speed is a design goal, then it must be
considered throughout the design process and not postponed until later.
This directly contradicts the conventional wisdom that has been related
to me.

I also agree with you that the implementation of a good design can wait
until later for additional improvements as needed. This is exactly what
I mean when I refer to aiming for the ballpark of the fastest code:
design it to be fast, implement it to be simple.
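
As a minimal sketch of that slogan (invented names, nothing from an
actual project): fix the interface at design time, and let the first
implementation be the simple one.

    #include <algorithm>
    #include <string>
    #include <vector>

    // Hypothetical symbol table: the interface is the design decision.
    class SymbolTable {
    public:
        void insert(const std::string& name) { names_.push_back(name); }

        // Implemented to be simple: a linear scan. If profiling later
        // shows this is a bottleneck, the body (and names_) can become
        // a hash table or a sorted vector without changing any caller.
        bool contains(const std::string& name) const {
            return std::find(names_.begin(), names_.end(), name)
                   != names_.end();
        }

    private:
        std::vector<std::string> names_;
    };

The design-time decision is that all lookups go through contains(); how
contains() works remains an implementation detail that can be optimized
later without touching any caller.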



From: Joshua Maurice on
On Jun 22, 11:30 pm, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
> > In my opinion, Joshua Maurice sums up what we are discussing
> > perfectly in his post
>
> > http://groups.google.com/group/comp.lang.c++.moderated/msg/dacba7e87d...
>
> I would agree with this except for the statement that my UTF8.h code
> was hackery. None of the many comments indicated that there was
> anything substantively wrong with the provided implementation.

No offense intended. I'm sure your code is fine quality code. However,
I view this as a micro optimization in the same vein as my coworker
who spent an hour measuring the time of clock() because he was afraid
it would be too slow in a function which allocated 10 MB blocks (or
something). I think both are overkill in a professional setting. It's
hackery in terms of where you put your effort, not in terms of code
quality. If you're doing it just for fun, more power to you.



From: Peter Olcott on
On 6/24/2010 6:04 AM, Joshua Maurice wrote:
> On Jun 22, 11:30 pm, Peter Olcott<NoS...(a)OCR4Screen.com> wrote:
>>> In my opinion, Joshua Maurice sums up what we are discussing
>>> perfectly in his post
>>
>>> http://groups.google.com/group/comp.lang.c++.moderated/msg/dacba7e87d...
>>
>> I would agree with this except for the statement that my UTF8.h code
>> was hackery. None of the many comments indicated that there was
>> anything substantively wrong with the provided implementation.
>
> No offense intended. I'm sure your code is fine quality code. However,
> I view this as a micro optimization in the same vein as my coworker
> who spent an hour measuring the time of clock() because he was afraid
> it would be too slow in a function which allocated 10 MB blocks (or
> something). I think both are overkill in a professional setting. It's
> hackery in terms of where you put your effort, not in terms of code
> quality. If you're doing it just for fun, more power to you.
>
>

I don't see how you can consider this a micro-optimization. There are
numerous aspects that were deliberately left unoptimized:
http://www.ocr4screen.com/UTF8.h

Another advantage of this design is that it directly maps to the
definition of UTF-8, which made it easy to maximize its reliability. A
different implementation that I was considering was 300% faster, yet
kept crashing.

The above code is a concrete example of exactly what I mean by the
ballpark of as fast as possible:
(1) Maximize self-documentation
(2) Minimize inessential complexity
(3) Thereby minimize debugging time

It is the simplicity of this design (the way the code directly maps to
the definition of UTF-8) that provides its high performance.
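
For anyone who does not follow the link, here is a simplified sketch of
what such a direct mapping can look like (an illustration, not the
actual UTF8.h, which is at the URL above): each branch mirrors one row
of the byte-pattern table in the UTF-8 definition (RFC 3629).
Validation of leading bytes, continuation bytes, overlong forms, and
surrogates is elided here; a production decoder must reject them.

    #include <cstdint>

    // Decode one code point starting at s and advance s past it.
    // Each branch corresponds to one byte-pattern row of the UTF-8
    // definition; all validity checking is omitted for brevity.
    inline std::uint32_t DecodeOne(const unsigned char*& s) {
        std::uint32_t b = *s;
        if (b < 0x80) {                    // 0xxxxxxx
            s += 1;
            return b;
        } else if ((b & 0xE0) == 0xC0) {   // 110xxxxx 10xxxxxx
            std::uint32_t cp = ((b & 0x1F) << 6) | (s[1] & 0x3F);
            s += 2;
            return cp;
        } else if ((b & 0xF0) == 0xE0) {   // 1110xxxx 10xxxxxx 10xxxxxx
            std::uint32_t cp = ((b & 0x0F) << 12)
                             | ((s[1] & 0x3F) << 6) | (s[2] & 0x3F);
            s += 3;
            return cp;
        } else {                           // 11110xxx 10xxxxxx
                                           // 10xxxxxx 10xxxxxx
            std::uint32_t cp = ((b & 0x07) << 18)
                             | ((s[1] & 0x3F) << 12)
                             | ((s[2] & 0x3F) << 6) | (s[3] & 0x3F);
            s += 4;
            return cp;
        }
    }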
