From: Joseph M. Newcomer on
Do you know the "guard page" trick?

This deals with the fact that, for some algorithms, the loop bounds check is a
significant fraction of the cost of the entire loop. So the trick is to put an unreadable page just
beyond the last element of the vector you are scanning, and write a loop of the form:

for (int i = 0; ; i++)
    ...data[i]...

When the exception is taken on hitting the guard page, you have let the hardware do
the loop bounds check; the rest of the time it costs you nothing, because there is no
compare-i-to-limit code. This technique was developed for supercomputers in the early
1970s. It has the advantage that there is no branch prediction to worry about, and
consequently nothing that breaks the prefetch pipe, so the loop really screams.
But you have to use __try/__except to catch the boundary condition. This is actually a
legitimate use of the technique, and was used a lot. Windows even supports guard pages
(see VirtualAlloc), so it can still be used today.

And yes, the other usage is when you can have user-written code that throws exceptions
which you must catch to avoid ungraceful termination of the entire app. (I did this with an
XLISP interpreter back in 1987 or so, and even implemented the equivalent of guard pages in
MS-DOS for handling stack overflow if the user wrote an XLISP recursion that used up all
the stack, which was no easy trick.) I first used this trick under TSS/360, where we built an
interpretive language, around 1968. And I had to catch "exceptions" from the FORTRAN
runtime, which was a horrible example of some of the worst coding in the world (a
subroutine could do the equivalent of exit(1), which is always a mistake in ANY library!)
joe

On Thu, 25 Mar 2010 08:18:27 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Joseph M. Newcomer wrote:
>
>>> The #1 exploit hackers look for when overloading an input source
>>> looking for the right size that would trigger maybe a:
>>>
>>> EXCEPTION_CONTINUE_EXECUTION
>> ****
>> This is a value which tells the stack unwinder what to do, it is itself, not an exception.
>> Read about __try/__except to understand what this symbol means.
>
>> ****
>
>
>
>Right joe, it is the return value for the RTL exception handler. The
>possible return values are:
>
> EXCEPTION_CONTINUE_SEARCH
> EXCEPTION_CONTINUE_EXECUTION
> EXCEPTION_EXECUTE_HANDLER
>
>The point was that hackers are looking for poorly programmed
>applications that rely on TRY/EXCEPT exception and error trapping to
>solve their run-time problems by attempting to recover.
>
>I philosophically try to avoid it, and try to avoid any library that
>forces its usage. I prefer boolean functionality and logic flow over
>using exception handling for normal logic flow. All errors MUST be
>understood - that's the essence of black-box interfacing and getting a
>high degree of quality assurance.
>
>But when you have application server products where customers can
>write their own p-code applications for their operations, or allow 3rd
>party applications to run, your RTE has to behave like an OS: trap
>run-time exceptions and try to keep their hosting server running,
>producing your own little Dr. Watson report. We have yet to get the
>cojones to do "spy ware" to send reports to our HQ servers like MS
>does. :)
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Wed, 24 Mar 2010 22:24:03 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:tnjlq55gn872v0p54bmd89p35hvjct0i18(a)4ax.com...
>> See below...
>>
>> On Wed, 24 Mar 2010 20:07:32 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>message
>>>news:uIE36l6yKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>>>> Peter Olcott wrote:
>>>>
>>>>
>>>>> I learned this from an email from Ward Cunningham the
>>>>> inventor of CRC cards. I asked him if he could only
>>>>> choose a single criterion measure of code quality what
>>>>> would it be. He said code size, eliminate redundant
>>>>> code.
>>>>
>>>> He probably meant about reusability Peter.
>>>
>>>Yes and code size (lines-of-code) quantified the degree of
>>>re-use.
>> ****
>> Back when I was at the SEI we studied metrics used to
>> measure reuse, and lines of code was
>> the least useful and poorest predictor. I spent a couple
>> years looking at this problem,
>
>He actually said code size meaning source code size, and I
>am sure that he did not mean to make all variable names very
>short, because this would make the program more complex and
>not be a measure of the degree of re-use.
****
I'm sorry, I missed where "length of variable names" became a relevant concept here.
Please explain why this came up.

Note that the abstract measure of complexity is neither identifier length nor lines of
code, but "lexemes". And the measure of complexity of a statement is lexemes per line, which
back in the early 1960s was essentially 3 for most real programs. With newer languages it
is probably slightly higher. But you were blathering about code size as if it had meaning,
which we established 20 years ago it does not have. Read any of the classic texts
on software reuse that came out in the mid-1980s.
joe
****
>
>> talking to some of the key designers and managers in the
>> field, and generally they would
>> say "we use lines of code, but nobody trusts it" and give
>> several cogent reasons it was
>> meaningless as a metric of reuse, productivity, or
>> anything else.
>
>It is a meaningless measure of programmer productivity, when
>it is taken to be that more is better. It is not as
>meaningless as a measure of re-use when it is taken that
>less is better.
****
Actually, you have no idea what you are talking about. I spent years studying this
problem, and I know you are spouting gibberish here. Did you know the Japanese "software
factory" measures reuse by counting the lines of subroutine libraries as part of the
"productivity" of a programmer? And do you know why this actually has more validity than
raw line count? I do. I talked with them in 1986. Did you? I have the information
first-hand. I even know why they are a bit distrustful of these numbers.
****
>
>What other quantifiable object measure of the specific
>degree of re-use would you propose?
****
There are no good metrics. We already know this. Read, for example, the oeuvre of Barry
Boehm (several books, including the famous "spiral model" software development approach).
Read the papers he cites. Go to software engineering conferences (at least those that
took place in the mid-1980s where we cared about these things). LEARN SOMETHING!
joe
****
>
>> ****
>>>
>>>>
>>>> In programming, you can code for size or speed.
>>>> Redundant
>>>> code is faster because you reduce stack overhead. When
>>>> you code for size, you are reusing code which has stack
>>>> overhead.
>>>
>>>No that is not quite it. Fewer lines-of-code are fewer
>>>lines-of-code that you ever have to deal with. By
>>>maximizing
>>>re-use changes get propagated with fewer changes.
>> ****
>> The old "top-down" programming argument. It doesn't
>> actually work, but Dijkstra made it
>> sound cool. It looks good until you try to use it in
>> practice, then it crumbles to dust.
>> The reason is that decisions get bound top to bottom, and
>> changing a top-level decision
>> ripples the whole way down the design tree; and if, at the
>> lower level, you need to make a
>> change, it ripples upward. Actually "rips" more correctly
>> describes the effect.
>>
>> Parnas got it right with the notion of module interfaces,
>> screw the lines of code. A
>> friend of mine got an award for software accomplishment
>> when he changed the code review
>> process to review ONLY the "interface" files (in the true
>> sense of INTERFACE, that is, no
>> private methods, no variables, period; only public methods
>> are in an interface). He said
>> that it was every bit as productive, and the code reviews
>> went faster. IBM thought so too,
>> and gave him a corporate recognition award for
>> contributing to software productivity.
>>
>> Lines of code don't matter. Interfaces are all that
>> matter. Parnas said this in the late
>> 1960s and early 1970s, and essentially forty years of
>> history have proven him right.
>
>
>Minimizing the public interface? Yes that does sound like a
>better measure. Ward's reply was off-the-cuff and informal.
>
>>
>> Part of the role of a good compiler is to peek across
>> interfaces and produce good code in
>> spite of the abstractions. The MS C++ compiler is perhaps
>> the best optimizing compiler I
>> have ever seen, and I've seen a lot. I've heard the Intel
>> compiler is pretty good, too,
>> but I can't afford it.
>>
>> [disclosure: Dave Parnas was one of my professors at CMU,
>> taught one of the toughest
>> courses I ever took, which was operating systems, and
>> lectured in the hardware course. I
>> have a deep respect for him. He is one of the founders of
>> the field of Software Safety,
>> and works at a completely different level of specification
>> than mere mortals]
>> joe
>> ****
>
>The hardest one for me was compiler construction; it was
>also my favorite.
>
>>>
>>>>
>>>> But in the today's world of super fast machines and
>>>> bloated windows, higher dependency on dlls, proxies and
>>>> p-code RTL, and high code generated sizes, the code vs
>>>> speed ideas is, IMO, a thing of the past.
>>>>
>>>> Cases in point:
>>>>
>>>> 1) .NET, reusability, higher stack overhead, but faster
>>>> machines makes it all feasible.
>>>>
>>>> 2) The evolution of templates. Once you coded for speed
>>>> at
>>>> the expense of redundant code and bigger size; today it
>>>> doesn't really matter, and is more virtualized with
>>>> functional coding and interfacing.
>>>>
>>>> You do want speed, don't get me wrong, but you are not
>>>> going to waste time not creating reusable code. One
>>>> thing you can do quickly with functions is to use the
>>>> inline keyword. This is good for low-overhead black-box
>>>> functions:
>>>>
>>>> inline
>>>> DWORD GetRandom(DWORD size)
>>>> {
>>>>     return (rand() * rand()) % size;
>>>> }
>>>>
>>>> This gives the smaller functional-programming sizing, yet
>>>> some speed consideration through reduced stack overhead.
>>>>
>>>>
>>>> --
>>>> HLS
>>>
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:qk0nq5ht4hmpa8qchhrttdfvs3g6j5aidu(a)4ax.com...
> See below...
> On Wed, 24 Mar 2010 22:24:03 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:tnjlq55gn872v0p54bmd89p35hvjct0i18(a)4ax.com...
>>> See below...
>>>
>>> On Wed, 24 Mar 2010 20:07:32 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>
>>>>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>>message
>>>>news:uIE36l6yKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>>>>> Peter Olcott wrote:
>>>>>
>>>>>
>>>>>> I learned this from an email from Ward Cunningham the
>>>>>> inventor of CRC cards. I asked him if he could only
>>>>>> choose a single criterion measure of code quality
>>>>>> what
>>>>>> would it be. He said code size, eliminate redundant
>>>>>> code.
>>>>>
>>>>> He probably meant about reusability Peter.
>>>>
>>>>Yes and code size (lines-of-code) quantified the degree
>>>>of
>>>>re-use.
>>> ****
>>> Back when I was at the SEI we studied metrics used to
>>> measure reuse, and lines of code was
>>> the least useful and poorest predictor. I spent a
>>> couple
>>> years looking at this problem,
>>
>>He actually said code size meaning source code size, and I
>>am sure that he did not mean to make all variable names
>>very
>>short, because this would make the program more complex
>>and
>>not be a measure of the degree of re-use.
> ****
> I'm sorry, I missed where "length of variable names"
> became a relevant concept here.
> Please explain why this came up.
>
> Note that the abstract measure of complexity is neither
> identifier length nor lines of
> code, but "lexemes". And measure of complexity of a
> statement is lexemes per line, which
> back inthe early 1960s was essentially 3 for most real
> programs. With newer languages it
> is probably slighly higher. But you were blathering about
> code size as if it had meaning,
> which we established 20 years ago it doesn't have meaning.
> Read any of the classic texts
> on software reuse that came out in the mid-1980s.
> joe
> ****
>>
>>> talking to some of the key designers and managers in the
>>> field, and generally they would
>>> say "we use lines of code, but nobody trusts it" and
>>> give
>>> several cogent reasons it was
>>> meaningless as a metric of reuse, productivity, or
>>> anything else.
>>
>>It is a meaningless measure of programmer productivity,
>>when
>>it is taken to be that more is better. It is not as
>>meaningless as a measure of re-use when it is taken that
>>less is better.
> ****
> Actually, you have no idea what you are talking about. I
> spent years studying this
> problem, and I know you are spouting gibberish here. Did
> you know the Japanese "software
> factory" measures reuse by counting the lines of
> subroutine libraries as part of the
> "productivity" of a programmer? And do you know why this
> actually has more validity than
> raw line count? I do. I talked with them in 1986. Did
> you? I have the information
> first-hand. I even know why they are a bit distrustful of
> these numbers.
> ****
>>
>>What other quantifiable object measure of the specific
>>degree of re-use would you propose?
> ****
> There are no good metrics. We already know this. Read,
> for example, the ouvre of Barry
> Boehm (several books, including the famous "spiral model"
> software development approach).
> Read the papers he cites. Go to software engineering
> conferences (at least those that
> took place in the mid-1980s where we cared about these
> things). LEARN SOMETHING!
> joe

Minimizing the public interface might be a good metric. Even
if there are no metrics that are entirely sufficient, there
is always some combination that is the best we can do right
now.

> ****
>>
>>> ****
>>>>
>>>>>
>>>>> In programming, you can code for size or speed.
>>>>> redundant
>>>>> code is faster because you reduce stack overhead.
>>>>> When
>>>>> you code for size, you are reusing code which has
>>>>> stack
>>>>> overhead.
>>>>
>>>>No that is not quite it. Fewer lines-of-code are fewer
>>>>lines-of-code that you ever have to deal with. By
>>>>maximizing
>>>>re-use changes get propagated with fewer changes.
>>> ****
>>> The old "top-down" programming argument. It doesn't
>>> actually work, but Dijkstra made it
>>> sound cool. It looks good until you try to use it in
>>> practice, then it crumbles to dust.
>>> The reason is that decisions get bound top to bottom,
>>> and
>>> changing a top-level decision
>>> ripples the whole way down the design tree; and if, at
>>> the
>>> lower level, you need to make a
>>> change, it ripples upward. Actually "rips" more
>>> correctly
>>> describes the effect.
>>>
>>> Parnas got it right with the notion of module
>>> interfaces,
>>> screw the lines of code. A
>>> friend of mine got an award for software accomplishment
>>> when he changed the code review
>>> process to review ONLY the "interface" files (in the
>>> true
>>> sense of INTERFACE, that is, no
>>> private methods, no variables, period; only public
>>> methods
>>> are in an interface). He said
>>> that it was every bit as productive, and the code
>>> reviews
>>> went faster. IBM though so to,
>>> and gave him a corporate recognition award for
>>> contributing to software productivity.
>>>
>>> Lines of code don't matter. Interfaces are all that
>>> matter. Parnas said this in the late
>>> 1960s and early 1970s, and essentially forty years of
>>> history have proven him right.
>>
>>
>>Minimizing the public interface? Yes that does sound like
>>a
>>better measure. Ward's reply was off-the-cuff and
>>informal.
>>
>>>
>>> Part of the role of a good compiler is to peek across
>>> interfaces and produce good code in
>>> spite of the abstractions. The MS C++ compiler is
>>> perhaps
>>> the best optimizing compiler I
>>> have ever seen, and I've seen a lot. I've heard the
>>> Intel
>>> compiler is pretty good, too,
>>> but I can't afford it.
>>>
>>> [disclosure: Dave Parnas was one of my professors at
>>> CMU,
>>> taught one of the toughest
>>> courses I ever took, which was operating systems, and
>>> lectured in the hardware course. I
>>> have a deep respect for him. He is one of the founders
>>> of
>>> the field of Software Safety,
>>> and works at a completely different level of
>>> specification
>>> than mere mortals]
>>> joe
>>> ****
>>
>>The hardest one for me was compiler construction, it was
>>also my favorite.
>>
>>>>
>>>>>
>>>>> But in the today's world of super fast machines and
>>>>> bloated windows, higher dependency on dlls, proxies
>>>>> and
>>>>> p-code RTL, and high code generated sizes, the code vs
>>>>> speed ideas is, IMO, a thing of the past.
>>>>>
>>>>> Cases in point:
>>>>>
>>>>> 1) .NET, reusability, higher stack overhead, but
>>>>> faster
>>>>> machines makes it all feasible.
>>>>>
>>>>> 2) The evolution of templates. Once a code for speed
>>>>> with
>>>>> the expense of redundant code and bigger size, today,
>>>>> it
>>>>> is doesn't really matter and is more virtualize with
>>>>> functional coding and interfacing.
>>>>>
>>>>> You do want speed, don't get me wrong, but you are not
>>>>> going to waste type not creating reusable code. One
>>>>> thing you can do quickly with functions is to use the
>>>>> inline statement. This is good for low overhead black
>>>>> box
>>>>> functions:
>>>>>
>>>>> inline
>>>>> DWORD GetRandom(DWORD size)
>>>>> {
>>>>>     return (rand() * rand()) % size;
>>>>> }
>>>>>
>>>>> This gives the smaller functional programming
>>>>> sizing,yet
>>>>> some speed considerations with reduce stack overhead.
>>>>>
>>>>>
>>>>> --
>>>>> HLS
>>>>
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Joseph M. Newcomer on
According to Einstein, "An interface should be as complex as necessary but no more so".
No, wait, he was talking about theories in physics. But anyone who wants to put numbers
on things like this is demonstrating total cluelessness; for example, I was at a
presentation by the Bullseye people who were talking about their Code Coverage tool, and
someone stood up and told the story that at his company, the rule was "85% of all code
must have met code coverage testing". He indicated that this was nearly impossible, so
instead of figuring out how to improve code coverage, which is hard when the code is error
recovery code, he said "We just eliminated all error recovery code; since it wasn't there
we didn't have to test it!". Which, of course, any intelligent programmer (including the
entire room we were in) recognizes as blindingly stupid. But it satisfied a bean counter
somewhere. As soon as you have bean counters applying metrics to productivity, "goodness" of
any sort, or pretty much anything, your project is doomed.
joe

On Thu, 25 Mar 2010 14:25:06 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:qk0nq5ht4hmpa8qchhrttdfvs3g6j5aidu(a)4ax.com...
>> See below...
>> On Wed, 24 Mar 2010 22:24:03 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>message news:tnjlq55gn872v0p54bmd89p35hvjct0i18(a)4ax.com...
>>>> See below...
>>>>
>>>> On Wed, 24 Mar 2010 20:07:32 -0500, "Peter Olcott"
>>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>
>>>>>
>>>>>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>>>message
>>>>>news:uIE36l6yKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>>>>>> Peter Olcott wrote:
>>>>>>
>>>>>>
>>>>>>> I learned this from an email from Ward Cunningham the
>>>>>>> inventor of CRC cards. I asked him if he could only
>>>>>>> choose a single criterion measure of code quality
>>>>>>> what
>>>>>>> would it be. He said code size, eliminate redundant
>>>>>>> code.
>>>>>>
>>>>>> He probably meant about reusability Peter.
>>>>>
>>>>>Yes and code size (lines-of-code) quantified the degree
>>>>>of
>>>>>re-use.
>>>> ****
>>>> Back when I was at the SEI we studied metrics used to
>>>> measure reuse, and lines of code was
>>>> the least useful and poorest predictor. I spent a
>>>> couple
>>>> years looking at this problem,
>>>
>>>He actually said code size meaning source code size, and I
>>>am sure that he did not mean to make all variable names
>>>very
>>>short, because this would make the program more complex
>>>and
>>>not be a measure of the degree of re-use.
>> ****
>> I'm sorry, I missed where "length of variable names"
>> became a relevant concept here.
>> Please explain why this came up.
>>
>> Note that the abstract measure of complexity is neither
>> identifier length nor lines of
>> code, but "lexemes". And measure of complexity of a
>> statement is lexemes per line, which
>> back inthe early 1960s was essentially 3 for most real
>> programs. With newer languages it
>> is probably slighly higher. But you were blathering about
>> code size as if it had meaning,
>> which we established 20 years ago it doesn't have meaning.
>> Read any of the classic texts
>> on software reuse that came out in the mid-1980s.
>> joe
>> ****
>>>
>>>> talking to some of the key designers and managers in the
>>>> field, and generally they would
>>>> say "we use lines of code, but nobody trusts it" and
>>>> give
>>>> several cogent reasons it was
>>>> meaningless as a metric of reuse, productivity, or
>>>> anything else.
>>>
>>>It is a meaningless measure of programmer productivity,
>>>when
>>>it is taken to be that more is better. It is not as
>>>meaningless as a measure of re-use when it is taken that
>>>less is better.
>> ****
>> Actually, you have no idea what you are talking about. I
>> spent years studying this
>> problem, and I know you are spouting gibberish here. Did
>> you know the Japanese "software
>> factory" measures reuse by counting the lines of
>> subroutine libraries as part of the
>> "productivity" of a programmer? And do you know why this
>> actually has more validity than
>> raw line count? I do. I talked with them in 1986. Did
>> you? I have the information
>> first-hand. I even know why they are a bit distrustful of
>> these numbers.
>> ****
>>>
>>>What other quantifiable object measure of the specific
>>>degree of re-use would you propose?
>> ****
>> There are no good metrics. We already know this. Read,
>> for example, the ouvre of Barry
>> Boehm (several books, including the famous "spiral model"
>> software development approach).
>> Read the papers he cites. Go to software engineering
>> conferences (at least those that
>> took place in the mid-1980s where we cared about these
>> things). LEARN SOMETHING!
>> joe
>
>Minimizing the public interface might be a good metric. Even
>if there are no metrics that are entirely sufficient there
>is always some combination of the best that we can do right
>now.
>
>> ****
>>>
>>>> ****
>>>>>
>>>>>>
>>>>>> In programming, you can code for size or speed.
>>>>>> redundant
>>>>>> code is faster because you reduce stack overhead.
>>>>>> When
>>>>>> you code for size, you are reusing code which has
>>>>>> stack
>>>>>> overhead.
>>>>>
>>>>>No that is not quite it. Fewer lines-of-code are fewer
>>>>>lines-of-code that you ever have to deal with. By
>>>>>maximizing
>>>>>re-use changes get propagated with fewer changes.
>>>> ****
>>>> The old "top-down" programming argument. It doesn't
>>>> actually work, but Dijkstra made it
>>>> sound cool. It looks good until you try to use it in
>>>> practice, then it crumbles to dust.
>>>> The reason is that decisions get bound top to bottom,
>>>> and
>>>> changing a top-level decision
>>>> ripples the whole way down the design tree; and if, at
>>>> the
>>>> lower level, you need to make a
>>>> change, it ripples upward. Actually "rips" more
>>>> correctly
>>>> describes the effect.
>>>>
>>>> Parnas got it right with the notion of module
>>>> interfaces,
>>>> screw the lines of code. A
>>>> friend of mine got an award for software accomplishment
>>>> when he changed the code review
>>>> process to review ONLY the "interface" files (in the
>>>> true
>>>> sense of INTERFACE, that is, no
>>>> private methods, no variables, period; only public
>>>> methods
>>>> are in an interface). He said
>>>> that it was every bit as productive, and the code
>>>> reviews
>>>> went faster. IBM though so to,
>>>> and gave him a corporate recognition award for
>>>> contributing to software productivity.
>>>>
>>>> Lines of code don't matter. Interfaces are all that
>>>> matter. Parnas said this in the late
>>>> 1960s and early 1970s, and essentially forty years of
>>>> history have proven him right.
>>>
>>>
>>>Minimizing the public interface? Yes that does sound like
>>>a
>>>better measure. Ward's reply was off-the-cuff and
>>>informal.
>>>
>>>>
>>>> Part of the role of a good compiler is to peek across
>>>> interfaces and produce good code in
>>>> spite of the abstractions. The MS C++ compiler is
>>>> perhaps
>>>> the best optimizing compiler I
>>>> have ever seen, and I've seen a lot. I've heard the
>>>> Intel
>>>> compiler is pretty good, too,
>>>> but I can't afford it.
>>>>
>>>> [disclosure: Dave Parnas was one of my professors at
>>>> CMU,
>>>> taught one of the toughest
>>>> courses I ever took, which was operating systems, and
>>>> lectured in the hardware course. I
>>>> have a deep respect for him. He is one of the founders
>>>> of
>>>> the field of Software Safety,
>>>> and works at a completely different level of
>>>> specification
>>>> than mere mortals]
>>>> joe
>>>> ****
>>>
>>>The hardest one for me was compiler construction, it was
>>>also my favorite.
>>>
>>>>>
>>>>>>
>>>>>> But in the today's world of super fast machines and
>>>>>> bloated windows, higher dependency on dlls, proxies
>>>>>> and
>>>>>> p-code RTL, and high code generated sizes, the code vs
>>>>>> speed ideas is, IMO, a thing of the past.
>>>>>>
>>>>>> Cases in point:
>>>>>>
>>>>>> 1) .NET, reusability, higher stack overhead, but
>>>>>> faster
>>>>>> machines makes it all feasible.
>>>>>>
>>>>>> 2) The evolution of templates. Once a code for speed
>>>>>> with
>>>>>> the expense of redundant code and bigger size, today,
>>>>>> it
>>>>>> is doesn't really matter and is more virtualize with
>>>>>> functional coding and interfacing.
>>>>>>
>>>>>> You do want speed, don't get me wrong, but you are not
>>>>>> going to waste type not creating reusable code. One
>>>>>> thing you can do quickly with functions is to use the
>>>>>> inline statement. This is good for low overhead black
>>>>>> box
>>>>>> functions:
>>>>>>
>>>>>> inline
>>>>>> DWORD GetRandom(DWORD size)
>>>>>> {
>>>>>>     return (rand() * rand()) % size;
>>>>>> }
>>>>>>
>>>>>> This gives the smaller functional programming
>>>>>> sizing,yet
>>>>>> some speed considerations with reduce stack overhead.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> HLS
>>>>>
>>>> Joseph M. Newcomer [MVP]
>>>> email: newcomer(a)flounder.com
>>>> Web: http://www.flounder.com
>>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>>
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:d1lnq5lnhup0mei8r1psgiuh06dag5rm99(a)4ax.com...
> According to Einstein, "An interface should be as complex
> as necessary but no more so".
> No, wait, he was talking about theories in physics. But
> anyone who wants to put numbers
> on things like this is demonstrating total cluelessness;
> for example, I was at a
> presentation by the Bullseye people who were talking about
> their Code Coverage tool, and
> someone stood up and told the story that at his company,
> the rule was "85% of all code
> must have met code coverage testing". He indicated that
> this was nearly impossible, so
> instead of figuring out how to improve code coverage,
> which is hard when the code is error
> recovery code, he said "We just eliminated all error
> recovery code; since it wasn't there
> we didn't have to test it!". WHich, of course, any
> intelligent programming (including the
> enitre room we were in) recoganizes as blindingly stupid.
> But it satisfied a bean counter
> somewhere. As soon as you have bean counters apply
> metrics to productivity, "goodness" of
> any sort, or pretty much anything, your project is doomed.
> joe
>

So I guess your opinion of CMMI level 5 is not very high.

> On Thu, 25 Mar 2010 14:25:06 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:qk0nq5ht4hmpa8qchhrttdfvs3g6j5aidu(a)4ax.com...
>>> See below...
>>> On Wed, 24 Mar 2010 22:24:03 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>
>>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>>message
>>>>news:tnjlq55gn872v0p54bmd89p35hvjct0i18(a)4ax.com...
>>>>> See below...
>>>>>
>>>>> On Wed, 24 Mar 2010 20:07:32 -0500, "Peter Olcott"
>>>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>>
>>>>>>
>>>>>>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>>>>message
>>>>>>news:uIE36l6yKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>>>>>>> Peter Olcott wrote:
>>>>>>>
>>>>>>>
>>>>>>>> I learned this from an email from Ward Cunningham
>>>>>>>> the
>>>>>>>> inventor of CRC cards. I asked him if he could only
>>>>>>>> choose a single criterion measure of code quality
>>>>>>>> what
>>>>>>>> would it be. He said code size, eliminate redundant
>>>>>>>> code.
>>>>>>>
>>>>>>> He probably meant about reusability Peter.
>>>>>>
>>>>>>Yes and code size (lines-of-code) quantified the
>>>>>>degree
>>>>>>of
>>>>>>re-use.
>>>>> ****
>>>>> Back when I was at the SEI we studied metrics used to
>>>>> measure reuse, and lines of code was
>>>>> the least useful and poorest predictor. I spent a
>>>>> couple
>>>>> years looking at this problem,
>>>>
>>>>He actually said code size meaning source code size, and
>>>>I
>>>>am sure that he did not mean to make all variable names
>>>>very
>>>>short, because this would make the program more complex
>>>>and
>>>>not be a measure of the degree of re-use.
>>> ****
>>> I'm sorry, I missed where "length of variable names"
>>> became a relevant concept here.
>>> Please explain why this came up.
>>>
>>> Note that the abstract measure of complexity is neither
>>> identifier length nor lines of
>>> code, but "lexemes". And measure of complexity of a
>>> statement is lexemes per line, which
>>> back inthe early 1960s was essentially 3 for most real
>>> programs. With newer languages it
>>> is probably slighly higher. But you were blathering
>>> about
>>> code size as if it had meaning,
>>> which we established 20 years ago it doesn't have
>>> meaning.
>>> Read any of the classic texts
>>> on software reuse that came out in the mid-1980s.
>>> joe
>>> ****
>>>>
>>>>> talking to some of the key designers and managers in the
>>>>> field, and generally they would say "we use lines of code,
>>>>> but nobody trusts it" and give several cogent reasons it was
>>>>> meaningless as a metric of reuse, productivity, or anything
>>>>> else.
>>>>
>>>>It is a meaningless measure of programmer productivity when it
>>>>is taken to be that more is better. It is not as meaningless as
>>>>a measure of re-use when it is taken that less is better.
>>> ****
>>> Actually, you have no idea what you are talking about. I spent
>>> years studying this problem, and I know you are spouting
>>> gibberish here. Did you know the Japanese "software factory"
>>> measures reuse by counting the lines of subroutine libraries as
>>> part of the "productivity" of a programmer? And do you know why
>>> this actually has more validity than raw line count? I do. I
>>> talked with them in 1986. Did you? I have the information
>>> first-hand. I even know why they are a bit distrustful of these
>>> numbers.
>>> ****
>>>>
>>>>What other quantifiable objective measure of the specific
>>>>degree of re-use would you propose?
>>> ****
>>> There are no good metrics. We already know this. Read, for
>>> example, the oeuvre of Barry Boehm (several books, including
>>> the famous "spiral model" software development approach). Read
>>> the papers he cites. Go to software engineering conferences (at
>>> least those that took place in the mid-1980s, when we cared
>>> about these things). LEARN SOMETHING!
>>> joe
>>
>>Minimizing the public interface might be a good metric. Even if
>>there are no metrics that are entirely sufficient, there is
>>always some combination of the best that we can do right now.
>>
>>> ****
>>>>
>>>>> ****
>>>>>>
>>>>>>>
>>>>>>> In programming, you can code for size or speed. Redundant
>>>>>>> code is faster because you reduce stack overhead. When you
>>>>>>> code for size, you are reusing code, which has stack
>>>>>>> overhead.
>>>>>>
>>>>>>No, that is not quite it. Fewer lines-of-code are fewer
>>>>>>lines-of-code that you ever have to deal with. By maximizing
>>>>>>re-use, changes get propagated with fewer edits.
>>>>> ****
>>>>> The old "top-down" programming argument. It doesn't actually
>>>>> work, but Dijkstra made it sound cool. It looks good until
>>>>> you try to use it in practice; then it crumbles to dust. The
>>>>> reason is that decisions get bound top to bottom, and
>>>>> changing a top-level decision ripples the whole way down the
>>>>> design tree; and if, at the lower level, you need to make a
>>>>> change, it ripples upward. Actually, "rips" more correctly
>>>>> describes the effect.
>>>>>
>>>>> Parnas got it right with the notion of module interfaces;
>>>>> screw the lines of code. A friend of mine got an award for
>>>>> software accomplishment when he changed the code review
>>>>> process to review ONLY the "interface" files (in the true
>>>>> sense of INTERFACE, that is, no private methods, no
>>>>> variables, period; only public methods are in an interface).
>>>>> He said that it was every bit as productive, and the code
>>>>> reviews went faster. IBM thought so too, and gave him a
>>>>> corporate recognition award for contributing to software
>>>>> productivity.
>>>>>
>>>>> Lines of code don't matter. Interfaces are all that matter.
>>>>> Parnas said this in the late 1960s and early 1970s, and
>>>>> essentially forty years of history have proven him right.
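To make the "interface-only review" idea above concrete, here is a minimal C++ sketch of what such an interface could look like, with a trivial implementation behind it. The class names (Stack, VectorStack) are illustrative, not from any codebase mentioned in this thread.

```cpp
#include <vector>

// The "interface" file in Parnas's sense: only the public contract
// is visible and reviewed -- no private methods, no member
// variables.
class Stack {
public:
    virtual ~Stack() {}
    virtual void push(int v) = 0;
    virtual int  pop() = 0;            // precondition: !empty()
    virtual bool empty() const = 0;
};

// The implementation lives behind the interface; its details can
// change freely without rippling into code that uses only Stack.
class VectorStack : public Stack {
public:
    void push(int v)   { data_.push_back(v); }
    int  pop()         { int v = data_.back(); data_.pop_back(); return v; }
    bool empty() const { return data_.empty(); }
private:
    std::vector<int> data_;
};
```

A reviewer who sees only the Stack declaration can still judge whether the contract is right, which is the whole point of the story above.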
>>>>
>>>>
>>>>Minimizing the public interface? Yes, that does sound like a
>>>>better measure. Ward's reply was off-the-cuff and informal.
>>>>
>>>>>
>>>>> Part of the role of a good compiler is to peek across
>>>>> interfaces and produce good code in spite of the
>>>>> abstractions. The MS C++ compiler is perhaps the best
>>>>> optimizing compiler I have ever seen, and I've seen a lot.
>>>>> I've heard the Intel compiler is pretty good, too, but I
>>>>> can't afford it.
>>>>>
>>>>> [disclosure: Dave Parnas was one of my professors at CMU,
>>>>> taught one of the toughest courses I ever took, which was
>>>>> operating systems, and lectured in the hardware course. I
>>>>> have a deep respect for him. He is one of the founders of the
>>>>> field of Software Safety, and works at a completely different
>>>>> level of specification than mere mortals]
>>>>> joe
>>>>> ****
>>>>
>>>>The hardest one for me was compiler construction; it was also
>>>>my favorite.
>>>>
>>>>>>
>>>>>>>
>>>>>>> But in today's world of super-fast machines and bloated
>>>>>>> Windows, higher dependency on DLLs, proxies and p-code RTL,
>>>>>>> and large generated code sizes, the code-vs-speed tradeoff
>>>>>>> is, IMO, a thing of the past.
>>>>>>>
>>>>>>> Cases in point:
>>>>>>>
>>>>>>> 1) .NET: reusability and higher stack overhead, but faster
>>>>>>> machines make it all feasible.
>>>>>>>
>>>>>>> 2) The evolution of templates. Templates were once coded
>>>>>>> for speed at the expense of redundant code and bigger size;
>>>>>>> today that hardly matters, and they are used in a more
>>>>>>> virtualized way, with functional coding and interfacing.
>>>>>>>
>>>>>>> You do want speed, don't get me wrong, but you are not
>>>>>>> going to waste time not creating reusable code. One thing
>>>>>>> you can do quickly with functions is to use the inline
>>>>>>> keyword. This is good for low-overhead black-box functions:
>>>>>>>
>>>>>>> inline
>>>>>>> DWORD GetRandom(DWORD size)
>>>>>>> {
>>>>>>>     return (rand()*rand())%size;
>>>>>>> }
>>>>>>>
>>>>>>> This gives the smaller functional-programming sizing, yet
>>>>>>> some speed benefit from reduced stack overhead.
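One caution about the GetRandom example quoted above: as originally posted it returned a const DWORD& to the temporary produced by the expression, which dangles the moment the function returns, and rand()*rand() overflows int on platforms where RAND_MAX is INT_MAX (e.g. glibc). A by-value sketch that avoids both problems; the typedef is only there to keep the snippet self-contained outside <windows.h>:

```cpp
#include <cstdlib>   // std::rand, std::srand

typedef unsigned long DWORD;  // normally from <windows.h>

// Return by value: a DWORD is cheap to copy, and returning a
// reference to the temporary produced by the expression would
// leave the caller with a dangling reference.
inline DWORD GetRandom(DWORD size)
{
    // A single rand() call cannot overflow, unlike rand()*rand(),
    // which is undefined behavior where RAND_MAX is INT_MAX.
    // Precondition: size > 0.
    return static_cast<DWORD>(std::rand()) % size;
}
```

Any decent optimizer will inline a function this small with or without the keyword, so the correctness fix costs nothing at runtime.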
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> HLS
>>>>>>
>>>>> Joseph M. Newcomer [MVP]
>>>>> email: newcomer(a)flounder.com
>>>>> Web: http://www.flounder.com
>>>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>>>
>>