From: Patricia Shanahan on
Ben Pfaff wrote:
> Patricia Shanahan <pats(a)acm.org> writes:
>
>> Anyone who is writing everything twice to get their unit tests is
>> seriously lacking in imagination. Generally, a unit test should be
>> checking the tested module's postconditions given known data, not trying
>> to reproduce the steps it took to achieve those postconditions.
>
> Writing everything twice can in fact be a good approach to
> writing unit tests. It makes sense sometimes to write a clever
> implementation of an optimized data structure, and then to write
> the unit tests in the form of a simple, "obviously correct"
> version of that data structure plus a driver that performs the
> same operations on both data structures and compares the results.
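
For concreteness, a driver along the lines Ben describes might look roughly
like this in Python; the class and method names here are hypothetical
stand-ins, not anyone's actual code:

    import random

    class SimpleIntSet:
        """Obviously correct reference: just wraps a built-in set."""
        def __init__(self):
            self._items = set()
        def add(self, x):
            self._items.add(x)
        def contains(self, x):
            return x in self._items

    class CleverIntSet:
        """Stand-in for the optimized implementation under test."""
        def __init__(self):
            self._items = []   # imagine a sorted array, hash trie, etc.
        def add(self, x):
            if x not in self._items:
                self._items.append(x)
        def contains(self, x):
            return x in self._items

    def test_against_reference(num_ops=10000, seed=0):
        rng = random.Random(seed)
        reference, candidate = SimpleIntSet(), CleverIntSet()
        for _ in range(num_ops):
            x = rng.randrange(100)
            if rng.random() < 0.5:
                reference.add(x)
                candidate.add(x)
            else:
                # Both structures must agree on every query.
                assert reference.contains(x) == candidate.contains(x)

    if __name__ == "__main__":
        test_against_reference()
        print("candidate agreed with reference on all operations")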

I seriously question using that strategy for "everything". It would imply
that the entire program consists of clever implementations using optimized
data structures.

I do use that strategy sometimes, but when I do I usually already have
both the simpler implementation and some existing unit tests. I tend to
start by implementing things the simplest way I know, and then only go
to a clever implementation if needed for performance.

Patricia
From: Ben Pfaff on
Patricia Shanahan <pats(a)acm.org> writes:

> Ben Pfaff wrote:
>> Patricia Shanahan <pats(a)acm.org> writes:
>>
>>> Anyone who is writing everything twice to get their unit tests is
>>> seriously lacking in imagination. Generally, a unit test should be
>>> checking the tested module's postconditions given known data, not trying
>>> to reproduce the steps it took to achieve those postconditions.
>>
>> Writing everything twice can in fact be a good approach to
>> writing unit tests. It makes sense sometimes to write a clever
>> implementation of an optimized data structure, and then to write
>> the unit tests in the form of a simple, "obviously correct"
>> version of that data structure plus a driver that performs the
>> same operations on both data structures and compares the results.
>
> I seriously question using that strategy for "everything". It would imply
> that the entire program consists of clever implementations using optimized
> data structures.

I was taking "everything" to mean a particular unit. I certainly
wouldn't do that for an entire program. Most code is boring,
obvious, and straightforward, or at least should be.
--
"If a person keeps faithfully busy each hour of the working day, he
can count on waking up some morning to find himself one of the
competent ones of his generation."
--William James
From: Espen Myrland on
Brett Williams <mungojelly(a)gmail.com> writes:

>
> I've been thinking about your point that the old implementation has
> expectations of the surrounding code. It seems like there's a sort of
> destructive chain reaction: Each adjustment sends out ripples of
> destruction to everywhere that depended on the old behavior, which
> necessitates more adjustments, that surprise more clients, etc. I'm
> assuming there's a reason for enduring this chaos, but I'm not
> grokking it yet.
>
> I've been trying to think of how to express my idea in other forms
> than try/except, because that's just an example (probably a bad
> example) of what I'm thinking. I just don't understand in general why
> the process of programming should be so destructive, why the program
> needs to be an amnesiac who only remembers the very last way we taught
> it how to do anything.



What you are thinking about is called macros. Check out LaTeX.


Regards,

/myr
From: Lie Ryan on
All things being equal, I refactor aggressively, even if it means
breaking old behavior.

I realize this approach won't work for all software, especially for
libraries, where a stable API and stable behavior are wanted. But
for end-user software, aggressive refactoring is usually better for the
agility of the software.
From: Daniel Pitts on
On 6/10/2010 1:10 AM, Brett Williams wrote:
> This is probably just a naive question, so I'd genuinely appreciate
> anyone's critical response. This just keeps coming to mind and I
> can't quite understand why everyone does things they way they do.
>
> Suppose our program is doing something in a particular way. It works
> OK, or it works most of the time, but we've figured out a way to make
> it work better, or work more of the time. It seems like the standard
> thing to do is to replace:
>
> try { old implementation }
> except { fall flat on our face }
>
> With:
>
> try { new implementation }
> except { fall flat on our face }
>
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
> log that there's apparently a problem with the new implementation;
> try { old implementation } // we know this used to work!
> except { NOW fall flat on our face! }
> }
>
> What am I missing? Why is code that used to be considered important
> and reliable constantly scrapped, to somewhere deep in a heap of old
> dead code that can only be resurrected by human intervention, just
> because we thought of something a little faster or broader or newer or
> cleverer?
>
> <3,
> mungojelly
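
(For concreteness, the fallback pattern Brett describes might be sketched
in Python along these lines; every name here is hypothetical:

    import logging

    logger = logging.getLogger(__name__)

    def process(data):
        """Try the new implementation first; fall back to the old one on failure."""
        try:
            return new_implementation(data)
        except Exception:
            # Log that the new code path failed before retrying the old one.
            logger.exception("new implementation failed; falling back to old one")
            try:
                return old_implementation(data)   # we know this used to work!
            except Exception:
                # NOW fall flat on our face.
                raise

    def new_implementation(data):
        # Hypothetical stand-in for the newer, cleverer code path.
        return sorted(data, reverse=True)

    def old_implementation(data):
        # Hypothetical stand-in for the previously trusted code path.
        result = list(data)
        result.sort(reverse=True)
        return result
)
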
Because if either of those pieces of code implements business rules that
might change, you now have two places to maintain that business rule.

Also, sometimes a failure leaves things in an inconsistent state (this
can be avoided, but takes extra care and may impose overhead). If that
is the case, then running the old implementation may have disastrous
results.
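
A small illustration of that danger, assuming the implementations mutate
shared state (everything here is hypothetical):

    def new_transfer(accounts, src, dst, amount):
        # Hypothetical new implementation with a bug: it fails after the
        # debit, leaving the accounts half-updated.
        accounts[src] -= amount
        raise RuntimeError("bug in new code after the debit")

    def old_transfer(accounts, src, dst, amount):
        # Previously trusted implementation.
        accounts[src] -= amount
        accounts[dst] += amount

    accounts = {"a": 100, "b": 0}
    try:
        new_transfer(accounts, "a", "b", 50)
    except RuntimeError:
        # Falling back re-runs the debit on already-modified state:
        # "a" ends up at 0 and "b" at 50, so 50 has simply vanished.
        old_transfer(accounts, "a", "b", 50)

    print(accounts)   # {'a': 0, 'b': 50} -- not what a single transfer should do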

Also, in general, if you can make something work with less code, then it
is less likely to contain subtle bugs. The old implementation may have
subtle bugs, or even side-effects which can cause bugs in other code.
Less code, less to worry about.

C.A.R. Hoare has been quoted as saying: "There are two ways of constructing a
software design: One way is to make it so simple that there are
obviously no deficiencies, and the other way is to make it so
complicated that there are no obvious deficiencies."


--
Daniel Pitts' Tech Blog: <http://virtualinfinity.net/wordpress/>