From: Brett Williams on
This is probably just a naive question, so I'd genuinely appreciate
anyone's critical response. This just keeps coming to mind and I
can't quite understand why everyone does things the way they do.

Suppose our program is doing something in a particular way. It works
OK, or it works most of the time, but we've figured out a way to make
it work better, or work more of the time. It seems like the standard
thing to do is to replace:

try { old implementation }
except { fall flat on our face }

With:

try { new implementation }
except { fall flat on our face }

What occurs to me instead is:

try { new implementation }
except
{
log that there's apparently a problem with the new implementation;
try { old implementation } // we know this used to work!
except { NOW fall flat on our face! }
}
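
Or, a bit more concretely, a rough Python sketch of the same idea
(new_impl and old_impl here are just stand-ins for whatever the two
versions actually are):

import logging

def do_work(data):
    try:
        return new_impl(data)   # the improved version
    except Exception:
        logging.exception("new_impl failed, falling back to old_impl")
        return old_impl(data)   # we know this used to work! if it also
                                # fails, the exception propagates and we
                                # fall flat on our face as before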

What am I missing? Why is code that used to be considered important
and reliable constantly scrapped, to somewhere deep in a heap of old
dead code that can only be resurrected by human intervention, just
because we thought of something a little faster or broader or newer or
cleverer?

<3,
mungojelly
From: Tim Harig on
On 2010-06-10, Brett Williams <mungojelly(a)gmail.com> wrote:
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
> log that there's apparently a problem with the new implementation;
> try { old implementation } // we know this used to work!
> except { NOW fall flat on our face! }
> }

Well, that assumes that there have been no changes in the
structure/interfaces of the program that the old implementation requires,
that the new implementation hasn't lost or modified any information the
old implementation needs to do its job, and that the new implementation
performs exactly the same task the old implementation did (including
any expected side effects). If all of those assumptions are valid (and
they should be thoroughly tested), then it wouldn't be a bad idea to fall
back to the older implementation; otherwise, the old implementation is
likely to do more harm than good.
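
For instance, here is a contrived Python sketch (not from any real
program) of how a failed new implementation can leave state behind that
quietly breaks the old one:

# new_impl consumes part of a shared queue before failing, so falling
# back to old_impl silently drops the already-consumed item
queue = ["a", "b", "c"]

def new_impl():
    queue.pop(0)                 # side effect happens first...
    raise RuntimeError("bug in new_impl")

def old_impl():
    return list(queue)           # assumes the queue is still intact

try:
    result = new_impl()
except RuntimeError:
    result = old_impl()          # returns ["b", "c"]; "a" is silently lost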
From: Daniel T. on
Brett Williams <mungojelly(a)gmail.com> wrote:

> This is probably just a naive question, so I'd genuinely appreciate
> anyone's critical response. This just keeps coming to mind and I
> can't quite understand why everyone does things the way they do.
>
> Suppose our program is doing something in a particular way. It works
> OK, or it works most of the time, but we've figured out a way to make
> it work better, or work more of the time. It seems like the standard
> thing to do is to replace:
>
> try { old implementation }
> except { fall flat on our face }
>
> With:
>
> try { new implementation }
> except { fall flat on our face }
>
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
> log that there's apparently a problem with the new implementation;
> try { old implementation } // we know this used to work!
> except { NOW fall flat on our face! }
> }
>
> What am I missing? Why is code that used to be considered important
> and reliable constantly scrapped, to somewhere deep in a heap of old
> dead code that can only be resurrected by human intervention, just
> because we thought of something a little faster or broader or newer or
> cleverer?

I think the assumption in the above is that the old implementation does
X and Y correctly but fails for A and B, while the new implementation
does X and A correctly but fails for Y and B. In such a case, I wouldn't
replace the old implementation with the new one. At a minimum, the new
implementation must do everything that the old implementation did, or I
don't consider it a new implementation at all.
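
One way to enforce that, as a rough sketch (the function and parameter
names here are invented for illustration), is a parity test that runs
the new implementation over every case the old one is known to handle:

def test_new_covers_old(cases, old_impl, new_impl):
    # every input the old implementation handles (X and Y above) must
    # also be handled, with the same result, by the new one
    for case in cases:
        assert new_impl(case) == old_impl(case), \
            "new implementation regresses on %r" % (case,)

Only once that passes does the question of A and B even come up.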
From: Patricia Shanahan on
Brett Williams wrote:
> This is probably just a naive question, so I'd genuinely appreciate
> anyone's critical response. This just keeps coming to mind and I
> can't quite understand why everyone does things the way they do.
>
> Suppose our program is doing something in a particular way. It works
> OK, or it works most of the time, but we've figured out a way to make
> it work better, or work more of the time. It seems like the standard
> thing to do is to replace:
>
> try { old implementation }
> except { fall flat on our face }
>
> With:
>
> try { new implementation }
> except { fall flat on our face }
>
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
> log that there's apparently a problem with the new implementation;
> try { old implementation } // we know this used to work!
> except { NOW fall flat on our face! }
> }
>
> What am I missing? Why is code that used to be considered important
> and reliable constantly scrapped, to somewhere deep in a heap of old
> dead code that can only be resurrected by human intervention, just
> because we thought of something a little faster or broader or newer or
> cleverer?

I think the key issue you are ignoring is the tendency of added
complexity to decrease maintainability. Keeping the program as simple as
possible is one of the main objectives of a sane maintenance
organization - to the extent that many people recommend periodic
refactoring passes whose sole objective is to simplify the code without
making any functional change.

Your idea does not bring any additional guarantees of correctness. You
only know the old implementation works in the absence of the new
implementation. You do not have any history at all on its behavior after
partial execution of the new implementation.

It would also create an intractable testing problem. You would not only
have to test the new implementation; you would also have to artificially
introduce various forms of failure, in order to test the behavior of the
old implementation when it runs after an attempt to use the new
implementation has failed at various points.
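
As a rough illustration (this is a sketch of the burden, with
hypothetical helper names, rather than a recommendation), honestly
testing the fallback path would mean injecting a failure at each point
in the new implementation and re-checking the old one after every
partial run:

def new_impl(data, fail_at=None):
    # imagine three internal steps; step() is a hypothetical helper
    for i in range(3):
        if i == fail_at:
            raise RuntimeError("injected failure at step %d" % i)
        data = step(data, i)
    return data

def test_fallback():
    # one test run per injected failure point, and old_impl (plus any
    # state it depends on) must be re-verified after each partial run
    for fail_at in range(3):
        data = fresh_test_data()          # hypothetical fixture
        try:
            new_impl(data, fail_at=fail_at)
        except RuntimeError:
            verify(old_impl(data))        # hypothetical checks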

Of course, the trade-off between the old and new implementation needs to
be thought about carefully, but I believe the developers should pick one
of them and commit to it.

Patricia
From: BGB / cr88192 on

"Brett Williams" <mungojelly(a)gmail.com> wrote in message
news:0853bc61-04ec-4318-b930-efc3b2749815(a)e5g2000yqn.googlegroups.com...
> This is probably just a naive question, so I'd genuinely appreciate
> anyone's critical response. This just keeps coming to mind and I
> can't quite understand why everyone does things the way they do.
>
> Suppose our program is doing something in a particular way. It works
> OK, or it works most of the time, but we've figured out a way to make
> it work better, or work more of the time. It seems like the standard
> thing to do is to replace:
>
> try { old implementation }
> except { fall flat on our face }
>
> With:
>
> try { new implementation }
> except { fall flat on our face }
>
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
> log that there's apparently a problem with the new implementation;
> try { old implementation } // we know this used to work!
> except { NOW fall flat on our face! }
> }
>
> What am I missing? Why is code that used to be considered important
> and reliable constantly scrapped, to somewhere deep in a heap of old
> dead code that can only be resurrected by human intervention, just
> because we thought of something a little faster or broader or newer or
> cleverer?
>

it really depends on the specifics of what one is doing...

if the old implementation is no longer usable or relevant, then it doesn't
make sense to keep it around...

but, if the new and old implementations deal with different cases, then
it may make sense to keep both around...


for example, in a 3D engine:
one can have a plain (non-shader) model renderer, which is likely to be
specialized for its task (rendering basic meshes with no shaders
applied, ...);
one can also have an alternate shader-based renderer, which may use less
optimized rendering but is more generic (so as to handle shaders and a
number of other special effects which may be applied to the 3D model).
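
e.g., a rough sketch of how both renderers might stay in use (a Python
sketch with made-up names, not from any particular engine):

def render_model(model, context):
    # both renderers stay in the codebase; the engine picks per model,
    # rather than one path "replacing" the other
    if model.has_shaders or model.has_effects:
        return shader_renderer.draw(model, context)   # generic, slower
    return basic_renderer.draw(model, context)        # specialized, fast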


or, one can have two different means of figuring out where to make a
character go:
an older/simpler "turn to face and go" strategy (the monster turns in
the direction of the enemy and tries to go forwards until blocked,
maybe randomly varying its angle on impact to try to work around
obstacles);

or the use of a full-on path-finder, which may trace a path all the way
around the map and lead the character to the location of the target.

but, each has its merits: face-and-go may be better when either a path
can't be found, or the found path can't be navigated. so, for example,
running into an obstacle along the way, or the lack of any direct path
at all, might cause the AI to fall back to the old strategy in an
attempt to work around the obstacle.
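
for example, something like this sketch (again, all names made up):

def choose_move(monster, target, world):
    path = find_path(world, monster.pos, target.pos)   # full path-finder
    if path is not None and not blocked(world, path):
        return follow_path(monster, path)
    # no usable path (or it can't be navigated): fall back to the older
    # face-and-go behavior instead of falling flat on our face
    return face_and_go(monster, target)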


or, many similar examples can be found in the land of compiler-writing...