From: Gene on
On Jun 10, 4:10 am, Brett Williams <mungoje...(a)gmail.com> wrote:
> This is probably just a naive question, so I'd genuinely appreciate
> anyone's critical response.  This just keeps coming to mind and I
> can't quite understand why everyone does things the way they do.
>
> Suppose our program is doing something in a particular way.  It works
> OK, or it works most of the time, but we've figured out a way to make
> it work better, or work more of the time.  It seems like the standard
> thing to do is to replace:
>
> try { old implementation }
> except { fall flat on our face }
>
> With:
>
> try { new implementation }
> except { fall flat on our face }
>
> What occurs to me instead is:
>
> try { new implementation }
> except
> {
>     log that there's apparently a problem with the new implementation;
>     try { old implementation } // we know this used to work!
>     except { NOW fall flat on our face! }
>
> }
>
> What am I missing?  Why is code that used to be considered important
> and reliable constantly scrapped, relegated to somewhere deep in a heap
> of old dead code that can only be resurrected by human intervention,
> just because we thought of something a little faster or broader or
> newer or cleverer?
>

A rough form of this does happen, e.g. in systems that must support
legacy functionality too complex or expensive to re-create in a new
version. Where this happens, the new system merely punts to the old
plate of spaghetti code and hopes for the best. The easiest example is
probably the old Win16 GDI. I remember reading that since the event
passing protocol was never defined, the code was its only
documentation. The only workable way to build Win32 was to embed the
original GDI code--which no one really understood by that time--down
deep in Win32's belly. In all likelihood, it's still there. The same
goes when you have an old shared library that does something useful but
can't be updated (maybe the code is lost). The whole DLL comes along
for a ride in the new system even if only a tiny part of its
functionality is in use. Ergo system bloat.



From: Brett Williams on
On Jun 10, 7:56 am, "Daniel T." <danie...(a)earthlink.net> wrote:
>
> I think the assumption in the above is that the old implementation does
> X and Y correctly but fails for A and B, while the new implementation
> does X and A correctly but fails for Y and B. In such a case, I wouldn't
> replace the old implementation with the new. At minimum, the new
> implementation must do everything that the old implementation did, or I
> don't consider it a new implementation.


Thanks for your reply, Daniel. I think I understand what you mean,
but I'm not sure those are the assumptions I'm making.

What I'm assuming is more like that the old implementation does some
space correctly, and the new implementation does some space correctly,
but that I actually have no idea what the boundaries are of either of
those spaces. All I know is that they both include some area covered
by some tests, and that there's some area (or some quality to its
performance) where I know that the new one is better than the old
one.

I'm also assuming, though, that there could be something I've missed, so
that the new implementation, as well as having the strengths that
caused me to choose it as the new way to do things, could have an
unforeseen weakness that causes it to fail in some situation--
particularly, in a situation where the old implementation would have
succeeded. So I feel sorry for my poor theoretical program, imagining
it falling over helplessly in a situation it used to be able to
handle, and it makes me wonder: why shouldn't I just let it remember
all of the ways I've thought of to do something, so it can have them
at hand in emergencies?
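
To make that concrete, here's roughly what I mean in Python (just a
sketch; the functions and their bodies are invented for illustration):

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fallback")

def sum_squares_new(n):
    # the clever closed-form version we just thought of
    return n * (n + 1) * (2 * n + 1) // 6

def sum_squares_old(n):
    # the plodding version that worked last week
    return sum(i * i for i in range(n + 1))

def sum_squares(n):
    try:
        return sum_squares_new(n)
    except Exception:
        # log that there's apparently a problem with the new one...
        log.exception("new implementation failed; falling back")
        try:
            return sum_squares_old(n)  # we know this used to work!
        except Exception:
            log.exception("old implementation failed too")
            raise  # NOW fall flat on our face

print(sum_squares(10))  # 385 either way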

<3,
mungojelly
From: Brett Williams on
On Jun 10, 6:30 am, Tim Harig <user...(a)ilthio.net> wrote:
>
> Well, that assumes that there have been no changes in the
> structure/interfaces of the program that the old implementation requires,
> that the new implementation hasn't lost or modified any information the
> old implementation needs to do its job, and that the new implementation
> performs exactly the same task that the old implementation did (including
> any expected side effects).  If all of those assumptions are valid, and
> they should be thoroughly tested, then it wouldn't be a bad idea to fall
> back to the older implementation; otherwise, the old implementation is
> simply likely to do more harm than good.


Thanks for your reply, Tim!

I've been thinking about your point that the old implementation has
expectations of the surrounding code. It seems like there's a sort of
destructive chain reaction: Each adjustment sends out ripples of
destruction to everything that depended on the old behavior, which
necessitates more adjustments, which surprise more clients, etc. I'm
assuming there's a reason for enduring this chaos, but I'm not
grokking it yet.

I've been trying to think of how to express my idea in other forms
than try/except, because that's just an example (probably a bad
example) of what I'm thinking. I just don't understand in general why
the process of programming should be so destructive, why the program
needs to be an amnesiac who only remembers the very last way we taught
it how to do anything.

OK, here's another idea: Why not just have every version of every
routine remain available? We already do this with versions of whole
software packages: Sometimes you have to install both Foo 1.5 and
Foo 1.6 to run two things that rely on different versions. Viewing
the routines in our programs as clients, aren't we always violently
forcing them to upgrade immediately to the latest versions?
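
As a rough sketch of what I mean (Python; the routine, the versions,
and their behaviors are all made up for illustration):

# Keep every version of a routine registered side by side, the way
# Foo 1.5 and Foo 1.6 can be installed in parallel.
VERSIONS = {}

def version(name, ver):
    # decorator: file an implementation under (name, version)
    def register(fn):
        VERSIONS[(name, ver)] = fn
        return fn
    return register

@version("sort", "1.0")
def sort_1_0(xs):
    return sorted(xs)                # the original behavior

@version("sort", "1.1")
def sort_1_1(xs):
    return sorted(xs, reverse=True)  # the behavior newer callers want

def call(name, ver, *args, **kwargs):
    return VERSIONS[(name, ver)](*args, **kwargs)

print(call("sort", "1.0", [3, 1, 2]))  # [1, 2, 3] -- old clients unharmed
print(call("sort", "1.1", [3, 1, 2]))  # [3, 2, 1] -- new clients opt in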

<3,
mungojelly
From: Brett Williams on
On Jun 10, 9:09 am, Patricia Shanahan <p...(a)acm.org> wrote:
>
> I think the key issue you seem to be ignoring is the tendency for
> complexity in code to decrease maintainability. Keeping the program as
> simple as possible is one of the main objectives of a sane maintenance
> organization - to the extent that many people recommend doing periodic
> refactoring passes whose objective is to simplify the code without
> making any functional change.


Thanks for your response, Patricia! I've been thinking about what
you've said carefully.

At first what you said above about adding complexity to the code made
sense to me. But then I thought about it again, and it started to
remind me of what people often say about testing: That it complicates
the code and reduces maintainability by forcing you to write
everything twice. I disagree, of course, and I assume you do as well
(since you mention testing later in your post).


> Your idea does not bring any additional guarantees of correctness. You
> only know the old implementation works in the absence of the new
> implementation. You do not have any history at all on its behavior after
> partial execution of the new implementation.


But it seems to me the bar for success is quite low. To make sure the
bar was low, I supposed that the program in question was about to
completely fail, to just throw in the towel and go home. If we don't
do anything, the chance of failure is 100%. So given that we're
failing, why not try something desperate-- like just doing what worked
last week?


> It would create an intractable testing problem. You would not only have
> to test the new implementation. You would also have to artificially
> introduce some forms of failure, in order to test the behavior of the
> old implementation when run after an attempt to use the new
> implementation fails at various points.


Hmm, yes, there are a lot of complications with the try/except form. I
just meant that as an example, really; I don't want to get hung up on
those particular semantics.

Here's another strategy: Have all the implementations lined up, and
when the program starts have it briefly go over them and test them to
see which ones work and how well, before choosing which to use for the
rest of its operation.
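
Something like this, say (a Python sketch; the candidates, the known
cases, and the timing loop are all invented for illustration):

import time

def triangle_clever(n):
    return n * (n + 1) // 2   # the new closed-form candidate

def triangle_plain(n):
    return sum(range(n + 1))  # the old loop candidate

CANDIDATES = [triangle_clever, triangle_plain]
KNOWN_CASES = [(0, 0), (3, 6), (10, 55)]

def audition():
    # test every candidate against the known cases, time the
    # survivors, and return the fastest one that actually works
    survivors = []
    for impl in CANDIDATES:
        try:
            if all(impl(arg) == want for arg, want in KNOWN_CASES):
                start = time.perf_counter()
                for _ in range(10000):
                    impl(1000)
                survivors.append((time.perf_counter() - start, impl))
        except Exception:
            pass  # a broken candidate just doesn't make the cut
    if not survivors:
        raise RuntimeError("no working implementation survived")
    return min(survivors, key=lambda pair: pair[0])[1]

triangle = audition()  # chosen once, used for the rest of the run
print(triangle(100))   # 5050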


> Of course, the trade-off between the old and new implementation needs to
> be thought about carefully, but I believe the developers should pick one
> of them and commit to it.


There are surely many cases where it's turned out the old
implementation was just plainly faulty, and the new implementation is
better in every way. But aren't there also lots of cases where the old
implementation is healthy and functioning and useful, and just doesn't
fulfill all our present needs? Does progress always have to mean
destroying the past?


<3,
mungojelly
From: Brett Williams on
On Jun 10, 3:18 pm, "BGB / cr88192" <cr88...(a)hotmail.com> wrote:
>
> it really depends on the specifics of what one is doing...


as always :)


> if the old implementation is no longer usable or relevant, then it doesn't
> make sense to keep it around...
>
> but, if the new and old implementation deal with different cases, then it
> may make sense to have both around...


I guess I'm mostly thinking of the case where the new implementation
does pretty much the same thing as the old implementation. Like if
you write something to be faster or smaller, for instance replacing an
algorithm with an equivalent one that's more suitable. It seems to me
then you've written two versions of the same thing, except they have
slightly different characteristics. So I don't see the sense of
tossing one of them. Just as an example, one might be faster, but
take more memory. You might usually prefer the faster one. But then,
oops, you're running low on memory (like maybe you're doing my crazy
idea and your memory's full of a million versions of everything!), so
you could switch everything to the low-memory versions.
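
A toy sketch of that tradeoff (Python; the memory check is a stub,
since a real one would be platform-specific):

FIB_CACHE = {0: 0, 1: 1}

def fib_fast(n):
    # faster on repeated calls, but the cache eats memory
    if n not in FIB_CACHE:
        FIB_CACHE[n] = fib_fast(n - 1) + fib_fast(n - 2)
    return FIB_CACHE[n]

def fib_small(n):
    # slower to recompute every time, but constant memory
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def memory_is_low():
    return False  # stand-in: a real check would ask the OS

def fib(n):
    return fib_small(n) if memory_is_low() else fib_fast(n)

print(fib(30))  # 832040 either way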

But now that I think about it, if the behavior of the code is
changing, then it seems even odder to me to forget the old version.
Why do we want our code to forget how to do things? Because it's
confusing to us if our code knows too much?


> for example, in a 3D engine:
> one can have a plain (non-shader) model renderer, which is likely to be
> specialized for doing the task (rendering basic meshes with no shaders
> applied, ...);
> one can have an alternate shadered renderer, which may use less optimized
> rendering, but is more generic (as to handle shaders and a number of other
> special-effects which may be applied to the 3D model).


Yeah, that's something like what I'm talking about... and then if the
more complicated renderer can't keep up, instead of failing entirely
maybe you could switch to the basic version.
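
Maybe something like this sketch (Python; the renderers are stand-ins
that just sleep, and the frame budget is invented):

import time

FRAME_BUDGET = 1.0 / 60  # seconds per frame at 60 fps

def render_fancy(scene):
    time.sleep(0.03)   # stand-in for an expensive shadered pass

def render_basic(scene):
    time.sleep(0.005)  # stand-in for the plain non-shader pass

renderer = render_fancy

def render_frame(scene):
    global renderer
    start = time.perf_counter()
    renderer(scene)
    if time.perf_counter() - start > FRAME_BUDGET and renderer is render_fancy:
        renderer = render_basic  # can't keep up: degrade, don't die

for _ in range(3):
    render_frame("scene")  # the first slow frame triggers the downgrade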

But I mean, supposing we have this renderer as a fairly isolated
component, why not have lots of versions of it? I mean, for instance,
suppose we're developing it and we try a version out and it turns out
it makes everything look all jagged and weird! My thought is, instead
of saying "that's a bug, we meant to have it look nice," and then just
trashing it, to say, hmm, that's interesting, and tuck it away in an
organized place, call it
"renderer_that_looks_all_jagged_and_weird". Then you've got something
new in your palette.


> or, one can have 2 different means of figuring out where to make a character
> go:
> an older/simpler "turn to face and go" strategy (monster turns in the
> direction of enemy and tries to go forwards until blocked, maybe randomly
> varying angle on impact to try to work around obstacles).
>
> the use of a full-on path-finder, where the path-finder may trace a path all
> around the map and point one to the location of the target.
>
> but, each has its merits, where face-and-go may be better when either a path
> can't be found, or the path can't be navigated. so, for example, running
> into an obstacle along the way might cause the AI to fall back to the old
> strategy in an attempt to work around said obstacle, or because a direct
> path does not exist.


Yeah, I've been thinking of examples like that, like you're trying to
make the movement of a character interesting. Or trying to make
anything interesting! So any weird way of doing it is a feature. OK,
well, not every one of them. But you discover all sorts of interesting
things as you explore a space of possibilities.

It seems to me that in a lot of creative things like that, a monster
coming after you, it's hard for me to think of something that's a
bug. Like suppose there's something I tried out that made the monster
not even able to catch you at all, it's stumbling around drunk. Well,
awesome: make it so you can put a potion on the monster and it starts
to stumble like that.
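
Anyway, that face-and-go fallback seems easy enough to sketch (Python;
the path-finder is stubbed out and all the names are invented):

import math
import random

def find_path(start, goal, world):
    # stand-in for a full path-finder (imagine A* here);
    # returns a list of steps, or None when no path exists
    return None  # pretend the map has no navigable path today

def face_and_go(pos, goal):
    # old strategy: turn toward the target and take one step,
    # with a little random jitter to wiggle around obstacles
    angle = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    angle += random.uniform(-0.3, 0.3)
    return (pos[0] + math.cos(angle), pos[1] + math.sin(angle))

def next_move(pos, goal, world):
    path = find_path(pos, goal, world)
    if path:
        return path[0]             # follow the planned route
    return face_and_go(pos, goal)  # no path: fall back to the old way

print(next_move((0.0, 0.0), (5.0, 5.0), world=None))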


> or, many examples are possible also from the land of compiler-writing...


Hmm. I actually have no idea what you might mean. I don't know much
about compilers; I think about the only trick I've heard of is loop
unrolling.


<3,
mungojelly