From: Brett Williams on 12 Jun 2010 05:58

On Jun 11, 3:17 pm, Gene <gene.ress...(a)gmail.com> wrote:
>
> A rough form of this does happen e.g. in systems that must support
> legacy functionality too complex or expensive to create in a new
> version. Where this happens, the new system merely punts to the old
> plate of spaghetti code and hopes for the best. The easiest example is
> probably the old Win16 GDI. I remember reading that since the event
> passing protocol was never defined, the code was its only
> documentation. The only workable way to build Win32 was to embed the
> original GDI code--which no one really understood by that time--down
> deep in Win32's belly. In all likelihood, it's still there. Same
> when you have an old shared library that does something useful but
> can't be updated (maybe the code is lost). The whole DLL comes along
> for a ride in the new system even if only a tiny part of its
> functionality is in use. Ergo system bloat.

Thanks for your comments, Gene! I've been learning a lot the past few
years about how software grows. It's not at all what I expected.

Like I've been following a conversation lately on comp.lang.python about
GUIs, where people are justifying the inclusion in the Python standard
library of the GUI toolkit Tkinter, which is, I guess, a wrapper around
the Tcl/Tk GUI library -- where Tcl is a whole scripting language! It's
another whole language included along with Python, just to do some GUI
stuff.

So I can imagine myself from a few years back, and I would totally be on
the side of thinking, "well, this is ridiculous, why are we including
another language along with this language, how could this possibly be an
efficient or sensible way to do things?" But now I'm reading those same
old arguments and I find myself more on the side of understanding the
genuine, built-up, irreplaceable human effort behind a software artifact.

<3, mungojelly
From: Daniel T. on 12 Jun 2010 10:56

Brett Williams <mungojelly(a)gmail.com> wrote:
> On Jun 10, 7:56 am, "Daniel T." <danie...(a)earthlink.net> wrote:
> >
> > I think the assumption in the above is that old implementation does X
> > and Y correctly but fails for A and B, while the new implementation
> > does X and A correctly but fails for Y and B. In such a case, I
> > wouldn't replace old implementation with the new. At minimum new
> > implementation must do everything that old implementation did or I
> > don't consider it a new implementation.
>
> Thanks for your reply, Daniel. I think I understand what you mean,
> but I'm not sure those are the assumptions I'm making.
>
> What I'm assuming is more like that the old implementation does some
> space correctly, and the new implementation does some space correctly,
> but that I actually have no idea what the boundaries are of either of
> those spaces. All I know is that they both include some area covered
> by some tests, and that there's some area (or some quality to its
> performance) where I know that the new one is better than the old one.
>
> I'm also assuming though that there could be something I've missed, so
> that the new implementation, as well as having the strengths that
> caused me to choose it as the new way to do things, could have an
> unforeseen weakness that causes it to fail in some situation --
> particularly, in a situation where the old implementation would have
> succeeded. So feel sorry for my poor theoretical program, imagining
> it falling over helplessly in a situation it used to be able to
> handle, and it makes me wonder, why shouldn't I just let it remember
> all of the ways I've thought of to do something, so it can have them
> at hand in emergencies?

The point remains, there are three areas of behavior:

1) Areas where the new implementation works better than the old.
2) Areas where the old implementation works better than the new.
3) Areas where they work equally well.
If you are keeping the old code in place as a fallback because you are
assuming that #2 is non-zero without empirical evidence, then you are
complicating the program for no good reason. If you are worried that #2
is non-zero but have no empirical evidence, then find some: write some
tests for which the old implementation performs better than the new one.
Lastly, even if you do have evidence that #2 is non-zero (the new
implementation fails tests that the old implementation passed), the best
solution is to modify the new implementation so that it passes those
tests, making area #2 equal zero.
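Daniel's suggestion -- find empirical evidence for area #2 by writing tests -- is easy to sketch in Python. Everything here is invented for illustration (`old_sort`, `new_sort`, and the deliberate duplicate-dropping bug); the point is only the shape of the probe: drive both implementations over the same cases and collect the ones where only the old one succeeds.

```python
def old_sort(xs):
    # stand-in for the trusted old implementation
    return sorted(xs)

def new_sort(xs):
    # stand-in for the shiny new implementation; deliberately buggy,
    # it drops duplicates, so "area #2" is non-empty
    return sorted(set(xs))

def find_area_two(cases):
    """Collect the cases where the old implementation is right and the new one is wrong."""
    area_two = []
    for xs in cases:
        expected = sorted(xs)  # the agreed-upon correct answer
        if old_sort(xs) == expected and new_sort(xs) != expected:
            area_two.append(xs)
    return area_two

cases = [[3, 1, 2], [1, 1, 2], [], [5, 5, 5]]
print(find_area_two(cases))  # -> [[1, 1, 2], [5, 5, 5]]
```

Any cases this turns up are exactly the tests Daniel says to write; fixing the new implementation until the list is empty is what shrinks area #2 to zero.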
From: Patricia Shanahan on 12 Jun 2010 12:45

Brett Williams wrote:
> On Jun 10, 9:09 am, Patricia Shanahan <p...(a)acm.org> wrote:
>> I think the key issue you seem to be ignoring is the tendency for
>> complexity in code to decrease maintainability. Keeping the program as
>> simple as possible is one of the main objectives of a sane maintenance
>> organization - to the extent that many people recommend doing
>> periodical refactoring passes whose objective is to simplify the code
>> without making any functional change.
>
> Thanks for your response, Patricia! I've been thinking about what
> you've said carefully.
>
> At first what you said above about adding complexity to the code made
> sense to me. But then I thought about it again, and it started to
> remind me of what people often say about testing: That it complicates
> the code and reduces maintainability by forcing you to write
> everything twice. I disagree, of course, and I assume you do as well
> (since you mention testing later in your post).

Good unit tests do not add complexity because they can be as modularized
as the original code. They are usually not even present in the code as
distributed. Anyone who is writing everything twice to get their unit
tests is seriously lacking in imagination. Generally, a unit test should
be checking the tested module's postconditions given known data, not
trying to reproduce the steps it took to achieve those postconditions.

I think the tendency for increased complexity to reduce maintainability
is well documented. Do you have access to a technical library? If so,
I'll get you references to some of the relevant software engineering
research papers.

>> Your idea does not bring any additional guarantees of correctness. You
>> only know the old implementation works in the absence of the new
>> implementation. You do not have any history at all on its behavior
>> after partial execution of the new implementation.
>
> But it seems to me the bar for success is quite low. To make sure the
> bar was low, I supposed that the program in question was about to
> completely fail, to just throw in the towel and go home. If we don't
> do anything, the chance of failure is 100%. So given that we're
> failing, why not try something desperate -- like just doing what worked
> last week?

You seem to be assuming that a detected failure is the worst thing that
can happen. I strongly disagree. Either a silently corrupted database or
a series of plausible but incorrect results would be far worse.

>> It would create an intractable testing problem. You would not only
>> have to test the new implementation. You would also have to
>> artificially introduce some forms of failure, in order to test the
>> behavior of the old implementation when run after an attempt to use
>> the new implementation fails at various points.
>
> Hmm, yes, there's a lot of complications of the try/except form. I
> just mean that as an example, really, I don't want to get hung up on
> those particular semantics.
>
> Here's another strategy: Have all the implementations lined up, and
> when the program starts have it briefly go over them and test them to
> see which ones work and how well, before choosing which to use for the
> rest of its operation.

What is the advantage of doing that at run time, compared to the current
practice of doing it during development?

Generally, releasing a change into a production program requires at a
minimum a demonstration that the new code passes all known tests that
the old code passes. It also has to either pass at least one test the
old code fails, or be clearly better in some other way such as being
usefully faster, or a better development base for the next series of
changes.

>> Of course, the trade-off between the old and new implementation needs
>> to be thought about carefully, but I believe the developers should
>> pick one of them and commit to it.
>
> There's surely many cases where it's turned out the old implementation
> was just plainly faulty, and the new implementation is better in every
> way. But aren't there also lots of cases where the old implementation
> is healthy and functioning and useful, and just doesn't fulfill all
> our present needs? Does progress always have to mean destroying the
> past?

If the old code is in good shape, the "new" code will often incorporate
it, with e.g. some new cases in switch statements to deal with new
features. Complete replacement usually only happens if there is
something about the design of the old code that makes it impossible to
fix the bugs or add the new features without making it unnecessarily
complicated.

Patricia
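For what it's worth, the mechanics of Brett's "line up the implementations and probe them at startup" idea are small enough to sketch in Python. All names here are invented (`old_isqrt`, `new_isqrt`, the seeded bug on zero), and Patricia's objection stands: this selection is normally done once, during development, rather than on every program start.

```python
def old_isqrt(n):
    # trusted old implementation: slow but known-good integer square root
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def new_isqrt(n):
    # hypothetical "new" implementation with a seeded bug on n == 0
    return int(n ** 0.5) if n > 0 else None

def select_implementation(candidates, self_tests):
    """Probe each candidate at startup; return the first that passes every self-test."""
    for impl in candidates:
        try:
            if all(impl(arg) == want for arg, want in self_tests):
                return impl
        except Exception:
            pass  # a crashing candidate is simply skipped
    raise RuntimeError("no usable implementation")

tests = [(0, 0), (1, 1), (9, 3), (10, 3)]
chosen = select_implementation([new_isqrt, old_isqrt], tests)
print(chosen is old_isqrt)  # -> True: the new one fails (0, 0), so the old wins
```

Note that the startup probe only covers whatever the self-tests cover, which is exactly Patricia's point about it adding no real guarantee of correctness.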
From: Ben Pfaff on 12 Jun 2010 13:14

Patricia Shanahan <pats(a)acm.org> writes:
> Anyone who is writing everything twice to get their unit tests is
> seriously lacking in imagination. Generally, a unit test should be
> checking the tested module's postconditions given known data, not
> trying to reproduce the steps it took to achieve those postconditions.

Writing everything twice can in fact be a good approach to writing unit
tests. It makes sense sometimes to write a clever implementation of an
optimized data structure, and then to write the unit tests in the form
of a simple, "obviously correct" version of that data structure plus a
driver that performs the same operations on both data structures and
compares the results.
--
"I was born lazy. I am no lazier now than I was forty years ago, but
that is because I reached the limit forty years ago. You can't go
beyond possibility." --Mark Twain
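Ben's scheme is what is often called differential or reference-model testing. A minimal Python sketch of the driver idea (the queue classes are invented examples, not from his post): a clever two-stack queue is driven in lockstep with an obviously correct list-based one, and every pop is compared.

```python
import random

class CleverQueue:
    """Amortized O(1) FIFO queue built from two stacks -- the 'optimized' version."""
    def __init__(self):
        self._in, self._out = [], []
    def push(self, x):
        self._in.append(x)
    def pop(self):
        if not self._out:
            # refill the output stack, reversing order to get FIFO behavior
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()

class ObviousQueue:
    """Slow (O(n) pop) but obviously correct reference version."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop(0)

def differential_test(steps=1000, seed=42):
    """Drive both queues with the same random operations and compare results."""
    rng = random.Random(seed)
    clever, obvious = CleverQueue(), ObviousQueue()
    size = 0
    for _ in range(steps):
        if size == 0 or rng.random() < 0.6:
            x = rng.randint(0, 99)
            clever.push(x)
            obvious.push(x)
            size += 1
        else:
            assert clever.pop() == obvious.pop()
            size -= 1
    return True

print(differential_test())  # -> True
```

The "written twice" code earns its keep precisely because the two versions are written differently: a bug would have to appear identically in both to slip through.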
From: BGB / cr88192 on 12 Jun 2010 13:16

"Brett Williams" <mungojelly(a)gmail.com> wrote in message
news:ce2aa03d-af15-4074-b696-e937ea6e004f(a)x21g2000yqa.googlegroups.com...

On Jun 10, 3:18 pm, "BGB / cr88192" <cr88...(a)hotmail.com> wrote:
>
> it really depends on the specifics of what one is doing...

<-- as always :) -->

> if the old implementation is no longer usable or relevant, then it
> doesn't make sense to keep it around...
>
> but, if the new and old implementation deal with different cases, then
> it may make sense to have both around...

<--
I guess I'm mostly thinking of the case where the new implementation
does pretty much the same thing as the old implementation. Like if you
write something to be faster or smaller, for instance replacing an
algorithm with an equivalent one that's more suitable. It seems to me
then you've written two versions of the same thing, except they have
slightly different characteristics. So I don't see the sense of tossing
one of them.

Just as an example, one might be faster, but take more memory. You might
usually prefer the faster one. But then, oops, you're running low on
memory (like maybe you're doing my crazy idea and your memory's full of
a million versions of everything!), so you could switch everything to
the low memory versions.
-->

well, if there is no real difference, then it is usually
drop-in-and-replace. but, if new code replaces old code, usually it
either is making a general improvement or is fixing something. it
usually makes little sense to keep old code active if it only does the
same thing but worse.

in C or C++, the usual strategy is to be like:

#if 0
old code...
#endif
#if 1
new code...
#endif

and maybe clean it up later. I am not sure what most Java people do,
maybe copy off backups of the old files somewhere...

<--
But now that I think about it, if the behavior of the code is changing,
then it seems even odder to me to forget the old version. Why do we want
our code to forget how to do things?
Because it's confusing to us if our code knows too much?
-->

well, it depends, but at least in my practice I tend not to reduce the
functionality of code (or at least not without some solidly good
reason).

> for example, in a 3D engine:
> one can have a plain (non-shader) model renderer, which is likely to
> be specialized for doing the task (rendering basic meshes with no
> shaders applied, ...);
> one can have an alternate shadered renderer, which may use less
> optimized rendering, but is more generic (as to handle shaders and a
> number of other special-effects which may be applied to the 3D model).

<--
yeah, that's something like what i'm talking about.. and then if the
more complicated renderer can't keep up, instead of failing entirely
maybe you could switch to the basic version.
-->

typically, but it is done manually... notably though, usually failing on
some more fancy features (such as pixel shaders not working) simply
falls back to not using them.

but, there is a reasonable tradeoff as well, like if one ends up with
multiple renderers, and the old renderer is notably bulky (such as the
SW renderer vs HW renderer case), then usually one has to go, as keeping
a SW renderer up to date is a bit of a pain...

for example, I have a customized version of Quake2, and also my own
(independent) 3D engine. in my engine, it supports both real-time
lighting via phong shaders, and also via the built-in GL lights. no
shaders means it falls back to GL lights. in the Q2 variant, it also
supports real-time lighting, but the graphics get breaky if the shaders
don't work.

I ended up disabling full real-time lighting by default, mostly because:
it somewhat reduces the overall framerate (vs lightmaps); with
low-saturation textures, it looks much worse (the colors are washed out
and grayish, whereas with a radiant-based lightmapper, it looks much
better); ...

however, I did add support for real-time shadows, which I suspect work
similarly to the HL2 shadows.
in some cases though, they may exhibit shadowing-bugs, mostly shadows
going through geometry and showing up where they are not supposed to,
but some level of hackery has reduced this issue.

but, in the Q2 variant, most features are enabled/disabled manually...

<--
but I mean, supposing we have this renderer as a fairly isolated
component, why not have lots of versions of it? I mean, for instance,
suppose we're developing it and we try a version out and it turns out it
makes everything look all jagged and weird! my thought is instead of
saying "that's a bug, we meant to have it look nice," and then just
trashing it, to say, hmm, that's interesting, and tuck it away in an
organized place, call it "renderer_that_looks_all_jagged_and_weird".
then you've got something new in your palette.
-->

I am not sure I understand what kind of practices are in use here... one
doesn't usually trash code (large scale) on account of bugs; usually one
fixes code instead. if the bug were interesting, it could be noted
somewhere...

> or, one can have 2 different means of figuring out where to make a
> character go:
> an older/simpler "turn to face and go" strategy (monster turns in the
> direction of enemy and tries to go forwards until blocked, maybe
> randomly varying angle on impact to try to work around obstacles).
>
> the use of a full-on path-finder, where the path-finder may trace a
> path all around the map and point one to the location of the target.
>
> but, each has its merits, where face-and-go may be better when either
> a path can't be found, or the path can't be navigated. so, for
> example, running into an obstacle along the way might cause the AI to
> fall back to the old strategy in an attempt to work around said
> obstacle, or because a direct path does not exist.

<--
yeah, I've been thinking of examples like that, like you're trying to
make the movement of a character interesting. or trying to make anything
interesting!
so any weird way of doing it is a feature. ok, well, not every one of
them. but you discover all sorts of interesting things as you explore a
space of possibilities. it seems to me like in a lot of creative things
like that, a monster coming after you, it's hard for me to think of
something that's a bug. like suppose there's something I tried out that
made the monster not even able to catch you at all, it's stumbling
around drunk. well, awesome: make it so you can put a potion on the
monster and it starts to stumble like that.
-->

pathfinding doesn't do much "that" interesting... in this case (tweaked
Quake2 variant), if the pathfinding works, it essentially "overrides"
some of the other logic, and essentially forces the monster to move
about like it were on a train-track or similar, but if there is no path,
or the monster runs into something, then it is automatically released
from this track, and may fall back to the default logic.

but, a little fiddling is needed, as both the pathfinder (based on the
A* strategy) and the default logic each have merits and drawbacks, and
some fiddling was needed in making it all work, and later in smoothing
out the transitions (there were issues related to things like
disengaging from following a path to acting freely, like the monster
spinning in a circle upon hitting the end of the path, ...).

note: for Q2 I ended up using a brute-force strategy to build the
waypoint nodes, namely iterating over every point on the map (at regular
steps in a cube-like manner), and probing whether a monster could
potentially stand there (actually, it probes for several different sized
monsters to see which sizes could fit there...). after this, it starts
checking between pairs of nodes, seeing if they have line of sight, are
within a sufficiently small distance, and if a monster could stand at
the midpoint. however, sometimes problematic paths are still generated
(typically going through areas a monster can't actually travel).
amusingly, at the moment flying monsters also often end up following a
path along the ground (I could either disable pathfinding for flying
monsters, or potentially build nodes for them as well, although the
worry is that this would add a huge number of nodes, since open-air
volume is much larger than on-the-ground area). if really needed,
possibly coarser stepping could be used for flying and swimming
monsters.

> or, many examples are possible also from the land of compiler-writing...

<--
Hmm. I actually have no idea what you might mean. I don't know much
about compilers; I think about the only trick I've heard of is loop
unrolling.
-->

not all in compilers is optimization... optimization is actually only a
small part of what all is involved. typically, there is a large and
complicated mess of parts, and lots of possible paths through the logic
(especially in the code generator). so, there is lots of room for
alternate ways to do essentially the same sorts of tasks...

but, I have ended up with several different parsers (mostly for
different front-end languages), several different AST representations
(some of my front-ends internally use S-Expressions, and others use an
XML-based representation), multiple ILs (generally "siblings" based on
stack-machines, but which differ in their specifics), ...

even something fairly simple like an assembler can get a little hairy in
its internals...

but, yeah...
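The fallback shape BGB describes -- follow the path while one exists, release the monster to the simpler logic otherwise -- might look roughly like this in Python (all names invented; plain BFS on a toy grid stands in for his A* search over waypoint nodes):

```python
from collections import deque

def find_path(grid, start, goal):
    """BFS path on a grid of 0 (open) / 1 (wall); returns a cell list or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk the prev-links back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and step not in prev):
                prev[step] = cell
                queue.append(step)
    return None  # no path: the monster gets released from the "track"

def next_move(grid, monster, target):
    """Prefer the pathfinder; fall back to naive face-and-go when there is no path."""
    path = find_path(grid, monster, target)
    if path and len(path) > 1:
        return path[1]  # stay on the "train track"
    r, c = monster
    tr, tc = target
    # old-style fallback: just step greedily toward the target
    return (r + (tr > r) - (tr < r), c + (tc > c) - (tc < c))

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(next_move(grid, (0, 0), (0, 2)))  # -> (1, 0): detours around the wall
```

When no path exists, the fallback happily steps toward a wall, much like the old "go forwards until blocked" behavior -- which is the point: a worse strategy that still does something when the smarter one has nothing to offer.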