From: Arved Sandstrom on 24 May 2010 17:14

Tom Anderson wrote:
> On Fri, 21 May 2010, Arved Sandstrom wrote:
>
>> 3. Code coverage - Huge, IMO. How can you know that your unit tests
>> or integration tests (or even human-tester-driven acceptance tests)
>> are doing any good unless you know how much of the code is actually
>> being exercised? Code coverage is very simple to do, and for starters
>> you can't go wrong investigating Emma or Cobertura. These simply
>> instrument the Java bytecode, such that when the bytecode is executed
>> (by any mechanism) coverage counts by line/branch/method/class/package
>> are written to HTML or XML reports.
>
> Out of interest, do those play well with other things which modify
> bytecode, like JDO/JPA enhancers?
>
> tom

That's a very good question. I've used both Emma and Cobertura with
EclipseLink, so you'd have EclipseLink dynamic weaving (change
tracking, lazy fetching) after the code coverage instrumentation; only
in the entity classes though.

Only my gut feeling, but if I were using anything that did more
elaborate bytecode manipulation than that then I figure I'd want to
make sure that the code coverage instrumentation was last.

AHS
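A minimal sketch, not from the original posts, of the kind of gap a
line/branch report from Emma or Cobertura makes visible. JUnit 4 is
assumed to be on the classpath, and the class and test below are
invented for the illustration.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class invented for this example.
class Discount {
    // Returns the discounted price; the "loyal customer" branch is the
    // interesting one for coverage purposes.
    static double apply(double price, boolean loyalCustomer) {
        if (loyalCustomer) {
            return price * 0.90;   // 10% off for loyal customers
        }
        return price;
    }
}

public class DiscountTest {

    @Test
    public void regularCustomerPaysFullPrice() {
        assertEquals(100.0, Discount.apply(100.0, false), 0.001);
    }

    // No test ever passes loyalCustomer = true, so a line/branch report
    // from an instrumented build would show the discount branch as
    // unexercised, even though the whole suite is green.
}
```

Run the instrumented build, execute the tests, and the report shows the
loyal-customer branch at zero coverage despite the passing suite, which
is exactly the signal coverage tools are there to give.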
From: Arved Sandstrom on 24 May 2010 17:27

Rhino wrote:
> Arved Sandstrom <dcest61(a)hotmail.com> wrote in
> news:jlvJn.4710$z%6.465(a)edtnps83:
> [ SNIP ]
>> 1. Code reviews/inspections and static code analysis with tools _are_
>> a form of testing. Keep them in mind.
>>
> I see that you've given some good concrete suggestions for tools to
> use in upcoming paragraphs. Are any of these static code analysis
> tools? If not, can you suggest some good ones? I'd love to have some
> feedback on my code via code reviews too, but that's a little
> difficult to do. It's not realistic to post full (non-trivial) classes
> in the newsgroup, so I just pick and choose especially concerning
> problems and post snippets reflecting them. I'm sure that you folks
> are all too busy to do detailed walkthroughs of dozens of classes with
> hundreds of lines of code in many of them ;-) I must see if there is a
> local Java user group in this area and see if I can develop a few
> buddies for doing code walkthroughs....
> [ SNIP ]

FindBugs, if you're not using it already, is a good static code
analyzer. In my experience, just one decent tool like that, plus the
requisite excellent books like Bloch's "Effective Java" and the
willingness to spend time reading those books to understand what the
analyzer tells you, is all you really need.

It's also useful to complement FindBugs with Checkstyle, which is a
good part of your toolkit for other purposes as well.

AHS
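To make the "one decent tool" point concrete, here is an invented
snippet (not from the thread) with two mistakes that compile cleanly
and tend to survive casual testing, but that a FindBugs run would
typically report.

```java
// Hypothetical class invented for illustration; both methods compile
// and appear to work in casual testing, but FindBugs flags them.
public class Suspect {

    // Comparing String objects with == tests reference identity, not
    // equality (FindBugs pattern ES_COMPARING_STRINGS_WITH_EQ).
    static boolean isAdmin(String role) {
        return role == "admin";       // should be "admin".equals(role)
    }

    // Strings are immutable, so the result of trim() is silently
    // discarded here (return value ignored).
    static String normalize(String input) {
        input.trim();                 // should be input = input.trim()
        return input;
    }
}
```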
From: Arved Sandstrom on 24 May 2010 17:36

Patricia Shanahan wrote:
> Tom Anderson wrote:
>> On Fri, 21 May 2010, Patricia Shanahan wrote:
>>
>>> Arved Sandstrom wrote:
>>> ...
>>>> 5. Your tests are themselves defective. The sad truth is that if
>>>> the core source code you just wrote is riddled with defects, then
>>>> so probably are your tests. Main take-away here is, be aware that
>>>> just because all your unit tests pass, some not insignificant
>>>> percentage of those results are wrong.
>>>
>>> Although it is not 100% effective, there is some value in writing,
>>> and running, the test before implementing the feature. Start testing
>>> a method when it is just an empty stub. At that point, a test that
>>> passes is either useless or has a bug in it.
>>
>> The true extremos work to this rule - you must always start with a
>> failing test, then work to make it pass.
>>
>> In fact, the way we (the programmers where i work) try to work is to
>> write the test first, before even writing the stub of the method,
>> then treat the fact that the test doesn't compile as a failure, and
>> work in from there.
>
> This principle is also very important for bug fixing. Any bug
> represents a defect in the test suite. Fix the test suite first, and
> verify that the new tests fail, before any attempt to fix the
> underlying bug.
>
> Patricia

This discussion brings to mind something I'm faced with at work, for
one client. Although several of the applications I work on have
sizeable numbers of unit tests, they have been allowed to obsolesce,
and there have always been big gaps in coverage. We have a push on to
add tests and fix the ones that are already there. We are also adding
functional tests.

Now, here's the rub - in many cases it's known that the underlying code
is defective - human testers have recorded defects for specific use
cases. So when fixing or adding tests as part of this work, we have to
write no small number of tests in such a way that they will fail,
because if they don't fail then the tests are wrong. :-)

AHS
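A small invented sketch of the situation described above, assuming
JUnit 4: the test encodes the behaviour the defect report calls for, so
against the current, known-defective code it has to come up red; if it
passed today, the test itself would be wrong. The Order class and the
tax scenario are made up for the example.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical code under test, carrying a defect that a human tester
// has already logged: the tax rate is never applied to the total.
class Order {
    private double subtotal;

    void addItem(double price) {
        subtotal += price;
    }

    // Known defect: should be subtotal * (1 + taxRate).
    double totalWithTax(double taxRate) {
        return subtotal;
    }
}

public class OrderTotalTest {

    @Test
    public void totalIncludesTax() {
        Order order = new Order();
        order.addItem(100.00);
        // Encodes the expected behaviour from the defect report, so it
        // fails (red) until totalWithTax() is actually fixed.
        assertEquals(115.00, order.totalWithTax(0.15), 0.001);
    }
}
```

Once the fix goes in, the same test flips to green and stays in the
suite as a regression guard.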
From: Arved Sandstrom on 24 May 2010 18:54

Rhino wrote:
>> Speaking as someone who occasionally sits in on job interviews, I'm
>> delighted when we find a candidate who's written any tests at all.
>> Most of them haven't. We don't expect people to know more than we do
>> (it would be great if they did) and we all learned as we went along
>> and wish we understood it better.
>>
> As I said to Arved elsewhere in the thread, I find it frightening that
> anyone could apply for a job as anything more than a trainee
> programmer (one who didn't know anything about programming at all and
> was expecting to be trained by the employer) without having some
> familiarity with testing and, hopefully, being able to demonstrate
> that familiarity. If that's the norm, then I feel better about my
> chances of getting a job programming in Java already :-)
> [ SNIP ]

One good thing about TDD, which has been mentioned elsewhere in this
thread (and in other threads), is that you can slide tests in as part
of the implementation, with no non-technical managers any the wiser.

The major problem with writing tests after the fact, even in reasonably
enlightened shops, is that at least three-quarters of all software
projects don't hit deadlines, for reasons that have very little to do
with the coding. In these circumstances anything that was planned for
the last phases, and appears to be non-critical, gets dispensed with.
Whereas if you start with tests, and refine them as your code gets
written, you barely impact the project schedule in the real world, but
end up with much better code, a complete set of tests, and the ability
to react better to change.

I don't have much sympathy for programmers who have never written
tests, but I do understand programmers who have no *work experience*
writing tests. The latter is quite common, because a majority of
software teams are uninterested in writing tests, or are not permitted
to "waste time" writing tests, or everyone is agreeable but they commit
the mistake I describe above. The former situation tells me that the
programmer in question doesn't do much professional reading, and can't
be bothered to spend some of their own time trying out a test
framework...which would take, like, a single evening.

AHS
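For what it's worth, the "single evening" claim is plausible; a first
JUnit test really is about this small. A minimal sketch, assuming a
JUnit 4 jar on the classpath, with all names invented for the example.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// About the smallest useful JUnit 4 test class: compile it with
// junit-4.x.jar on the classpath and run it from an IDE or with
//   java org.junit.runner.JUnitCore FirstTest
public class FirstTest {

    // A trivial method of the kind you might write on day one.
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    @Test
    public void clampPullsValuesIntoRange() {
        assertEquals(10, clamp(42, 0, 10));
        assertEquals(0, clamp(-5, 0, 10));
        assertEquals(7, clamp(7, 0, 10));
    }
}
```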
From: Tom Anderson on 24 May 2010 18:55
On Mon, 24 May 2010, Arved Sandstrom wrote:

> Patricia Shanahan wrote:
>> Tom Anderson wrote:
>>> On Fri, 21 May 2010, Patricia Shanahan wrote:
>>>
>>>> Arved Sandstrom wrote:
>>>> ...
>>>>> 5. Your tests are themselves defective. The sad truth is that if
>>>>> the core source code you just wrote is riddled with defects, then
>>>>> so probably are your tests. Main take-away here is, be aware that
>>>>> just because all your unit tests pass, some not insignificant
>>>>> percentage of those results are wrong.
>>>>
>>>> Although it is not 100% effective, there is some value in writing,
>>>> and running, the test before implementing the feature. Start
>>>> testing a method when it is just an empty stub. At that point, a
>>>> test that passes is either useless or has a bug in it.
>>>
>>> The true extremos work to this rule - you must always start with a
>>> failing test, then work to make it pass.
>>>
>>> In fact, the way we (the programmers where i work) try to work is to
>>> write the test first, before even writing the stub of the method,
>>> then treat the fact that the test doesn't compile as a failure, and
>>> work in from there.
>>
>> This principle is also very important for bug fixing. Any bug
>> represents a defect in the test suite. Fix the test suite first, and
>> verify that the new tests fail, before any attempt to fix the
>> underlying bug.
>
> This discussion brings to mind something I'm faced with at work, for
> one client. Although several of the applications I work on have
> sizeable numbers of unit tests, they have been allowed to obsolesce,
> and there have always been big gaps in coverage. We have a push on to
> add tests and fix the ones that are already there. We are also adding
> functional tests.
>
> Now, here's the rub - in many cases it's known that the underlying
> code is defective - human testers have recorded defects for specific
> use cases. So when fixing or adding tests as part of this work, we
> have to write no small number of tests in such a way that they will
> fail, because if they don't fail then the tests are wrong. :-)

Well, at least that makes it easy to know you've got the test right!

Seriously, folks, this is an old point:

http://c2.com/cgi-bin/wiki?CaptureBugsWithTests

tom

--
Tech - No Babble