From: Arved Sandstrom on
Arne Vajhøj wrote:
> On 24-05-2010 17:27, Arved Sandstrom wrote:
>> Rhino wrote:
>>> Arved Sandstrom <dcest61(a)hotmail.com> wrote in
>>> news:jlvJn.4710$z%6.465(a)edtnps83:
>> [ SNIP ]
>>
>>>> 1. Code reviews/inspections and static code analysis with tools _are_
>>>> a form of testing. Keep them in mind.
>>>>
>>> I see that you've given some good concrete suggestions for tools to
>>> use in upcoming paragraphs. Are any of these static code analysis
>>> tools? If not, can you suggest some good ones? I'd love to have some
>>> feedback on my code via code reviews too but that's a little difficult
to do. It's not realistic to post full (non-trivial) classes in the
>>> newsgroup so I just pick and choose especially concerning problems and
>>> post snippets reflecting them. I'm sure that you folks are all too
>>> busy to do detailed walkthroughs of dozens of classes with hundreds of
lines of code in many of them ;-) I must see if there is a local Java
>>> user group in this area and see if I can develop a few buddies for
>>> doing code walkthroughs....
>> [ SNIP ]
>>
>> FindBugs, if you're not using it already, is a good static code
>> analyzer. In my experience, just one decent tool like that, plus the
>> requisite excellent books like Bloch's "Effective Java" and willingness
>> to spend time reading those books to understand what the analyzer tells
>> you, is all you really need.
>>
>> It's also useful to complement FindBugs with Checkstyle, which is a
>> good part of your toolkit for other purposes as well.
>
> Have you tried PMD?
>
> If yes how does that stack up against those two?
>
> Arne

I'm not able to give a strong opinion on PMD, Arne. I tried it to the
extent of downloading it, running it on the command line with some
options (selecting rulesets), and briefly inspecting the reports. The
context was that I'd already routinely been using FindBugs on the same
codebase.

My impression was that it's a competent static analyzer, but since I
already use the other two, the extra effort of going through yet another
set of reports outweighed the gains of PMD catching a few things that
FindBugs might not. Lots of overlap, IOW.
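For anyone curious what these analyzers actually flag, here's a minimal
sketch (class and method names invented for illustration) of a classic
defect that FindBugs reports as ES_COMPARING_STRINGS_WITH_EQ: comparing
String content with == instead of equals().

```java
public class StringCompare {

    // Buggy: == compares object identity, not content.
    // FindBugs flags this as ES_COMPARING_STRINGS_WITH_EQ.
    static boolean sameLabelBuggy(String a, String b) {
        return a == b;
    }

    // Fixed: equals() compares content.
    static boolean sameLabel(String a, String b) {
        return a.equals(b);
    }

    public static void main(String[] args) {
        // new String() guarantees two distinct objects with equal content
        String x = new String("report");
        String y = new String("report");
        System.out.println(sameLabelBuggy(x, y)); // false: distinct objects
        System.out.println(sameLabel(x, y));      // true: same content
    }
}
```

This is exactly the kind of thing that's cheap for a tool to catch and
easy for a human reviewer to miss.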

AHS
From: Patricia Shanahan on
Tom Anderson wrote:
> On Tue, 25 May 2010, Arne Vajhøj wrote:
>
>> On 21-05-2010 15:10, Tom Anderson wrote:
>>> On Fri, 21 May 2010, Rhino wrote:
>>>> Patricia Shanahan <pats(a)acm.org> wrote in
>>>> news:NtWdndKRZMMkC2vWnZ2dnUVZ_j2dnZ2d(a)earthlink.com:
>>>>> Arved Sandstrom wrote:
>>>>> ...
>>>>>> 5. Your tests are themselves defective. The sad truth is that if the
>>>>>> core source code you just wrote is riddled with defects, then so
>>>>>> probably are your tests. Main take-away here is, be aware that just
>>>>>> because all your unit tests pass, some not insignificant
>>>>>> percentage of
>>>>>> those results are wrong.
>>>>>
>>>>> Although it is not 100% effective, there is some value in writing,
>>>>> and running, the test before implementing the feature. Start testing
>>>>> a method when it is just an empty stub. At that point, a test that
>>>>> passes is either useless or has a bug in it.
>>>>
>>>> Fair enough. I saw an article just the other day that used a buzzword
>>>> I've already forgotten which proposed that this was the right way to
>>>> develop code: write test cases BEFORE you even start the code and then
>>>> retest frequently as the code evolves. I can see some merit in that,
>>>> even if I've forgotten the buzzword already ;-)
>>>
>>> Probably 'test-driven development', aka TDD.
>>>
>>> And it's no more of a buzzword than 'wearing a seat-belt'. It's just a
>>> bloody good idea!
>>
>> I have no doubt that:
>>
>> success(TDD) > avg(success(all methodologies))
>>
>> but I am not quite as convinced that:
>>
>> success(TDD) > avg(success(methodologies with strong focus on unit
>> tests))
>
> You may be right. I tend to think:
>
> TDD == methodologies with strong focus on unit tests
>
> Although that's probably just my biases. What are other methodologies
> that have a strong focus on unit tests? Are they test-first or
> test-after? What is a test-first method that's not TDD?

The way I see it, TDD implies that the design grows entirely from failed
tests. That leads to a situation in which one starts with a very simple
design, one that is known not to meet all the first release
requirements, and gradually adds complexity to the design, refactoring
as needed.

That works extremely well for small projects and projects with rapidly
changing requirements.

As the number of programmers grows, the cost of frequent large scale
design changes also grows. It also becomes impractical for every
programmer to understand all the code well enough to work on it directly.

For large projects with well defined requirements it may be better to
think through a large scale design that partitions the code into modules
that can be built relatively independently by sub-projects. Changes to
the interfaces between the top level modules need to be infrequent. That
requires thinking about the top levels of the final design before the
tests have even been written.

Even within a framework of a large scale design that was worked out
before any tests were written, one can still write tests before the
code they test.

Personally, I like to write unit tests and Javadoc comments at the same
time. Thinking about the tests helps me think about questions I should
be answering in the comments. What should this method do if that
parameter is null? Write a comment documenting the behavior and a test
enforcing it.
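As a minimal sketch of that habit (the class and method names are
invented for illustration): the Javadoc answers the null-parameter
question, and the test enforces the documented answer.

```java
import java.util.Locale;

public class Normalizer {

    /**
     * Normalizes a label by trimming whitespace and lower-casing it.
     *
     * @param label the label to normalize; if null, the empty string
     *              is returned
     * @return the normalized label, never null
     */
    static String normalize(String label) {
        if (label == null) {
            return "";
        }
        return label.trim().toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // The test that enforces the documented null behavior:
        if (!normalize(null).equals("")) {
            throw new AssertionError("documented null behavior violated");
        }
        System.out.println(normalize("  Report ")); // prints "report"
    }
}
```

Writing the comment and the check together means neither can quietly
drift away from the other.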

Patricia
From: Arne Vajhøj on
On 26-05-2010 04:37, Tom Anderson wrote:
> On Tue, 25 May 2010, Arne Vajhøj wrote:
>
>> On 21-05-2010 15:10, Tom Anderson wrote:
>>> On Fri, 21 May 2010, Rhino wrote:
>>>> Patricia Shanahan <pats(a)acm.org> wrote in
>>>> news:NtWdndKRZMMkC2vWnZ2dnUVZ_j2dnZ2d(a)earthlink.com:
>>>>> Arved Sandstrom wrote:
>>>>> ...
>>>>>> 5. Your tests are themselves defective. The sad truth is that if the
>>>>>> core source code you just wrote is riddled with defects, then so
>>>>>> probably are your tests. Main take-away here is, be aware that just
>>>>>> because all your unit tests pass, some not insignificant
>>>>>> percentage of
>>>>>> those results are wrong.
>>>>>
>>>>> Although it is not 100% effective, there is some value in writing,
>>>>> and running, the test before implementing the feature. Start testing
>>>>> a method when it is just an empty stub. At that point, a test that
>>>>> passes is either useless or has a bug in it.
>>>>
>>>> Fair enough. I saw an article just the other day that used a buzzword
>>>> I've already forgotten which proposed that this was the right way to
>>>> develop code: write test cases BEFORE you even start the code and then
>>>> retest frequently as the code evolves. I can see some merit in that,
>>>> even if I've forgotten the buzzword already ;-)
>>>
>>> Probably 'test-driven development', aka TDD.
>>>
>>> And it's no more of a buzzword than 'wearing a seat-belt'. It's just a
>>> bloody good idea!
>>
>> I have no doubt that:
>>
>> success(TDD) > avg(success(all methodologies))
>>
>> but I am not quite as convinced that:
>>
>> success(TDD) > avg(success(methodologies with strong focus on unit
>> tests))
>
> You may be right. I tend to think:
>
> TDD == methodologies with strong focus on unit tests
>
> Although that's probably just my biases. What are other methodologies
> that have a strong focus on unit tests? Are they test-first or
> test-after? What is a test-first method that's not TDD?

To me it is a 3 level thing:

level 1 = unit tests everything
level 2 = write the unit tests before the code
level 3 = let the unit tests drive the design

I like up to level 1.5 (unit test everything, either write the
unit tests first *or* get another developer to write the
unit tests).
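A minimal sketch of level 2, borrowing Patricia's stub-first point from
earlier in the thread (names invented for illustration): against an
empty stub, any test that passes is either useless or buggy.

```java
public class StubFirst {

    // Step 1: empty stub. The real body is written only after a
    // worthwhile test has been seen to fail against it.
    static int addStub(int a, int b) {
        return 0;
    }

    // Step 2: implementation written to make the failing test pass.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // This test passes against the stub, so it is useless: it
        // cannot tell the stub apart from real code.
        System.out.println(addStub(0, 0) == 0); // true, but proves nothing
        // A worthwhile test fails against the stub...
        System.out.println(addStub(2, 3) == 5); // false: stub caught
        // ...and passes once the real code exists.
        System.out.println(add(2, 3) == 5);     // true
    }
}
```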

Arne
From: Patricia Shanahan on
Arne Vajhøj wrote:
> On 26-05-2010 04:37, Tom Anderson wrote:
>> On Tue, 25 May 2010, Arne Vajhøj wrote:
>>
>>> On 21-05-2010 15:10, Tom Anderson wrote:
>>>> On Fri, 21 May 2010, Rhino wrote:
>>>>> Patricia Shanahan <pats(a)acm.org> wrote in
>>>>> news:NtWdndKRZMMkC2vWnZ2dnUVZ_j2dnZ2d(a)earthlink.com:
>>>>>> Arved Sandstrom wrote:
>>>>>> ...
>>>>>>> 5. Your tests are themselves defective. The sad truth is that if the
>>>>>>> core source code you just wrote is riddled with defects, then so
>>>>>>> probably are your tests. Main take-away here is, be aware that just
>>>>>>> because all your unit tests pass, some not insignificant
>>>>>>> percentage of
>>>>>>> those results are wrong.
>>>>>>
>>>>>> Although it is not 100% effective, there is some value in writing,
>>>>>> and running, the test before implementing the feature. Start testing
>>>>>> a method when it is just an empty stub. At that point, a test that
>>>>>> passes is either useless or has a bug in it.
>>>>>
>>>>> Fair enough. I saw an article just the other day that used a buzzword
>>>>> I've already forgotten which proposed that this was the right way to
>>>>> develop code: write test cases BEFORE you even start the code and then
>>>>> retest frequently as the code evolves. I can see some merit in that,
>>>>> even if I've forgotten the buzzword already ;-)
>>>>
>>>> Probably 'test-driven development', aka TDD.
>>>>
>>>> And it's no more of a buzzword than 'wearing a seat-belt'. It's just a
>>>> bloody good idea!
>>>
>>> I have no doubt that:
>>>
>>> success(TDD) > avg(success(all methodologies))
>>>
>>> but I am not quite as convinced that:
>>>
>>> success(TDD) > avg(success(methodologies with strong focus on unit
>>> tests))
>>
>> You may be right. I tend to think:
>>
>> TDD == methodologies with strong focus on unit tests
>>
>> Although that's probably just my biases. What are other methodologies
>> that have a strong focus on unit tests? Are they test-first or
>> test-after? What is a test-first method that's not TDD?
>
> To me it is a 3 level thing:
>
> level 1 = unit tests everything
> level 2 = write the unit tests before the code
> level 3 = let the unit tests drive the design
>
> I like up to level 1.5 (unit test everything, either write the
> unit tests first *or* get another developer to write the
> unit tests).

I think it depends on the nature of the project. The last non-trivial
piece of code I wrote was a simulation for my Ph.D. research. That
program existed to be changed. The best runs were those that got me
thinking about new questions, causing further changes to the program.
For that, I was at about level 2.9 - a little thinking about design
before I had a failing test, but not much.

I don't think that would work so well for a project with a few hundred
programmers.

My ideal, which I have never achieved, would be to have two sets of unit
tests, one set written by another developer and one set written by the
same person as the code under test.

Patricia
From: Tom Anderson on
On Wed, 26 May 2010, Patricia Shanahan wrote:

> Arne Vajhøj wrote:
>> On 26-05-2010 04:37, Tom Anderson wrote:
>>
>>> You may be right. I tend to think:
>>>
>>> TDD == methodologies with strong focus on unit tests
>>>
>>> Although that's probably just my biases. What are other methodologies
>>> that have a strong focus on unit tests? Are they test-first or
>>> test-after? What is a test-first method that's not TDD?
>>
>> To me it is a 3 level thing:
>>
>> level 1 = unit tests everything
>> level 2 = write the unit tests before the code
>> level 3 = let the unit tests drive the design
>>
>> I like up to level 1.5 (unit test everything, either write the
>> unit tests first *or* get another developer to write the
>> unit tests).

Ah, i think of test-driven design and test-driven development as just
different names for the same thing.

Test-driven design as a term is a misnomer anyway. You are doing design
bit-by-bit as you go along, but it's not driven by the tests. It can't be
- if we understand 'design' to mean something like 'architecture' or
'design at a level higher than source code', then it's the design which
drives the tests, because the design is what gives you "i need a class
which embodies such-and-such an idea, and exposes operations this-and-that
to its clients". Having design *driven by* tests is an impossibility.

I suppose test-driven development could be done with a big up-front
design. There's nothing contradictory there. Indeed, even with incremental
design, at every stage you have a picture of what the whole system will
look like when it's finished, even if you're only working on one little
bit of it right now. The only difference between that and up-front design
is that with up-front design, you think about it in more detail before you
start, and you give yourself less flexibility to change it once you do.
That's why i don't see a real distinction between them: even with a big
upfront design, not giving yourself flexibility is a mistake, so with that
out of the way, the only difference is how much thinking you do before you
start. That's clearly a continuum, so there can be no hard and fast
distinction between the two kinds of TDD.

> I think it depends on the nature of the project. The last non-trivial
> piece of code I wrote was a simulation for my Ph.D. research. That
> program existed to be changed. The best runs were those that got me
> thinking about new questions, causing further changes to the program.
> For that, I was at about level 2.9 - a little thinking about design
> before I had a failing test, but not much.
>
> I don't think that would work so well for a project with a few hundred
> programmers.

I'm not sure there are any methodologies at all that work well for a
project with a few hundred programmers, but that's another story.

> My ideal, which I have never achieved, would be to have two sets of unit
> tests, one set written by another developer and one set written by the
> same person as the code under test.

That would be interesting. A while ago, we were working with some
contractors who were not great programmers, and who were especially
hopeless at writing tests (they'd write them, because they knew we wanted
them, but they would often not really test anything). I wondered about
managing the interaction between our teams by having us write the tests,
and having them write the code to make the tests pass. That would have
forced them to write better code, and given us a lot more confidence in
what they were doing.

tom

--
Osteoclasts = monsters from the DEEP -- Andrew