From: Tom Anderson on
On Fri, 21 May 2010, Patricia Shanahan wrote:

> Arved Sandstrom wrote:
> ...
>> 5. Your tests are themselves defective. The sad truth is that if the core
>> source code you just wrote is riddled with defects, then so probably are
>> your tests. Main take-away here is, be aware that just because all your
>> unit tests pass, some not insignificant percentage of those results are
>> wrong.
>
> Although it is not 100% effective, there is some value in writing, and
> running, the test before implementing the feature. Start testing a
> method when it is just an empty stub. At that point, a test that passes
> is either useless or has a bug in it.

The true extremos work to this rule - you must always start with a failing
test, then work to make it pass.

In fact, the way we (the programmers where I work) try to work is to write
the test first, before even writing the stub of the method, then treat the
fact that the test doesn't compile as a failure, and work in from there.
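
For example, the first thing written might be something like this (JUnit 4;
the PriceCalculator class is purely illustrative - at this point it doesn't
exist yet):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Written before PriceCalculator exists, so at first this doesn't
    // even compile - we count that as the first failing test.
    @Test
    public void appliesTenPercentDiscountAtOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
    }
}

Then we add just enough - an empty stub returning 0.0 - to make it compile,
watch the test fail, and only then write the real logic.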

tom

--
09F911029D74E35BD84156C5635688C0 -- AACS Licensing Administrator
From: Rhino on
Patricia Shanahan <pats(a)acm.org> wrote in
news:UNqdndU3ZZ7MPGvWnZ2dnUVZ_h6dnZ2d(a)earthlink.com:

> Rhino wrote:
>> Patricia Shanahan <pats(a)acm.org> wrote in
> ...
>>> You seem to be assuming that a JUnit test requires an expected
>>> result. Don't forget the assertTrue method, which lets you test
>>> arbitrary conditions.
>>>
>> I'm not really familiar with assertTrue - I'm just getting back into
>> Java and JUnit after a gap of a few years and I was never all that
>> fluent with JUnit even before the gap. I'll look at the JUnit docs
>> and see what I can learn there about the purpose and best uses of
>> assertTrue....
>
> I'll modify my suggestion from "Don't forget" to "Learn about".
>
I've already found a basic article on JUnit that talked about the
different sorts of assertXXX methods and looked at the JUnit API to get a
handle on that. It's much clearer now, thanks for the suggestion.
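
Just to make sure I've got it straight, here's the sort of thing I tried
(the names list is just invented data):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class AssertFlavoursTest {

    @Test
    public void differentKindsOfAssertions() {
        List<String> names = Arrays.asList("Anne", "Bob", "Carl");
        assertEquals(3, names.size());      // compare to an expected value
        assertTrue(names.contains("Bob"));  // test an arbitrary condition
        assertFalse(names.isEmpty());       // test a negative condition
        assertNotNull(names.get(0));        // test for non-null
    }
}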

>>
>> Thanks for the suggestion, I'm sure it will be helpful once I
>> understand it better ;-)
>
> Here's a possibly relevant example: a method that tests an
> Iterator<String> for being non-empty, strictly increasing, and free of
> null elements.
>
> /**
>  * Pass a non-empty String iterator with only non-null elements
>  * in strictly increasing order.
>  */
> private void testIteratorBehavior(Iterator<String> it) {
>     assertTrue(it.hasNext());
>     String oldElement = it.next();
>     assertNotNull(oldElement);
>     while (it.hasNext()) {
>         String newElement = it.next();
>         assertNotNull(newElement);
>         assertTrue(newElement.compareTo(oldElement) > 0);
>         oldElement = newElement;
>     }
> }
>
> Using this sort of technique you can test a structure for conforming
> to some known rules without using an expected value.
>

Thank you for the interesting suggestion. I hadn't thought of testing to
ensure that the sequence was correct or that nulls were excluded and
might have struggled a bit trying to think of ways to do that.

Am I right in assuming that it is reasonable to bypass the sequence
testing in the case of my sorted list of Locales, given that I am using a
TreeMap which already guarantees the order? I assume you are just
including these tests to cover situations where I am generating a sorted
list out of thin air (without the benefit of the TreeXXX Collection
classes) so that I can be confident my code works correctly?

I think I will include a test for nulls in the Set though, just in
case....
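
Probably something along these lines (the set contents below are stand-in
data; the real set would come out of my TreeMap):

import static org.junit.Assert.assertNotNull;

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Locale;
import java.util.Set;

import org.junit.Test;

public class LocaleSetTest {

    @Test
    public void localeSetContainsNoNulls() {
        // Stand-in data; my real code would use the TreeMap's keySet().
        Set<Locale> locales = new LinkedHashSet<Locale>(
                Arrays.asList(Locale.CANADA, Locale.FRANCE, Locale.JAPAN));
        for (Locale locale : locales) {
            assertNotNull("found a null Locale in the set", locale);
        }
    }
}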


--
Rhino
From: Patricia Shanahan on
Tom Anderson wrote:
> On Fri, 21 May 2010, Patricia Shanahan wrote:
>
>> Arved Sandstrom wrote:
>> ...
>>> 5. Your tests are themselves defective. The sad truth is that if the
>>> core source code you just wrote is riddled with defects, then so
>>> probably are your tests. Main take-away here is, be aware that just
>>> because all your unit tests pass, some not insignificant percentage
>>> of those results are wrong.
>>
>> Although it is not 100% effective, there is some value in writing, and
>> running, the test before implementing the feature. Start testing a
>> method when it is just an empty stub. At that point, a test that
>> passes is either useless or has a bug in it.
>
> The true extremos work to this rule - you must always start with a
> failing test, then work to make it pass.
>
> In fact, the way we (the programmers where I work) try to work is to
> write the test first, before even writing the stub of the method, then
> treat the fact that the test doesn't compile as a failure, and work in
> from there.

This principle is also very important for bug fixing. Any bug represents
a defect in the test suite. Fix the test suite first, and verify that
the new tests fail, before any attempt to fix the underlying bug.
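
For example, a minimal sketch (the WordUtil class and its bug are invented
for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class WordUtilTest {

    // Suppose a bug report says capitalize("") throws
    // StringIndexOutOfBoundsException. State the intended behavior in a
    // test first, and confirm it fails exactly as reported.
    @Test
    public void capitalizeHandlesEmptyString() {
        assertEquals("", WordUtil.capitalize(""));
    }
}

Only after the new test has been seen to fail should the fix to
WordUtil.capitalize itself be attempted.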

Patricia
From: Rhino on
Arved Sandstrom <dcest61(a)hotmail.com> wrote in
news:jlvJn.4710$z%6.465(a)edtnps83:

> Rhino wrote:
>> Eric Sosman <esosman(a)ieee-dot-org.invalid> wrote in
>> news:ht49d0$i91$1(a)news.eternal-september.org:
> [ SNIP ]
>
>>> There's a subtle point there, a conflict between "black box"
>>> and "clear box" testing. The only way you can *know* that a
>>> subclass inherits a non-final method rather than overriding it is to
>>> peek into the subclass' implementation (either by looking at the
>>> source or by using reflection). But what if somebody comes along
>>> next week and decides to override a method you decided not to test,
>>> on the grounds that it was not overridden?
>>>
>> My theory is rather weak so I'm not really up on the meanings of
>> "black box" and "clear box" testing, let alone the subtle differences
>> between them. Also, I'm not writing the code for anyone but myself
>> right now, except that I _would_ like to get one project's code
>> looking as professional as possible so that I could present it to a
>> prospective client or employer as a portfolio of what I can do. (And
>> then imitate that in future projects as well as gradually retrofit
>> other existing projects with what I've learned). With that in mind,
>> would a reasonable employer/client likely find it acceptable that I
>> just tested the methods I wrote and overrode myself in my classes or
>> are they going to see me as the biggest idiot since the development
>> of COBOL if I fail to observe these subtleties?
>
> No word of a lie, just the fact that you're writing tests already gets
> you some serious points with employers or clients. And the huge
> majority of people that matter will be perfectly happy with you
> writing tests for a class that focus on methods that are lexically in
> the class. That's what I do myself.
>
I have to say it is a bit frightening that there are big brownie points
for doing any testing at all: you'd think that would be absolutely
essential to any serious employer or client. I was just hoping to be
doing a reasonable amount of testing to look like I know what I'm doing.
No testing at all didn't seem like an option in any universe I want to
live in :-)

> You perhaps misunderstood Eric - he didn't mean that there is a subtle
> difference between black box and white box testing. There isn't -
> those two techniques are quite different. He simply called the
> specific point he was making subtle.
>
You're right, I did misunderstand him in exactly the way you said.

> You don't need to be a testing god - people make FT careers out of
> understanding testing. But you should know what black box and white
> box testing are, for example. The Wikipedia page on Software Testing
> is not a bad start, IMO. Just bear in mind that their terminology and
> definitions are debatable on some specifics, but this doesn't detract
> from the overall discussion (in particular the first few sentences
> under Non-Functional Testing are whacked, try to gloss over them).
>
Good points. I'll review the Wikipedia definitions and try to remember
the highlights.

>>> One thing you might do is run some of Super's unit tests on Sub
>>> instances. Another might be to include a "sanity check" test in
>>> your Sub, something that reflects on Sub and verifies that the
>>> methods you've chosen not to test are in fact inherited.
>>
>>> Finally, you've got to realize that unit testing, important as it
>>> is, is not the be-all and end-all of verifying correctness.
>>>
>> I have no problem with that at all. If there are certain things I
>> don't need to cover in unit testing, that's perfectly fine. If you or
>> anyone else reading this could point me a good summary of what is and
>> is not a concern in unit testing, that would be very helpful. Again,
>> I have effectively NO formal training, and much of the on-the-job
>> stuff I have was learned many years ago. That means that my memory of
>> the theory is very incomplete at this point. In short, I don't know
>> what the prevailing theory is on exactly what should be covered by
>> unit testing, acceptance testing, regression testing, et al. I'm not
>> going to worry about anything beyond unit testing for the moment but
>> if anyone can point me to a general - and hopefully fairly brief and
>> example-laden - discussion of the prevailing theories of testing,
>> that would be very helpful.
>
> You'll end up doing a fair bit of reading and playing with code to get
> a good handle on testing overall, but it won't take all that long to
> get a good grip on the basics.
>
> I will make a few personal recommendations, which shouldn't be taken
> as complete. This is based on going on ten years of J2EE work, so
> depending on what kind of coding you're doing YMMV...some.
>
> 1. Code reviews/inspections and static code analysis with tools _are_
> a form of testing. Keep them in mind.
>
I see that you've given some good concrete suggestions for tools to use
in upcoming paragraphs. Are any of these static code analysis tools? If
not, can you suggest some good ones? I'd love to have some feedback on my
code via code reviews too but that's a little difficult to do. It's not
realistic to post full (non-trivial) classes in the newsgroup so I just pick
and choose especially concerning problems and post snippets reflecting
them. I'm sure that you folks are all too busy to do detailed
walkthroughs of dozens of classes with hundreds of lines of code in many
of them ;-) I must see if there is a local Java user group in this area
and see if I can develop a few buddies for doing code walkthroughs....

> 2. Integration tests are as important as unit tests. In a J2EE web
> application these are indispensable - you may have hundreds or
> thousands of unit tests all pass but still have integration tests not
> pass. You'll hear a lot of people, myself included, refer to a
> specific class of integration tests as functional tests - we mean
> "application functions from the user perspective" when we use
> "functional" in that regard.
>
> Examples of integration/functional tests in a J2EE web app range all
> the way from ensuring that good things happen in the logic when you
> click that "Transmogrify" button all the way to doing a complete pass
> through a use case and making sure that your "Issue Fishing License"
> logic works in Angler Administrator 2010.
>
> Myself I use Selenium IDE/RC in conjunction with the JUnit framework
> to write these kinds of tests.
>
> 3. Code coverage - Huge, IMO. How can you know that your unit tests or
> integration tests (or even human-tester-driven acceptance tests) are
> doing any good unless you know how much of the code is actually being
> exercised? Code coverage is very simple to do, and for starters you
> can't go wrong investigating Emma or Cobertura. These simply
> instrument the Java bytecode, such that when the bytecode is executed
> (by any mechanism) coverage counts by line/branch/method/class/package
> are written to HTML or XML reports.
>
> 4. Carefully consider the issue of test data - test SQL scripts, mock
> data, in-memory databases (see http://www.mikebosch.com/?p=8) etc.
>
> 5. Your tests are themselves defective. The sad truth is that if the
> core source code you just wrote is riddled with defects, then so
> probably are your tests.

Agreed! However careful and thorough you think you are, you're bound to
have made a few unreasonable assumptions, been unaware of various gotchas
in the Java API, etc. etc.

> Main take-away here is, be aware that just
> because all your unit tests pass, some not insignificant percentage of
> those results are wrong.
>
Sad but true....

> As a side note, this is where the higher-level layer of integration
> tests also helps - it can assist in identifying flawed unit tests.
>
Agreed. Those other levels of testing - and the other people we involve
in our testing - make sure that everything anyone on the team can think
of gets covered.

Thanks for all the great suggestions, Arved!

I've been holding off on learning about the other testing tools until I
had a bit more time but maybe I really should get at least basic
familiarity with the key ones NOW. Then I can include those test results
in my "portfolio" and show employers/clients that I have a serious
professional attitude, even if my chops are not entirely as fluent as I
would like yet....

--
Rhino
From: Rhino on
Patricia Shanahan <pats(a)acm.org> wrote in
news:NtWdndKRZMMkC2vWnZ2dnUVZ_j2dnZ2d(a)earthlink.com:

> Arved Sandstrom wrote:
> ...
>> 5. Your tests are themselves defective. The sad truth is that if the
>> core source code you just wrote is riddled with defects, then so
>> probably are your tests. Main take-away here is, be aware that just
>> because all your unit tests pass, some not insignificant percentage of
>> those results are wrong.
>
> Although it is not 100% effective, there is some value in writing, and
> running, the test before implementing the feature. Start testing a
> method when it is just an empty stub. At that point, a test that passes
> is either useless or has a bug in it.
>
> Patricia
>

Fair enough. I saw an article just the other day that used a buzzword I've
already forgotten, which proposed that this was the right way to develop
code: write test cases BEFORE you even start the code and then retest
frequently as the code evolves. I can see some merit in that, even if I've
forgotten the buzzword already ;-)

--
Rhino