From: Chewy509 on
Hi Guys,

This one is really for assembler/compiler authors, as well as some of
the other guys who are more involved in testing. It's not really
related to assembly, but more about the actual assemblers themselves.

How do you guys test your assemblers and/or libraries and
applications? Do you use test suites or test harnesses, scripts that
feed in good/bad input and compare against known good output, or are
there other methods that you use, e.g. release often and let the users
find the bugs**?

What about things like complete regression testing, out-of-band
testing (supplying complete garbage input to see how the
assembler/compiler/libraries handle it), source-code auditing, formal
validation, and so on?

**I know certain large software companies do this, but then again many
FOSS projects are in the same boat. ;)
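
For the garbage-input style of testing, I'm picturing something like
the sketch below (Python here; the "myasm" binary is just a stand-in
for whatever assembler is under test, not a real tool):

  # Feed random bytes to the assembler and make sure it fails
  # gracefully instead of crashing. "myasm" is a placeholder name.
  import os
  import random
  import subprocess
  import tempfile

  def fuzz_once(max_len=4096):
      data = bytes(random.randrange(256)
                   for _ in range(random.randrange(1, max_len)))
      with tempfile.NamedTemporaryFile(suffix=".asm", delete=False) as f:
          f.write(data)
          path = f.name
      # On POSIX a negative return code means "killed by a signal",
      # i.e. the assembler crashed rather than reporting an error.
      result = subprocess.run(["myasm", path],
                              capture_output=True, timeout=10)
      os.unlink(path)
      assert result.returncode >= 0, f"assembler crashed on {path}"

  for _ in range(1000):
      fuzz_once()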

--
Darran (aka Chewy509) brought to you by Google Groups!

From: T933191278 on

Chewy509(a)austarnet.com.au wrote:

> How do you guys do testing on your assemblers and/or libraries and
> applications?
I use various methods. First, I do some small tests while programming
to see if the program works as expected. Second, I assemble a sample
program with all the instructions and compare the output with other
assemblers; I also run some tests with incorrect sources to check the
error handling. Third, I use this assembler (octasm) myself, so the
test suite is all my own programs, and I also wait for other users to
report bugs.
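
The output comparison is easy to script. A rough sketch (the tool
names and command-line flags below are placeholders, not octasm's
actual options):

  # Assemble one source with two assemblers and compare the binaries
  # byte for byte. Tool names and flags are placeholders.
  import subprocess

  SOURCE = "alltests.asm"

  subprocess.run(["octasm", SOURCE, "-o", "out_a.bin"], check=True)
  subprocess.run(["otherasm", SOURCE, "-o", "out_b.bin"], check=True)

  with open("out_a.bin", "rb") as a, open("out_b.bin", "rb") as b:
      print("outputs match" if a.read() == b.read()
            else "outputs differ -- investigate")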

From: randyhyde@earthlink.net on

Chewy509(a)austarnet.com.au wrote:
> Hi Guys,
>
> This one is really for assembler/compiler authors, as well as some of
> the other guys who are more involved in testing. It's not really
> related to assembly, but more about the actual assemblers themselves.
>
> How do you guys test your assemblers and/or libraries and
> applications? Do you use test suites or test harnesses, scripts that
> feed in good/bad input and compare against known good output, or are
> there other methods that you use, e.g. release often and let the users
> find the bugs**?
>
> What about things like complete regression testing, out-of-band
> testing (supplying complete garbage input to see how the
> assembler/compiler/libraries handle it), source-code auditing, formal
> validation, and so on?

Many years ago, I did a fairly complete code coverage test on HLA v1.x.
As time passed (and the source code and language changed), the test
suite became less and less useful. It got to the point where half the
test code either didn't compile (because of language changes) or wasn't
testing anything useful (because of source code changes). Ultimately, I
stopped using that suite because it wasn't achieving anything (i.e.,
finding defects in the code). After that point, I used a set of
specific test files for the features I was working on, but no formal
test suite for the entire assembler.

Of course, for each release I usually (unless I'm in a hurry or forget)
compile every HLA sample source file I have on Webster to see if I've
broken anything major. While this doesn't test the *entire* language or
compiler by any stretch of the imagination, it does catch stupid stuff.
And making sure that the example programs still compile is kind of
important. If someone is reading AoA and the sample programs don't
compile, this is a bit of a problem.
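
That sanity pass is easy to automate; in outline it looks something
like this (the compiler invocation and directory are illustrative,
not the exact Webster layout):

  # Smoke test: try to compile every sample program and report the
  # failures. Compiler command and sample directory are illustrative.
  import pathlib
  import subprocess

  failures = []
  for src in sorted(pathlib.Path("examples").rglob("*.hla")):
      result = subprocess.run(["hla", str(src)], capture_output=True)
      if result.returncode != 0:
          failures.append(src)

  print(f"{len(failures)} sample(s) failed to compile")
  for src in failures:
      print(" ", src)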

For HLA v2.x, I have a very formal regression test suite that I'm
using. Each module has two sets of test files -- one set tests code
coverage, the other checks error conditions. I compile each file and
compare its output against a version that I've checked by hand. That
way I can run these thousands of tests in an automated fashion in about
five minutes (well, so far; as the assembler gets larger I expect that
time to expand).
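
The driver itself is mundane; it amounts to something like this (the
file layout and tool name are illustrative, not the actual HLA v2.x
tree):

  # Golden-file regression runner: run each test, capture the output,
  # and diff it against a hand-verified ".expected" file.
  import pathlib
  import subprocess

  passed = failed = 0
  for src in sorted(pathlib.Path("tests").rglob("*.asm")):
      result = subprocess.run(["hla2", str(src)],
                              capture_output=True, text=True)
      actual = result.stdout + result.stderr
      if actual == src.with_suffix(".expected").read_text():
          passed += 1
      else:
          failed += 1
          print(f"FAIL: {src}")

  print(f"{passed} passed, {failed} failed")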

The code coverage test programs tend to be rather long, trying to cover
as many paths as possible in a single source file (it is not possible
to achieve full coverage with a single source file, however, so there
are several such files per module). The code that tests error
conditions usually tests only *one* condition per source file. Once
you've had one error, any further results become suspect because of
cascading errors. It would be nice to test combinations of errors, but
you get a combinatorial explosion when you try this, and furthermore,
cascading errors with a *single* error condition often produce multiple
error messages, which change as the code is modified; maintaining
single error conditions in all these files is bad enough.
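
One way to keep those error tests from rotting (not what the HLA suite
does, which compares full output, but a reasonable variant) is to pin
only the *first* diagnostic and let the cascade vary:

  # Each bad-input file provokes exactly one error; check only the
  # first message so cascading errors can't break the test.
  # Layout and tool name are illustrative.
  import pathlib
  import subprocess

  for src in sorted(pathlib.Path("tests/errors").glob("*.asm")):
      want = src.with_suffix(".firsterr").read_text().strip()
      result = subprocess.run(["hla2", str(src)],
                              capture_output=True, text=True)
      assert result.returncode != 0, f"{src}: bad input was accepted"
      first = result.stderr.splitlines()[0] if result.stderr else ""
      assert want in first, f"{src}: expected {want!r}, got {first!r}"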

Ultimately, code coverage testing isn't *that* good. But several books
estimate that about 50% of the bugs in common applications live in code
that had never been executed before the bug was discovered. So if you
really do achieve full code coverage (a feat in itself), you'll find
about 50% of the outstanding defects in the code (in theory, at least).

Cheers,
Randy Hyde

From: randyhyde@earthlink.net on

o//annabee wrote:
> >>
> > Which is why I actually *run* these tests.
> > And what, pray tell, testing do *you* do for each release of RosAsm?
> > Cheers,
> > Randy Hyde
>
> LOL. I recompile EVERYTHING in my code, maybe 100 times a day :))
> That's maybe 1000 routines.
>
> You are living in the past, Randall Hyde.
>

Compiling the same thing over and over again doesn't prove anything.
You seem to know as much about software testing as Rene.
Which is to say, nothing at all.
Cheers,
Randy Hyde

From: o//annabee on
On Sat, 11 Mar 2006 00:55:43 +0100, randyhyde(a)earthlink.net
<randyhyde(a)earthlink.net> wrote:

>
> o//annabee wrote:
>> >>
>> > Which is why I actually *run* these tests.
>> > And what, pray tell, testing do *you* do for each release of RosAsm?
>> > Cheers,
>> > Randy Hyde
>>
>> LOL. I recompile EVERYTHING in my code, maybe 100 times a day :))
>> That's maybe 1000 routines.
>>
>> You are living in the past, Randall Hyde.
>>
>
> Compiling the same thing over and over again doesn't prove anything.
> You seem to know as much about software testing as Rene.
> Which is to say, nothing at all.

:))

Was it not you who said:

"Of course, for each release I usually (unless I'm in a hurry or forget)
compile every HLA sample source file I have on Webster to see if I've
broken anything major. While this doesn't test the *entire* language or
compiler by any stretch of the imagination, it does catch stupid stuff.

And making sure that the example programs still compile is kind of
important. "

You claim "it's important", yet "it proves nothing".

If it is important, then it must also be important to know that when using
RosAsm, all your code gets tested in this way _ALL_ the time, many,
many times each day on my PC.

And as for your assertion that it "proves nothing"...

Well, it proves that the code compiles, so that assertion is wrong.

Just like you :)

And you are wrong because such a successful compilation DOES NOT catch
"stupid stuff". It ONLY catches syntax errors.

You have no basis. And the sooner you shut up the better, because even a
beginner asm programmer can catch you out on half the things you parrot
from the script kiddies posting to Slashdot.


> Cheers,
> Randy Hyde