From: o//annabee on
On Mon, 13 Mar 2006 14:59:11 +0100, Dragontamer <prtiglao(a)gmail.com> wrote:

>
> o//annabee wrote:
>> On Mon, 13 Mar 2006 05:33:18 +0100, <Chewy509(a)austarnet.com.au> wrote:
>
>> > PS. As a token of good will, I'm more than happy to update your
>> > website for you, so that's HTML v4.01 Strict w/CSS and also give it
>> > a more modern look at the same time... Once at that point, it
>> > shouldn't be too hard for you to maintain yourself.
>>
>> Thank you very much for this offer. I will have to refuse.
>> Because I am _allergic_ to standards. Sorry.
>
> Such as the GPL standard I assume?
>
> IMO, that is why I don't like the GPL. If you aren't part of the "GPL
> standard", aka have the GPL license, you lose out on benefits.
>
> There is something about this "join us" that I don't like; it is like
> an enforced standard of sorts.
>
> *sorry for the sudden change of topic in this off topic topic :)*

The way I see the GPL, it's a necessary annoyance. It's the only real
voice to stand up against patents. Patents are organized theft. It is
the imprisonment of ideas. Make no mistake: patent holders are nothing
but criminals. It would be the exact same thing if you could patent
pieces of music or pieces of math. For some reason everybody understands
that you can't patent y=x^2 and force everyone using this equation to
pay a license fee. But when it comes to computers, this is what is BEING
done.

>
> --Dragontamer
>

From: randyhyde@earthlink.net on

o//annabee wrote:
>
> The way I see the GPL, it's a necessary annoyance. It's the only real
> voice to stand up against patents.

The GPL (v2) has little to do with patents. What are you talking about?
It stands up to copyrights, not patents.

> Patents are organized theft.

Perhaps. But lack of patents is unorganized theft.

> It is the
> imprisonment of ideas.

Quite the opposite really. It is the freedom of ideas. People could
keep their ideas to themselves and not share them at all. The whole
idea of a patent is to get them to place their idea in the public
domain in exchange for a short monopoly on the idea. Perhaps you should
learn a little bit more about patents before you rail on them so
loudly.


> Make no mistake: patent holders are nothing but
> criminals.

Whatever you say. It really sounds like more of your "the world owes me
a living" attitude, though. You only gripe about patent holders because
*they* thought of something and they don't explicitly let *you* use
that idea.

> It would be the exact same thing if you could patent pieces of
> music or pieces of math. For some reason everybody understands that you
> can't patent y=x^2 and force everyone using this equation to pay a
> license fee. But when it comes to computers, this is what is BEING done.

Patents are much better than copyrights. Patents generally expire after
about 18 years (plus or minus). Copyrights seem to go on perpetually
(i.e., every time the Mickey Mouse copyright is about to expire, Disney
buys a few senators in the US and extends the copyright period).
Fortunately, this has *not* happened with patents. That makes patents
*far* preferable in my opinion.

And the alternative, people keeping their ideas to themselves, is
*very* real. There was a reason the patent system was invented in the
first place.
Cheers,
Randy Hyde

From: bob_jenkins on
randyhyde(a)earthlink.net wrote:

> For HLA v2.x, I have a very formal regression test suite that I'm
> using. Each module has two sets of test files -- one set tests code
> coverage, the other checks error conditions. I compile each file and
> compare its output against a version that I've checked by hand. That
> way I can run these thousands of tests in an automated fashion in about
> five minutes (well, so far; as the assembler gets larger I expect that
> time to expand).
>
> The code coverage test programs tend to be rather long, trying to cover
> as many paths as possible in a single source file (it is not possible
> to achieve code coverage with a single source file, however, so there
> are several such files per module).

A handful of very complicated tests covering as much code as possible
at once with no errors ... that's the right thing to do. It covers not
only all the features, but almost all the pairs of features, triples of
features, etc.
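
The compile-and-compare loop you describe is simple enough to keep
honest with a tiny driver. A rough sketch in Python (the compiler name,
the file suffixes, and the directory layout here are made-up
placeholders, not your actual suite):

    #!/usr/bin/env python3
    # Golden-file regression driver (sketch).  Compile every test file,
    # capture its output, and diff it against a hand-checked .expected file.
    import subprocess, sys
    from pathlib import Path

    def run_suite(test_dir, compiler="hla"):
        failures = 0
        for src in sorted(Path(test_dir).glob("*.hla")):
            expected = src.with_suffix(".expected")      # hand-checked output
            result = subprocess.run([compiler, str(src)],
                                    capture_output=True, text=True)
            actual = result.stdout + result.stderr
            if actual != expected.read_text():
                failures += 1
                print("FAIL", src.name)
        print(failures, "failure(s)")
        return failures

    if __name__ == "__main__":
        sys.exit(1 if run_suite(sys.argv[1]) else 0)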

There are tools to be rigorous about covering all pairs of features
(for an explicit set of features; google "all pairs testing"). They
tell you the combinations; you still need to write the tests by hand.
They reach complete pairwise coverage (of those features) with about
half as many tests as randomly constructed tests of similar complexity.
The most useful thing about those tools turns out to be forcing you to
enumerate all the features and their restrictions, plus suggesting
interesting situations to test, plus actually doing the random choices
(people aren't good at being random).
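
If anyone wants to play with the idea, the greedy heart of those tools
fits in a screenful of Python. This is only a sketch; the feature list
at the bottom is invented for illustration, and real tools layer
constraint handling on top of it:

    # Toy greedy all-pairs generator.  params maps a parameter name to its
    # list of options; the result is a list of tests (dicts) that together
    # cover every cross-parameter pair of options at least once.
    from itertools import combinations

    def allpairs(params):
        names = list(params)
        uncovered = set()
        for a, b in combinations(names, 2):
            for va in params[a]:
                for vb in params[b]:
                    uncovered.add(((a, va), (b, vb)))
        tests = []
        while uncovered:
            # seed each test with one still-uncovered pair, then fill in the
            # remaining parameters to grab as many extra pairs as possible
            (a, va), (b, vb) = next(iter(uncovered))
            test = {a: va, b: vb}
            for name in names:
                if name in test:
                    continue
                def gain(v):
                    return sum(1 for other, ov in test.items()
                               if ((name, v), (other, ov)) in uncovered
                               or ((other, ov), (name, v)) in uncovered)
                test[name] = max(params[name], key=gain)
            for (x, vx), (y, vy) in combinations(test.items(), 2):
                uncovered.discard(((x, vx), (y, vy)))
                uncovered.discard(((y, vy), (x, vx)))
            tests.append(test)
        return tests

    # invented example: three "features" of a test program to combine
    features = {"operand": ["reg", "mem", "imm"],
                "size":    ["byte", "word", "dword"],
                "prefix":  ["none", "rep"]}
    for t in allpairs(features):
        print(t)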

> The code that tests error
> conditions usually tests only *one* condition per source file. Once
> you've had one error, any further results become suspect because of
> cascading errors. It would be nice to test combinations of errors, but
> you get a combinatorial explosion when you try this, and furthermore,
> cascading errors with a *single* error condition often produce multiple
> error messages, which change as the code is modified; maintaining
> single error conditions in all these files is bad enough.

Simple tests, one for each error, because cascading errors prevent you
from combining multiple errors in the same test ... that's the right
thing to do, too. It's a pain, but it's unavoidable, and it's only
linear with the number of errors in the system.
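
The driver for those one-error-per-file tests can be just as dumb.
Something along these lines would do (Python again; the naming
convention is made up, and the real suite no doubt keys its expected
diagnostics differently):

    # Error-condition half (sketch): each *.err.hla file must make the
    # compiler fail and emit exactly the diagnostics recorded in the
    # matching *.err.expected file.  All names here are illustrative.
    import subprocess
    from pathlib import Path

    def run_error_tests(test_dir, compiler="hla"):
        bad = 0
        for src in sorted(Path(test_dir).glob("*.err.hla")):
            expected = Path(str(src).replace(".err.hla", ".err.expected"))
            result = subprocess.run([compiler, str(src)],
                                    capture_output=True, text=True)
            if result.returncode == 0 or result.stderr != expected.read_text():
                bad += 1
                print("FAIL", src.name)
        return bad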

All looks well here. Stress tests are a third useful category of
tests. Bug testcases are a fourth, differing from the complex tests in
that you can't change the bug testcases (or have to be very careful
about it), but you're free (encouraged even) to replace the complex
tests periodically.

From: randyhyde@earthlink.net on

bob_jenkins(a)burtleburtle.net wrote:
> >
> > The code coverage test programs tend to be rather long, trying to cover
> > as many paths as possible in a single source file (it is not possible
> > to achieve code coverage with a single source file, however, so there
> > are several such files per module).
>
> A handful of very complicated tests covering as much code as possible
> at once with no errors ... that's the right thing to do. It covers not
> only all the features, but almost all the pairs of features, triples of
> features, etc.

While that may be the "right" thing to do, you quickly run into
problems with combinatorial explosion.


>
> There are tools to be rigorous about covering all pairs of features
> (for an explicit set of features) (google "all pairs testing"). They
> tell you the combinations, you still need to write the tests by hand.

Do you realize how many combinations of pairs of features, triples of
features, etc., are possible in an x86 assembly language
program? Particularly when the language is as sophisticated as HLA?
Keep in mind that we're talking about 150,000 to 200,000 lines of code
for the HLA compiler and standard library. While we can talk all day
long about how valuable it is to test lots of different paths in the
code, the practical matter is that the resources don't exist to do that
with HLA.

Obviously, as a functional test, I long ago had a test file that tried
each instruction and each combination of valid operands. But that is
not at all the same thing as what you're suggesting.


> They reach complete pairwise coverage (of those features) with about
> half as many tests as randomly constructed tests of similar complexity.
> The most useful thing about those tools turns out to be forcing you to
> enumerate all the features and their restrictions, plus suggesting
> interesting situations to test, plus actually doing the random choices
> (people aren't good at being random).

Again, there is a resource problem. The idea works fine for small
applications, but when a system has thousands of interoperable
features, an O(mn) algorithm (or worse) to generate all the pairs is
intractable. So we have to do the best we can with the resources we
have.


>
> > The code that tests error
> > conditions usually tests only *one* condition per source file. Once
> > you've had one error, any further results become suspect because of
> > cascading errors. It would be nice to test combinations of errors, but
> > you get a combinatorial explosion when you try this, and furthermore,
> > cascading errors with a *single* error condition often produce multiple
> > error messages, which change as the code is modified; maintaining
> > single error conditions in all these files is bad enough.
>
> Simple tests, one for each error, because cascading errors prevent you
> from combining multiple errors in the same test ... that's the right
> thing to do, too. It's a pain, but it's unavoidable, and it's only
> linear with the number of errors in the system.

Of course. That's what my current HLA 2.0 test suite does.

>
> All looks well here. Stress tests are a third useful category of
> tests. Bug testcases are a fourth, differing from the complex tests in
> that you can't change the bug testcases (or have to be very careful
> about it), but you're free (encouraged even) to replace the complex
> tests periodically.

Not so much replace, as extend. Particularly on tests that have caused
a failure in the past. If a section of code has had a bug in the past,
it's a likely candidate for a bug in the future, too.
Cheers,
Randy Hyde

From: bob_jenkins on
> Do you realize how many combinations of pairs of features, triples of
> features, etc., are possible in an x86 assembly language
> program? Particularly when the language is as sophisticated as HLA?
> Keep in mind that we're talking about 150,000 to 200,000 lines of code
> for the HLA compiler and standard library. While we can talk all day
> long about how valuable it is to test lots of different paths in the
> code, the practical matter is that the resources don't exist to do that
> with HLA.

You're right; I've never seen pairwise testing actually applied to more
than a few dozen features. But I've also never seen it tried at that
scale. Theoretically ...

Suppose you've got 2^18 = 262144 lines of code, and suppose every line
is its own feature. And (I'm making it the problem these tools like
best) suppose the features are arranged into 2^17 pairs, where you use
one feature or the other of each pair, and the choice for each pair is
independent. That gives 4*(2^17 choose 2) pairs of feature choices to
cover, about 34 billion. The tools would produce 2*log2(2^17) + 2 = 36
tests, each with 2^17 choices made. Each test would cover (2^17 choose
2) pairs of features, about 8.6 billion, and each feature would be
tested about 18 times. That doesn't seem much harder to write than any
other way of writing tests for 262144 features.
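
For anyone who wants to check the arithmetic, a few lines of Python
reproduce those numbers:

    from math import comb

    n = 2**17                   # independent two-way choices
    print(4 * comb(n, 2))       # 34,359,476,224 option pairs to cover
    print(2 * 17 + 2)           # 36 tests
    print(comb(n, 2))           # 8,589,869,056 pairs exercised by each test
    print((2 * 17 + 2) // 2)    # each option appears in about 18 tests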