From: David Mark on
Richard Cornford wrote:
> On Feb 18, 6:14 pm, David Mark wrote:
>> Scott Sauyet wrote:
> <snip>
>>> I actually am a fan of test-driven design, but I don't do it
>>> with performance tests; that scares me.
>> I am sure you are _not_ talking about what I am talking about.
> <snip>
>> ... . That's the difference. I don't
>> take test results at face value. You have to understand what you
>> are looking at before you can react to them.
>>
>> In contrast, I see the various "major" efforts resorting to all
>> sorts of unexplicable voodoo based solely on "proof" provided
>> by test results with no understanding at all going into the
>> process. That's what is wrong with "test-driven"
>> design/development.
>
> Another significant issue (beyond understanding the results) is the
> question of designing the right test(s) to apply. Get the test design
> wrong and the results will be meaningless, so not understanding them
> isn't making anything worse.

Agreed. :) And I meant "inexplicable", of course. And speaking of
embarrassments (my own), I seem to have broken IE6 for some of the
TaskSpeed tests. I can't get to IE6 at the moment and am having trouble
getting a multi-IE tester installed on my work box. I know if there is
one guy on the planet who will know what I botched (likely recently), it
is you. Any clues while I scramble to get a testbed set up? I glanced
at the three tests mentioned (setHTML, insertBefore and insertAfter),
but nothing jumped out at me as IE6-incompatible.

>
> To illustrate; the conclusions drawn on this page:-
>
> <URL: http://ejohn.org/blog/most-bizarre-ie-quirk/ >
>
> - are totally incorrect because the test used (predictably) interfered
> with the process that was being examined.

No question. That domain is just full of misconceptions. :)

>
> (So is the next step going to be "test-driven test design"? ;-)

It seems like Resig is already there.


From: David Mark on
Scott Sauyet wrote:
> On Feb 14, 11:45 pm, David Mark <dmark.cins...(a)gmail.com> wrote:
>> David Mark wrote:
>>> I've updated the TaskSpeed test functions to improve performance. This
>>> necessitated some minor additions (and one change) to the OO interface
>>> as well. I am pretty happy with the interface at this point, so will
>>> set about properly documenting it in the near future.
>
> So you've learned that test-driven development is not an
> oxymoron? :-)
>
> I actually am a fan of test-driven design, but I don't do it with
> performance tests; that scares me.
>
>>> [http://www.cinsoft.net/taskspeed.html]
>
>> Opera 10.10, Windows XP on a very busy and older PC:-
>>
>> 2121 18624 9000 5172 22248 4846 4360 1109 1266 1189
>> 6140 1876 843* 798*
>>
>> I ran it a few times. This is representative. The two versions flip
>> flop randomly. Usually around a third of the purer tests. :)
>
> I can confirm similar rankings (with generally faster speeds, of
> course) on my modern machine in most recent browsers, with two
> exceptions: First, in Firefox and IE, PureDOM was faster than My
> Library. Second, in IE6, several tests fail in My Library ("setHTML",
> "insertBefore", "insertAfter".) Also note that the flip-flopping of
> the two versions might have to do with the fact that they are pointing
> at the same exact version of My Library (the one with QSA) and the
> same test code. You're running the same infrastructure twice! :-)
>
> This is great work, David. I'm very impressed.
>
> But I do have some significant caveats. Some of the tests seem to me
> to be cheating, especially when it comes to the loops. For instance,
> here is one of the functions specifications:

There's definitely no cheating going on.

>
> "append" : function(){
> // in a 500 iteration loop:
> // create a new <div> with the same critera as 'create'
> // - NOTE: rel needs to be == "foo2"
> // then append to body element (no caching)
> //
> // return then length of the matches from the selector
> "div[rel^='foo2']"
> },
>
> My Library's implementation looks like this:
>
> "append" : function() {
> var myEl = E().loadNew('div', { 'rel':'foo2' }), body =
> document.body;
> for (var i = 500; i--;) {
> myEl.loadClone().appendTo(body);
> }
> return $("div[rel^=foo2]").length;
> },
>
> This definitely involves caching some objects outside the loop.

There is a new element cloned each time. So what if it is a clone and
not a freshly created one? I saw that one of the other libraries' tests
was doing the same thing with some sort of template object. Who says
you can't clone?
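
For what it's worth, the clone-versus-create distinction comes down to
something like this in plain DOM terms (a rough sketch, not the actual
test functions; the rel attribute and the 500 count are the spec's):

  // Fresh creation: build and configure the element on every pass.
  for (var i = 500; i--;) {
    var el = document.createElement('div');
    el.setAttribute('rel', 'foo2');
    document.body.appendChild(el);
  }

  // Clone from a template: configure once, clone per pass. The
  // per-iteration work is cheaper, but a new node still enters the
  // document each time, which is what the test is measuring.
  var template = document.createElement('div');
  template.setAttribute('rel', 'foo2');
  for (var i = 500; i--;) {
    document.body.appendChild(template.cloneNode(false));
  }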

> There
> are a number of such instances of this among the test cases. My
> Library is not alone in this, but most of the other libraries mainly
> just have variable declarations outside the loop, not initialization.
> In part, this is a problem with the Task Speed design and
> specification. It would have been much better to have the testing
> loop run each library's tests the appropriate number of times rather
> than including the loop count in the specification. But that error
> should not be an invitation to abuse. I ran a version with such
> initializations moved inside the loop, and my tests average about a
> 15% performance drop for My Library, in all browsers but Opera, where
> it made no significant difference.
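
(To make that concrete: rather than each test embedding its own "for
(var i = 500; i--;)", the harness could own the count, along these
lines. This is a sketch of the alternative Scott describes, not
TaskSpeed's actual code:

  function runTest(fn, iterations) {
    var start = new Date().getTime();
    for (var i = iterations; i--;) {
      fn(); // each test does one unit of work; the loop lives here
    }
    return new Date().getTime() - start;
  }

With that design, there would be nothing loop-related for a test
author to hoist.)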
>
> But that is just one instance of a general problem. My Library is not
> alone in coding its tests to the performance metrics. The spec has
> this:
>
> "bind" : function(){
> // connect onclick to every first child li of ever ul
> (suggested: "ul > li")
> //
> // return the length of the connected nodes
> },
>
> but the YUI3 tests perform this with event delegation:
>
> "bind" : function(){
> Y.one('body').delegate('click', function() {}, 'ul > li');
> return Y.all('ul > li').size();
> },
>
> This might well be the suggested way to attach a behavior to a number
> of elements in YUI. There's much to be said for doing it in this
> manner. And yet it is pretty clearly not what was intended in the
> specification; if nothing else, it's an avoidance of what was
> presumably intended to be a loop. There's a real question of doing
> things the appropriate way.

Yes, I've mentioned this specific issue numerous times. Using
delegation when the test is trying to measure attaching multiple
listeners is bullshit (and I wouldn't expect anything less from Yahoo).
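
In plain DOM terms, the difference is roughly this (a sketch; the
handler is illustrative and, for brevity, it matches all LI elements
rather than strictly "ul > li"):

  // What the spec appears to intend: one attachment per element.
  var items = document.getElementsByTagName('li');
  for (var i = 0; i < items.length; i++) {
    items[i].onclick = function() {};
  }

  // What delegation does: one attachment, total. The per-element
  // work is deferred to click time, so the very loop the test is
  // meant to time never runs.
  document.body.onclick = function(e) {
    e = e || window.event;
    var target = e.target || e.srcElement;
    if (target.tagName && target.tagName.toLowerCase() === 'li') {
      // handle the click
    }
  };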

>
> To test this, I limited myself to 15 minutes of trying to optimize the
> jQuery tests in the same manner. I moved initialization outside the
> loop and switched to event delegation. After this brief attempt, I
> achieved speed gains between 54% and 169% in the various browsers.
> And I did this without any changes to the underlying library. I'm
> sure I could gain reasonable amounts of speed in some of the other
> libraries as well, but this sort of manipulation is wrong-headed.

You still can't make jQuery touch mine, no matter what you do (unless
you really cheat, like returning a number without any DOM manipulation!)
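
For reference, the sort of rewrite Scott describes would presumably
look something like this against the jQuery API of the era (a sketch;
the actual TaskSpeed test code may differ):

  // Spec-literal: bind() attaches a handler to each matched element.
  var bound = jQuery('ul > li').bind('click', function() {});

  // Delegated rewrite: live() registers a single document-level
  // handler instead, so almost no per-element work gets timed.
  jQuery('ul > li').live('click', function() {});
  var count = jQuery('ul > li').length;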

>
> Perhaps an updated version of TaskSpeed is in order, but it's hard to
> design a system that can't be gamed in this manner.
>
> Does your host have PHP? I would suggest hosting a dynamic version
> of this rather than relying on static files. It's easy to
> set up, and that also makes it almost trivial to add and remove
> libraries from your tests.

I have ASP and I think it has enough libraries as it is.

>
> Finally, the matter of IE6 is disappointing. This is still a widely
> used browser; I'm surprised you didn't test there before releasing the
> code.

It's not so much a release as a periodically updated page. I broke
something and due to happenstance (my multi-IE box went down recently),
I didn't get to test and find out that I broke it. No big deal, but
certainly an embarrassment. If I can get this #$@% IETester toolbar
working (or even find where it went after installation), I'll fix it
instantly. At the moment, I can't see anything obvious that I did in
the code to break IE6, but then it is a lot of code. :)

> You've often pointed out how well My Library performed without
> change when IE8 came out. Well, the flip side is that it needs to keep
> doing well at least in environments that are widely used, even as you
> make changes. All the other libraries except qoodoox did fine in IE6,
> even if all of them were ungodly slow.

Obviously, I broke something recently. It's not indicative of some
major shift that has invalidated IE6 as a viable browser. :)

It has always been a rock in IE6. I tested the builder stuff to death
in IE <= 6 just a week or two ago. Granted, these "concise" OO tests
use interfaces that were added afterward, so perhaps I crossed some
wires. Make no mistake, I will fix whatever I broke in IE6. It's a
fail until that time.

BTW, I *hate* this IETester toolbar. Doesn't appear to do _anything_ in
IE8 on XP. Literally nothing. Installs and then vanishes without a
trace, never to be heard from or seen again. :(

So, if you want to help, give me some reports on _exactly_ what happened
to you in IE6. Was there an error? If so, the TaskSpeed thing creates
sort of a quasi-tooltip to display it.
From: David Mark on
Scott Sauyet wrote:
> <snip>
>
> I can confirm similar rankings (with generally faster speeds, of
> course) on my modern machine in most recent browsers, with two
> exceptions: First, in Firefox and IE, PureDOM was faster than My
> Library. Second, in IE6, several tests fail in My Library ("setHTML",
> "insertBefore", "insertAfter".) Also note that the flip-flopping of
> the two versions might have to do with the fact that they are pointing
> at the same exact version of My Library (the one with QSA) and the
> same test code. You're running the same infrastructure twice! :-)

Not locally I wasn't (which is where I do most of my testing). I
apparently forgot to update one of the files online. It's updated now.
I don't think you'll see any big difference as these aren't
query-intensive tests.
From: David Mark on
David Mark wrote:
> <snip>
>
> BTW, I *hate* this IETester toolbar. Doesn't appear to do _anything_ in
> IE8 on XP. Literally nothing. Installs and then vanishes without a
> trace, never to be heard from or seen again. :(
>
> So, if you want to help, give me some reports on _exactly_ what happened
> to you in IE6. Was there an error? If so, the TaskSpeed thing creates
> sort of a quasi-tooltip to display it.

1. Got the IETester thing going. Problem was in my setup.
2. Tested TaskSpeed in what it considers IE6.
3. No issues, but that doesn't prove anything for sure.

I did do some object inferencing with the address bar and it sure
appears to be IE6. There's no browser sniffing involved, so ISTM that
it should also work in true-blue IE6. Let me know if that is still not
the case (and give me the error messages or at least the _exact_ list of
tests that are failing).
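
The sort of address-bar inference I mean goes something like this
(folded across lines for readability; the checks are illustrative
rules of thumb, not guarantees):

  javascript:alert([
    // document.compatMode arrived in IE6; IE5.5 lacks it
    'compatMode: ' + !!document.compatMode,
    // native XMLHttpRequest arrived in IE7, so ActiveXObject
    // without it points at IE6 or lower
    'XHR: ' + !!window.XMLHttpRequest,
    'ActiveX: ' + !!window.ActiveXObject
  ].join(' | '))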

I wonder if you ran the thing while I had a bad build up there. That
has happened a couple of times in the last week.

Also running SlickSpeed in IE5.5 at the moment. All of the other
libraries are crashing and burning. Some refused to even _load_. My
Library looks perfect (and pretty speedy) so far.

Anyone else having TaskSpeed issues in IE < 7? I'd be shocked if I
actually broke something (unless it was one of the aforementioned goofs
that was fixed instantly). I have added quite a bit in the last few
weeks, of course. I had previously tested My Library in IE5-8, doing
lots more than queries and had no issues. I wouldn't expect that the
selector engine improvements broke TaskSpeed tests in IE6.

IE5.5 (in the tester) just finished SlickSpeed. Perfect, and fast (as
expected) compared to the couple of others that managed not to throw
exceptions on every test. Running TaskSpeed on that next...
From: Scott Sauyet on
David Mark wrote:
> Scott Sauyet wrote:

>> I actually am a fan of test-driven design, but I don't do it with
>> performance tests; that scares me.
>
> I am sure you are _not_ talking about what I am talking about.  At least
> I hope not.  And what makes you think that these performance tests had
> anything to do with the changes?  FYI, they didn't.

Well, this is how your post that started this thread began:

| I've updated the TaskSpeed test functions to improve performance. This
| necessitated some minor additions (and one change) to the OO interface
| as well.

:-)

-- Scott