From: Ivan S on
On Feb 14, 11:31 pm, David Mark <dmark.cins...(a)> wrote:
> Andrew Poulos wrote:
> > On 13/02/2010 11:55 PM, Ivan S wrote:
> >> On Feb 12, 6:21 pm, Scott Sauyet<scott.sau...(a)>  wrote:
> >>>
> >> Latest Opera (10.10) has a problem with Mylib without QSA and Dojo
> >> (all tests returned error).
> Interesting.  First I've heard of that.  I just did a fairly involved
> update, so perhaps I broke something.  What platform?  It appears to be
> fine (if inexplicably slow) on Windows.

XP, but that's fixed now. There was some problem with the Opera cache
(I'm not sure why that happened, because it was my first time on that
page, so there should have been no cache). I cleared it and now the
tests run for all libraries.

There are no unusually large times for Mylib, except for the
":contains" selectors (if I remember correctly, those are slow in most
browsers).
From: Scott Sauyet on
On Feb 14, 5:42 pm, David Mark <dmark.cins...(a)> wrote:
> Andrew Poulos wrote:
> > On 13/02/2010 11:55 PM, Ivan S wrote:
> >> On Feb 12, 6:21 pm, Scott Sauyet<scott.sau...(a)>  wrote:
> >>>
> >> Latest Opera (10.10) has a problem with Mylib without QSA and Dojo
> >> (all tests returned error).
> I see.  It was a new test page.  I assume the error was in the page
> itself and is now fixed. (?)

In a subsequent post [1], Ivan replies that it was an odd issue with
the Opera cache. That page has been static since I announced it here,
and I rather doubt that the URL is one that someone might stumble
across. :-)

> And I see this new test page uses _much_ higher numbers (and different
> units).  I thought I was seeing things before I read the rest of the
> thread.  :)

I'm wondering if I should put something right on the page to point out
the different units; I'd rather not change any more of the original
than is necessary, which is why the totals row is reported in
milliseconds even though the individual items are in microseconds.
But yes, those numbers can be jarring at first!
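For what it's worth, the unit split could be sketched like this (plain
JS, with invented function names; the page's real formatting code may
differ): items are reported in microseconds, while the totals row stays
in milliseconds to match the original layout.

```javascript
// Format one test's duration, given in microseconds.
function formatItemMicro(us) {
  return us + ' \u00b5s';
}

// Sum the per-item microsecond durations and report milliseconds,
// as the original SlickSpeed totals row did.
function formatTotalMilli(durationsUs) {
  var totalUs = durationsUs.reduce(function (sum, us) {
    return sum + us;
  }, 0);
  return (totalUs / 1000).toFixed(1) + ' ms';
}
```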

-- Scott
From: Scott Sauyet on
On Feb 14, 5:48 pm, David Mark <dmark.cins...(a)> wrote:
> Scott Sauyet wrote:

>> Please remember what I've said in this discussion: My Library performs
>> very well, and is among the faster ones in many of the tests I've
>> seen.
> Yes, I think that is self-evident.  Thank you for your honesty in that.

I think you'll find that I'm quite empirical about these things; I
really look for the evidence, which is what got me testing these
claims in the first place.

>> But you've significantly oversold its speed in your original
>> and subsequent posts.
> Not according to my testing, results of which I posted in droves when I
> had the chance.  I test a much wider variety of environments that...
> well, anybody.  And I post the results as I see fit (a lot lately).
> What else do you want?

According to my tests, My Library is getting faster, but is not the
fastest (at least not yet!) in the environments I care about. I
honestly don't give a damn about FF1, as I don't build pages I expect
end users to visit with that browser. My apps are also not yet aimed
at smartphones, but I can see that coming, so it's at least a
legitimate concern. What I mostly care about are the browsers used by
my target audience: IE6, 7, and 8, recent versions of FF, Chrome, and
Safari. And although I don't see it much in my logs, I can't resist
checking recent versions of Opera too, out of an old loyalty to that
browser.
>> My Library is not the undisputed fastest library for SlickSpeed.
> It is as far as I can see.  By far.  But who cares?  The TaskSpeed
> results are more meaningful for reasons that should be obvious.  

Yes, and even the speed results are only one factor of many in
choosing a tool.

> Of
> course, they have their problems too.  The rules for the test functions
> are just not clearly defined, so the individual renditions vary wildly.

My main issue with the TaskSpeed tests is that it becomes far too easy
for test-writers to optimize the code for the tests rather than to use
their libraries as intended. I'm not sure there's any way around
that, because, unlike with the SlickSpeed tests, it takes someone
well versed in the library to write the test code appropriately; and
those people are likely the ones most invested in having the library
perform well in competition.

>> That's all I've said, but I've given
>> significant backup to my words by posting up-to-date tests others can
>> run in their own environments.
> Great.  And I look forward to seeing the results from your version of
> SlickSpeed, but I will want to see the new code first as it appears it
> has been a subject of debate here of late.

I'm not sure if I will put out another version or not. The only issue
I've seen under discussion is the one I raised about the extra work in
the test loop. It most certainly affects the absolute speeds, but
should slow down all libraries by an equal amount.
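In rough form, that loop looks something like this (a sketch with
hypothetical names, not the actual test harness code): any fixed
bookkeeping done inside the loop is paid by every engine alike, so it
shifts the absolute numbers without changing the relative ordering.

```javascript
// Sketch of a SlickSpeed-style timing loop. "engine" stands in for
// whatever selector function a given library exposes.
function timeSelector(engine, selector, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    // Any extra per-iteration work here is identical for all engines.
    engine(selector);
  }
  return Date.now() - start; // elapsed milliseconds
}
```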

-- Scott
From: Scott Sauyet on
On Feb 14, 6:32 pm, David Mark <dmark.cins...(a)> wrote:
> Scott Sauyet wrote:
> I take issue with the inclusion of tests that are not supported by all
> of the libraries.  Doesn't make a lot of sense as not every library
> claims to support the exact same set of selectors.  For example, 2n and
> 2n + 1 are things I didn't feel like bothering with.  I just can't see
> getting that silly with this stuff.  There are certainly many other CSS3
> selectors that I do not (and will not) support for querying.  If you
> want more than what is supported, write an add-on (it's very easy to do
> as the QSA addition illustrates).

I would definitely prefer not to mess with the list of selectors,
unless it's to add some weighting scheme based upon real-world selector
usage. I didn't write the list; it's the one that was in the early
versions of SlickSpeed and has been copied into many other versions.
If you want to host an altered version, as I said:

>> There is a link in the footer to a zip file containing the PHP code
>> used.

Changing selectors is very easy. There is a text file in the
distribution -- "selectors.list" -- containing one selector per line.
(If you don't have a PHP host available, I'm willing to post a version
with the selectors you choose.) The mechanism to deal with
unavailable selectors is naive, perhaps, but does its job well
enough: it simply highlights every row where the number of results
varies between libraries, and highlights any individual test that throws
an error. Intentionally or not, the appearance of SlickSpeed did help
coalesce the minimum set of selectors libraries tended to support.
That's a good thing for developers looking to use one or the other.
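A minimal sketch of both pieces, parsing "selectors.list" and flagging
disagreement, in plain JS (the actual distribution is PHP plus
client-side code; these function names are invented for illustration):

```javascript
// Read a SlickSpeed-style "selectors.list": one selector per line,
// blank lines ignored.
function parseSelectorsList(text) {
  return text.split(/\r?\n/)
    .map(function (line) { return line.trim(); })
    .filter(function (line) { return line.length > 0; });
}

// A result row is flagged when the libraries disagree on match count.
function rowDisagrees(counts) {
  return counts.some(function (n) { return n !== counts[0]; });
}
```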

>> The big news is that in these tests, in all browsers, except IE6 and
>> IE8 in compatibility mode, My Library (QSA) was the fastest for a
>> majority of the selectors.
> I'm not surprised.  Nor am I surprised that broken MSHTML
> implementations are the exceptions.  Those are where the wheels fall off
> for every one of the others.  Put a few choice attribute-related rules
> in and they will predictably break down and veer off the course.  And
> these are obviously selectors they assert to support (and have asserted
> so for some time).  That's not good, especially when Web developers are
> now being conditioned to "forget" IE < 8 (and as we know, IE8
> compatibility mode is roughly equivalent to IE7).

Well, this is a test of selector speed. I haven't seen the other
libraries having problems with the attribute-based selectors. I know
there are other significant errors with other parts of attribute
handling.

>> But in none was it the overall fastest.  JQuery was the fastest in
>> everything but IE6, where it came in third behind Dojo and MooTools.
>> In many browsers, if two of the selectors were optimized to match the
>> speed of the competition, My Library (QSA) would have been about the
>> fastest overall library.  Those were the two selectors with
>> ":contains": "h1[id]:contains(Selectors)" and
>> "p:contains(selectors)".  In the various IE's there was a different
>> issue, "p:nth-child(even/odd)" were wrong in both versions of My
>> Library, and were significantly slower, too.
> The even/odd discrepancy, which did not show up in my (obviously
> incomplete) testing is a legitimate beef.  Apparently I managed to make
> those both slow and incorrect.  It can happen.  I'll grab your test page
> as an example and fix it when I get a chance.  Will also announce that
> those two are broken in my forum.  As far as I am concerned, the results
> are disqualified until those two are fixed.

That's an odd statement. The results still stand. They have to do
with the code that was available on the day they ran. As things are
fixed and new tests are released, there will be new results. But
MooTools can't disqualify the results because they haven't yet gotten
around to optimizing "tag.class". Nor can you.

> However, the inclusion of not:, 2n, 2n + 1 is misleading as I never
> claimed to support those.  That's why they are absent from my test page.
>  Well, the first was never there in the first place.  I'm _not_ adding
> that.  :)

I don't find much practical use for the general "A n + B" syntax, but
the even/odd "2n"/"2n + 1" selectors have been quite helpful to me.
Almost anytime I use selector engines, though, I find myself using
":not" a fair bit. Obviously it's up to you what you want to support,
but I'd urge you to reconsider.
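For reference, the ":nth-child(An+B)" rule that the even/odd keywords
abbreviate can be stated in a few lines (a sketch of the CSS semantics,
not any library's actual implementation): positions are 1-based, and
position p matches when p = A*n + B for some integer n >= 0.

```javascript
// ":nth-child(An+B)" matching rule. "even" is shorthand for 2n,
// "odd" for 2n+1.
function nthChildMatches(a, b, position) {
  if (a === 0) return position === b; // e.g. :nth-child(3)
  var n = (position - b) / a;
  return n >= 0 && Math.floor(n) === n; // non-negative integer n
}
```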

>> One other place where My Library might be able to do a little catch-up
>> is with some of the most commonly used selectors; jQuery is fastest,
>> and, in some environments, significantly faster than the competition,
>> at "tag" and "#id", which probably account for a large portion of the
>> selectors used in daily practice.
> In my testing, it has been shown to be relatively fast at those sorts of
> simple queries.  The QSA version doesn't hand those off though (at least
> not at the moment).  I've been considering doing that in the QSA add-on.

I'm surprised the native QSA engines aren't faster at these,
especially at "#id". If an external library can do a quick switch to
getElementById, you'd think the native engines could also do this in
order to speed things up.
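Such a shortcut might look like this (purely illustrative; I am not
claiming any engine dispatches exactly this way):

```javascript
// Hypothetical fast-path dispatch: recognize the trivial selector
// shapes before falling back to a full parse.
var ID_ONLY = /^#([\w-]+)$/;
var TAG_ONLY = /^[a-zA-Z]+$/;

function fastPathFor(selector) {
  if (ID_ONLY.test(selector)) return 'getElementById';
  if (TAG_ONLY.test(selector)) return 'getElementsByTagName';
  return 'fullParse';
}
```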

>> The other point to make is that we've pretty much hit the point where
>> all the libraries are fast enough for most DOM tasks needed, and
>> especially for querying.
> The most important thing to take out of all of this is to not use
> queries for anything.  It's the worst possible strategy, popular or not.

I'm curious as to why you (and others here, I've noted) say this.
I've found queries to be an incredibly useful tool, especially for
dynamic pages. What is the objection?

>> So although there will always be some
>> bragging rights over fastest speed, the raw speeds are likely to
>> become less and less relevant.
> Like I've been saying, the TaskSpeed stuff is more compelling.

It is more compelling, but more subject to manipulation, I'm afraid.
Still, it's worth pursuing, as long as we keep in mind that speed is
only one of a number of important concerns.

-- Scott
From: Scott Sauyet on
On Feb 14, 7:23 pm, David Mark <dmark.cins...(a)> wrote:
> Scott Sauyet wrote:
> > Based on the SlickSpeed tests John-David Dalton recently demonstrated,
> > I've created my own new version of SlickSpeed.
> Also, I am not sure why you guys are using API.getEBCS over and over.
> The extra dot operation is unnecessary.  The whole reason the (not
> recommended for GP use) $ is in there is for tests like these.  The
> extra dot operation on each call just clouds the issue as you would
> normally use something like this:-
> var getEBCS = API.getEBCS;
> at the start.  The API is not made up of methods in the strict sense.
>   The API is simply a "namespace" object containing the supported
> functions (which vary according to the environment).  They are functions
> that don't refer to - this - at all.

That makes sense, and I'll probably do that in future versions.
Unfortunately, I will have to add it to the mylib.js file, as I don't
want to adjust the simple configuration that exists right now, just
lines like this in a config file:

[MooTools 1.2.4]
file = "mootools-yui-compressed.js"
function = "$$"

[My Library]
file = "mylib-min.js"
function = "API.getEBCS"

But I am bothered by "The whole reason the (not recommended for GP
use) $ is in there is for tests like these." Having code that you
expect to be used for tests but not for general purposes strikes me as
an uncomfortable sort of pandering to the tests.
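For clarity, here is the aliasing pattern David describes, with a
stand-in API object since the real mylib.js is not reproduced here. It
works because the API functions don't refer to `this`; the object is
only a namespace, so the function can be copied to a local variable.

```javascript
// Stand-in for the library's namespace object (illustrative only).
var API = {
  getEBCS: function (selector) { return selector.length; }
};

// One property lookup, done once outside the timed loop,
// instead of "API.getEBCS(...)" on every iteration.
var getEBCS = API.getEBCS;

var total = 0;
for (var i = 0; i < 3; i++) {
  total += getEBCS('div p');
}
```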

-- Scott