From: Scott Sauyet
On Jan 27, 9:02 pm, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
> I'm not sure how myLib can be faster than pure dom???

In my testing, several libraries are at least sometimes faster than
the "Pure DOM" methods, which makes me wonder about the quality of the
implementation of Pure DOM.

-- Scott
From: Andrew Poulos
On 28/01/2010 3:23 PM, Scott Sauyet wrote:
> On Jan 27, 9:02 pm, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
>> I'm not sure how myLib can be faster than pure dom???
>
> In my testing, several libraries are at least sometimes faster than
> the "Pure DOM" methods, which makes me wonder about the quality of the
> implementation of Pure DOM.

Or maybe it's the quality of the coding of the "pure DOM" methods?
It's interesting that the author specifically states that the source
for their "pure DOM" methods is unavailable.

Andrew Poulos
From: Scott Sauyet
On Jan 28, 12:14 am, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
> On 28/01/2010 3:23 PM, Scott Sauyet wrote:
>
>> On Jan 27, 9:02 pm, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
>>> I'm not sure how myLib can be faster than pure dom???
>
>> In my testing, several libraries are at least sometimes faster than
>> the "Pure DOM" methods, which makes me wonder about the quality of the
>> implementation of Pure DOM.
>
> Or maybe it's the quality of the coding of the "pure DOM" methods?

That's precisely what I meant by the implementation.

> It's interesting that the author specifically states that the
> source for their "pure DOM" methods is unavailable.

I think you misunderstood this quote from the taskspeed page:

| The 'PureDom' tests are written as a minimal abstract utility
| API to accomplish these tasks, and are included as a baseline
| measurement of the compared libraries. It currently is not a
| library available for download or use.

That does not mean that you can't see it. It's simply meant to be an
efficient, library-agnostic bit of code. It is included with the
tests, but it's not available as a stand-alone library in the manner
that Dojo, jQuery, MooTools, My Library, Prototype, qooxdoo, and YUI
are.

The test code is available at

http://dante.dojotoolkit.org/taskspeed/tests/pure-tests.js

and the minimal library used is at

http://dante.dojotoolkit.org/taskspeed/frameworks/webreflection.js

The test code looks like you would expect, with pure DOM code like
this:

(node = a[j]).parentNode.insertBefore(
    p.cloneNode(true).appendChild(text.cloneNode(true))
        .parentNode,
    node.nextSibling
);


The library contains a utility object consisting of four functions:
attachEvent, detachEvent, (Array) indexOf, and a small replacement for
or wrapper around querySelectorAll. Actually, that last looks a
little strange to me:

getSimple: document.createElement("p").querySelectorAll && false ?
    function (selector) {
        return this.querySelectorAll(selector);
    } :
    function (selector) {
        // lightweight implementation here
    }

Am I crazy or does that "&& false" mean that the first branch will
never be chosen? Perhaps that's the culprit?
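
A quick check (my own sketch, not from the TaskSpeed source) shows
why that first branch is dead:

// "expr && false" is false whatever expr is, so the ternary can
// only ever take its second operand.
var hasQSA = !!document.createElement("p").querySelectorAll;
var pick = hasQSA && false ? "native branch" : "fallback branch";
// pick === "fallback branch" in every browser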

-- Scott
From: RobG
On Jan 29, 1:24 am, Scott Sauyet <scott.sau...(a)gmail.com> wrote:
> On Jan 28, 12:14 am, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
>
> > On 28/01/2010 3:23 PM, Scott Sauyet wrote:
>
> >> On Jan 27, 9:02 pm, Andrew Poulos <ap_p...(a)hotmail.com> wrote:
> >>> I'm not sure how myLib can be faster than pure dom???
>
> >> In my testing, several libraries are at least sometimes faster than
> >> the "Pure DOM" methods, which makes me wonder about the quality of the
> >> implementation of Pure DOM.
[...]
> The test code is available at
>
> http://dante.dojotoolkit.org/taskspeed/tests/pure-tests.js
>
> and the minimal library used is at
>
> http://dante.dojotoolkit.org/taskspeed/frameworks/webreflection.js
>
> The test code looks like you would expect, with pure DOM code like
> this:
>
> (node = a[j]).parentNode.insertBefore(
>     p.cloneNode(true).appendChild(text.cloneNode(true))
>         .parentNode,
>     node.nextSibling
> );

Not exactly what I'd expect. The text node should be appended to the
p earlier so there's no repeated clone, append, step-up-the-DOM.
Optimising as suggested gives a 25% speed boost in Fx and 10% in
IE 6.
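
For example (my sketch, assuming p and text are the nodes the test
creates up front):

// append the text to the p once, before the loop ...
p.appendChild(text);

// ... so each iteration needs only a single deep clone:
(node = a[j]).parentNode.insertBefore(
    p.cloneNode(true), node.nextSibling
);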

The same slow logic is used in the make function (my wrapping):


"make": function(){
for(var
d = document, body = d.body,
ul = d.createElement("ul"),
one = d.createElement("li")
.appendChild(d.createTextNode("one"))
.parentNode,
two = d.createElement("li")
.appendChild(d.createTextNode("two"))
.parentNode,
three= d.createElement("li")
.appendChild(d.createTextNode("three"))
.parentNode,
i = 0,
fromcode;
i < 250; ++i
){
fromcode = ul.cloneNode(true);
fromcode.id = "setid" + i;
fromcode.className = "fromcode";
fromcode.appendChild(one.cloneNode(true));
fromcode.appendChild(two.cloneNode(true));
fromcode.appendChild(three.cloneNode(true));
body.appendChild(fromcode);
};
return utility.getSimple
.call(body, "ul.fromcode").length;
},


Note the repetitious clone/append/step-up where a single clone would
have done the job - compare it to the jQuery code used:

$("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>")

Here all the elements are created in one go, so the two are hardly
comparable. The DOM code is doing 4 times the work (but still runs
in half the time of jQuery 1.4). Optimising out the extra work makes
it run about 15% faster in Firefox, and twice as fast in IE 6.
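
That is, build the complete <ul> once and take a single deep clone
per iteration - something like this (my rewrite, reusing the same
variables):

// build the full <ul> with its three <li> children just once
ul.appendChild(one);
ul.appendChild(two);
ul.appendChild(three);

for (i = 0; i < 250; ++i) {
    // one deep clone copies the <li> children as well
    fromcode = ul.cloneNode(true);
    fromcode.id = "setid" + i;
    fromcode.className = "fromcode";
    body.appendChild(fromcode);
}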

Note also that a selector is used to count the nodes added to the
document at the end and that the speed of this count is included in
the test. Why is selector speed allowed to influence tests of element
creation speed?


> The library contains a utility object consisting of four functions:
> attachEvent, detachEvent, (Array) indexOf, and a small replacement for
> or wrapper around querySelectorAll. Actually, that last looks a
> little strange to me:
>
> getSimple:document.createElement("p").querySelectorAll&&false?
> function(selector){
> return this.querySelectorAll(selector);
> }:
> function(selector){
> // lightweight implementation here
> }
>
> Am I crazy or does that "&& false" mean that the first branch will
> never be chosen?

Good find.


> Perhaps that's the culprit?

Regardless, it doesn't seem sensible to use a lightweight selector
engine when the intention is to compare selector engines to "pure
DOM" (which should mean ad hoc functions). The only selectors in the
tests are:

1. ul.fromcode
2. div.added

A simple switch statement would have done the trick. The "pure DOM"
code doesn't leverage the browser-native getElementsByClassName
method where available; it always uses a for loop and a RegExp. Nor
does it leverage the fact that DOM collections are live; it fetches
the collection afresh every time. This is critical because a
selector query is included in nearly all the tests, so its
performance affects tests where it is not the feature being tested.
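
Something along these lines would have done (my sketch; byClass is a
made-up helper, not part of the TaskSpeed code):

function getSimple(selector) {
    // only two selectors ever reach this in the tests
    switch (selector) {
        case "ul.fromcode": return byClass(this, "UL", "fromcode");
        case "div.added":   return byClass(this, "DIV", "added");
    }
}

function byClass(root, tagName, className) {
    var result = [], nodes, i, len;
    if (root.getElementsByClassName) {
        // browser-native method where available
        nodes = root.getElementsByClassName(className);
        for (i = 0, len = nodes.length; i < len; i++) {
            if (nodes[i].tagName === tagName) result.push(nodes[i]);
        }
    } else {
        // fallback: tag name collection filtered by a RegExp
        var re = new RegExp("(^|\\s)" + className + "(\\s|$)");
        nodes = root.getElementsByTagName(tagName);
        for (i = 0, len = nodes.length; i < len; i++) {
            if (re.test(nodes[i].className)) result.push(nodes[i]);
        }
    }
    return result;
}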

There are a number of optimisations that could quite easily be added
to "pure DOM", and the tests themselves do not accurately target the
features they are trying to test in some (many?) cases.


--
Rob
From: Andrea Giammarchi
I love how people keep thinking about how many cheats I could have
added to PureDOM ... I used native everything at the beginning and
people complained that "obviously libraries have better selector
engines" ...

I have nullified querySelectorAll on purpose (which does NOT produce
a live object in any case, so all the latest considerations about
cached live objects are superfluous and nothing new; I posted about
this stuff ages ago on WebReflection) and people keep thinking I am
an idiot, rather than simply removing that &&false, which is clearly
a statement "nullifier" (as ||true is a statement "forcer").

That was the easiest way to test both getSimple and the native
method ... but you guys here are too clever to get this, aren't you?

About tricky code to speed up some appendChild calls and the BORING
challenge VS innerHTML (e.g.
$("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>")),
I don't think, after a year with libraries still behind, there's
much more to say about these topics.

If you want to tweak PureDOM, convinced you can speed it up, well,
you have discovered HOT WATER!!! Good stuff, huh?

The meaning of PureDOM is simple: to provide a comparative basic
manual approach, and till now it has demonstrated that no framework
is able to perform daily tasks faster than native DOM, or at least
those tasks considered in TaskSpeed.

If you read the tasks carefully, you will realize that if a task
says: "... and for each node, insert this text: 'whatever' ...", the
PureDOM code simply creates EACH NODE, and it inserts FOR EACH NODE
the text "whatever" ... but let's cheat and feel cool, that's the
point, right?

Everybody else got it, but this ML is still talking about PureDOM
and how bad it is, etc. etc. ... well, use your best practices when
performance matters, and please stop wasting your time talking about
PureDOM, or at least be decent enough to understand what it is and
why it's like that.

Finally, the day a library is faster, I'll remove the fake selector
engine, I will implement the proprietary IE way to append nodes
(insertAdjacentElement, faster in many cases), and I bet PureDOM
will still outperform ... and that day somebody will talk about
joined arrays for faster strings via innerHTML ... I am sure about
it!!!
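
For the record, that trick is nothing more than this:

// build the markup in an array, join once, parse once via innerHTML
var parts = [], host = document.createElement("div"), i;
for (i = 0; i < 250; ++i) {
    parts.push("<ul class='fromcode' id='setid" + i +
        "'><li>one</li><li>two</li><li>three</li></ul>");
}
host.innerHTML = parts.join("");
while (host.firstChild) {
    document.body.appendChild(host.firstChild);
}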

Now, after all this, what have we learned today about JS? Me,
nothing for sure; yet another boring, pointless discussion over
PureDOM. I receive a "warning" on a weekly basis about how people
would have cheated better in PureDOM ... uh, and don't forget the
last test, with document.body.innerHTML = "", which is faster,
right?

Best Regards, and thanks for asking before blaming