From: Scott Sauyet on
On Feb 4, 11:40 am, Andrea Giammarchi <andrea.giammar...(a)gmail.com>
wrote:
> If there is an author, you ask to the author, you don't write
> sentences possibly wrong or pointless, no?

Only if you know who the author is and how to contact him or her. I
know your name and have seen you around various groups over the years,
but until today did not know you were responsible for the PureDOM
implementation in TaskSpeed. The TaskSpeed page says about the
PureDOM methods that, "It currently is not a library available for
download or use," and the implementation of the tests and
webreflection.js gave no pointers to their author.

> If I don't understand something, or I think there is a typo, I don't
> necessary start the campaign against that developer and how many
> errors he wrote ... we are programmer, aren't we?

Obviously you're seeing something very different in this thread than I
am. I have certainly seen no campaign against you, and I certainly
have not participated in one.

> if(1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&1&&false) is always false, ABC
> if(0||0||0||0||0||0||0||0||0||0||0||true) is always true, ABC
>
> If there is a &&false at the end and this is confusing ... well, I
> guess we should consider to use jQuery, right?

I have no idea what you mean by this.

I did assume that the &&false had once been used to comment out the
branch for some quick testing. Without a comment in the code, though,
I assumed its continued existence was a mistake.

> Let's move over ... if I create li nodes for each created ul it is
> because I am respecting the task, I AM NOT CHEATING
>
> If you write in a row, internally via innerHTML and jQuery, of course
> it's faster, isn't it? But that is the jQuery way, not the PureDOM
> one, which aim is to PERFORM TASK SPEED TASKS without cheating at all.

That's a somewhat different perspective than I had considered. I
really thought about PureDOM as the baseline, the thing nothing could
beat in terms of speed because everything else would (along their own
tortured paths) eventually be calling the same methods that PureDOM
called. Your explanation makes sense, though. It's not a performance
baseline but a standards-based baseline. You're calling only those
methods that the spec requires DOM implementations to have, is that
right?


> Why after a year or more people still don't get PureDOM is a mystery
> to me, this is why I get bored after the first line of comment.

Do you understand where my mistaken impression came from, then? On
the TaskSpeed page, we have this: "The 'PureDom' tests are written as
a minimal abstract utility API to accomplish these tasks, and are
included as a baseline measurement of the compared libraries." Since
the only measurements involved are the speeds, it seems a natural
conclusion that it's providing a speed baseline, and at least a good
guess that it should by its nature be the fastest possible.


> Got my point? I hope so

I think so. But it would have been much easier to hear without the
extreme defensiveness.

-- Scott
From: Scott Sauyet on
On Feb 4, 11:45 am, Andrea Giammarchi <andrea.giammar...(a)gmail.com>
wrote:
> and which library outperform PureDOM?

I posted my results earlier in the thread:

http://scott.sauyet.com/Javascript/Test/taskspeed/2010-01-27a/results/

Dojo outperformed it in Safari, qooxdoo did in Chrome, Opera, and
Safari, and My Library did in Chrome and Safari. None of the speed
differences were huge.


> Aren't we talking about that
> post with my massive comment inside removed/filtered?

I don't know what you're talking about here.

-- Scott
From: Andrew Poulos on
On 5/02/2010 3:45 AM, Andrea Giammarchi wrote:
>
>> Cheers,
>>
>> -- Scott
>
> and which library outperform PureDOM? Aren't we talking about that
> post with my massive comment inside removed/filtered?

Under TaskSpeed, on Vista SP 2 with Safari 4.04, "My Library" was faster
than PureDom - 168 versus 246.

Andrew Poulos


From: Richard Cornford on
On Jan 29, 2:28 am, RobG wrote:
> On Jan 29, 1:24 am, Scott Sauyet wrote:
>> On Jan 28, 12:14 am, Andrew Poulos wrote:
>>> On 28/01/2010 3:23 PM, Scott Sauyet wrote:
>>>> On Jan 27, 9:02 pm, Andrew Poulos wrote:
>>>>> I'm not sure how myLib can be faster than pure dom???
>
>>>> In my testing, several libraries are at least sometimes faster
>>>> than the "Pure DOM" methods, which makes me wonder about the
>>>> quality of the implementation of Pure DOM.
> [...]
>> The test code is available at
>
>> http://dante.dojotoolkit.org/taskspeed/tests/pure-tests.js
>
>> and the minimal library used is at
>
>> http://dante.dojotoolkit.org/taskspeed/frameworks/webreflection.js
>
>> The test code looks like you would expect, with pure DOM code like
>> this:
>
>>   (node = a[j]).parentNode.insertBefore(
>>     p.cloneNode(true).appendChild(text.cloneNode(true))
>>       .parentNode, node.nextSibling
>>   );
>
> Not exactly what I'd expect. The text node should be appended to
> the p earlier so there's no repeated clone, append,
> step-up-the-DOM. Optimising as suggested gives a 25% speed boost
> in Fx and 10% in IE 6.
>
> The same slow logic is used in the make function (my wrapping):
>
> "make": function(){
> for(var
> d = document, body = d.body,
> ul = d.createElement("ul"),
> one = d.createElement("li")
> .appendChild(d.createTextNode("one"))
> .parentNode,
> two = d.createElement("li")
> .appendChild(d.createTextNode("two"))
> .parentNode,
> three= d.createElement("li")
> .appendChild(d.createTextNode("three"))
> .parentNode,
> i = 0,
> fromcode;
> i < 250; ++i
> ){
> fromcode = ul.cloneNode(true);
> fromcode.id = "setid" + i;
> fromcode.className = "fromcode";
> fromcode.appendChild(one.cloneNode(true));
> fromcode.appendChild(two.cloneNode(true));
> fromcode.appendChild(three.cloneNode(true));
> body.appendChild(fromcode);
> };

Why the superfluous EmptyStatement after the Block of the - for - loop?

>   return utility.getSimple
>     .call(body, "ul.fromcode").length;
> },
>
> Note the repetitious clone/append/step-up where a single clone would
> have done the job - compare it to the jQuery code used:
>
> $("<ul class='fromcode'><li>one</li><li>two</li><li>three</li></
> ul>")
>
> Here all elements are created in one go, so the two are hardly
> comparable. The DOM code is doing 4 times the work (but still runs
> in half the time of jQuery 1.4). Optimising out the extra work, it
> runs about 15% faster in Firefox, and twice as fast in IE 6.

The criticism of extra work here is perhaps a little unjust. The jQuery
version is itself inside a loop and will perform its operation 250
times, presumably because doing it once would be so quick that no
meaningful timing could be made.

The jQuery version of "make" is:-

|"make": function(){
| for(var i = 0; i<250; i++){
| $(
| "<ul class='fromcode'><li>one</li><li>two</li><li>three</li></ul>"
| ).attr("id", "setid" + i).appendTo("body");
| }
| return $("ul.fromcode").length;
|}

So the task being measured is the creation of the UL and its three LI
children (implying their being appended to the UL, and similarly for their
text node children), the setting of the ID attribute and appending the
structure to the BODY of the document. While the pure DOM version
carries the overhead of initially creating a set of nodes for cloning
that are themselves never actually used, building the structure inside
the loop seems reasonable.

On the other hand, your criticism does justly apply to other tests, for
example the "insertbefore" test. Here the jQuery version is:-

| "insertbefore" : function(){
| return $(".fromcode a").before("<p>A Link</p>").length;
| }

- so the operation being timed here is inserting a pre-defined DOM
branch before each A element descendant of the elements with the CLASS
'fromcode'. There is no looping to exaggerate the timing, and so the DOM
version could reasonably be optimised in order to reproduce the entire
process that is being asked of the other libraries. That is, the common
P element structure could be created once and then deep-cloned for each
insertion, rather than having javascript code repeat the assembly of
the structure on each iteration of the loop.
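
Something along these lines, perhaps (a sketch only; - anchors - here
just stands for whatever collection of target A elements the real test
gathers, however it does that):-

  // Build the <p>A Link</p> structure once ...
  var p = document.createElement("p");
  p.appendChild(document.createTextNode("A Link"));

  // ... then deep-clone it for each insertion point.
  for(var i = 0, node; (node = anchors[i]); i++){
    node.parentNode.insertBefore(p.cloneNode(true), node);
  }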

> Note also that a selector is used to count the nodes added to the
> document at the end and that the speed of this count is included in
> the test. Why is selector speed allowed to influence tests of
> element creation speed?

Yes, that is bad. It probably follows from the test code being derived
from a selector speed test framework, where knowing how many elements
were found using the selector is pretty important.

It makes some sense to verify that the various test operations have done
the thing that is being asked of them, as if some test fails to achieve
what is expected of it then it doesn't really matter how long it took to
do that. But any verification process should not be included in the
tests themselves, except under the possible condition that _precisely_
the same verification code is used for each version of each test. Better,
though, to have any verification applied after the timings, else it
would confuse the question of what it is that is being timed.
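
In outline, something like (a sketch only; none of these names - test,
expectedCount, documentLooksRight - are from the TaskSpeed harness):-

  // Time only the operation itself ...
  var start = new Date().getTime();
  var returned = test();               // one library's version of one task
  var elapsed = new Date().getTime() - start;

  // ... and verify afterwards, outside the timed section, applying exactly
  // the same check to every library's version of that task.
  var passed = (returned === expectedCount) && documentLooksRight();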

>> The library contains a utility object consisting of four functions:
>> attachEvent, detachEvent, (Array) indexOf, and a small replacement
>> for or wrapper around querySelectorAll. Actually, that last
>> looks a little strange to me:
>
>> getSimple:document.createElement("p").querySelectorAll&&false?
>>   function(selector){
>>     return this.querySelectorAll(selector);
>>   }:
>>   function(selector){
>>     // lightweight implementation here
>>   }
>
>> Am I crazy or does that "&& false" mean that the first branch will
>> never be chosen?
>
> Good find.

Yes, it means that in environments that support - querySelectorAll -,
and with libraries that employ it where available, the DOM code is going
to suffer by comparison.
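
For what it is worth, simply dropping the - &&false - restores the
intended feature test (the same pattern as the quoted webreflection.js
code, with the fallback body elided as in the original):-

  getSimple: document.createElement("p").querySelectorAll ?
    function(selector){
      // Native branch, taken in environments with querySelectorAll.
      return this.querySelectorAll(selector);
    } :
    function(selector){
      // lightweight fallback implementation here (as in the original)
    }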

>> Perhaps that's the culprit?
>
> Regardless, it doesn't seem sensible to use a lightweight selector
> engine when the intention is to compare selector engines to "pure
> DOM" (which should mean ad hoc functions). There only selectors in
> the test are:
>
> 1. ul.fromcode
> 2. div.added
>
> A simple switch statement would have done the trick. The "pure
> DOM" code doesn't leverage the browser-native
> getElementsByClassName method

getElementsByClassName is relatively recent, so if used it would need
an emulation for older environments.
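
A minimal feature-tested wrapper might look something like this (a
sketch only; the name is illustrative, and note that the two branches do
not return the same kind of object - a live collection versus a plain
Array):-

  var getByClassName = document.getElementsByClassName ?
    function(node, className){
      // Native branch: returns a live HTMLCollection.
      return node.getElementsByClassName(className);
    } :
    function(node, className){
      // Emulation for older environments: filter every element by class.
      var all = node.getElementsByTagName("*"),
          re = new RegExp("(^|\\s)" + className + "(\\s|$)"),
          result = [], i, el;
      for(i = 0; (el = all[i]); i++){
        if(re.test(el.className)){
          result.push(el);
        }
      }
      return result;
    };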

> if available, a for loop and RegExp is used always. Nor does it
> leverage the fact that DOM collections are live,

The live nature of collections proved to be the cause of an interesting
characteristic of the tests: that on IE (and very particularly IE 6) the
pure DOM "sethtml" is massively worse than any of the alternatives. The
code is:-

| "sethtml": function(){
| var div = document.body.getElementsByTagName("div"), i = 0, node;
| while(node = div[i++])
| node.innerHTML = "<p>new content</p>"
| ;
| return div.length;
| }

- and it is difficult to see how any alternative could be doing less
than that code is (with the presumption that code that does more should
take longer to do it). But it turns out that it is the 'live' node list
that is returned from - getElementsByTagName -, combined with the fact
that some of the DIVs are nested and so assigning to - innerHTML -
effectively deletes nodes and so forces updating of the 'live' list,
that seriously slows this test down in IE.

Adding:-

| getArrayByTagName:function(node, tag){
|   var obj = node.getElementsByTagName(tag);
|   var c, ar = [];
|   if((c = obj.length)){
|     do{
|       ar[--c] = obj[c];
|     }while(c);
|   }
|   return ar;
| }

- to the - utility - object in webreflection, and replacing the
"sethtml" code with:-

"sethtml": function(){
var div = utility.getArrayByTagName(document.body, "div");
var i, node;
var res = 0;
if((i = div.length)){
do{
if((node = div[--i]).parentNode.tagName != 'DIV'){
++res;
node.innerHTML = "<p>new content</p>";
}
}while(i);
}
return res;
}

- managed to get IE performance up to that of the other libraries. The
main difference being that an ordinary Array is being used in
place of the 'live' node list (plus the DIVs that are nested inside
other DIVs do not get their - innerHTML - set, as that is pointless).

> it gets the
> collection every time. This is critical as a selector query is
> included in nearly all the tests, so its performance affects
> tests where it is not the feature being tested.

Yes, this is a recurrent problem with the test code. It seems to be
trying to compare versions of a particular approach to browser scripting
without questioning whether that whole approach is necessary.

However, a few things about the 'plain DOM' selector function:

An example reads:-

utility.getSimple.call(body, "ul.fromcode")

- and what this function does is, given an element (body in the above
case), find all of its descendants that are UL elements and have/include
the class 'fromcode'.

Why the use of - call -? What is the reason for not passing the
reference to the element as an argument and replacing its two uses of -
this - with the name of that parameter?

Why use the 'selector' "ul.fromcode" when an argument pattern for such a
function could go - getDescendantsByTagNameWithClass(body, 'ul',
'fromcode') - ? This seems a much better fit with what is needed from
the function, and saves a - split - and a - call - call for every
invocation (while not removing the option of branching to use -
querySelectorAll -, by combining the string arguments into an
appropriate selector).
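
A sketch of the sort of function meant (the implementation details,
including the optional - querySelectorAll - branch, are illustrative
rather than taken from webreflection.js):-

  function getDescendantsByTagNameWithClass(node, tag, className){
    if(node.querySelectorAll){
      // The two string arguments can still be combined into a selector
      // where querySelectorAll is available.
      return node.querySelectorAll(tag + "." + className);
    }
    var candidates = node.getElementsByTagName(tag),
        re = new RegExp("(^|\\s)" + className + "(\\s|$)"),
        result = [], i, el;
    for(i = 0; (el = candidates[i]); i++){
      if(re.test(el.className)){
        result.push(el);
      }
    }
    return result;
  }

  // e.g. getDescendantsByTagNameWithClass(document.body, "ul", "fromcode").length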

> There are a number of optimisations that could quite easily be
> added to "pure DOM", and the tests themselves do not accurately
> target the features they are trying to test in some (many?)
> cases.

Unfortunately there is no clear statement of what features they are
trying to test. An example of a test that is amenable to optimisation is
the "finale" method, for which the pure-tests.js code is:-

| "finale": function(){
| var body = document.body, node;
| while(node = body.firstChild)
| body.removeChild(node)
| ;
| return body.getElementsByTagName("*").length;
| }

An "optimised" version might go:-

"finale": function(){
var body = document.body;
body.parentNode.replaceChild(body.cloneNode(false), body);
return document.body.childNodes.length;
}

- and give a better than 50% performance increase in many environments.
Here the removal of the children of the body is substituted by replacing
the body element with a shallow clone of itself (as that will have no
children). It would also be feasible to replace the body element with a
new body element (as that also will have no children). Creating a new
body element has issues in the general case: if the existing body has
(non-default) attributes, the new replacement will not have those
attributes, and so the body of the document would effectively change as
a side effect of removing its children. Similarly - cloneNode - has issues,
mostly relating to event listeners, and whether they get
transferred/copied to the cloned node. Thus, again in the general case,
replacing a body with a shallow clone of itself in order to get the side
effect of clearing the body's children risks modifying the body element
itself.

However, the issues that exist in the general case are issues for
general code, and may or may not apply in specific cases (the general
purpose library must consider them, but they are not inherent in browser
scripting). Here the test document's body element has no attributes, and
it has no event listeners, so we are fine. We can optimise the process
of clearing the body element in this document by replacing it with
either a new body element or a shallow clone of itself.
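
For completeness, the new-body-element variant might read (again relying
on the document-specific assumptions just described):-

  "finale": function(){
    var body = document.body;
    // A newly created BODY element necessarily has no children; acceptable
    // here only because this document's body has no attributes or listeners
    // that would be lost in the swap.
    body.parentNode.replaceChild(document.createElement("body"), body);
    return document.body.childNodes.length;
  }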

Richard.

From: RobG on
On Feb 5, 1:22 am, Andrea Giammarchi <andrea.giammar...(a)gmail.com>
wrote:
> I love people keep thinking about how many cheats i could have added
> to PureDOM ... I used native everything at the beginning and people
> complained about the fact "obviously libraries have better selector
> engines" ...

It would help greatly if there was a clear statement about the purpose
of the tests. Finding a description of what each test is supposed to
do is not easy; eventually I discovered that it is on GitHub in
sample-tests.js. It would be better if that was made more easily
available and called something more obvious, such as "test
description" or "test specification".


> I have nullified querySelectorAll on purpose

Then a comment in the source would have been helpful. The way it was
nullified gave the impression that it was not intentional.


> (which does NOT produce a
> live object in any case, so all latest considerations about cached
> live objects are superfluous and nothing new,

I know QSA doesn't return live collections, but other DOM methods
using getElementsBy... do. Richard's comment in regard to IE's
performance and live collections is interesting.


> I have posted about this
> stuff ages ago in WebReflection) and people keep thinking I am an
> idiot, rather than simply remove that &&false which is clearly a
> statement "nullifier" (as ||true is a statement "forcer").

I had never heard of WebReflection until I came across TaskSpeed, I
see now it is your blog. If there is important information or
discussion there relating to TaskSpeed, why not put a link to it on
the TaskSpeed page?

And I don't think you're an idiot based on looking at the code (I
suppose you wrote it), but I do think there was a lack of attention to
detail. That can be excused if you didn't expect it to be as popular
as it is and gain the attention it has, but given that it has been up
for some time now, it would have been helpful if those issues had been
addressed.


> That was the easiest way to test both getSimple and native method ...
> but you guys are too clever here to get this, isn't it?

If you don't want to use QSA, don't include a branch for it, then no
one is confused.


> About tricky code to speed up some appendChild and the BORING
> challenge VS innerHTML (e.g. $("<ul class='fromcode'><li>one</
> li><li>two</li><li>three</li></ul>") )
> I don't think after a year of libraries still behind there's much more
> to say about these topics.

My point was that the code itself is not performing equivalent
operations - the pureDOM version does many more create/append
operations.

> If you want to trick PureDOM convinced you can speed it up, well, you
> have discovered HOT WATER!!! Good Stuff, Hu?

It's not a "trick", just obvious that the operations that each test
performs should be equivalent. It is the operations that are being
tested, not the trickiness of the programmer.


> The meaning of PureDOM is simple: to provide a comparative basic
> manual approach

Thank you for explaining that, but it's different to what you say
below.


> and 'till now it demonstrated that no framework is
> able to perform daily tasks faster than native DOM, or at least those
> considered in TaskSpeed.
>
> If you read tasks carefully, you will realize that if a task says: ...
> and for each node, insert this text: "whatever" ...
> the PureDOM code simply creates EACH NODE, and it inserts FOR EACH
> NODE the text "whatever" ... bt let's cheat and feel cool, that's the
> point, right?

Then that is what all the tests should do. The library's version of
create should be called once for each node and the text appended
separately, then each node appended to its parent. If jQuery (for
example) is allowed to use a string of HTML to create a document
fragment of all the nodes n times, then the pureDOM method should be
able to create the same structure once and clone it n times. Otherwise
you are not testing the same thing.
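
For example, a clone-based version of "make" might look something like
this (just a sketch; it keeps the original's utility.getSimple count at
the end for comparability, even though, as discussed, that arguably
shouldn't be timed):

  "make": function(){
    var d = document, body = d.body,
        labels = ["one", "two", "three"],
        template = d.createElement("ul"),
        li, copy, i;
    template.className = "fromcode";
    for(i = 0; i < labels.length; i++){
      li = d.createElement("li");
      li.appendChild(d.createTextNode(labels[i]));
      template.appendChild(li);
    }
    // One deep clone per iteration, mirroring what the HTML-string
    // libraries effectively do with their parsed fragment.
    for(i = 0; i < 250; i++){
      copy = template.cloneNode(true);
      copy.id = "setid" + i;
      body.appendChild(copy);
    }
    return utility.getSimple.call(body, "ul.fromcode").length;
  }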


> Everybody else got it

Where?

> but this ML is still talking about PureDOM and
> how badly it is etc etc ...

This group is about javascript, so you will likely only get comments
about pureDOM here. If you posted links to other discussion threads
and included some clarifying documentation (or at least links to it)
on the TaskSpeed page then we might have "got it" too.


> well, use your best practices when
> performances matter, and please stop wasting your time talking about
> PureDOM or at least be decent enough to understand what is it and why
> it's like that.

If you make that information accessible, I won't need to guess your
intentions.

>
> Finally, the day a library will be faster, I'll remove the fake
> selector engine, I will implement proprietary IE way to append nodes
> (insertAdjacentNode faster in many cases) and I bet PureDOM will still
> outperform ... and that day somebody will talk about joined arrays for
> faster strings via innerHTML ... I am sure about it!!!

I don't see an issue with pureDOM using innerHTML if it fits the criteria
for a test.


> Now, after all this, what have we learned today about JS? Me, nothing
> for sure, yet another boring pointless discussion over PureDOM, I
> receive a "warning" weekly basis about how people would have better
> cheated in PureDOM ... uh, and don't forget the last test with
> document.body.innerHTML = "" which is faster, right?

I don't think anything that is done in pureDOM is a cheat since I have
no criteria on which to base that opinion. If the point of pureDOM is
to use *only* W3C specified DOM interfaces, then fine, eschew
innerHTML. But that is not stated anywhere and above you said that the
point of pureDOM was to "provide a comparative basic manual approach".

So which is it?

One of the main claims about libraries is that they smooth over
browser quirks. It might be interesting to develop a suite of
"QuirksSpeed" tests of known differences between browsers to determine
how well each library manages to overcome them. The overall speed is
likely not that important; quirks accommodated and correctly dealt
with likely are.


--
Rob