From: Garrett Smith on
On 6/9/2010 4:33 AM, David Mark wrote:
> On Jun 9, 3:15 am, Garrett Smith<dhtmlkitc...(a)gmail.com> wrote:
>> On 5/30/2010 7:39 AM, David Mark wrote:> On May 30, 2:05 am, Garrett Smith<dhtmlkitc...(a)gmail.com> wrote:
>>>> On 5/25/2010 4:09 AM, David Mark wrote:
>>
>>>>> Garrett Smith wrote:
>>>>>> On 5/24/2010 6:57 PM, David Mark wrote:
>>>>>>> Garrett Smith wrote:
>>>>>>>> On 5/24/2010 2:11 PM, David Mark wrote:
>>>>>>>>> Garrett Smith wrote:
>>>>>>>>>> On 5/22/2010 1:25 PM, David Mark wrote:
>>>>>>>>>>> Ry Nohryb wrote:
>>>>>>>>>>>> On May 22, 5:13 pm, Stefan Weiss<krewech...(a)gmail.com> wrote:
>>>>>>>>>>>>> On 22/05/10 16:22, Johannes Baagoe wrote:
>>
>>>>>>>>>>>>>> Dmitry A. Soshnikov :
>>
[...]

>>
>>>>> Positions too:
>>
>>>>> http://www.cinsoft.net/position.html
>>
>>>> Where are the unit tests?
>>
>>> That's all you ever say. Where is your understanding of the basic
>>> logic. IIRC, that one was posted to refute your assertion that
>>> computed styles should be used to determine positions. At the time,
>>> you seemed to be the only one who didn't get it.
>>
>> It would be helpful to look at them to see what was being tested. You
>> wrote that you had tests, so where are they?
>

No tests? I read earlier that you had unit tests. Either you do or you
don't.

> Groan. All I want to know is what is the name of the guy on first
> base.
>
>> Given time, I'd like to
>> look into them.
>
> You have looked at them. In fact, I put up those two primers partly
> for your benefit. And I can't believe they didn't help.
>

I haven't seen them.

I saw a demo of your function. Is that what you refer to as a "unit
test". You can call it that, if you like, but don't be surprised if
people who know what a unit test is are puzzled by your flagrantly
deceptive misuse of the term.

>>
>> My offsetTop knowledge has waned in the last 2 years; the number of
>> problems with that and friends is too much to retain,
>
> You don't have to retain *any* of it. Zero.

I'm not convinced. I'd want to see a test where the element has top:
auto and BODY has a margin.

I've also seen the case where it failed in MSIE in the example from
thread "getComputedStyle where is my LI". It got inconsistent results in
different versions of IE.

If the burden of proof is on you to prove that the code works, you've
failed to do that.

If the burden of proof is on me to prove that it doesn't, I've succeeded
in one case.

> I don't think about
> their quirks either. I create equations that account for *any* quirks
> by factoring them out of the answer. It's really not that complicated
> if you think about it (and read my many examples and previous
> explanations).
>

>> however I know
>> enough not to trust anything that uses them, not without tests and
>> testing all the edge cases I mentioned in my other reply. A good test
>> can provide quicker verification than a demo.
>
> You would rather trust something you know is broken and/or absent
> (e.g. getComputedStyle/currentStyle)? My solutions work whether the
> properties in question are broken or not. And I've tested in
> virtually every major browser that has come out this century (and a
> few from the last century).
>

getComputedStyle is broken in implementations, but the problems are
avoidable by defining left/top values in the stylesheet.

In contrast, offsetTop/Left/Parent have divergent behavior.

The approach I have taken is to follow the specification and write
tests. It is not infallible, as getComputedStyle has problems. I am
somewhat optimistic that the edge cases where it fails -- which are
avoidable by specifying a top and left value in the stylesheet -- are
being fixed.
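
To make that concrete, here is a minimal sketch of the approach I mean
(my illustration only, assuming top/left are declared in the stylesheet
and using currentStyle as the IE fallback):

  // Sketch only: read the declared top/left of an element, using
  // getComputedStyle where available and IE's currentStyle otherwise.
  // Relies on the values being declared in the stylesheet; "auto"
  // falls back to 0 here, which is exactly the edge case noted above.
  function getDeclaredPosition(el) {
      var doc = el.ownerDocument,
          view = doc.defaultView,
          style = view && view.getComputedStyle ?
              view.getComputedStyle(el, null) :
              el.currentStyle;
      return {
          top: parseInt(style.top, 10) || 0,
          left: parseInt(style.left, 10) || 0
      };
  }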

>>
>>
>>
>>>>>> So, if that element is matched in the author's selector query
>>>>>> "img[width=600]" the query would not be doing what he says it does.
>>>>>> Namely, it would not match "all the images whose width is 600px".
>>
>>>>> Yes, it's all nonsense. Don't rely on these silly query engines (or
>>>>> their documentation).
>>
>>>>> http://www.cinsoft.net/slickspeed.html
>>
>>>> Pass on that.
>>
>>> Whether you pass or not, it is quite a demonstration of the futility
>>> of the query-based libraries.
>>

A demonstration of futility - I agree.

>> A little. I'd rather see expected results and failed results; speed
>> is secondary to correctness.
>
> Expected results would be a good addition and I plan to add those.
> For now, note that the two non-fantasy columns (e.g. mine), which
> query with and without QSA agree in the latest major browsers in all
> rendering modes. Then go back one version. Then another. Then
> another. As they all agree, you can be pretty damned sure that the
> answers are expected as that's what I did before I published them.
> Now, look at all of the columns to the left. A horror show, right?
> Blacked out squares everywhere. Exceptions thrown in browsers that
> just came out a couple of years ago, miscounts in browsers that came
> out yesterday. jQuery 1.4 disagrees with 1.3, which disagrees with
> 1.2. Dojo gets more answers wrong than right in Opera 9. And so on,
> and so on... Lots of flailing and destruction and still nothing close
> to a solid foundation for the "major" libraries and frameworks.
> Somehow those things are seen as an "easier" way to do the most
> important task in browser scripting (e.g. read documents).
>

The idea of copying an API without knowing the expected outcome of its
core method is ludicrous. It is even worse when the main goal of the API
is popularity.

>
>>
>> The first column would be a good place for that. For example:
>>
>> +------------------+----------+------------------+
>> | Selector         | Expected | NWMatcher 1.2.1  |
>> | div[class^=dia]  | Error    |    ???           |
>> +------------------+----------+------------------+
>>
>> Who has any idea what nonstandard selectors should do? Based on what?
>> jQuery documentation? The libraries all copied the jQuery library and
>> the jQuery library has always had vague documentation and inconsistent
>> results across browsers and between versions of jQuery.
>
> Understand that CSS selector queries came long before the standard

The Internet *is* like an idiot amplifier, isn't it?

> specification. In fact, there would be no such specification if not
> for such scripts. The specification is not retroactive and the tests
> in question predate the documentation on the W3C site as well.
>

You've still not read the CSS 2.1 specification and your ignorance is
shining.

>
>>
>> What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
>
> Obviously, the same thing that they would do with quotes around them.

No, not obviously; not at all. Unquoted, they're nonstandard and
proprietary.

Read the code and you'll see that unquoted, they resolve properties.
Quoted, they use a standard interface. In jQuery, anyway.
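
A quick illustration of what I mean by an error being the expected
result (assuming a browser that implements querySelectorAll): "!=" is
not a CSS attribute operator at all, so the standard interface rejects
it where a proprietary engine silently invents an answer.

  // Illustration only: a conforming Selectors API implementation
  // throws on "!=", quoted or not, because it is not CSS.
  try {
      document.querySelectorAll('td[colspan!=1]');
  } catch (e) {
      // SyntaxError in conforming implementations
  }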

> You are focusing on irrelevant minutiae. The point is that jQuery and
> the like often trip over such queries (with or without the quotes).
> How can that be? Well, see my discussion here with Resig around
> Halloween 2007 (or search the archive for about a hundred follow-ups,
> usually involving Matt Kruse who spent years swearing such obvious
> mistakes were to be expected). That should answer all of the
> questions. If only it had for them. :(
>
>
>> other than throw an error? To answer that, you need documentation. The
>> most obvious place to look for documentation would be the w3c Selectors
>> API,
>
> Ugh. Chicken and the egg. The code and tests predate the writeup.

If you're referring to the W3C Selectors API as "the writeup", then
you're right; however, consider that irrelevant here: selectors come
from CSS2. Get it?

> Get it?
>

Me get what? Where selectors come from? Or your ignorance regarding
that? It's pretty clear that I get both. How about you?

I've posted links to the CSS 2.1 specification I don't know how many
times. You usually reply by saying it's irrelevant in some colorful form
(e.g. "off in the weeds", "barking up the wrong tree").

Read down below where I wrote "(quoted from the CSS2.1 specification)".

And if you want to know why the section of CSS 2.1 that defines
"identifier" was quoted: the part of CSS 2.1 that makes use of the term
`identifier` is the description of attribute values.

CSS 2.1[2] states:

| Attribute values must be identifiers or strings.

So it is necessary to see what an identifier is, and I have already
cited the CSS 2.1 definition of identifier.

<http://www.w3.org/TR/CSS2/selector.html#matching-attrs>

That is a W3C Candidate Recommendation, dated 08 September 2009. That
means it is still a draft and so cannot be cited normatively; e.g., "It
is inappropriate to cite this document as other than work in progress."

However, the official specification that preceded it is CSS 2, dated
1998, which contains the same text verbatim. You can copy the text
above, go to the CSS 2 specification, paste it into your browser's
"find" feature, and see that it has been unchanged since 1998.

<http://www.w3.org/TR/2008/REC-CSS2-20080411/selector.html#q10>
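
Applied to the example earlier in the thread, the rule means the
unquoted form is invalid because "600" is not an identifier (it starts
with a digit). A sketch, assuming a browser with querySelectorAll:

  // Sketch: attribute values must be identifiers or strings.
  document.querySelectorAll('img[alt=photo]');     // identifier: valid
  document.querySelectorAll('img[width="600"]');   // string: valid
  try {
      document.querySelectorAll('img[width=600]'); // not an identifier
  } catch (e) {
      // a conforming implementation throws a SyntaxError
  }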

>
>> and that will tell you that an error must be thrown because the
>> attribute value is neither an identifier nor a string.
>
> This isn't one of those Terminator movies. The W3C can't go back in
> time and abort jQuery, SlickSpeed and the like.
>

I see you are making reference to Terminator movies and the w3c. I don't
see any relevance to anything in this thread.

>>
>> MSDN documentation is wrong, too
>
> Wouldn't shock me.
>

If you don't read it, no, it wouldn't.

>>
>> http://msdn.microsoft.com/en-us/library/aa358822%28VS.85%29.aspx
>> | att Must be either an Identifier or a String.
>> | val Must be either an Identifier or a String.
>>
>> `att` should be an attribute name, not an identifier or a string.
>
> Depends on what you consider to be right. Clearly you don't have a
> grasp on that yet, so I won't bother investigating the alleged mistake
> on MSDN. I presume they are documenting their rendition of QSA. Just
> as they previously documented their rendition of innerHTML,
> offsetHeight, etc. When and if these things get written up as
> recommendations by the W3C, it will not render the previous behavior
> and documentation "wrong".
>

I hear you saying again that I don't understand.

What I understand is that the MSDN documentation referenced above
conflicts with the CSS2.1 specification and its predecessor CSS2, both
cited above.

`att` cannot be, as MSDN states it must, "a String." It must be the name
of the attribute. The MSDN article calls this an HTML feature and goes
on to list nonstandard HTML.

Syntax:
+-------------+------------------------+
| HTML        | [att=val] { sRules }   |
| Scripting   | N/A                    |
+-------------+------------------------+

Although they call this an HTML feature, they really mean it is a CSS
feature. The document is linked from:

<URL:
http://msdn.microsoft.com/en-us/library/cc351024%28VS.85%29.aspx#attributeselectors
>

I also read that you presume the MSDN documentation is about IE's
implementation of NodeSelector. Do I understand you correctly?

As stated numerous times on this NG, including threads that you have
replied to, offsetHeight has made it into a w3c recommendation. It is
not a matter of "when and if"; It is called CSSOM. It was a massive f**k
up by Anne van Kesteren. The details have been discussed here and have
involved you, Lasse, and me.

>>
>> In CSS, identifiers (including element names, classes, and IDs in
>> selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
>> characters U+00A1 and higher, plus the hyphen (-) and the underscore
>> (_); they cannot start with a digit, or a hyphen followed by a digit.
>> Identifiers can also contain escaped characters and any ISO 10646
>> character as a numeric code (see next item). For instance, the
>> identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".
>>
>> (quoted from the CSS2.1 specification).
>>

Did you read that?

>> What happens if jQuery removes support for a particular selector?
>
> They only support about half of them at this point I think. Certainly
> they are nowhere near compliance with CSS3 (which they disingenuously
> claim to be). And their script is not 24K by any comparison rooted in
> reality (i.e. prior to GZIP). They simply feed tripe to those naive
> enough to swallow it. Did you know My Library is roughly 42K after
> compression? The whole thing (which does 100 times more than jQuery,
> including queries). :)
>

The script is 166k before minification. jQuery.com claims 155k. And of
course, if they used proper space formatting (not tabs), it would be a
lot larger.

>> Haven't they done that in the past for nth-of-type or attribute style
>> selectors, XPath, and @attr?
>
> nth-of-type? No idea what they did with that. IIRC, mine is the only
> one I know of that attempts to support those (and as I've recently
> realized, I didn't quite nail it). The SlickSpeed tests can be
> confusing to the uninitiated as most of the libraries now hand off to
> QSA, so it may appear that unsupported selectors work. Of course,
> that's a big part of the problem. Neophytes like to test in just the
> latest browsers and avoid IE as much as possibly. IE8 has QSA. IE9
> will likely augment its capabilities in that department. So you can
> have applications that appear to work if tested just in IE8 standards
> mode and/or IE9 but have no shot at all in Compatibility View or IE<
> 8. That's why jQuery 1.2.6 (and not 1.2.1 as that was a misprint)
> remains on the test page along side 1.3.x and 1.4.x. That's the last
> one that didn't hand off queries to the browser.
>

IE8 has QSA but not in quirks mode and not in IE7 mode (as when using
EmulateIE7 in a meta tag).
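
So any hand-off to QSA has to be detected per document, not assumed
from the browser version. A sketch only:

  // Sketch: detect querySelectorAll on the document actually being
  // scripted; IE8 exposes it in standards mode but not in quirks
  // mode or Compatibility View.
  function hasQSA(doc) {
      return !!(doc && typeof doc.querySelectorAll != 'undefined');
  }

  if (hasQSA(document)) {
      // hand the selector to the browser
  } else {
      // fall back to manual traversal
  }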

>
>>
>> What should a:link match? Should it throw an error? What should be the
>> expected result set of `null`?
>
> What do you think it should match? It won't match anything in any
> query engine I know of. No coincidence it is not featured on any of
> the test pages.
>

Remember that jQuery tries to use QSA; a:link matches links where QSA
does not throw errors.
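
Which means the two code paths can disagree. An illustration only:

  // Where QSA is available and accepts the selector, a:link matches
  // unvisited links; a fallback engine that does not implement :link
  // will match nothing or throw for the same input.
  if (document.querySelectorAll) {
      var unvisited = document.querySelectorAll('a:link');
  }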

>>
>> If that is not what you want it to do; if you want something other than
>> an error, you need to state what and why; you need documentation.
>
> Who are you talking to? I've said (and demonstrated) from the start
> that these query engines are a waste of time. Lack of documentation
> is the least of the worries.
>
>>
>> A better test would be side-by-side comparison of NodeSelector. A good
>> test case might be to take the w3c NodeSelector interface and rewrite it
>> to make sure it fails for all the invalid syntax that is allowed in
>> these things.
>
> Nope. More wasted time.
>

Testing things against a standard might seem like a waste of time if the
specification is not understood and if reading it is perceived as a
waste of time.

To me, comparing APIs that aren't clearly specified seems like a waste
of time. It misses the point of what an API is for.

Seems our opinions differ here.

>>
>> I think it is time to wake up. For you and for all web developers.
>
> Me?! You are just parroting my lines years later. Where do you get
> the gall?
>

I am not parroting your lines.

"My Library" was never a good idea. Start with no unit tests and copy
other library query selectors? That is what all the other libraries are
doing. The only point in that is to try and attract more users, and that
is something you have tried to do, too.

Public APIs are forever. Notice that the one I wrote was AFL for nearly
two years. I made a lot of mistakes and did not want to be tied to a
public (=permanent) API. That is what all of the other libraries did.

In contrast, you did not learn from the others' mistakes. You actually
copied the API approach and then advocated it as superior.

>>
>> You've written a long test that makes assertions about a loosely defined
>> interface -- the query function, that is specified in documentation that
>> is incredibly vague, and was aptly titled with an enigmatic identifier
>> -- the dollar function.
>
> Wrong. MooTools wrote that stupid test years ago. And all of those
> stupid libraries that it was written to test used the "$" function. I
> just added more test cases and gave my getEBCS function a "$" alias.
> Where have you been? These are discussions from over two years ago.
>

I see on your page: "Additional tests have been added."

Did MooTools add those additional tests or did you?


>> That such an interface would be mimicked so many
>> times over, and with variations, is alarming and I think indicates a
>> problem with the industry.
>
> Mine is not mimicry. It is parody to make a point. And I think it is
> (finally) getting through loud and clear.
>

Copying APIs like that is stupid. Is that your point? Because if it is,
then we agree on that. Not entirely, though; one of my points of
contention is the wild deviation from the W3C specifications published
from 1998 to 2010, so wild that it appears none of the library authors
even read the specifications before writing and subsequently publishing
a public (=permanent) API.

My other point of contention is that publishing a public API, as the
library authors have done, is not something to be undertaken without
clear goals and understanding, which the library authors lack. The APIs
that have been published (jQuery, MooTools, Dojo, and most others) have
caused significant and substantial harm to the web. They have done so by
creating inconsistency and instability, but also by pandering to
ignorant developers who are unwilling to read the pertinent
specifications.

Almost any library can seem attractive by allowing the developer to
quickly solve problems with familiar-looking CSS selector syntax, while
providing attractive and impressive demos that can be copied, pasted,
and modified.

It sounds attractive, but it is most likely not the right approach for
solving a given set of requirements. Even if the selector APIs worked
correctly and quickly, they would still be misused as they are today,
often to add an event handler to a list of elements or to toggle the
styles of a list of elements, instead of using style cascades and event
delegation, both of which are faster and usually result in much simpler
code.
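
For example, one delegated listener usually does the job of an entire
query. A sketch (the "menu" id is made up for illustration):

  // Sketch: one listener on a container instead of a handler
  // attached to every matched element.
  var menu = document.getElementById('menu');
  menu.onclick = function(e) {
      e = e || window.event;
      var target = e.target || e.srcElement;
      if (target && target.tagName &&
          target.tagName.toLowerCase() == 'a') {
          // handle the click for any current or future link in the menu
      }
  };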

Garrett
From: David Mark on
On Jun 9, 2:18 pm, Garrett Smith <dhtmlkitc...(a)gmail.com> wrote:
> On 6/9/2010 4:33 AM, David Mark wrote:> On Jun 9, 3:15 am, Garrett Smith<dhtmlkitc...(a)gmail.com>  wrote:
> >> On 5/30/2010 7:39 AM, David Mark wrote:>  On May 30, 2:05 am, Garrett Smith<dhtmlkitc...(a)gmail.com>    wrote:
> >>>> On 5/25/2010 4:09 AM, David Mark wrote:
>
> >>>>> Garrett Smith wrote:
> >>>>>> On 5/24/2010 6:57 PM, David Mark wrote:
> >>>>>>> Garrett Smith wrote:
> >>>>>>>> On 5/24/2010 2:11 PM, David Mark wrote:
> >>>>>>>>> Garrett Smith wrote:
> >>>>>>>>>> On 5/22/2010 1:25 PM, David Mark wrote:
> >>>>>>>>>>> Ry Nohryb wrote:
> >>>>>>>>>>>> On May 22, 5:13 pm, Stefan Weiss<krewech...(a)gmail.com>         wrote:
> >>>>>>>>>>>>> On 22/05/10 16:22, Johannes Baagoe wrote:
>
> >>>>>>>>>>>>>> Dmitry A. Soshnikov :
>
> [...]
>
>
>
> >>>>> Positions too:
>
> >>>>>http://www.cinsoft.net/position.html
>
> >>>> Where are the unit tests?
>
> >>> That's all you ever say.  Where is your understanding of the basic
> >>> logic.  IIRC, that one was posted to refute your assertion that
> >>> computed styles should be used to determine positions.  At the time,
> >>> you seemed to be the only one who didn't get it.
>
> >> It would be helpful to look at them to see what was being tested. You
> >> wrote that you had tests, so where are they?
>
> No tests? I read earlier that you had unit tests. Either you do or you
> don't.

You keep chattering about unit tests. I never know what you are
referring to. I remember the recent "you keep talking about unit
tests" comment, but that was my line. I presumed you meant My
Library. This test page we are talking about is not part of My
Library. Just a proving ground for a replacement for
API.getElementPositionStyle, which I will soon be deprecating.

>
> > Groan.  All I want to know is what is the name of the guy on first
> > base.
>
> >> Given time, I'd like to
> >> look into them.
>
> > You have looked at them.  In fact, I put up those two primers partly
> > for your benefit.  And I can't believe they didn't help.
>
> I haven't seen them.

Of course you have. We've been discussing them here for days.

>
> I saw a demo of your function. Is that what you refer to as a "unit
> test".

That's one of the primers! And no, I never called it a "unit test".
You are the one that keeps chattering about unit tests, not me.

> You can call it that, if you like, but don't be surprised if
> people who know what a unit test is are puzzled by your flagrantly
> deceptive misuse of the term.

Groan. Back on third base. :)

>
>
>
> >> My offsetTop knowledge has waned in the last 2 years; the number of
> >> problems with that and friends is too much to retain,
>
> > You don't have to retain *any* of it.  Zero.
>
> I'm not convinced. I'd want to see a test where the element has top:
> auto and BODY has a margin.

If you had the slightest clue what we were talking about, you'd know
that BODY margin is irrelevant. As for automatic top, left, right, and
bottom: that's the whole point. It actually works whether you define
the styles in your CSS or not.

>
> I've also seen the case where it failed in MSIE in the example from
> thread "getComputedStyle where is my LI". It got inconsistent results in
> different versions of IE.

Groan again. I already replied to that line. Once again, you have no
clue what you are testing or what results to expect. Zero. Of course
it can return different numbers in different browsers, rendering
modes, etc. That doesn't mean the results are wrong. Do you
understand what makes a result right for these functions? I explained
it just an hour ago in response to an identical suggestion.

>
> If the burden of proof is on you to prove that the code works, you've
> failed to do that.

LOL.

>
> If the burden of proof is on me to prove that it doesn't, I've succeeded
> in one case.

You most assuredly have not. You don't even know what you are trying
to prove.

>
> > I don't think about
>
> > their quirks either.  I create equations that account for *any* quirks
> > by factoring them out of the answer.  It's really not that complicated
> > if you think about it (and read my many examples and previous
> > explanations).
>
> >> however I know
> >> enough not to trust anything that uses them, not without tests and
> >> testing all the edge cases I mentioned in my other reply. A good test
> >> can provide quicker verification than a demo.
>
> > You would rather trust something you know is broken and/or absent
> > (e.g. getComputedStyle/currentStyle)?  My solutions work whether the
> > properties in question are broken or not.  And I've tested in
> > virtually every major browser that has come out this century (and a
> > few from the last century).
>
> getComputedStyle is broken in implementations, but the problems are
> avoidable by defining left/top values in the stylesheet.

Wrong. That's a very spotty solution and not always possible. For
instance, you may not wish to use pixel units or define all of the
coordinates.

>
> In contrast, offsetTop/Left/Parent have divergent behavior.

Those are fully accounted for by my (very simple) equations. How many
times?!

>
> The approach I have taken is to follow the specification and write
> tests.

All together: IE doesn't have getComputedStyle.

> It is not infallible, as getComputedStyle has problems.

Does it ever. Including absence in IE.

> I am
> somewhat optimistic that the edge cases where it fails -- which are
> avoidable by specifying a top and left value in the stylesheet -- are
> being fixed.

Who cares what is in the process of being fixed in some unspecified
number of browsers? My solution works in anything. Again, basic math
is infallible.
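
If you need a concrete picture, here is the general idea in miniature.
This is an illustration, not my actual code: measure, set the
measurement back as a style, measure again, and whatever constant the
environment adds cancels out of the equation.

  // Illustration only (not the library code): for an absolutely
  // positioned element, any offset quirk shows up as the difference
  // between the first measurement and the re-measurement taken after
  // assigning the first measurement as a style, so it can simply be
  // subtracted back out.
  function styleForCurrentPosition(el) {
      var top = el.offsetTop, left = el.offsetLeft;
      el.style.top = top + 'px';
      el.style.left = left + 'px';
      return {
          top: (top - (el.offsetTop - top)) + 'px',
          left: (left - (el.offsetLeft - left)) + 'px'
      };
  }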

>
>
>
> >>>>>> So, if that element is matched in the author's selector query
> >>>>>> "img[width=600]" the query would not be doing what he says it does.
> >>>>>> Namely, it would not match "all the images whose width is 600px".
>
> >>>>> Yes, it's all nonsense.  Don't rely on these silly query engines (or
> >>>>> their documentation).
>
> >>>>>http://www.cinsoft.net/slickspeed.html
>
> >>>> Pass on that.
>
> >>> Whether you pass or not, it is quite a demonstration of the futility
> >>> of the query-based libraries.
>
> A demonstration of futility - I agree.

Other than the last two columns of course. :)

>
> >> A little. I'd rather see expected results and failed results; speed
> >> is secondary to correctness.
>
> > Expected results would be a good addition and I plan to add those.
> > For now, note that the two non-fantasy columns (e.g. mine), which
> > query with and without QSA agree in the latest major browsers in all
> > rendering modes.  Then go back one version.  Then another.  Then
> > another.  As they all agree, you can be pretty damned sure that the
> > answers are expected as that's what I did before I published them.
> > Now, look at all of the columns to the left.  A horror show, right?
> > Blacked out squares everywhere.  Exceptions thrown in browsers that
> > just came out a couple of years ago, miscounts in browsers that came
> > out yesterday.  jQuery 1.4 disagrees with 1.3, which disagrees with
> > 1.2.  Dojo gets more answers wrong than right in Opera 9.  And so on,
> > and so on...  Lots of flailing and destruction and still nothing close
> > to a solid foundation for the "major" libraries and frameworks.
> > Somehow those things are seen as an "easier" way to do the most
> > important task in browser scripting (e.g. read documents).
>
> The idea of copying an API without knowing the expected outcome of its
> core method is ludicrous. It is even worse when the main goal of the API
> is popularity.

What are you talking about now? I've always recommended *against*
using *any* query engine. I created one to show how simple it is to
do so and therefore silly to rely on futile efforts like jQuery.

>
>
>
> >> The first column would be a good place for that. For example:
>
> >> +------------------+----------+------------------+
> >> | Selector         | Expected | NWMatcher 1.2.1  |
> >> | div[class^=dia]  | Error    |    ???           |
> >> +------------------+----------+------------------+
>
> >> Who has any idea what nonstandard selectors should do? Based on what?
> >> jQuery documentation? The libraries all copied the jQuery library and
> >> the jQuery library has always had vague documentation and inconsistent
> >> results across browsers and between versions of jQuery.
>
> > Understand that CSS selector queries came long before the standard
>
> The Internet *is* like an idiot amplifier, isn't it?

Yes. God knows.

>
> > specification.  In fact, there would be no such specification if not
> > for such scripts.  The specification is not retroactive and the tests
> > in question predate the documentation on the W3C site as well.
>
> You've still not read the CSS 2.1 specification and your ignorance is
> shining.

We are talking about selector *queries* (e.g. Selectors API).

>
>
>
> >> What can "td[colspan!=1]" or "div[class!=madeup]" be expected to do,
>
> > Obviously, the same thing that they would do with quotes around them.
>
> No, not obviously; not at all. Unquoted, they're nonstandard and
> proprietary.

Again, there was no standard for selector queries when these things
were created.

>
> Read the code and you'll see that unquoted, they resolve properties.
> Quoted, they use a standard interface. In jQuery, anyway.

jQuery is botched. They don't even know what their own code does with
attributes. That's one of the things I demonstrated (years ago).

>
> > You are focusing on irrelevant minutiae.  The point is that jQuery and
> > the like often trip over such queries (with or without the quotes).
> > How can that be?  Well, see my discussion here with Resig around
> > Halloween 2007 (or search the archive for about a hundred follow-ups,
> > usually involving Matt Kruse who spent years swearing such obvious
> > mistakes were to be expected).  That should answer all of the
> > questions.  If only it had for them.  :(
>
> >> other than throw an error? To answer that, you need documentation. The
> >> most obvious place to look for documentation would be the w3c Selectors
> >> API,
>
> > Ugh.  Chicken and the egg.  The code and tests predate the writeup.
>
> If you're referring to the W3C Selectors API as "the writeup", then
> you're right; however, consider that irrelevant here: selectors come
> from CSS2. Get it?

I get that you are grasping at straws at this point (on two different
fronts).

>
> > Get it?
>
> Me get what? Where selectors come from? Or your ignorance regarding
> that? It's pretty clear that I get both. How about you?

Oh brother. Of course CSS selector queries use CSS selectors. So
what?

>
> I've posted links to the CSS 2.1 specification I don't know how many
> times. You usually reply by saying it's irrelevant in some colorful form
> (e.g. "off in the weeds", "barking up the wrong tree").

Like with the positioning stuff? Yes, you always resort to quoting
specs when confused.

>
> Read down below where I wrote "(quoted from the CSS2.1 specification)".
>
> And if you want to know why the section of CSS 2.1 that defines
> "identifier" was quoted: the part of CSS 2.1 that makes use of the term
> `identifier` is the description of attribute values.
>
> CSS 2.1[2] states:
>
> | Attribute values must be identifiers or strings.
>
> So it is necessary to see what an identifier is, and I have already
> cited the CSS 2.1 definition of identifier.
>
> <http://www.w3.org/TR/CSS2/selector.html#matching-attrs>
>
> That is a W3C Candidate Recommendation, dated 08 September 2009. That
> means it is still a draft and so cannot be cited normatively; e.g., "It
> is inappropriate to cite this document as other than work in progress."

Indeed. So stop talking about "standards" for query engines. And no,
they don't have to follow CSS2.1 exactly. They never have. Not CSS3
either, despite claims to the contrary.

>
> However, the official specification that preceded it is CSS 2, dated
> 1998, which contains the same text verbatim. You can copy the text
> above, go to the CSS 2 specification, paste it into your browser's
> "find" feature, and see that it has been unchanged since 1998.
>
> <http://www.w3.org/TR/2008/REC-CSS2-20080411/selector.html#q10>

What a waste of time.

>
>
>
> >> and that will tell you that an error must be thrown because the
> >> attribute value is neither an identifier nor a string.
>
> > This isn't one of those Terminator movies.  The W3C can't go back in
> > time and abort jQuery, SlickSpeed and the like.
>
> I see you are making reference to Terminator movies and the w3c. I don't
> see any relevance to anything in this thread.
>
>
>
> >> MSDN documentation is wrong, too
>
> > Wouldn't shock me.
>
> If you don't read it, no, it wouldn't.

That doesn't make any sense.

>
>
>
> >>http://msdn.microsoft.com/en-us/library/aa358822%28VS.85%29.aspx
> >> | att Must be either an Identifier  or a String.
> >> | val Must be either an Identifier or a String.
>
> >> `att` should be an attribute name, not an identifier or a string.
>
> > Depends on what you consider to be right.  Clearly you don't have a
> > grasp on that yet, so I won't bother investigating the alleged mistake
> > on MSDN.  I presume they are documenting their rendition of QSA.  Just
> > as they previously documented their rendition of innerHTML,
> > offsetHeight, etc.  When and if these things get written up as
> > recommendations by the W3C, it will not render the previous behavior
> > and documentation "wrong".
>
> I hear you saying again that I don't understand.
>
> What I understand is that the MSDN documentation referenced above
> conflicts with the CSS2.1 specification and its predecessor CSS2, both
> cited above.
>
> `att` cannot be, as MSDN states it must, "a String." It must be the name
> of the attribute. The MSDN article calls this an HTML feature and goes
> on to list nonstandard HTML.
>
> Syntax:
> +-------------+------------------------+
> | HTML        | [att=val] { sRules }   |
> | Scripting   | N/A                    |
> +-------------+------------------------+
>
> Although they call this an HTML feature, they really mean it is a CSS
> feature. The document is linked from:
>
> <URL:http://msdn.microsoft.com/en-us/library/cc351024%28VS.85%29.aspx#attr...
>  >
>
> I also read that you presume the MSDN documentation is about IE's
> implementation of NodeSelector. Do I understand you correctly?
>
> As stated numerous times on this NG, including threads that you have
> replied to, offsetHeight has made it into a w3c recommendation. It is
> not a matter of "when and if"; It is called CSSOM. It was a massive f**k
> up by Anne van Kesteren. The details have been discussed here and have
> involved you, Lasse, and me.
>
>
>
> >> In CSS, identifiers (including element names, classes, and IDs in
> >> selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646
> >> characters U+00A1 and higher, plus the hyphen (-) and the underscore
> >> (_); they cannot start with a digit, or a hyphen followed by a digit.
> >> Identifiers can also contain escaped characters and any ISO 10646
> >> character as a numeric code (see next item). For instance, the
> >> identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F".
>
> >> (quoted from the CSS2.1 specification).
>
> Did you read that?

I don't care about that. Queries are a bad idea. End of story. My
creation of a query engine over a weekend two and a half years ago
notwithstanding. Get that?

>
> >> What happens if jQuery removes support for a particular selector?
>
> > They only support about half of them at this point I think.  Certainly
> > they are nowhere near compliance with CSS3 (which they disingenuously
> > claim to be).  And their script is not 24K by any comparison rooted in
> > reality (i.e. prior to GZIP).  They simply feed tripe to those naive
> > enough to swallow it.  Did you know My Library is roughly 42K after
> > compression?  The whole thing (which does 100 times more than jQuery,
> > including queries).  :)
>
> The script is 166k before minification.


That's just as irrelevant as measuring it compressed. The bigger the
better, even (likely more comments).

> jQuery.com claims 155k.

They lie about everything.

> And of
> course, if they used proper space formatting (not tabs), it would be a
> lot larger.

They use tabs?! :(

>
> >> Haven't they done that in the past for nth-of-type or attribute style
> >> selectors, XPath, and @attr?
>
> > nth-of-type?  No idea what they did with that.  IIRC, mine is the only
> > one I know of that attempts to support those (and as I've recently
> > realized, I didn't quite nail it).  The SlickSpeed tests can be
> > confusing to the uninitiated as most of the libraries now hand off to
> > QSA, so it may appear that unsupported selectors work.  Of course,
> > that's a big part of the problem.  Neophytes like to test in just the
> > latest browsers and avoid IE as much as possibly.  IE8 has QSA.  IE9
> > will likely augment its capabilities in that department.  So you can
> > have applications that appear to work if tested just in IE8 standards
> > mode and/or IE9 but have no shot at all in Compatibility View or IE<
> > 8.  That's why jQuery 1.2.6 (and not 1.2.1 as that was a misprint)
> > remains on the test page along side 1.3.x and 1.4.x.  That's the last
> > one that didn't hand off queries to the browser.
>
> IE8 has QSA but not in quirks mode and not in IE7 mode (as when using
> EmulateIE7 in a meta tag).

No kidding. :) Or you can just say Compatibility View (however it is
invoked).

>
>
>
> >> What should a:link match? Should it throw an error? What should be the
> >> expected result set of `null`?
>
> > What do you think it should match?  It won't match anything in any
> > query engine I know of.  No coincidence it is not featured on any of
> > the test pages.
>
> Remember that jQuery tries to use QSA; a:link matches links where QSA
> does not throw errors.

Again, my line. I've been saying it ever since "Sizzle" came out.
What a con to hand off queries to the browser, knowing that the
results would vary wildly from the fallback. Basically, QSA put all
of the "major" query engines in an untenable position. They tried a
big deception and apparently the masses bought it.

>
>
>
> >> If that is not what you want it to do; if you want something other than
> >> an error, you need to state what and why; you need documentation.
>
> > Who are you talking to?  I've said (and demonstrated) from the start
> > that these query engines are a waste of time.  Lack of documentation
> > is the least of the worries.
>
> >> A better test would be side-by-side comparison of NodeSelector. A good
> >> test case might be to take the w3c NodeSelector interface and rewrite it
> >> to make sure it fails for all the invalid syntax that is allowed in
> >> these things.
>
> > Nope.  More wasted time.
>
> Testing things against a standard might seem like a waste of time if the
> specification is not understood and if reading it is perceived as a
> waste of time.

Nobody should care about queries at this point. I know I don't.

>
> To me, comparing APIs that aren't clearly specified seems like a waste
> of time. It misses the point of what an API is for.
>
> Seems our opinions differ here.

No, you just misinterpret everything. It's impossible to carry on a
conversation.

>
>
>
> >> I think it is time to wake up. For you and for all web developers.
>
> > Me?!  You are just parroting my lines years later.  Where do you get
> > the gall?
>
> I am not parroting your lines.

Certainly you are.

>
> "My Library" was never a good idea.

LOL. The ideas it promoted were good enough for everyone else to
steal, blog about, etc. Where have you been?

> Start with no unit tests

Start with no unit tests? What does that even mean? It most
assuredly has unit tests (and has for some time).

> and copy
> other libraries' query selectors?

Nope. The (optional and discouraged) query module was never the
point.

> That is what all the other libraries are
> doing.

All of the other libraries are slowly, painfully evolving to look like
mine (as predicted).

> The only point in that is to try and attract more users, and that
> is something you have tried to do, too.

What the hell are you talking about now?

>
> Public APIs are forever. Notice that the one I wrote was AFL for nearly
> two years. I made a lot of mistakes and did not want to be tied to a
> public (=permanent) API. That is what all of the other libraries did.


So it all comes back to your knock-off. Whatever.

>
> In contrast, you did not learn from the others' mistakes. You actually
> copied the API approach and then advocated it as superior.

Copied what API approach? My (dynamic) API is nothing like the rest
of them. How could you miss that, when I stressed that point back in
the CWR days (fall of 2007)? Then there are the advanced feature
testing techniques, which were also unheard of at the time. Superior?
You bet. ;)


>
>
>
> >> You've written a long test that makes assertions about a loosely defined
> >> interface -- the query function, that is specified in documentation that
> >> is incredibly vague, and was aptly titled with an enigmatic identifier
> >> -- the dollar function.
>
> > Wrong.  MooTools wrote that stupid test years ago.  And all of those
> > stupid libraries that it was written to test used the "$" function.  I
> > just added more test cases and gave my getEBCS function a "$" alias.
> > Where have you been?  These are discussions from over two years ago.
>
> I see on your page: "Additional tests have been added."

Yes. Truth in advertising. :)

>
> Did MooTools add those additional tests or did you?

See above.

>
> >> That such an interface would be mimicked so many
> >> times over, and with variations, is alarming and I think indicates a
> >> problem with the industry.
>
> > Mine is not mimicry.  It is parody to make a point.  And I think it is
> > (finally) getting through loud and clear.
>
> Copying APIs like that is stupid.

Please stop blithering about the query engine.

[...]

I don't have time for any more of this nonsense.