From: Scott Sauyet on
On Feb 9, 2:20 am, David Mark <dmark.cins...(a)> wrote:
> Well, for example, Resig hides his head in the sane, refusing to even
> read this group.

Well, that is a beautiful typo! I'll bet John Resig would be willing
to agree that avoiding this group is avoiding insanity. :-)

-- Scott
From: David Mark on
On Jan 23, 7:58 pm, joedev <joe.d.develo...(a)> wrote:
> On Jan 24, 7:09 am, Thomas 'PointedEars' Lahn <PointedE...(a)>
> wrote:
> > David Mark wrote:
> > > Garrett Smith wrote:
> > >> Matt Kruse wrote:
> > >>> With so many globals, I would suggest giving them full names as well
> > >>> as the single-letter identifiers. E===Element, etc
> > >> That would conflict with any code that uses:-
> > >> Element.prototype.myFunc = [...]
> > > Yes.  It will likely end up as MyElement, MyForm, MyImage, MyDocument,
> > > etc.
> > > var myEl = MyElement('#test');
> > Or better myLib.getElement(...), as (structurally) suggested before?
> +1 Please do use a namespace

Well, I did (API) for all but the aforementioned constructors. I
haven't decided what I want to do with them yet. They are optional,
of course.
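For illustration, a minimal sketch of the namespaced accessor style suggested above (myLib.getElement). The implementation here is invented for the example; it is not My Library's actual code:

```javascript
// Sketch of the namespaced accessor discussed above (myLib.getElement).
// The body is illustrative only, not My Library's implementation.
var myLib = myLib || {};

myLib.getElement = function (id, doc) {
  // Default to the global document when no document is passed
  doc = doc || (typeof document !== 'undefined' ? document : null);
  if (doc && doc.getElementById) {
    return doc.getElementById(id);
  }
  return null;
};
```

The point of the namespace is that nothing here collides with host constructors like Element, whatever the browser defines.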
From: David Mark on
On Jan 23, 8:05 pm, Scott Sauyet <scott.sau...(a)> wrote:
> On Jan 23, 4:12 pm, Andrew Poulos <ap_p...(a)> wrote:
> > On 24/01/2010 4:17 AM, Scott Sauyet wrote:
> >> When I called your bluff, you tried to change the argument.  Do you
> >> really think many people care how your code runs on ancient browsers
> >> like FF1?  When's the last time you saw that browser in your logs?
> > IE 6 is 10 years old and my corporate clients still use it, and won't
> > upgrade any time soon, so ancient browsers are important.
> Oh yes, I always test with IE6.  I did forget to post those results,
> because I didn't have it on the machine I used yesterday.
>     934  1105  1771  3825  1113.
> Obviously MooTools falls down a bit and Prototype even more.  The rest
> were comparable.
> >> If you want to say that the difference in performance is mostly a non-
> >> issue, I will certainly agree with you.  But you are the one who made
> >> a point of bragging about it, and then pitted your latest library
> >> against much older versions of the competition.
> > I have colleagues who work for large development houses that *do* use
> > the "common" js libraries on a day-to-day basis. I learnt very early on
> > to not ask how their latest js project was going because of the violent
> > denunciation of whatever bug they had just discovered and were trying to
> > workaround. (I've been too "afraid" to ask why they don't drop using a
> > library).
> I think it's worth testing older libraries in various environments.
> What I objected to is the self-aggrandizing manner in which David
> Mark promoted the speed of his library, upgrading his library in the
> tests to the latest version, but leaving the other libraries with two-
> year-old versions.  You know he didn't expect anyone to notice.

That's complete bullshit. I hadn't done anything to that page in
years until you started focusing on it. I don't even consider it a
particularly compelling test, as running queries over and over is not
standard practice for an application. You misconstrued my comments
about speed as strictly related to SlickSpeed.

And again, testing QSA vs. QSA is ridiculous. They are all _fast
enough_ with QSA. Of course, many browsers in use today cannot handle
QSA, so the "slow lanes" are more important comparisons.
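A minimal sketch of the fast-lane/slow-lane fork being described (the fallback here is a stub handling only bare tag names; real libraries hand off to a full selector engine):

```javascript
// Sketch of the QSA-vs-fallback fork discussed above. The "slow lane"
// stub handles only bare tag names; real engines parse full selectors.
function select(selector, doc) {
  if (doc.querySelectorAll) {
    // Fast lane: browser-native Selectors API
    return Array.prototype.slice.call(doc.querySelectorAll(selector));
  }
  // Slow lane: hand-rolled traversal (tag-name selectors only here)
  if (/^[a-z]+$/i.test(selector) && doc.getElementsByTagName) {
    return Array.prototype.slice.call(doc.getElementsByTagName(selector));
  }
  return [];
}
```

Benchmarking only the first branch tells you nothing about the code path older browsers will actually hit.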

> When I post some results he responds by saying that I'm testing the
> wrong thing.

You were. Testing QSA vs. a library is stupid. Others have mentioned
that too. ;)

> Either the browsers are too recent or the computer is
> too fast.  It's nonsense, of course.

I never said anything like that, except in reference to being "fast
enough" on brand-new PCs, which is not exactly a badge of honor.
What about the millions of users who do not buy new PCs every six
months? There are lots of them out there.
From: David Mark on
On Jan 23, 9:35 pm, Andrew Poulos <ap_p...(a)> wrote:


> To repeat, the people that I know that use the "common" js libraries are
> unhappy with all of them.

And for very good reason. For years, these efforts have promoted
themselves as "fixing JavaScript" and claimed to "smooth out" cross-
browser quirks. In reality, it's all been a fraud (or delusion) on
their parts. My Library (and the CWR project before it) came about
after it was determined that these scripts were not solving anything,
but rather "punting" on everything (e.g. sniffing the UA string to
make it look like their designs were realized).

Then there is the outrageous "test-driven" development (as referenced
by one of Resig's posts a couple of years back). What it translates
to is a bunch of people who have no idea how to go about cross-browser
development, using unit tests to shape their designs. It's
programming by observation, not understanding.

They should solve the problems first (without consulting the baseless
UA string), then use unit tests to confirm their solutions are viable
in as many environments as possible. Using the "crystal ball"
approach is folly and results in patchworks that are never really done.
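The "solve first, then confirm" approach amounts to feature detection rather than UA parsing. A hedged sketch of the distinction (function names are illustrative, not from any library):

```javascript
// UA sniffing guesses at capability from a string the browser can fake:
function sniffsLikeIE(ua) {
  return ua.indexOf('MSIE') !== -1; // fragile: spoofable and version-bound
}

// Feature detection asks the environment directly:
function canAddListener(el) {
  return !!el && typeof el.addEventListener === 'function';
}
```

The first breaks the moment a browser changes (or spoofs) its string; the second keeps working because it tests the capability itself.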
From: David Mark on
On Jan 24, 9:50 pm, Scott Sauyet <scott.sau...(a)> wrote:
> On Jan 24, 6:38 pm, RobG <rg...(a)> wrote:
> > On Jan 25, 8:44 am, Scott Sauyet <scott.sau...(a)> wrote:
> > > On Jan 23, 9:35 pm, Andrew Poulos <ap_p...(a)> wrote:
> > [...]
> >>> Perhaps you *are* testing the wrong thing. Sorry if I missed it but I
> >>> didn't notice your defence of your claim.
> >> I'm not sure at this point "testing the wrong thing" has any
> >> significant meaning.
> > Testing is pointless if you don't have any criteria to establish what
> > the testing means. Speed is usually the last criterion to be
> > considered; more important ones are:
> Right, which makes it strange for David to claim that I was testing
> the wrong things.  How in the world could he *know* what tests are
> meaningful for me?
> >> I posted the results of my tests of his library
> >> in recent versions of the major browsers on a developer's Windows
> >> machine.  In what way could that be the wrong thing to test?
> > The users of such libraries are visitors to web sites. Testing
> > performance on a developers machine on a LAN (or with client and
> > server on the same box) is completely the wrong environment.
> Actually, I'm not doing much front-end development right now.  But
> there's a good chance I'll be doing so soon, for the corporate
> intranet at my job.  The project will have 50 - 100 users, most on IE8
> or FF3.5, but some probably on Chrome.  I will try to ensure that it
> will work in Opera and Safari as well.  I will probably be able to
> assume that the users will have Windows XP or Windows 7, and my
> machine is the type the company is using to replace old ones.  I can't
> assume they will be as powerful as mine, but I also don't need to
> worry about 500MHz, single-core processors.
> >> Additional tests on older machines, or other browsers are equally
> >> legitimate.
> > You mean essential.
> Yes, but there are limits to what's worth testing for any particular
> user.  I'm certainly not expecting this to even look reasonable in
> IE3.
> >> But people looking to use one library or another should
> >> know how they perform in the environments in which they expect the
> >> libraries to run.
> > Precisely, which is why results from a developer's machine mean very
> > little.
> Unless... :-)
> For me, ancient processors and FF1 mean very little.
> > [ ... ]
> > The slickspeed tests are designed for one purpose only: to test the
> > speed of CSS selectors. If the "major libraries" fork into browser-
> > native QSA branches and don't use their CSS selector engines, then
> > what is being tested? The tests themselves don't even use a suitable
> > document, they use a document essentially picked at random.
> > If the tests were to have any real meaning, the test document should
> > be specifically designed to test several scenarios for each selector,
> > such as a group of elements close together, some widely separated in a
> > shallow DOM and others in a deep and complex DOM. It may be that a
> > particular library comes up trumps in one type of DOM but not in
> > another. There should also be edge cases of extremely complex
> > selectors that may never occur in reality, but test the ability of the
> > engine to correctly interpret the selector and get the right elements.
> > Speed may be a very low priority in such cases.
> There's a lot to be said for that.  But there's also a lot to be said
> for a process that weighs the speeds of the selectors depending upon
> the likely common usage of each.  A test that weighs these equally has
> some clear-cut issues:
>     span.highlight
>     #myDiv ul ul li:nth-child(7n + 3)
> >> I think it's great that David is bringing another library into the
> >> fray.  Certainly the survivors of the last rounds of the competition
> >> have a great number of flaws, and the competition should help improve
> >> all of them.  But if he starts by making over-exaggerated claims about
> >> his library, he is doing everyone (himself included!) a great
> >> disservice.
> > No one's perfect. But subjective criteria like "is the architect a
> > nice guy" don't rate too highly in my selection criteria. I've worked
> > with a number of self-opinionated arseholes who were, nevertheless,
> > very good at their job. I much preferred working with them to the Mr.
> > Nice Guys who were barely competent but great to talk to over a
> > beer.  :-)
> Oh, I'd always prefer to work with someone competent but less
> likable.  However, I would hesitate to commit to using his library in
> any production environment until the people helping support it
> seem willing to admit their faults and are honestly interested in
> helping users through their problems.
> If it's a one-man show, then I
> want that one man to be someone whose responses to requests for help,
> to suggestions, and to critiques are helpful rather than abusive.

No need to speculate.
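Scott's weighting idea earlier in the thread can be sketched as a weighted average over per-selector timings, so that common selectors like span.highlight count for more than exotic ones. The weights and timings below are invented purely for illustration:

```javascript
// Weighted benchmark score: common selectors count for more than
// exotic ones. Weights and timings here are illustrative only.
function weightedScore(timings, weights) {
  var total = 0, sum = 0, sel, w;
  for (sel in timings) {
    if (Object.prototype.hasOwnProperty.call(timings, sel)) {
      w = weights[sel] || 1; // unweighted selectors default to 1
      total += timings[sel] * w;
      sum += w;
    }
  }
  return sum ? total / sum : 0;
}
```

An unweighted SlickSpeed-style total treats a selector run once per page load the same as one run in a tight loop; a weighted score at least acknowledges likely usage.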