From: Antony Scriven on
On Mar 7, 12:08 am, Michael Haufe wrote:

> On Mar 1, 10:39 am, Dr J R Stockton
> <reply1...(a)merlyn.demon.co.uk> wrote:
>
> > The time for an empty loop should be carefully measured
> > and subtracted.
>
> This approach would assume that the implementation in
> question doesn't optimize away the loop as a useless
> construct.

How? It's incrementing a variable. --Antony
From: Lasse Reichstein Nielsen on
Antony Scriven <adscriven(a)gmail.com> writes:

> On Mar 7, 12:08 am, Michael Haufe wrote:
>
> > On Mar 1, 10:39 am, Dr J R Stockton
> > <reply1...(a)merlyn.demon.co.uk> wrote:

> > This approach would assume that the implementation in
> > question doesn't optimize away the loop as a useless
> > construct.
>
> How? It's incrementing a variable. --Antony

If it's a local variable (and you *really* shouldn't use global
variables in a loop, or write benchmark code that runs at top-level),
and it's not read again afterwards, and it's possible to see that the
loop always terminates, then it's a safe optimization to remove the
entire loop.

I.e.
function test() {
  var x = 42;                                      // x is local and never read after the loop
  for (var i = 0; i < 1000000; i++) { x = x * 2; }
}

This entire function body can safely be optimized away.
Whether a JavaScript engine does the necessary analysis to determine
that is another question, but it's a possible optimization.
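Conversely, as a purely illustrative sketch (not from the thread), returning
the value keeps x live and so keeps the loop from being removed:

function test() {
  var x = 42;
  for (var i = 0; i < 1000000; i++) { x = x * 2; }
  return x; // x is read here, so the loop is no longer dead code
}
// The caller still has to use the return value (print it, compare it, etc.);
// otherwise a sufficiently clever engine could again discard the whole call.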

Quite a lot of stupid micro-benchmarks can be entirely optimized away
like this.

/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'

From: Jorge on
On Mar 7, 11:46 am, Lasse Reichstein Nielsen <lrn.unr...(a)gmail.com>
wrote:
>
> Quite a lot of stupid micro-benchmarks can be entirely optimized away
> like this.

Ermm, *cough* *cough*.
--
Jorge.
From: Lasse Reichstein Nielsen on
Jorge <jorge(a)jorgechamorro.com> writes:

> On Mar 7, 11:46 am, Lasse Reichstein Nielsen <lrn.unr...(a)gmail.com>
> wrote:
>>
>> Quite a lot of stupid micro-benchmarks can be entirely optimized away
>> like this.
>
> Ermm, *cough* *cough*.

I'm really taking a pot-shot at the people making "performance
benchmark suites" that claim to have lasting relevance and to test
something relevant to actual use.
If they are merely a bunch of micro-benchmarks, they are far too
easy to optimize for - without actually helping the performance
of real applications. Like this "dead variable/dead code" elimination
that wouldn't do anything for a real program that only computes things
it actually needs.

A benchmark should be built with a *goal*. It should measure something
relevant that you want to optimize for, and then you can use the
benchmark as a measure of the success of your optimizations.
And, preferably, it should compute a result (and check the result!),
and not do stupid things that no right-minded programmer would
do (like running entirely in top-level code or using global variables
where local ones would suffice). That just means that you end up
optimizing for stupid code instead of good code.
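
For example, a rough sketch in that spirit (the function, the numbers and
the expected value are all just made up for illustration):

function sumSquares(n) {
  var total = 0;
  for (var i = 0; i < n; i++) { total += i * i; }
  return total;                       // computes an actual result
}

function bench() {
  var expected = 332833500;           // sum of i*i for i = 0..999
  var start = new Date().getTime();
  var result = 0;
  for (var run = 0; run < 10000; run++) { result = sumSquares(1000); }
  var elapsed = new Date().getTime() - start;
  if (result !== expected) { throw new Error("wrong result: " + result); }
  return elapsed;                     // everything is local, and the result is checked
}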


Another thing to make is a "speed test". It just measures how the
current browsers are doing something *right now*. That's also a
perfectly fine thing to do, for doing comparisons. It's not a problem
if it's just a micro-benchmark, because if it is hit by some optimization
that skews the result, you can just rewrite it until it isn't, and
measure what you need.
It's just normally not suitable for being promoted as a benchmark -
something people should measure themselves against. It's likely to not
stand the test of time.
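
To illustrate that kind of rewrite (again just a hypothetical sketch, tying
it back to the empty-loop subtraction mentioned at the top of the thread):
make both loops produce a value that is read afterwards, so neither can be
optimized away, and then subtract.

function timeIt(f) {
  var start = new Date().getTime();
  var r = f();
  return { ms: new Date().getTime() - start, result: r };
}

function emptyLoop(n) {
  var i;
  for (i = 0; i < n; i++) {}
  return i;                            // i is read, so the loop survives
}

function workLoop(n) {
  var x = 0;
  for (var i = 0; i < n; i++) { x += Math.sqrt(i); }
  return x;                            // x is read, so the loop survives
}

var n = 1000000;
var base = timeIt(function () { return emptyLoop(n); });
var work = timeIt(function () { return workLoop(n); });
var netMs = work.ms - base.ms;         // subtract the empty-loop overhead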


/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'

From: Jorge on
On Mar 7, 1:16 pm, Lasse Reichstein Nielsen <lrn.unr...(a)gmail.com>
wrote:
> Jorge <jo...(a)jorgechamorro.com> writes:
> > On Mar 7, 11:46 am, Lasse Reichstein Nielsen <lrn.unr...(a)gmail.com>
> > wrote:
>
> >> Quite a lot of stupid micro-benchmarks can be entirely optimized away
> >> like this.
>
> > Ermm, *cough* *cough*.
>
> I'm really taking a pot-shot at the people making "performance
> benchmark suites" that claim to have lasting relevance and to test
> something relevant to actual use.
> If they are merely a bunch of micro-benchmarks, they are far too
> easy to optimize for - without actually helping the performance
> of real applications. Like this "dead variable/dead code" elimination
> that wouldn't do anything for a real program that only computes things
> it actually needs.
>
> A benchmark should be built with a *goal*. It should measure something
> relevant that you want to optimize for, and then you can use the
> benchmark as a measure of the success of your optimizations.
> And, preferably, it should compute a result (and check the result!),
> and not do stupid things that no right-minded programmer would
> do (like running entirely in top-level code or using global variables
> where local ones would suffice). That just means that you end up
> optimizing for stupid code instead of good code.
>
> Another thing to make is a "speed test". It just measures how the
> current browsers are doing something *right now*. That's also a
> perfectly fine thing to do, for doing comparisons. It's not a problem
> if it's just a micro-benchmark, because if it is hit by some optimization
> that skews the result, you can just rewrite it until it isn't, and
> measure what you need.
> It's just normally not suitable for being promoted as a benchmark -
> something people should measure themselves against. It's likely to not
> stand the test of time.

Of course. When you said "stupid micro-benchmarks" this one came to my
mind:
http://groups.google.com/group/comp.lang.javascript/msg/84fd9cbba33b9edd
--
Jorge.