From: Dragontamer on
Betov wrote:
> "Dragontamer" <prtiglao(a)gmail.com> écrivait
> news:1139446015.373135.312150(a)g14g2000cwa.googlegroups.com:
>
> > BU_ASM->Introduction to the Assembly Rebirth by Rene
> >=========The sub-line of such claims is always that Assembly is just
> >good for
> > producing inline routines in order to improve the HLLs productions
> > performance quality and to overcome their limitations, but that
> > writing Applications in Assembly would be a strange idea coming from
> > guys unable to use ''the proper tool for the proper thing''.
> >
> > These ideas are basically unfounded and stupid from every point of
> > view:
> >
> > *If Assembly interest is the speed of produced Code, with the actual
> > Processors performances, the argument is very weak, and the Processors
> > evolutions will go on and on, pushing to stupid and dirty programming
> > methods.
> >
> > *If Assembly interest is at the size of produced Code, the argument is
> > completely ridiculous, given the actual Hard Disks capacities and
> > given the modern OSes Memory Manager's performances.
> >
> >=========
> > But all that aside, what is it? In your last post, you imply
> > that speed is a major advantage for writing in assembly.
> > While in BU_ASM, you claim it is a weak argument.
> >
> > And I know you say time and time again that you don't
> > write in assembly for speed, but for readability.
> >
> > So which is it?
> >
> > --Dragontamer
>
> Speed is given for "almost free" to the Assembly Programmers
> because they have a direct access to Strategy Optimization,
> that is the only Optimization that matters.

This has been a sticking point for a while with your
assembly language stuff. Can you give an example of
Strategy Optimization? "War story" or something?

> As opposed to the ones playing the fool with HLLs and Code
> Level Optimized Routines, what i claim is based on the
> materials, i am giving away, that is, RosAsm, which is the
> fastest of the actual Assemblers, whereas i never did any
> kind of Code Level Optimization effort.

I take it you haven't been reading much recently on Software
Engineering... The first rule of optimization is don't...

It has been well known for years you shouldn't optimize your
code. The majority of the industry (in my eye anyway) seems
to spend no time optimizing, regardless of their language.

> So, in order the significative points are:
>
> * Readability, because if you can't read a Source you loose
> control on the levels, faster than you wrote it.

I agree on this point.

And with that said: more people can read C, and fewer
people can read Assembly. By sheer population alone, C
is "more readable" by the population :-p

As for a more serious answer, readability is a difficult subject
where you would have to take into consideration a huge
number of factors, including the size of the language, the tutorials
and code base available, an individual's expectations, and
more. (Some find the = operator in many languages
unreadable, because math-oriented people read it
as "equals" and not "assignment"...)

Overall, "readability" gets confounded with experience. A C++
user may expect Objective-C to automatically call the destructor,
while assembly programmers expect nothing to be done
before and after a function is called (constructor/destructor
calls make no sense to an assembly programmer).

"Features" become "idiotic language design", and vice versa.

So all in all, "readability" is a moot point, and must really
be taken by the individual to form his or her own opinion on
the matter.

> * Strategy Optimization. With that one, speed gain is not
> a matter of saving, say, 5% of time. It is a matter of
> suppressing the bottles' necks, by human logic. What is not
> theorically impossible to achieve in HLLs, but what is
> de-facto almost never found in HLLs, [They use Languages
> that _HIDE_. Period], for the very simple reason that 99%
> of the HLLers have no idea about what they are doing.

Why use "logic" when you've already got the tools?

http://www.cs.utah.edu/dept/old/texinfo/as/gprof.html#SEC5

These code profilers work with C, C++, TCL, Python, Perl, Scheme,
etc. etc.

I dunno about you, but it seems things are exactly the opposite:
Tools exist to find out the bottlenecks (say, where your code spends
20% of its time) for HLLs, as more compile time information is there.

There is even a tool for counting how many lines of your code were
executed for C.

http://www.network-theory.co.uk/docs/gccintro/gccintro_81.html

While I'm sure there are assembly-level tools for this, I'm just not
sure they would give you the same amount of information,
such as function calls or parameters. Even then, the tools would
have to be built for a specific assembler, because every assembler
does things a little differently.

Now if you mean something else by this Strategy optimization,
please tell me again. Because if it is this "profiling" stuff, it has
already been done in nearly all HLLs.

--Dragontamer

From: randyhyde@earthlink.net on

Dragontamer wrote:
> >
> > Speed is given for "almost free" to the Assembly Programmers
> > because they have a direct access to Strategy Optimization,
> > that is the only Optimization that matters.
>
> This has been a sticking point for a while with your
> assembly language stuff. Can you give an example of
> Strategy Optimization? "War story" or something?

The only example I remember him using in this newsgroup was how he left
some feature out of a program because it was expensive to implement
(i.e., fully implementing displacement size optimization in x86
instructions). Hardly a "strategic optimization" if you don't achieve
what the specs call for (reasonable optimization, comparable to what
other assemblers produce; full optimization is not likely to happen, as
the problem is NP-complete, but Rene's results are quite poor).

Basically, he trades a crappy optimization algorithm that runs fast for
a decent optimization algorithm that takes a little longer. Cool, if
having a fast algorithm is your primary goal and producing optimized
code is of little consequence. Then again, FASM, which does a *much*
better job than RosAsm at optimizing code sequences, is better than 2x
faster on all the tests I've run, so it doesn't seem like Rene's
optimization was all that "strategic".


>
> > As opposed to the ones playing the fool with HLLs and Code
> > Level Optimized Routines, what i claim is based on the
> > materials, i am giving away, that is, RosAsm, which is the
> > fastest of the actual Assemblers, whereas i never did any
> > kind of Code Level Optimization effort.
>
> I take it you haven't been reading much recently on Software
> Engineering... The first rule of optimization is don't...

I have read a lot on software engineering over the past 20 years. I
don't find that rule anywhere. I do read a lot about "Premature
optimization is the root of all evil." But keep in mind that this
statement was made over 30 years ago, when optimization meant something
entirely different than it does today.

>
> It has been well known for years you shouldn't optimize your
> code.

Uh, and for what reason? This is a new one to me and I would suggest
that this isn't particularly well-known.

Certainly it is the case that *marketing pressures* have kept people
from going back and cleaning up their code, particularly during the
late 1990s and early 2000's when processors were doubling in speed
every couple of years. But today that progress has stagnated, and you
cannot rely upon the side effects of Moore's Law to let you write code
that is half the speed it really needs to be and still do okay when the
product ships, by virtue of faster CPUs.


> The majority of the industry (in my eye anyway) seems
> to spend no time optimizing, regardless of their language.

Could it be that the majority of the industry grew up when CPUs were
doubling in speed every year or two and they never bothered to *learn*
how to optimize their code? Granted, CPUs are relatively fast today.
But unless people learn to optimize their code (or take advantage of
new facilities, such as learning concurrent programming -- a form of
optimization), they're not going to be capable of writing the next
generation of applications that require more CPU cycles than today's.


>
> I dunno about you, but it seems things are exactly the opposite:
> Tools exist to find out the bottlenecks (say, where your code spends
> 20% of its time) for HLLs, as more compile time information is there.

???
Ever used VTune?

>
> There is even a tool for counting how many lines of your code were
> executed for C.

Check out the trace facility in HLA. (Shameless plug for James). It
gives you the ability to count (and otherwise process) each individual
statement execution in an assembly language program. I'm not an expert
at VTUNE, but I'm pretty sure it gives you this ability too. Of course,
the profiler in Visual Studio works great with MASM (and MASM output
from HLA -- another shameless plug :-).

> While I'm sure there are assembly level tools for this, just I'm not
> sure if it would be able to give you the same amount of information,
> such as function calls or parameters.

It gives you different information. But certainly function calls are
accumulated in various profilers that work fine with assembly language
programs.


> Even then, the tools would
> have to be built to a specific assembler because every assembler
> does things a little differently.

Not if the tool works at the binary level, or uses standard debugging
output in the object code file (e.g., STABS).


>
> Now if you mean something else by this Strategy optimization,
> please tell me again. Because if it is this "profiling" stuff, it has
> already been done in nearly all HLLs.

Actually, the *big* problem with profiling and optimizing after a
program is written is that you're stuck with the overall architecture.
The phrase "premature optimization is the root of all evil" was penned
in the days when people would start worrying about register allocation
and instruction scheduling when they started writing their first lines
of code. And there is no real reason for that level of optimization
when coding first begins. On the other hand, too many people have
interpreted that statement to mean "I don't need to worry about
optimization at all -- when the program's complete and performance
sucks, then I'll just use a profiler to find the hot spots and fix
those."

The problem with that theory, of course, is that the "20% of the code
where the program spends 80% of its execution time" is rarely located
in one place. It is almost never the case that you can surgically
operate on that 20% without attacking a fair chunk of the other 80%.
Worse, by the time the program (of any decent size) reaches the point
where you're profiling it, it's *far* too late to solve the performance
issues by rearchitecting the design. Yet a new design (with better
algorithms, what I believe Rene is calling "strategic optimization") is
usually what's required to get significantly better performance.

Of course, one thing to keep in mind is that program size (and, in
particular, how much memory a program uses during a typical execution)
is one of the *huge* factors in determining program performance today.
I have profiled HLA, for example, and found that one of the *main*
reasons it runs slower than smaller assemblers is because it is large
and it tends to touch a lot of memory during execution. As a result,
the cache is often getting thrashed and that means that HLA v1.x is
running about an order of magnitude slower than it would if everything
was sitting inside the cache. Indeed, running HLA on a newer processor
with a *lot* of cache memory improves performance tremendously. One of
the reasons that HLA v2.0 (in development) runs *considerably* faster
is not because it's written in assembly, but because the working set
that HLA v2.0 employs is *much* smaller than the working set HLA v1.x
uses. As such, the assembler (such as it exists today) is running
almost totally out of cache. This makes a *big* difference.

The problem with lazy programmers, who don't bother optimizing, is that
they not only use poor algorithms that consume lots of time, but they
don't optimize their data structures, either. As such, their programs
consume lots of memory and the cache gets thrashed.

Instruction scheduling and other such optimizations (that you'd do with
a profiler) really are great tools when all the other factors (better
algorithms, good memory usage, etc) are already taken care of. But
attempting to run a profiler on a poorly architected system is like
putting a band-aid on a severed limb. It's not going to help much.
That's probably why people around you believe that optimization isn't
worth doing. They tried to optimize their program at the wrong point in
the development cycle and discovered that they couldn't achieve much by
it.
Cheers,
Randy Hyde
P.S. I agree that few programmers today bother to optimize their code.
As I said, this is largely because they never learned how to do it
properly in the first place.

From: Dragontamer on

randyhyde(a)earthlink.net wrote:
> Dragontamer wrote:
>> Betov wrote:
> > > As opposed to the ones playing the fool with HLLs and Code
> > > Level Optimized Routines, what i claim is based on the
> > > materials, i am giving away, that is, RosAsm, which is the
> > > fastest of the actual Assemblers, whereas i never did any
> > > kind of Code Level Optimization effort.
> >
> > I take it you haven't been reading much recently on Software
> > Engineering... The first rule of optimization is don't...
>
> I have read a lot on software engineering over the past 20 years. I
> don't find that rule anywhere. I do read a lot about "Premature
> optimization is the root of all evil." But keep in mind that this
> statement was made over 30 years ago, when optimization meant something
> entirely different than it does today.

[The First Rule of Program Optimization] Don't do it.

[The Second Rule of Program Optimization---For experts only] Don't do
it yet.

Michael Jackson
Michael Jackson Systems Ltd.

I've heard "Premature optimization is the root of all evil", but this
one conveys the same concept. Perhaps it is because I cut off the 2nd
rule that you didn't remember it :)

> > It has been well known for years you shouldn't optimize your
> > code.
>
> Uh, and for what reason? This is a new one to me and I would suggest
> that this isn't particularly well-known.

It came out wrong. As in: you shouldn't optimize your code unless
you know what you are doing. :-p The basic concept is still there:
optimizing should always be the last thing you do, if at all.

> > I dunno about you, but it seems things are exactly the opposite:
> > Tools exist to find out the bottlenecks (say, where your code spends
> > 20% of its time) for HLLs, as more compile time information is there.
>
> ???
> Ever used VTune?

Nah, but I've heard it is an awesome tool.

[snip]

> The problem with lazy programmers, who don't bother optimizing, is that
> they not only use poor algorithms that consume lots of time, but they
> don't optimize their data structures, either. As such, their programs
> consume lots of memory and the cache gets thrashed.

That isn't "optimization" in my eyes, that's a design flaw. :-/

> Instruction scheduling and other such optimizations (that you'd do with
> a profiler) really are great tools when all the other factors (better
> algorithms, good memory usage, etc) are already taken care of. But
> attempting to run a profiler on a poorly architected system is like
> putting a band-aid on a severed limb. It's not going to help much.
> That's probably why people around you believe that optimization isn't
> worth doing. They tried to optimize their program at the wrong point in
> the development cycle and discovered that they couldn't achieve much by
> it.

Heh, doing anything with a poor architecture isn't exactly pretty,
including fixing one up.

--Dragontamer

From: Charles A. Crayne on
On 9 Feb 2006 16:33:55 -0800
"Dragontamer" <prtiglao(a)gmail.com> wrote:

:[The First Rule of Program Optimization] Don't do it.

The version I like best is:

"Coding without design leads to optimizing without end."

-- Chuck
From: o///annabee on
On 9 Feb 2006 12:05:21 -0800, Dragontamer <prtiglao(a)gmail.com> wrote:

> Betov wrote:

>
> This has been a sticking point for a while with your
> assembly language stuff. Can you give an example of
> Strategy Optimization? "War story" or something?

My first asm Strategy Optimization was to change from Delphi to RosAsm,
two years and 1 month ago.
So far this is the best example I have got.



>
> --Dragontamer
>
