From: Herbert Kleebauer on
Rod Pemberton wrote:
> "Frank Kotler" <fbkotler(a)verizon.net> wrote in message

> > The source is 16 bits...
>
> Where's your proof? Nasm64developer said so?
>
> Viktor: "Lucian is dead."
> Singe: "According to whom?"

Intel/AMD define the processor architecture, including the
instruction set and opcode map. But the symbolic representation
of an instruction is not defined by the processor architecture
(and therefore not by Intel/AMD) but by the author of the assembler.
This symbolic representation should be as logical as possible (which
therefore excludes the symbolic representation Intel/AMD use in
the processor manuals).

Now, a selector has a size of 16 bits. So if you have to specify
a register which holds the selector, it's "logical" to use the name
of a 16 bit register (lsl eax, bx) and not a 32 bit register name
(lsl eax, ebx).

But it would be even more logical to attach the size to the
instruction and not to the register:

ldsl.l r3,r0


It's the same as with the shift instruction. You use

"shl eax,cl" and not "shl eax,ecx" because cl fits the
5 bit shift count better than ecx. But here, too, the
logical way would be:

lsl.l r2,r0
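Herbert's point about the shift count — that only a few low bits of the count register matter, so a small register name like cl is the natural fit — can be checked numerically. A minimal sketch (not from the thread; the function name is mine) modeling 32-bit SHL, which masks its count to 5 bits:

```python
def shl32(value, count):
    """Model of 32-bit x86 SHL: the count is masked to 5 bits,
    and the result is truncated to 32 bits."""
    return (value << (count & 0x1F)) & 0xFFFFFFFF

# A count of 33 behaves exactly like a count of 1:
assert shl32(1, 33) == shl32(1, 1) == 2
```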
From: Chuck Crayne on
On Thu, 21 Aug 2008 02:50:31 -0400
"Rod Pemberton" <do_not_have(a)nohavenot.cmm> wrote:

> But, to me, it seems that the
> opcode map fails to make much sense for source register disassembly
> for LAR and LSL by not fitting the cpu's register model.

True enough, but what you refer to as the "register model" doesn't
make much sense either. Even in the emulated IA-32 hardware model,
there are no 8-bit or 16-bit registers. And in the x86_64 model,
there are no 32-bit registers. When we refer to the "AH register",
for example, what we really mean is the low order 8 bits of the EAX
register. (Or in my case, the low order 8 bits of the RAX register.)
Nevertheless, the notation is useful, because it clearly indicates the
number of bits affected. Thus, it makes sense to me to disassemble a
register operand which Intel specifies as Ew as a 16-bit register.
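Chuck's choice — rendering an Ew operand with a 16-bit register name — amounts to a lookup by the 3-bit register number and the operand size. A toy disassembler table as a sketch (the register order is the standard x86 encoding; the function name is an invention of mine):

```python
# Register names indexed by the 3-bit reg/rm field, per operand size.
REG16 = ["ax", "cx", "dx", "bx", "sp", "bp", "si", "di"]
REG32 = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]

def name_reg(reg_field, operand_size):
    """Map a 3-bit register number to a name for the given operand size."""
    return REG16[reg_field] if operand_size == 16 else REG32[reg_field]

# An Ew operand with reg field 3 disassembles as "bx", not "ebx":
assert name_reg(3, 16) == "bx"
```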

As to the opcode map, for many years now, Intel has been walking a
tight-wire between the desire to take advantage of new technology
and the industry demand for machine level compatibility. As a result,
the opcode map has long since lost all resemblance to the actual
hardware. Today's actual cpu register model is an implementation
dependent number of 128-bit registers, with the move to 256-bit
registers promised in the next year or two.

The NASM team, having pretty much gotten its act together with respect
to the x86_64 architecture and its associated REX prefixes, is now
learning to cope with the 128-bit and 256-bit AVX extensions and their
associated VEX prefixes.

Is it any wonder then that none of us are particularly concerned about
conforming to a register model which is now a quarter-century old?

--
Chuck
http://www.pacificsites.com/~ccrayne/charles.html


From: rio on
"Alexei A. Frounze" <alexfrunews(a)gmail.com> ha scritto nel messaggio
news:2d7b1027-9f13-4602-b810-b9b0813070c2(a)k36g2000pri.googlegroups.com...

> I was only involved in two things, which I thought were bad ('cause I
> needed them to work:):
> 1. include paths for INCBIN and the direction of the path traversal.
> I've been credited for this somewhere in the doc.
> 2. broken symbolic debug information for Turbo Debugger. I don't
> remember if this actually got fixed or reverted to what it was. The
> problem AFAIR was that some of the state was saved in variables
> declared as static and some NASM's functions were relying on those to
> keep the state between the calls. Somebody very smartly decided to
> "fix" these ugly static variables and just dropped static effectively
> turning the variables into local ones. The wrong fix broke the
> symbolic info.

For the problem of using the Borland debugger:

nasmw -fobj file.asm
bcc32 -v file.obj

and in the file.asm there should be
"
section _DATA use32 public class=DATA
global f
global _main

section _TEXT use32 public class=CODE
f:
ret

...start: ; the entry point: this should be ..start:
_main: ; it seems the linker searches for _main
ret
"

All symbols that the debugger has to see must be global.
If "f" is not global, it is not seen.

> I sent a bunch of messages on this to nasm-devel at lists... and
> exchanged some info with Frank. The messages were from July 2003 and
> their subject line contained:
> a) Nasm + Borland ( was Once more time - outbin.c (fwd))
> b) NASM's Borland Debug Info Output
> I think I wasn't credited for this and that's because probably it
> didn't get fixed...
> Can somebody look this up in the old mail and then in the NASM source?
>
> Alex



From: Rod Pemberton on
"Chuck Crayne" <ccrayne(a)crayne.org> wrote in message
news:20080821183358.2b265275(a)thor.crayne.org...
> On Thu, 21 Aug 2008 02:50:31 -0400
> "Rod Pemberton" <do_not_have(a)nohavenot.cmm> wrote:
>
> > But, to me, it seems that the
> > opcode map fails to make much sense for source register disassembly
> > for LAR and LSL by not fitting the cpu's register model.
>
> True enough, but what you refer to as the "register model" doesn't
> make much sense either. Even in the emulated IA-32 hardware model,

I'm not sure why you used "emulated" in this sentence...

> there are no 8-bit or 16-bit registers.

True, if I remove "emulated"...

> And in the x86_64 model,
> there are no 32-bit registers.

OK. (BTW, it still seems to fit with the model I'm using... 8/16, 8/32, now
8/64... but, I'm not current on 64...)

>When we refer to the "AH register",
> for example, what we really mean is the low order 8 bits of the EAX
> register. [Or in my case, the low order 8 bits of the RAX register.)

True, if I reject the (obvious) typo... If not a typo: False.

> Nevertheless, the notation is useful, because it clearly indicates the
> number of bits affected.

But it's not just a matter of size; the encoding is important. In the bits
used to encode an instruction, a 16-bit instruction can encode only two
sizes of register: 8-bit and 16-bit. A 32-bit instruction: 8-bit and 32-bit.
The mode and overrides determine whether the non-8-bit register will be
16-bit or 32-bit. Not sure what holds for 64-bit.
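The mode/override interplay Rod describes can be modeled in a few lines. A simplified sketch (my own naming; it deliberately ignores 64-bit and REX.W):

```python
def operand_size(mode, has_66_prefix):
    """Effective non-8-bit operand size: the 0x66 prefix toggles
    between 16 and 32 bits relative to the current mode's default."""
    default = 16 if mode == 16 else 32
    if has_66_prefix:
        return 16 if default == 32 else 32
    return default

# In 32-bit mode, the 0x66 prefix selects the 16-bit register:
assert operand_size(32, True) == 16
```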

> Thus, it makes sense to me to disassemble a
> register operand which Intel specifies as Ew as a 16-bit register.

But it's not really a 16-bit-only register, due to the encoding. It's a
16-bit register in 16-bit mode and a 32-bit register in 32-bit mode,
depending on the mode and overrides. The instruction descriptions confirm
that this is the case. And the Intel manuals (since at least 2003 or so)
explicitly state that 32-bit mode reads 32-bit registers, although it
discards the unused upper 16 bits. I.e., it's really acting like Ev, not
like Ew. From the manual descriptions, this is the case for the 386 also.
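Rod's reading of the manual — the cpu fetches the full source register and discards the upper bits — can be expressed as a toy model. A sketch under that assumption (the function name is hypothetical, not Intel pseudocode):

```python
def lsl_selector(src_reg_value):
    """Model of how LSL treats its source in 32-bit mode, per the
    reading above: the full 32-bit register is read, but only the
    low 16 bits (the selector) are used; the upper half is discarded."""
    return src_reg_value & 0xFFFF

# Garbage in the upper word does not change the selector used:
assert lsl_selector(0xDEAD0008) == lsl_selector(0x00000008) == 0x0008
```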

> Is it any wonder then that none of us are particularly concerned about
> conforming to a register model which is now a quarter-century old?

First, I suspect that 16-bit and 32-bit code for BIOS, video BIOS, PCI, will
be around for a long time...

Second, although Intel changed the instruction set for one of their cpu's,
AMD64 kept the instruction set mostly the same. I.e., the quarter-century
old register model is preserved because the instructions that operate on
those registers were preserved.


Rod Pemberton

From: Herbert Kleebauer on
Rod Pemberton wrote:
> "Herbert Kleebauer" <klee(a)unibwm.de> wrote in message

> > Intel/AMD define the processor architecture, including the
> > instruction set and opcode map. But the symbolic representation
> > of an instruction is not defined by the processor architecture
> > (and therefore not by Intel/AMD)
>
> I disagree. (I've said that to you previously...)

To say it with your own words:

Where's your proof? ......... said so?

Viktor: "Lucian is dead."
Singe: "According to whom?"


> > but by the author of the assembler.
>
> I know. (You've stated that previously...)

You know? And you still disagree? Then maybe any further discussion
will not make much sense.


> > This symbolic representation should be as logic as possible (which
> > therefore excludes the use of the symbolic representation used by
> > Intel/AMD in the processor manuals).
>
> And, now we're onto your favorite topic: syntax for HK's Windela. :-)

No. The topic is: Intel syntax is illogical. I very rarely posted
Windela/Lindela code lately, I mostly use NASM code (see my source
code posting from yesterday). Sadly the NASM macro system isn't powerful
enough to really "make" it a logical syntax.


> > Now, a selector has a size of 16 bit. So if you have to specify
> > a register which holds the selector, it's "logical" to use the name
> > of a 16 bit register (lsl eax, bx) and not a 32 bit register name
> > (lsl eax, ebx).
>
> Okay, lets work through your example for data which isn't a selector. Now,
> an ASCII char has a size of 7 bit. So if you have to specify a register
> which holds the ASCII char, it's "logical" to use the name of a 7 bit
> register ... Hmm, that doesn't seem to work.

Therefore the _name_ of a byte register is mostly used to access an ascii
value stored in a register (sub al,'0'). But sometimes it is convenient to
use the _name_ of a 32 bit register (e.g. when using a look-up table:
mov al,[esi+eax] ). But this only makes sense when the 32 bit register
really contains a 7 bit ascii value, which means the upper three bytes
are zero. To speak of a selector in a 32 bit register likewise only makes
sense when the upper word is zero.
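Herbert's caveat — indexing with a full 32-bit register that holds an ASCII value only works when the upper bytes are zero — is easy to demonstrate. A Python stand-in for the `mov al,[esi+eax]` look-up (table contents are my own illustration):

```python
# A 128-entry look-up table, standing in for the table at [esi].
table = [chr(i).upper() for i in range(128)]

def lookup(eax):
    """Index the table with the full 32-bit register value,
    as mov al,[esi+eax] would."""
    return table[eax]  # goes wrong if the upper bytes aren't zero

assert lookup(ord('a')) == 'A'   # upper three bytes zero: works
try:
    lookup(0x00010061)           # stale upper word: index is way off
except IndexError:
    pass
```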

But you still mix up the name of a register with the register itself
(which is a result of the illogical Intel syntax). The 32 bit x86
architecture has 8 (7 without the sp) general purpose registers.
Which one is used is mostly coded in a three bit field in the instruction
(which can hold values 0 - 7, and therefore it would be logical to call
them register0 - register7). There are not eight 32 bit registers
(eax, edx, ..) and eight 16 bit registers (ax, dx, ..) and eight 8 bit
registers (al, dl, ...), but only these eight 32 bit registers. You use
the name of a (non-existing) 16 bit register in the symbolic
representation of an instruction to make it obvious that only the lower
half of the 32 bit register is used in the operation ( add ax,dx ). It
would be much more logical to use the names of the really existing
registers eax,edx (or even better r0,r1) and use an operator which makes
clear that only the lower half of the registers is used ( add.w r1,r0 ).

But if you prefer the virtual register names to specify which part
of the register is used, then you have to use "lsl eax, bx" to make
clear that only the lower half of the ebx register is used in this
instruction.



> Perhaps, we're looking at the
> problem in reverse: trying to fit data of a specific size to a standard
> register size and then when the instruction doesn't support that size
> register attempting to force the instruction to use that size. Shouldn't
> we be looking at the problem the other way around: seeing what sizes of
> register the instruction supports and then decide on one that will also fit
> the data? What I'm trying to say is that when it comes to selectors
> everyone is trying to match the register to the data instead of trying to
> match the data to the register. This seems to be the opposite of what one
> normally does.

Sorry, that doesn't make any sense. The virtual names of the non-existing
16 and 8 bit registers are introduced by the Intel syntax only to denote
the part of the existing 32 bit register which is used by the instruction.
If "lsl eax, bx" only uses the lower half of ebx but you still want to use
the name ebx instead of bx, then you claim that it wasn't a good idea
to introduce the virtual 16 and 8 bit register names at all (which I
fully support).


> > But even more logical it would be, to put the size to the
> > instruction and not to the register;
>
> I have no real issues with this. (I've mentioned this previously... too.)
>
> > ldsl.l r3,r0
>
> The things I like to see here (most of which I've mentioned previously...
> also):
>
> 1) standard Intel instruction names

Why? What makes the names Intel has chosen so preferable? But don't
say: "because they are from Intel".

> 2) a character between the instruction name and sizes to make recognizing
> the instruction easy

There is a character between name and size: the '.'

> 3) same register order as Intel, not Motorola...

Intel uses a three bit field for register numbering (0-7);
the cpu only understands this number and not eax,ebx,ecx,...


> 4) standard Intel register names *OR* some method other than r# so I know
> which r# corresponds to eax,ax,al... etc.

Why should the eight physically existing registers correspond to ascii
strings like "eax", "ebx", ... ? Forget about these strings, they don't
make any sense. Which names do you use for the additional registers
in 64 bit mode?
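For what it's worth, x86-64 answers Herbert's closing question in his favor: the eight extra registers are simply named r8-r15, and a REX prefix bit extends the 3-bit field to a 4-bit register number. A simplified sketch of that numbering:

```python
def reg_number(rex_bit, field):
    """x86-64 register number: one REX prefix bit extends the
    3-bit ModRM register field to 4 bits."""
    return (rex_bit << 3) | field

# 64-bit register names indexed by the 4-bit register number.
NAMES64 = ["rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi",
           "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15"]

assert NAMES64[reg_number(0, 0)] == "rax"
assert NAMES64[reg_number(1, 0)] == "r8"
```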



> E.g., (changing instruction from lds to adc...)

> But, since there is some redundancy and you don't like redundancy, if "ra"
> is eax,ax,al:
>
> adc.ll ra,rb
> adc.ww ra,rb
> adc.bb ra,rb (b for lower byte)
> adc.hh ra,rb (h for upper byte)
> adc.hb ra,ra

That's even more illogical than the Intel way. You still specify
the size of the operands (.ww, .ll), merely appended to the operator.
You should specify the size of the operation (.w, .l), not of the
operands (there are only 32 bit registers, no 8 or 16 bit registers).


> I think that is fairly clean. I can still tell that "a" is for eax,ax,al,ah
> and "b" is for ebx,bx,bl,bh without having to have continuously read a full

Why do you always insist that something like eax, ax, al exists at all?
All that exist are flip-flops (32 of them forming a register), and these
registers are addressed by a three bit number 000 - 111. It doesn't make
any sense to associate r0 with eax; r0 is register 0, but "eax" is an
ascii string without any meaning.


> > It's the same as with the shift instruction. You use
> >
> > "shl eax,cl" and not "shl eax,ecx" because cl better
> > fits to the 5 bit shift count than ecx.
>
> CL is hardcoded. It's not part of the bits used to encode/decode the
> instruction.

Like r0 is hard coded in :
24 01 and.b #1,r0

But there is also:

80 e0 01 and.b #1,r0
80 e2 01 and.b #1,r1
80 e1 01 and.b #1,r2

So, if Intel had made a second shift instruction where the
shift count register is freely selectable, then would you use

shl eax,cl for the implicit count register ecx, but

shl eax,eax
shl eax,edx
shl eax,ecx for the freely selectable count register?

If not, why would you like to use "lsl eax, ebx"?