From: Rod Pemberton on
<NathanCBaker(a)gmail.com> wrote in message
news:0f1a2bfd-2804-4188-90a4-30fb429df36e(a)f40g2000pri.googlegroups.com...
> > > ............end
> > > ..........end
> > > ........end
> > > ......end
> > > ....end
> > > ..end
> > > end
> >
> > > One of those in the middle was the 'end' associated with the 'while'.
> >
> > So? It looks well indented... You should've been able to align the
> > columns. If not, before the entire block, set somevar=0. Inside the
> > "while", set somevar=1. Prior to each "end", check somevar for 1 and if so,
> > print a message specific to that "end". From past conversations, it seems
> > that these problems plague you. You need to learn to use "printf"
> > or its equivalent as your only debugger.
> >
>
> Your employer sure must enjoy paying you overtime. My solution used a
> digital computer that I was born with. I simply added a finger for
> every "begin"-style token and removed a finger for every "end" token
> as I scanned down the code -- stopped whenever I had the same number
> of fingers on a "end" as I did for the "while". I guess it is called
> "Agile programming" for "dexterity" reasons. :)

What? That's far more work than looking at the aligned columns. I guess
you get paid "extra"... ;)

> > > > Coal for Randall...
> >
> > > A programmer's manual for Rod.
> >
> > Apparently not, since I still don't know what values "true" and "false"
> > represent. I don't know whether booleans are logically ANDed with one. I
> > don't know how "edx" became a boolean without a declaration as such, or why
> > it would be legal to use it in that way if HLA has any type system at all.
> > And, I still don't know why HLA has, apparently, at least 3 32-bit types...
> >
>
> What you cannot locate in the HLA manual, you can usually find it in
> "Art of Assembly" and vice versa. Keep looking...

Did you miss the point that these really should be determinable from the
code? If not, it's a poorly designed language. Besides, I don't program in
HLA... So, RTFM isn't an acceptable answer. While there are reasons for me
to ask the questions, there aren't any reasons for me to RTFM. But, those
who did RTFM, like you, and he who wrote TFM should know the answers.


Rod Pemberton


From: robertwessel2 on
On Dec 4, 4:00 am, "Rod Pemberton" <do_not_h...(a)nohavenot.cmm> wrote:
> Ok, I didn't look this all up again.  It's from memory...  IIRC, the C
> standard requires a "C char" must be large enough to represent the entire
> character set.  It must be at least the size of the minimum addressable
> number of bits, which, IIRC, is called a "C byte".  If a "C byte" is too
> small to represent the entire character set, a "C char" can also be multiple
> "C bytes".  However, minimum sizes of types are declared elsewhere, with 8
> bits, IIRC, being the smallest size for a "C char".


A C char and byte are the same size, and the terms are largely
synonymous. There is no smaller unit of addressable storage in C than
a char, and chars must be at least 8 bits, and must be able to store
at least the values 0-255 in unsigned format, and -127 to +127 in
signed form.

If the native "byte" of an implementation is smaller than that, a
conforming implementation will have to use multiples of that (or some
other scheme) to implement a C char of at least the minimum size, but
there will be no way to address that smaller type without some sort of
language extension.

Multi-byte characters in C are something completely different, and are
intended for supporting things like Unicode which needs more than
eight bit chars, and are actually built on one of the larger integer
types.
From: NathanCBaker on
On Dec 4, 11:29 pm, "Rod Pemberton" <do_not_h...(a)nohavenot.cmm> wrote:
> <NathanCBa...(a)gmail.com> wrote in message
>
> news:0f1a2bfd-2804-4188-90a4-30fb429df36e(a)f40g2000pri.googlegroups.com...
>
> > > > ............end
> > > > ..........end
> > > > ........end
> > > > ......end
> > > > ....end
> > > > ..end
> > > > end
>
> > > > One of those in the middle was the 'end' associated with the 'while'.
>
> > > So? It looks well indented... You should've been able to align the
> > > columns. If not, before the entire block, set somevar=0. Inside the
> > > "while", set somevar=1. Prior to each "end", check somevar for 1 and if so,
> > > print a message specific to that "end". From past conversations, it seems
> > > that these problems plague you. You need to learn to use "printf"
> > > or its equivalent as your only debugger.
>
> > Your employer sure must enjoy paying you overtime.  My solution used a
> > digital computer that I was born with.  I simply added a finger for
> > every "begin"-style token and removed a finger for every "end" token
> > as I scanned down the code -- stopped whenever I had the same number
> > of fingers on a "end" as I did for the "while".  I guess it is called
> > "Agile programming" for "dexterity" reasons.  :)
>
> What?  That's far more work than looking at the aligned columns.  I guess
> you get paid "extra"...  ;)
>

If the issue could have been solved by simply "looking at the aligned
columns" as you suggest, then the grand minds of CompSci wouldn't have
had as much motivation to invent the superior languages that make use
of distinctive end tokens.

>
> > > > > Coal for Randall...
>
> > > > A programmer's manual for Rod.
>
> > > Apparently not, since I still don't know what values "true" and "false"
> > > represent. I don't know whether booleans are logically ANDed with one. I
> > > don't know how "edx" became a boolean without a declaration as such, or why
> > > it would be legal to use it in that way if HLA has any type system at all.
> > > And, I still don't know why HLA has, apparently, at least 3 32-bit types...
>
> > What you cannot locate in the HLA manual, you can usually find it in
> > "Art of Assembly" and vice versa.  Keep looking...
>
> Did you miss the point that these really should be determinable from the
> code?  If not, it's a poorly designed language.

One cannot determine these answers by simply looking at C code
either. Are you saying that C is a poorly designed language simply
because one must read a manual in order to understand it?

>  Besides, I don't program in
> HLA...  So, RTFM isn't an acceptable answer.  While there are reasons for me
> to ask the questions, there aren't any reasons for me to RTFM.  But, those
> who did RTFM, like you, and he who wrote TFM should know the answers.
>

I cannot believe that any of those answers are unknown. Surely, if a
manual page has been inadvertently deleted, or if Randy has forgotten
to document some obscure point, then disassembly should provide some
enlightenment.

Nathan.


From: NathanCBaker on
On Dec 4, 1:29 pm, Herbert Kleebauer <k...(a)unibwm.de> wrote:
> NathanCBa...(a)gmail.com wrote:
> > On Dec 4, 7:06 am, Herbert Kleebauer <k...(a)unibwm.de> wrote:
>
> > > > So, you agree with me that 'mov( 0, eax )' is better readable?
>
> > The 'mov( 0 ... " is self-commenting.
>
> > > Why should this be better readable than
>
> > >   sub.l r0,r0
> > >   eor.l r0,r0
>
> > This requires:
>
> >    sub.l r0,r0  ;  store 0 in r0, this is not declaring a sub-routine.
> >    eor.l r0,r0  ;  store 0 in r0, this is not a tail-less donkey.
>
> If it isn't obvious that e.g. "5-5" is the same as "0" then it's
> better to not do assembly programming.
>

In Algebra, 'func( x ) = func( x ) - func( x )' does not make any
sense.

But 'func( x ) = 0' does.

> > > No ,I would do it in real assembly (not a single library call):
>
> > > The binary:
>
> > > @echo off
> > >         trap    #$21
> > >   ...
> > >         trap    #$21
> > >   ...
> > >         trap    #$21
>
> > So, those lines do not initiate the execution of *any* library code
> > what-so-ever, huh?  Those are magic boxes that the CPU takes care of
> > all by itself, huh?  Millions of ASM programmers around the world just
> > learned something new today.  :)
>
> I really hope that not a single one of the millions of assembly
> programmers has learned something new (note: an HLA programmer
> isn't an assembly programmer). If you call a library routine,
> then the called code is executed in the same context as the calling
> code. That means, there is no difference in calling the library
> routine or embedding the identical instruction sequence directly
> within your code (all the library code does, is to save you the
> time to write the code yourself). A software interrupt (trap #$21)
> changes the CPU mode to supervisor mode so privileged instructions
> can be executed. You can spend as much time as you like, you will
> never be able to write a code sequence in your user program which
> does the same as the interrupt routine, because you are not allowed
> to execute the necessary privileged instructions.

So, your program avoids calling "library code" by calling the superior
"Library code" instead??

Nathan.
From: Rod Pemberton on
<robertwessel2(a)yahoo.com> wrote in message
news:00b99e16-304e-4ecc-a6e2-d193025dd4de(a)q9g2000yqc.googlegroups.com...
On Dec 4, 4:00 am, "Rod Pemberton" <do_not_h...(a)nohavenot.cmm> wrote:
> > Ok, I didn't look this all up again. It's from memory... IIRC, the C
> > standard requires a "C char" must be large enough to represent the entire
> > character set. It must be at least the size of the minimum addressable
> > number of bits, which, IIRC, is called a "C byte". If a "C byte" is too
> > small to represent the entire character set, a "C char" can also be multiple
> > "C bytes". However, minimum sizes of types are declared elsewhere, with 8
> > bits, IIRC, being the smallest size for a "C char".
>
>
> A C char and byte are the same size,

False.

> and the terms are largely
> synonymous.

True. But, that's because microprocessors dominated. And, they
"defacto-ly" represent a byte as 8-bits which fits well with C byte and C
char.

> There is no smaller unit of addressable storage in C than
> a char, and chars must be at least 8 bits,

True.

> and must be able to store
> at least the values 0-255 in unsigned format, and -127 to +127 in
> signed form.

Not sure. Don't care. IMO, the signed and unsigned values are not
relevant. As stated, it must be 8-bits or more and large enough to
represent the entire character set. I.e., if the character set is 0-127,
the 8-bits only need to represent 7-bits of information: 0-127. I.e., if
the character set is 0 to 65535 or -32768 to 32767, then the C char must be
at least 16-bits, even though a C byte may only be 8-bits.

> If the native "byte" of an implementation is smaller than that, a
> conforming implementation will have to use multiples of that (or some
> other scheme) to implement a C char of at least the minimum size,

True. While I believe this second paragraph is true, the first paragraph
as you've stated it conflicts with it, which is the reason for my "False"
response above.

I.e., there is a contradiction:
> A C char and byte are the same size,

In the context of para #2, a C char and a C byte aren't the same size. As
you stated, a C char will be some multiple of C bytes. E.g., if the
smallest native addressable unit is 4-bits, that's a C byte. And a C char
must be at least 8-bits, therefore it's at least two C bytes. E.g., if the
smallest native addressable unit is 9-bits, that's a C byte. And a C char
must be at least 8-bits, therefore it's one C byte of 9-bits.

> but
> there will be no way to address that smaller type without some sort of
> language extension.

As you stated, there is no need to:

> There is no smaller unit of addressable storage in C than
> a char

The important part is the "in C" clarification...


Rod Pemberton

