From: MitchAlsup on
On Jun 19, 5:27 pm, Brett Davis <gg...(a)yahoo.com> wrote:
> In article
> <ac56ee8b-beb0-47fb-a3eb-b6ef415cd...(a)k39g2000yqd.googlegroups.com>,
>
>  MitchAlsup <MitchAl...(a)aol.com> wrote:
> > On Jun 18, 12:17 am, Andy 'Krazy' Glew <ag-n...(a)patten-glew.net>
> > wrote:
> > > All other things being equal, I would rather build a RISC, perhaps
> > > a 16-bit a+=b RISC as described above.
>
> > > But all other things are not equal.  Out-of-order is a big leveller.
> > > Although, if you wanted to have lots of simple
> > > cores, you might want to give up x86.
> > All things being equal, I would like the "Instruction set Wars" to die
> > so the designers can get on with improving the underlying
> > architectures.
>
> I will take the other side of that argument. ;)

Fair enough

<snip>

> New breakthrough instruction set designs are coming, whether you want
> them or not, and x86 will do its best to Borg the important bits.

There may be new instructions coming along, but my gist was not
directed at inserting more instructions into an already existent
instruction set; it was about whether x86 or RISC (or whatever) is the
better architecture. Any instruction set that survives for long will
have more instructions crafted to fit into the holes left by previous
incarnations of those machines--this is NOT what I was addressing in
the instruction set wars.

Designing instruction sets is very different from figuring out how to
fit new instruction semantics into existing instruction formats, or how
to recognize a special case and break open a whole new (let's just say)
byte of opcode space to do new things within.

Mitch
From: Jeremy Linton on
On 6/19/2010 9:12 PM, Andrew Reilly wrote:
> On Sat, 19 Jun 2010 21:29:17 +0100, nmm1 wrote:
>
>> In article<1jkcsdg.28nffa169emgN%nospam(a)ab-katrinedal.dk>,
>> =?ISO-8859-1?Q?Niels_J=F8rgen_Kruse?=<nospam(a)ab-katrinedal.dk> wrote:
>>> Andy 'Krazy' Glew<ag-news(a)patten-glew.net> wrote:
>>>
>>>> Unfortunately, there are several different types of integer overflow.
>>>> E.g. the overflow conditions for each of the following are different

> Apparently wording in the C standard lets them do that. The "best"
> alternative I've found so far is to use extra arithmetic precision.
> Quite a small value of "best". When even addition requires the use of in-
> line assembly to produce useful results, the language is dead.


Well, with C++, even though you have to mix in inline assembly, you
can define your own integer types and explicitly add overflow checking
(or any other checking). I did this a few years back for a large-integer
template I wrote. The final bignum could then throw a C++ exception for
things like overflow. The base classes were pretty ugly (mixed template
and asm) but the final code was very readable. Amusingly enough,
explicit condition checking didn't really appear to affect performance
(vs. unchecked code). Modern x86s and g++ seemed to be able to mask an
extra flags/register comparison mixed in with a lot of ALU operations.
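
A minimal sketch of the shape such a type can take (names hypothetical,
not the original template; this version pre-checks against the
representable range rather than reading the flags in asm):

    #include <limits>
    #include <stdexcept>

    // Hypothetical checked-integer wrapper: throws on signed overflow,
    // in the spirit of the bignum base classes described above.
    template <typename T>
    class checked_int {
        T v;
    public:
        explicit checked_int(T x = 0) : v(x) {}
        T value() const { return v; }

        checked_int operator+(checked_int o) const {
            // Pre-check against the representable range; no signed
            // overflow ever occurs, so the behaviour is well defined.
            if ((o.v > 0 && v > std::numeric_limits<T>::max() - o.v) ||
                (o.v < 0 && v < std::numeric_limits<T>::min() - o.v))
                throw std::overflow_error("checked_int: overflow in +");
            return checked_int(static_cast<T>(v + o.v));
        }
    };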

Since writing that piece of code, and seeing the usefulness of inline
assembly in 64-bit C++, I've wondered about certain vendors' refusal to
add inline assembly to their 64-bit compilers. GCC's extended asm syntax
is totally non-portable, but sometimes fantastically useful.

From: Ken Hagan on
On Fri, 18 Jun 2010 20:28:37 +0100, R. Matthew Emerson <rme(a)clozure.com>
wrote:

> I gave a lightning talk about this at the recent International Lisp
> Conference. The slides and one-pager from the proceedings are at
> http://www.clozure.com/~rme/.

Presumably the orthodox reply is that the micro-architecture is so
divorced from the ISA that you'd get similar performance from emulating a
RISCy load-store architecture. Use ESI to point to some "lisp registers",
EDI to point to the "lisp stack" and the rest for fairly conventional
purposes. The "registers" will be more or less resident in the L1 cache,
along with recent parts of the "stack", and the L1 latency is so much
lower than the memory wall that no-one notices.
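
Concretely, something like this sketch (names hypothetical; the point
is only that the "register file" is a small, hot structure in memory):

    // Hypothetical emulation layer: the "machine" state lives in
    // memory, anchored by a couple of host registers (here, pointers).
    struct LispState {
        long reg[16];      // the "lisp registers": a cache line or two
        long *stack_top;   // the "lisp stack" pointer
    };

    // Every virtual-register access is a load or store through 'st'
    // (i.e. [ESI + n*8]), which should hit in L1 essentially always.
    inline long lisp_add(const LispState *st, int ra, int rb) {
        return st->reg[ra] + st->reg[rb];
    }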

Mitch/nedbrek: Is this a fair summary of your position?

If not, then RME would appear to have a point. Stealing the DR flag is
certainly evidence that he *feels* short of readily addressable bits!
From: Terje Mathisen "terje.mathisen at tmsw.no" on
nedbrek wrote:
> lea r1 =&[r2 + r3]
>
> from (the general form):
> lea r1 =&[r2<< {0,1,2,3} + r3 + imm]
>
> I don't have any proof that a compiler will actually emit it... :)

They will indeed use LEA for a = b+c, as well as all the various forms
of a = b + n*c, where n is a power of two or has a limited number of set
bits.
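
For instance (hypothetical source; the asm in the comments is the sort
of thing gcc -O2 emits on x86-64, not verified output):

    // b + 5*c: one LEA computes c + 4*c, one ADD folds in b.
    int f(int b, int c) {
        return b + 5*c;
        // plausible gcc output (AT&T syntax):
        //   leal (%rsi,%rsi,4), %eax   # eax = c + 4*c = 5*c
        //   addl %edi, %eax            # eax += b
    }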

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Anton Ertl on
Andrew Reilly <areilly---(a)bigpond.net.au> writes:
>The reference to C is interesting, because I've recently had the
>experience of encountering a C compiler that actively thwarted the usual
>idiom for signed overflow detection. That is something like:
>
>where x and y are int:
>if (y >= 0) { if (x + y < x) signed_overflow(); } else { if (x + y > x)
>signed_underflow(); }
>
>The compiler in question (can't remember whether it was a recent gcc or
>one of the ARM compilers) came up with this beauty:
>
>warning: assuming signed overflow does not occur when assuming that (X +
>c) < X is always false

Recent gccs certainly do that. At least recent versions know that
they are miscompiling* and produce a warning. Earlier versions just
miscompile such code to nothing silently. Of course, if a programmer
writes "x+y < x", they could not have meant anything other than "1",
no? In theory (i.e., according to the C standard which the gcc
maintainers use for justifying this miscompilation) you can work
around this behaviour by doing something like

"((signed)(((unsigned)x)+((unsigned)y)))<((signed)x)"

But when I tried something like this, at least one gcc version still
miscompiled the code.
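
Written out in full, the workaround looks something like this (a
sketch; the final conversion back to signed is implementation-defined,
but does the obvious thing on two's-complement hardware):

    #include <limits.h>

    /* Signed add with overflow detection using only defined
       operations: the addition is done in unsigned arithmetic,
       which wraps modulo 2^N. */
    int checked_add(int x, int y, int *overflow)
    {
        unsigned ux = (unsigned)x, uy = (unsigned)y;
        unsigned ur = ux + uy;               /* well defined: wraps */
        /* Overflow iff both operands have the same sign bit and the
           result's sign bit differs from it. */
        unsigned sign_bit = 1u << (sizeof(int)*CHAR_BIT - 1);
        *overflow = ((~(ux ^ uy) & (ux ^ ur)) & sign_bit) != 0;
        return (int)ur;  /* implementation-defined if ur > INT_MAX */
    }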

*) GCC maintainers tend to use language that blames the programmer
when the compiler does something other than what the programmer
intended. But now and then they slip out of the doublethink; on one
occasion even a GCC maintainer used "miscompile" for such behaviour,
and I find it a very appropriate word.

>Apparently wording in the C standard lets them do that. The "best"
>alternative I've found so far is to use extra arithmetic precision.
>Quite a small value of "best". When even addition requires the use of in-
>line assembly to produce useful results, the language is dead.

Yes, the current bunch of GCC maintainers and comp.lang.c regulars
behave like Pascal refugees who exact their revenge on C. First
standardize a Pascal-like subset of the language, then actively
sabotage all programs outside that subset, so that only Pascal-style C
is still compiled correctly.

We either need a new low-level language, or a compiler for full C
(including low-level features), not just the standardized subset.

>This is the same sort of epic compiler fail as eliding (x<<16)>>16 (once
>a common idiom to sign-extend 16-bit integers) on the grounds that the
>standard doesn't require anything in particular to happen when signed
>integers are shifted.

They do that? Ugh!
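
For the record, a portable replacement exists (a sketch, assuming int
is at least 17 bits wide, which it is everywhere that matters):

    /* Portable 16-bit sign extension, no signed shifts involved:
       flip the sign bit, then re-bias. */
    static inline int sign_extend16(unsigned x)
    {
        x &= 0xFFFFu;                        /* keep the low 16 bits */
        return (int)(x ^ 0x8000u) - 0x8000;  /* 0x8000..0xFFFF -> neg */
    }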

- anton
--
M. Anton Ertl Some things have to be seen to be believed
anton(a)mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html